227 | 🤥🚨 A Cautionary Tale: Toyota, DC Lobbying & AI Hype
Brainyacts #227
It’s Tuesday. Dude in a robot suit or a true robot?
More info here.
Onward 👇
In today’s Brainyacts:
Toyota’s AI lobbying lesson
Grok-2 versus GPT-4o
Alexa’s new brain and other AI model news
3 California AI bills take shape and more news you can use
👋 to all subscribers!
To read previous editions, click here.
Lead Memo
🤥 🚨 A Cautionary Tale: Toyota, DC Lobbying, and the Perils of AI Hype
The recent case of Toyota’s brief engagement with LobbyMatic, a so-called AI-driven lobbying firm, serves as a cautionary tale for corporations, policymakers, and legal professionals alike.
The Allure of AI in Lobbying
Lobbying, a complex and often opaque industry, is ripe for technological disruption. The idea of using AI to navigate the intricate web of legislative processes, predict political trends, and enhance lobbying effectiveness is undeniably appealing. For companies like Toyota, which operate on a global scale and must engage with various regulatory environments, AI offers the potential to cut through the noise and focus on what really matters in Washington, D.C. However, as Toyota’s experience with LobbyMatic reveals, the promise of AI can sometimes mask underlying issues that, if not carefully scrutinized, can lead to significant legal and ethical pitfalls.
The Deceptive Facade of LobbyMatic
LobbyMatic was pitched as an innovative solution to modern lobbying challenges, using AI to track legislation, summarize Congressional hearings, and schedule meetings with lawmakers. Toyota, intrigued by the potential of this technology, entered into a three-month trial with the firm. However, what seemed like a promising venture quickly unraveled when it was discovered that LobbyMatic was not the sophisticated AI company it purported to be.
Unbeknownst to its clients, LobbyMatic was founded by Jacob Wohl and Jack Burkman, who have been reported to be notorious conspiracy theorists and convicted felons. Operating under pseudonyms, they concealed their true identities while marketing LobbyMatic as a groundbreaking AI service. This revelation not only shocked the firm’s clients but also highlighted the dangers of jumping on the AI bandwagon without thorough due diligence.
The Risks of AI Hype
Toyota’s experience underscores the importance of approaching AI with a healthy dose of skepticism. The hype surrounding AI often leads companies to adopt new technologies without fully understanding their capabilities or the ethical implications of their use. In the case of LobbyMatic, the AI service not only may have been ineffective but was also linked to individuals with a history of fraudulent behavior. This raises critical questions about the credibility and integrity of AI vendors, especially in high-stakes industries like lobbying, where transparency and trust are paramount.
Lessons for the Future
The cautionary tale of Toyota and LobbyMatic offers several key lessons for companies considering the adoption of AI. First, it is essential to conduct thorough due diligence on AI vendors, including investigating their backgrounds, business practices, and the actual performance of their technology. Second, companies must be aware of the ethical implications of AI and ensure that their use of technology aligns with their corporate values and legal obligations. Finally, it is crucial to remember that AI, while powerful, is not a panacea. The effectiveness of AI tools depends not only on their technical sophistication but also on the integrity of the people and organizations behind them.
As AI continues to permeate various sectors, the story of Toyota and LobbyMatic serves as a reminder that the pursuit of innovation must be balanced with caution, responsibility, and a commitment to ethical business practices. The promise of AI is immense, but without proper safeguards, it can also become a vehicle for deception and fraud.
Stay vigilant!
Spotlight
🤺 🥊 Grok-2 v GPT-4o
Many of you likely have not tried Elon’s Grok-2 AI model yet. Here I run a quick comparison between Grok-2 mini, the full Grok-2, and GPT-4o.
For those new to Grok-2, here are some basic details:
Is it a new foundational model? Yes, Grok-2 can be considered a new foundational model in the sense that it's built to be adaptable to a wide range of tasks, much like other foundational models. However, it's uniquely integrated with real-time information from the X platform, which sets it apart in terms of its data freshness and relevance.
Differences from OpenAI and Anthropic:
Training Data: While specific details on Grok-2's training data aren't fully disclosed, it's known that xAI focuses on creating AI that accelerates human scientific discovery, suggesting a possibly different emphasis in its training material compared to general-purpose models from OpenAI or the safety-focused models from Anthropic. Grok-2 also leverages real-time data from X, which is a unique feature not commonly highlighted by OpenAI or Anthropic models.
Purpose and Tone: Grok-2 aims to be maximally helpful with a dash of outside perspective on humanity, potentially giving it a different conversational style compared to the more neutral or safety-first approach of models like those from Anthropic.
Strengths:
Real-Time Information: Access to current data from X posts makes it exceptionally timely.
Helpfulness: Designed to provide detailed, helpful answers, often from a non-mainstream perspective, which can be refreshing or insightful.
Versatility: Like other foundational models, it's built for a broad set of tasks, from answering questions to potentially aiding in coding or complex problem-solving.
Weaknesses:
Bias and Accuracy: Real-time integration with X might expose it to misinformation or biased content if not properly filtered.
Limited Public Information: As with many new models, there's less community testing and feedback compared to more established models like those from OpenAI, potentially leading to unforeseen issues.
Dependency on X: Its strength in real-time data could also be a limitation if X's data quality or user base significantly changes or if there are platform-specific biases.
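If you would rather poke at the models yourself instead of relying on my video, here is a minimal sketch of a side-by-side prompt comparison. This is not how I produced the comparison below; it assumes the OpenAI Python SDK and an OpenAI-compatible xAI endpoint, and the base URL, Grok model id, and prompt are illustrative placeholders you should verify against each vendor’s current documentation.

```python
# Hypothetical comparison harness (a sketch, not an official workflow).
# Assumes `pip install openai` and that xAI exposes an OpenAI-compatible
# chat endpoint; the xAI base URL and Grok model id are assumptions.
from openai import OpenAI

PROMPT = "Summarize the key risks of using AI tools in legislative lobbying."

# GPT-4o via OpenAI (reads OPENAI_API_KEY from the environment).
openai_client = OpenAI()

# Grok-2 via xAI (placeholder endpoint and key).
xai_client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_XAI_API_KEY")

candidates = [
    ("GPT-4o", openai_client, "gpt-4o"),
    ("Grok-2", xai_client, "grok-2-latest"),  # model id is an assumption
]

for label, client, model in candidates:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0.2,  # low temperature keeps outputs roughly comparable
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Running the same prompt at the same settings against both endpoints is the simplest way to see the tone and freshness differences described above for yourself.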
OK, here is the video:
AI Model Notables
► What is Elon’s Colossus? It is claimed to be the world’s most powerful AI training system, with plans to double its capacity soon. Why does it matter? Grok-2 was trained on only around 15,000 GPUs. With more than six times that number now in production, the xAI team and future versions of Grok are going to put significant pressure on OpenAI, Google, and others to deliver.
► AI model companies making chips. Chip companies making AI models:
OpenAI rumored to develop first custom AI chip.
Nvidia's Eagle is an advanced AI model designed to understand and process high-resolution images much better than many existing models. It can handle images up to 1024x1024 pixels, which means it captures very fine details. This capability makes it particularly useful for tasks where seeing the small details matters, like in document processing, medical imaging, or enhancing photos.
► Amazon’s new Alexa voice assistant will use Claude AI and is expected to launch in October.
► OpenAI toys with shifting its not-for-profit structure to, you guessed it, for-profit.
News You Can Use:
➭ Some serious activity in California AI legislation:
California Assembly Bill 2602 is on the Governor’s desk: the use of artificial intelligence to create digital replicas is a major concern in the entertainment industry, and AI use was a point of contention during last year’s Hollywood strike.
Related: SAG-AFTRA has lauded the passage of new California bill AB 1836, which restricts the usage of artificial intelligence replicas of dead performers.
California Assembly passed AB 3211, requiring AI image watermarking that could effectively ban most AI image generation in the state.
➭ Who is right?
Enterprises double their generative AI deployment efforts, Bloomberg survey says.
Gartner says a third of all generative AI projects will be abandoned.
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter. Click on a link to vote:
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.