204 | 💨🏋️ AI bots on election ballots + Harvey.ai (my take)
Brainyacts #204
It’s Friday. Remember Yahoo? Did you know Yahoo News is the biggest news platform in the US, with more than 185 million monthly visitors? They just revamped the Yahoo News app - driven by AI. It’s free and on Apple and Android. I will be checking it out, as I need better ways to find news I am interested in.
Let’s jump in.
In today’s Brainyacts:
AI political candidates and parties
Harvey.ai
Brazil using OpenAI to flag meritless lawsuits and other AI model news
If AI scanned your face, you get equity in the AI company and more news you can use
👋 to new subscribers!
To read previous editions, click here.
Lead Memo
🗳️ 🏛️ AI as Political Candidates and Parties: The Intersection of Technology and Democracy
The emergence of AI-powered political candidates represents a fascinating and contentious development in modern politics. AI candidates, such as "Steve" in the United Kingdom, "Virtual Integrated Citizen" (VIC) in Wyoming, and the Synthetic Party in Denmark, challenge traditional notions of political representation by leveraging LLMs to engage with voters and advocate for policies. I love this concept for its innovation, but it raises several critical questions and potential issues that need careful consideration.
Technological Capabilities and Limitations
AI's processing power is its most significant asset in the political arena - for good or bad. AI candidates like Steve can hold thousands of simultaneous conversations, potentially providing a more comprehensive understanding of voter concerns than human politicians. This ability to process and analyze vast amounts of data quickly can lead to more informed and responsive policy-making.
However, this technological prowess comes with limitations. AI models still struggle with hallucinations and biases, leading to potentially misleading or harmful outputs. Left unchecked, these hallucinations could quietly undermine the credibility and effectiveness of AI candidates.
Legal and Ethical Considerations
The legality of AI candidacy remains a significant hurdle. Current laws generally do not recognize non-human entities as eligible political candidates. This legal gray area presents challenges for AI campaigns, as seen with VIC in Wyoming, where local election laws prevent nonhumans from running for office. The ethical implications of using AI to represent human voters are equally complex, particularly when considering accountability and transparency in AI decision-making processes.
Practicality in Governance
While AI candidates can theoretically enhance democratic processes, practical implementation remains challenging. AI candidates like VIC require human proxies (or “meat puppets” as the human candidate, Victor Miller, refers to himself) to perform physical tasks such as attending meetings and casting votes. Translating AI-generated policies into actionable governance necessitates human intervention, raising questions about the fidelity of AI recommendations in practice.
Moreover, the dynamic nature of political duties—requiring negotiation, debate, and human interaction—presents a significant challenge for AI candidates. AI currently cannot replicate the person-to-person discussions and deal-making critical for political effectiveness.
The Synthetic Party: A Case Study
The Synthetic Party in Denmark offers an intriguing example of AI in politics. Founded by the artist collective Computer Lars and the MindFuture Foundation, the Synthetic Party's public face is the AI chatbot Leader Lars. The party's platform is derived from the policies of Danish fringe parties since 1970, aiming to represent the values of the 20% of Danes who do not vote in elections. Leader Lars interacts with people on Discord, collecting new perspectives and information to refine its policies continually.
The Synthetic Party highlights the potential for AI to enhance democratic engagement by involving a broader spectrum of voices in policy creation. However, it also underscores the challenges of integrating AI into the political process, including the need for human proxies and the difficulty of gaining public trust.
Postscript: Generative AI in Legislative Processes
A vivid example of the potential utility of generative AI in politics comes from the introduction of the Affordable Care Act in the United States. The bill spanned over 2,000 pages, and legislators were given an astonishingly short time to read and understand its contents before voting. This scenario underscores the absurdity of our current legislative process, where complex and lengthy draft legislation is produced with little regard for the practicalities of a thorough review.
Generative AI could significantly aid in this process by quickly ingesting and summarizing extensive legislative documents, highlighting key points, and flagging critical issues for lawmakers. Even if the legislative process itself remains unchanged, AI can help politicians better understand what they are voting on, ensuring more informed and rational decision-making.
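For the technically curious, here is a rough sketch of what that kind of bill-reading assistant could look like: split a long bill into chunks, summarize each, then merge the summaries into a single briefing. This is purely illustrative, not anyone's production system; it assumes the OpenAI Python SDK, and the model name, chunk size, and prompts are placeholders I made up.

```python
# Illustrative sketch only: chunk a long bill, summarize each chunk,
# then merge the chunk summaries into one briefing for lawmakers.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name, chunk size, and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def chunk_text(text: str, max_chars: int = 12000) -> list[str]:
    """Split a long document into roughly fixed-size chunks."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_bill(bill_text: str) -> str:
    # First pass: summarize each chunk independently.
    chunk_summaries = []
    for chunk in chunk_text(bill_text):
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any capable model works
            messages=[
                {"role": "system", "content": "You summarize draft legislation for "
                 "lawmakers: key provisions, costs, and red flags."},
                {"role": "user", "content": chunk},
            ],
        )
        chunk_summaries.append(resp.choices[0].message.content)

    # Second pass: merge the chunk summaries into a single briefing.
    combined = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Merge these section summaries into a short "
             "briefing with a list of flagged issues."},
            {"role": "user", "content": "\n\n".join(chunk_summaries)},
        ],
    )
    return combined.choices[0].message.content
```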
Spotlight
💨🏋️ Harvey.ai: Stealth, Vapor, or Hard Yards?
Recent coverage of Harvey.ai has sparked both fascination and frustration due to its so-called uncooperative stance on transparency. While much attention is given to Harvey's high-profile clients and positive use cases, the industry's doubts and skepticism are fueled by the company's secretive approach, lack of transparency, and heavily VC-backed strategy. But is this skepticism warranted?
Critics have labeled Harvey as potential vaporware (or just an OpenAI wrapper) and penalized it for its stealth mode—avoiding press and demos. I don't know much about it myself, but I will share insights based on actual users and their interactions with the Harvey team.
My perspective is this: the team at Harvey is working diligently to find product-market fit in the legal industry (the hard yards)—a notoriously challenging market for new technology. The legal press can often harm startups by overhyping them, leading to fatigue or unrealistic expectations among potential customers.
Has Harvey fueled the fire in ways that created frustration? Certainly. But I don't believe that is a key part of their strategy. Rather, it could speak to their focus—moving fast and deliberately to apply generative AI in a meaningful and safe manner. They might not have it all figured out yet.
I don't know where they are in their journey, but I respect their effort, and so should we. They don't owe us anything.
Here is what I know.
Underlying Models: Harvey.ai utilizes several different models to power its platform, including:
GPT-4
GPT-4 Turbo
GPT-3.5
Custom GPT model(s) developed in conjunction with OpenAI
Model Integration: Harvey is continuously testing new models to integrate them effectively into the platform. Each query processed by Harvey involves between 20 to 500 different model calls, depending on the complexity of the documents and the query itself.
Backend Processes: These model calls are distributed across different foundational models and then pieced together to produce a cohesive output (a rough sketch of this general pattern follows this list). This multi-model approach ensures optimal performance, as different models excel at different tasks. For instance, they have found that GPT-3.5 is better at certain analytical tasks, while GPT-4 is more suitable for complex and nuanced work.
Recent Developments: Harvey has recently partnered with Mistral to enhance its capabilities further. Read more here.
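To make that concrete, here is a minimal sketch of the general multi-model pattern being described: route different sub-tasks to different models, then stitch the results together. To be clear, this is not Harvey's code; the task split, routing table, and model names are my own assumptions for illustration.

```python
# Illustrative only: route sub-tasks to different models and stitch the results.
# Not Harvey's implementation; task names, routing table, and models are assumptions.
from openai import OpenAI

client = OpenAI()

# Hypothetical routing: a cheaper model for extraction, a stronger one for reasoning.
MODEL_BY_TASK = {
    "extract_clauses": "gpt-3.5-turbo",
    "assess_risk": "gpt-4",
}

def run_task(task: str, text: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL_BY_TASK[task],
        messages=[{"role": "user", "content": f"Task: {task}\n\n{text}"}],
    )
    return resp.choices[0].message.content

def answer_query(document: str, query: str) -> str:
    # A real system might fan a single query out into dozens or hundreds of
    # calls like these; this toy version runs two and merges them.
    clauses = run_task("extract_clauses", document)
    risks = run_task("assess_risk", clauses)
    merged = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content":
                   f"Question: {query}\n\nClauses:\n{clauses}\n\nRisks:\n{risks}\n\n"
                   "Write a single cohesive answer."}],
    )
    return merged.choices[0].message.content
```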
Accessing firm-specific knowledge and documents through integration with Document Management Systems (DMS)
Integration Methods:
Basic Integration:
Objective: Enable seamless document uploads.
Technical Feasibility: Technically straightforward.
Current Status: In progress, working directly with DMS providers to develop a highly functional integration.
RAG Integration (Retrieval-Augmented Generation):
Objective: Link directly into customer databases to provide RAG-influenced outputs (see the sketch after this list for the general pattern).
Current Implementation: Already in use with select customers, such as PwC.
Future Availability: Not currently offered broadly, but plans for broader availability exist if commercially viable.
Considerations for RAG Integration:
Complexity: More complex than basic integration due to factors like firm conflict-of-interest walls and other challenges unique to law firm databases compared to databases in other industries.
Commercial Viability: Future offerings will depend on commercial sense and customer interest.
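For readers wondering what "RAG-influenced outputs" means in practice, here is a bare-bones sketch of the general retrieval-augmented generation pattern: embed firm documents, retrieve the closest matches to a question, and ground the model's answer in them. This is not Harvey's DMS integration; the embedding model, the in-memory document list, and the sample text are placeholders.

```python
# Bare-bones RAG sketch: embed documents, retrieve the most similar ones,
# and answer using only that retrieved context. All names are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Toy stand-in for documents exported from a DMS (after conflict-wall filtering).
documents = [
    "Engagement letter template, 2023 revision...",
    "Master services agreement with a limitation-of-liability clause...",
]
doc_vectors = embed(documents)

def answer(question: str, k: int = 2) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every stored document.
    sims = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n\n".join(documents[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
                   f"Answer using only this firm material:\n{context}\n\nQuestion: {question}"}],
    )
    return resp.choices[0].message.content
```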
🚨🚨 If any of you or anyone at Harvey wants to share more specifically, I am all ears - can be anonymous or on the record. But I am not looking for gossip or sniping - just information and insight. Hit me up!
AI Model Notables
► Brazil’s government is hiring OpenAI to leverage AI for analyzing and flagging lawsuits, aiming to reduce court-related costs and their impact on the federal budget.
► Whatever happened to Amazon’s Alexa big AI update?
► Microsoft will give you full control over which apps can use generative AI on Windows 11.
► LinkedIn tests AI assistant for Premium members.
► OpenAI's revenue is reportedly booming to about $3.4 billion annualized - most of this revenue will come from ChatGPT subscriptions, and $200M is projected to come from Microsoft for giving Azure users access to its AI models.
► Danish media outlets want a group deal with OpenAI amid fears that smaller countries will become marginalized as chatbots become prolific - if they don't get one, they are threatening to sue.
► Elon Musk drops lawsuit against ChatGPT-maker OpenAI without explanation.
► Tesla investors sue Elon Musk for launching a rival AI company.
► Apple to ‘pay’ OpenAI for ChatGPT through distribution, not cash.
► Former NSA head and retired Gen. Paul Nakasone is joining the OpenAI board of directors as well as the company's newly formed Safety and Security Committee.
► Amazon says it’ll spend $230 million on generative AI startups.
News You Can Use:
➭ If Clearview AI scanned your face, you may get equity in the company - link to the settlement filing.
➭ AI will drive M&A pipeline, says Citigroup's Drury - see also: More AI founders looking to sell startups, Hugging Face CEO says.
➭ More than 90% of deepfakes are nonconsensual porn, mostly depicting women.
➭ ASU Law launches AI focus across multiple degree programs.
➭ SEC charges founder and former CEO with fraud, aka “AI washing.”
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter. Click on a link to vote:
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.