233 | 🚨🚨 BEWARE: OpenAI is getting hacked (sorta)
Brainyacts #233
It’s Tuesday. This app - https://socialai.co/ - gives each user a private, Twitter-like social network populated exclusively by chatbots. That’s right: it is Twitter with no other humans but yourself. The free app looks like X or Threads. You post what's on your mind, and your bots immediately respond. You choose what kinds of bots you want to interact with, using categories like supporters, fans, trolls, "brutally honest," haters, "doomers," and so forth.
Have fun!
Onward 👇
In today’s Brainyacts:
Combatting deepfake images
OpenAI gets hacked
OpenAI’s Advanced Voice Mode is finally here and other AI model news
Judge says AI training copyright claim is “nonsensical” and more news you can use
👋 to all subscribers!
To read previous editions, click here.
Lead Memo
🧑‍🎨 🕵️ Combatting Deepfake Images
Google will begin flagging AI-generated images in Search later this year. It will use the opt-in C2PA metadata method.
C2PA metadata is information embedded in an image (or other digital content) to provide transparency about how that image was created and edited. Specifically for AI-generated images, C2PA metadata helps users understand if and how an AI was involved in making the image, ensuring trust and authenticity. Here’s a plain-language breakdown:
What is C2PA?
C2PA stands for Coalition for Content Provenance and Authenticity. It’s a group formed by major tech companies and organizations to tackle issues of misinformation, deepfakes, and manipulated content. Their goal is to create a standard for tracking the origin and editing history of digital media.
What is Metadata?
Metadata is like a set of labels or tags that describe information about a file. For images, metadata can include things like when the image was created, who created it, or what software was used.
What Does C2PA Metadata Do in AI-Generated Images?
When an image is generated by an AI tool, C2PA metadata is embedded within the file. This data records important details, such as:
Who or what generated the image (for example, the specific AI tool used).
When the image was created.
Any edits or transformations applied (like color adjustments or resizing).
Who has handled or published the image.
Why Is This Important?
C2PA metadata adds transparency, helping viewers and platforms verify the source of an image. This is crucial in combating misinformation, as it can show whether an image was fully AI-generated, modified using AI, or authentic.
Example
Let’s say you see an image that looks realistic but might be AI-generated. By checking the C2PA metadata, you could find out:
The image was made using an AI tool on a specific date.
The image has not been altered by anyone since.
This helps you trust the image, knowing its origin and creation process.
In short, C2PA metadata in AI-generated images ensures people can see how the image came to be, making it easier to spot AI content and reduce the spread of misleading or fake media.
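For the technically curious, here is a minimal sketch of the idea in Python. The field names, values, and tool name are hypothetical, simplified stand-ins for the kind of provenance a C2PA manifest records; real manifests are cryptographically signed and read with dedicated C2PA tooling, not a hand-built dictionary like this.

```python
# A simplified, hypothetical illustration of the kind of provenance
# information a C2PA manifest records for an AI-generated image.
# Real manifests are cryptographically signed and parsed with C2PA
# tooling; this dictionary only mirrors the concepts described above.

example_manifest = {
    "generator": "ExampleImageAI 2.1",        # who or what generated the image (hypothetical tool name)
    "created": "2024-09-24T10:15:00Z",        # when the image was created
    "actions": ["created_by_ai", "resized"],  # edits or transformations applied
    "publisher": "example-news-site.com",     # who handled or published the image
}

def looks_ai_generated(manifest: dict) -> bool:
    """Return True if the provenance record says an AI created the image."""
    return "created_by_ai" in manifest.get("actions", [])

if looks_ai_generated(example_manifest):
    print(f"AI-generated by {example_manifest['generator']} on {example_manifest['created']}")
else:
    print("No AI-generation record found in this image's provenance data.")
```

In practice, a platform like Google Search would verify the cryptographically signed manifest with C2PA tools and surface that provenance to users, rather than trusting an unsigned tag like the one sketched here.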
On the C2PA website, there is a terrific 8-minute explainer.
Spotlight
🚨🚨 BEWARE: OpenAI is getting hacked
🚨 The pic below is from OpenAI’s newsroom Twitter/X account, which was hacked at least twice over the last 24 hours.
Cryptocurrency scammers hacked into the account to promote a fake cryptocurrency called "$OPENAI." The hackers described the fake token as bridging “the gap between AI and blockchain technology,” claimed that “all OpenAI users” were eligible for it, and promised that token holders would also get access to future OpenAI beta programs.
The post linked to a fake website that emulated OpenAI’s actual website. This fake site asked people to connect their crypto wallets to access the new currency.
The fake site is still up at https://openai-token.com/.
It’s worth checking it out to see the lengths they went to in order to get the look and feel of the real site right. Obviously, with every link pointing to “Claim your OpenAI token,” one would hope most visitors would get suspicious.
Sharing this with you, dear reader, so that you always stay vigilant.
Note: nothing here suggests that OpenAI’s underlying user or training data have been breached. This was only a Twitter/X account hijacking and fake-website incident.
AI Model Notables
► OpenAI’s Advanced Voice Mode (AVM) is coming to all Plus subscribers today or this week. AVM provides a more interactive, conversational experience in real time. NOTE: It will not be available in the EU, the UK, Switzerland, Iceland, Norway, and Liechtenstein.
► UK META USERS!!: Meta announced plans to use UK Facebook and Instagram posts for AI training, starting next week, while allowing users to opt out.
► UPDATE: LinkedIn suspended AI training on UK users’ data following privacy concerns raised by the Information Commissioner’s Office, halting the practice pending further engagement.
► Goodbye CAPTCHAs? An AI can beat CAPTCHA tests 100 per cent of the time
► Microsoft says OpenAI's ChatGPT isn't "better" than Copilot; you just aren't using it right. Copilot Academy is here to help, and Microsoft is giving it away as part of a 365 license.
► Amazon is working on its own ChatGPT competitor. Meet Project Olympus.
► China’s Alibaba launches over 100 new open-source AI models, releases text-to-video generation tool
► Meta's AI chatbot to start speaking in the voices of Judi Dench, John Cena, others, source says.
► Jony Ive (famed Apple iPod/iPhone designer) confirms he’s working on a new device with OpenAI
News You Can Use:
➭ AI-Powered M&A: UBS tool analyzes 300K companies in seconds. UBS sees a broader application for its AI tool in legal tasks and managing data rooms during complex transactions.
➭ “This is nonsensical.” Sarah Silverman’s lawyers got a stern rebuke from the judge in a class action case they seek to bring against Meta over AI training.
➭ Here is what’s illegal under California’s 9 (and counting) new AI laws.
➭ Middle Eastern sovereign wealth funds significantly increased investments in AI startups, with funding up fivefold in the past year.
➭ Prison inmates in Finland are being employed as data labelers to improve the accuracy of AI models.
➭ Software development jobs are down more than 30 percent since the pandemic, leaving thousands of tech workers desperate for new work.
➭ How the next generation of managers is using Gen AI.
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter. Click on a link to vote:
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.