077 | Merging Bing & ChatGPT

Brainyacts #77

In today’s Brainyacts we:

  1. get faked out and ask if WE are the problem, not AI

  2. hear how a firm tests and uses ChatGPT

  3. get Microsoft Bing and Copilot updates

  4. read OpenAI’s take on AI regulation

  5. dig into unique privacy concerns on the ChatGPT iOS app

  6. validate that Elon will use Twitter and Tesla to compete in the AI space

👋 A special hello to NEW SUBSCRIBERS.
To reach previous posts, go here.

🤹‍♀️ Fake Image Fakes Out Wall Street but Gives Us Clicks, Baby!

In a world where the Internet is already the Wild West, a plethora of venerable news outlets and keen-minded individuals found themselves tricked by a crafty counterfeit. An explosion near the US Pentagon, as portrayed by a cunningly doctored image, sent shockwaves through cyberspace and Wall Street yesterday.

Anyone with even a passing familiarity with the Pentagon could've picked out the blatant forgery in a heartbeat. Alas, the frenzied desire to leap onto the news-breaking bandwagon overshadowed such rudimentary fact-checking.

Now, the age-old question arises: is this a chilling testament to the terrifying power of AI, or a stark reflection of our collective lust for juicy headlines, clickbait, and the adrenaline-rush of doom scrolling? Although there is a certain convenience in blaming our AI overlords for such mishaps, it seems the answer leans more towards the latter.

While we're so engrossed in wrangling unruly AIs into submission and concocting ingenious ways to tackle deepfakes, it appears we're neglecting a rather significant piece of the puzzle. The ever-increasing haste of our click-first, react-first society is amplifying the potential for disaster.

It's high time we took a step back, put down our digital pitchforks, and embarked on a journey of introspection. Because, in the end, it's not the AI that will make a mess of things – it's our own eagerness to jump on the next viral bandwagon without a moment's hesitation.

📍A Blueprint for Smart GenAI Adoption

I had the privilege of sitting down with Quinn Ross, managing partner of The Ross Firm in Canada. Quinn has been sharing the journey he and his firm have been on in terms of learning, playing with, and testing ChatGPT and other conversational AI tools.

Quinn is a Bencher. A Bencher is essentially a board member or director for the provincial or territorial law societies. These societies are responsible for regulating the legal profession in their respective regions. He is also the Chair of the Law Society of Ontario’s Futures Committee.

In this 30-minute conversation we dig into:

  • How do we train new lawyers if AI consumes the work that these lawyers used to learn on?

  • How do you create autonomous or organic change management in a law firm?

  • The ROI for tackling GenAI.

  • How to train through play first, work second.

  • and more.

Tools you can Use: Bing Chat Updates

1. Bing is now ChatGPT’s default search engine

This morning, Microsoft announced that ChatGPT will use Bing as its default search engine for web browsing. The company says Bing will roll out to ChatGPT Plus users today and will be available soon to free users who enable a plugin.

2. Bing Chat Doubles the # of Characters You Can Enter

  • Microsoft has quietly increased the maximum number of characters you can enter in the input box on Bing Chat.

  • Previously you were limited to 2,000 characters; the limit has now doubled to 4,000.

  • It's the latest improvement to Bing Chat, following features such as Chat History and the export option that have started rolling out.

3. Bing will also use the same plugins as ChatGPT, so there will be interoperability

Here is a short video showing you some of the above.

4. Copilot is now part of Windows 11

Watch this demo and notice how you can now drop a PDF into the chat window and ask for a summary, among other things. I'm unsure what the size limitations are and will be digging into that.

News you can Use: 

OpenAI Shares Its AI Policy Recommendation

Yesterday I shared Google’s. Late yesterday, OpenAI posted its own broad take. Using the interesting term “superintelligence,” they share some broad concepts they believe are essential.

Three key ideas to navigate the development of superintelligence:

  1. Coordination among leading development efforts is necessary to maintain safety and ease the integration of superintelligence into society. Options could include government-led projects or limitations on AI capability growth rates.

  2. An international authority similar to the IAEA (International Atomic Energy Agency) may eventually be needed to inspect, audit, and enforce safety standards for superintelligence efforts exceeding certain capability thresholds. The agency should focus on existential risks rather than issues left to individual nations.

  3. We need the technical capability to make a superintelligence safe, which is currently an open research question with considerable effort being dedicated to it.

Concerning the scope:

  • Companies and open-source projects should be allowed to develop models below a significant capability threshold, without heavy regulation.

  • Today's AI systems, while bearing risks, are deemed manageable and commensurate with other Internet technologies.

  • The focus should be on systems with power beyond any technology yet created, not on lower-level technologies.

On public input and potential:

  • Governance of the most powerful systems and decisions regarding their deployment should involve strong public oversight.

  • OpenAI believes superintelligence will lead to a vastly improved world and is an inevitable part of our technological path.

  • The idea of halting the creation of superintelligence is considered risky and likely unfeasible, hence, the emphasis is on navigating its development correctly.

Privacy Concerns for the ChatGPT App

The recently released ChatGPT iOS app raises concerns about privacy risks associated with using generative AI apps. While the app does anonymize chat data, there are fears that anonymized data could still be re-identified. Despite OpenAI’s transparency about using user data for training its AI models, several experts stress the importance of exercising caution, reviewing the privacy policy and practices, and understanding the potential privacy threats.

Guidance for consumers:

  • Be cautious about getting too personal with AI chatbots, as there might be a risk to privacy.

  • Understand that anonymized chats could potentially be re-identified by combining them with other information sources.

  • Be mindful that if personal information like names, workplace details, etc. is typed into a ChatGPT query, that data won't be anonymized.

  • Always review the specific privacy policies and practices of any service you use to understand how your data is handled and what protections are in place.

  • Exercise caution and consider the necessity of sharing sensitive information to ensure data security.

  • Use built-in security features on mobile phones to curb privacy incursions by apps.

  • Realize that built-in measures like app permissions, privacy settings, and app store regulations may not be sufficient to mitigate all privacy threats.

  • Understand that while certain settings may be available on a desktop platform, they may not be available on a mobile platform, e.g., opting out of chat being used for AI improvement is only available on the ChatGPT website, not on the iOS app.

  • Be aware that the privacy information found in app stores may not be reliable, and you should conduct your own research.

Elon will likely use Tesla & Twitter to compete with OpenAI/Microsoft & Google

I’ve been talking about this in Brainyacts, and it seems to be coming to fruition. We shall see.

Was this newsletter useful? Help me to improve!

With your feedback, I can improve the letter.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.