088 | 🧱📊 AI Strategy Vectors

Brainyacts #88

In today’s Brainyacts we:

  1. point you to a Free AI tool with awesome features

  2. outline the 4 AI strategy vectors for business

  3. give you a test to tell whether you are interacting with an AI or a human

  4. listen to an interview on deadly AI and the realities of the AI arms race

  5. get the latest overreach from a US Congressman’s AI Bill

👋 A special Welcome! to NEW SUBSCRIBERS.
To reach previous posts, go here.

🆓🧰 A Free Tool to Try 4 Different LLMs

Forefront.ai has been providing free access to 4 different LLMs since April. This is a great way for novices and enthusiasts alike to play, test, and learn how different LLMs perform in terms of speed, robustness, creativity, and accuracy.

You can test GPT-3.5, GPT-4, Claude Instant, and Claude+.

Not only do you get access to these models but you also get:

Personas

Personas are chatbots with unique personalities and capabilities. Forefront offers over 88 of them - from role-based personas like a Marketing Manager or Legal Translator to individuals like Ben Franklin and Taylor Swift. You can also add your own.

Chat Session History & Folders

Your chats are automatically named and organized into folders based on their subject. A default folder called “My Chats” is set up for you, and you can add others now or later.

Ability to Share Chats

Think a friend or colleague would find a chat interesting? Click the “Share” button above your chat bar to copy a shareable link and instantly share your chats with the world or whoever you choose.

Switch Internet Access On or Off

For each model, you can switch internet access on or off with a simple toggle.

Image Generation

Each model and persona can also generate images - just use the #imagine command.

Be sure to watch the videos on the website, as they are a helpful guide to using all of these features.

🧱📊 Four Vectors to Help Organizations Operationalize their AI Strategy

In yesterday’s edition, I provided a guide to 4 basic approaches/versions of LLMs to consider for your business purposes: Consumer-Grade/Public-Facing LLMs; Public Access/Open Source Models; LangChain Models; and Proprietary Models.

Today I delve into the four critical vectors of consideration for selecting an LLM approach: Performance, Hosting, Output/Prompting, and Engineering Debt.

As each organization explores and defines its LLM strategy, it will need to understand each vector and determine what mix or level of certainty and risk it will accept, based on its use cases for the AI model. In other words, there is no singular approach. Each team’s strategy must be dictated by its specific needs, resources, and strategic goals. Understanding its unique strategic and operational landscape is crucial in determining which aspects to prioritize.

For instance, a large law firm that plans to commingle client data with firm data will likely prioritize Hosting and Output/Prompting. They'd require an LLM with secure hosting options to ensure client confidentiality, and output quality strong enough to handle complex legal language consistently while mitigating hallucinations.

In contrast, a legal tech startup aiming to develop an AI-driven legal research tool might focus more on Performance and specialized engineering. A high-performing LLM would enable quick and accurate data processing, while specialized engineering would ensure long-term scalability and lower maintenance costs.

In another scenario, a non-profit legal aid organization might choose a balanced approach, considering all four vectors equally. As they provide a wide range of services and operate under budget constraints, they'd need an affordable, well-performing LLM that generates high-quality output and doesn't accrue significant Engineering Debt.

The Four Vectors:

1. Performance

Performance is the cornerstone of an LLM's efficacy, directly influencing its speed, accuracy, and overall ability to generate valuable output. When assessing LLMs, consider the model's ability to process and understand language, produce coherent responses, and handle tasks efficiently.

Consumer-Grade/Public-Facing LLMs, like GPT-3.5, provide a general-purpose language model with decent performance for broad applications, but may falter in highly specialized areas like legal language interpretation.

Public Access/Open-Source Models, on the other hand, offer the freedom of customization, but their performance varies greatly depending on community input and support.

LangChain Models chain an underlying LLM together with prompt templates, external data sources, and tools, so their performance largely tracks that of the models they orchestrate and can be comparable to consumer-grade offerings. It may fluctuate, however, with the number of chained calls and external lookups each request requires (see the short sketch below).

Finally, Proprietary Models, specifically designed for unique applications, can offer stellar performance in their area of expertise, though their use-case is typically narrower.
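
For readers who have not worked with LangChain, the “chain” is an orchestration pipeline rather than a model in its own right: it wires a prompt template and an underlying LLM (plus, optionally, retrieval or tools) into a single callable. Below is a minimal, illustrative Python sketch - it assumes the langchain and openai packages as they existed in mid-2023 and an OpenAI API key in your environment, and the clause text and template are made up for illustration.

    # Minimal LangChain sketch (assumes: langchain + openai installed, OPENAI_API_KEY set)
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    llm = OpenAI(temperature=0)  # the underlying model the chain orchestrates

    prompt = PromptTemplate(
        input_variables=["clause"],
        template="Summarize the following contract clause in plain English:\n{clause}",
    )

    chain = LLMChain(llm=llm, prompt=prompt)  # prompt + model wired into one callable
    print(chain.run(clause="The Receiving Party shall hold all Confidential Information in strict confidence..."))

Each extra link - a retrieval step, a second model call, a tool - is one more place where speed and quality can vary, which is the real source of the performance fluctuation described above.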

2. Hosting

Hosting pertains to where the LLM is stored and run, determining aspects like response times, data privacy, and ongoing costs.

Consumer-Grade/Public-Facing LLMs are usually hosted on the provider’s servers, offering simplicity at the expense of control over data privacy.

Public Access/Open-Source Models offer more flexibility, with the option to host on-premise or via a chosen cloud provider, but require substantial technical skill to manage.

LangChain Models can be hosted wherever the orchestration layer and its underlying models run - on-premise, in a chosen cloud, or against a provider's API - though stitching those pieces together demands more technical know-how and can raise costs.

Proprietary Models, meanwhile, offer hosting options tailored to a client's specific needs but may come with higher costs and increased reliance on the provider for support.

3. Output/Prompting

The quality of an LLM’s output and its ability to respond to prompts accurately is vital. Parameters like variability, factual accuracy, toxicity, and bias play a significant role here.

Consumer-Grade/Public-Facing LLMs generally offer consistent and high-quality output but struggle with domain-specific language.

Public Access/Open-Source Models, while customizable, can require extensive calibration to achieve desired output.

LangChain Models, which combine prompts, retrieved data, and one or more model calls, have the potential for richer, more grounded output but can face challenges in maintaining consistent quality across every link in the chain.

Proprietary Models, being highly specialized, can excel in their area of focus but may lack versatility.

4. Engineering Debt

Engineering debt refers to the potential long-term technical problems that arise from compromising on a perfect solution in favor of a quicker or easier one. In LLMs, this could manifest as inefficient code, scalability issues, or challenges in customization. It is worth noting that, in the legal market, engineering debt is a significant challenge: many legal tech software products are point solutions or are locked into highly controlled, inflexible environments, making independent maintenance or integration difficult.

Consumer-Grade/Public-Facing LLMs often come with minimal engineering debt due to their well-tested and standardized nature.

Public Access/Open-Source Models can accumulate significant debt if their need for customization is not carefully managed.

LangChain Models, with prompts, chains, data sources, and underlying models all needing to be integrated and supported, can present complex problems of their own, adding to engineering debt.

Proprietary Models, while minimizing debt in their designed functionality, can present debt in terms of adaptability and scalability.

News you can Use: 

An Easy Test to Distinguish What is Human or AI

bY usInG a ComBInAtiOn Of lOwER caSE LeTtErs AnD uPpER CasE lEttErS, SoME Ai moDEls gEt COnFuSEd.

Or

By using a combination of lower case letters and upper case letters, some AI models get confused.
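
If you want to generate this kind of test text yourself, here is a small, illustrative Python sketch (the random 50/50 case flip is just one way to do it; any unpredictable mix of cases works):

    import random

    def scramble_case(text: str) -> str:
        """Randomly flip each character to upper or lower case."""
        return "".join(c.upper() if random.random() < 0.5 else c.lower() for c in text)

    print(scramble_case("By using a combination of lower case letters and upper case letters, some AI models get confused."))

Paste the scrambled sentence into a chat: a human will usually read right past the odd casing, while some models stumble or fixate on the formatting instead of the content.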

Take 14 Minutes to Watch/Listen to a Riveting Interview

A Bloomberg interview with Dr. Alex Karp, CEO of Palantir, on the AI market and competition (it's infinite), how we already have deadly AI, and why adversarial AI is the new arms race.

I will paraphrase one of the intimidating realities he presents: the biggest risk is not AI dominating us; it is our adversaries dominating us through AI.

House Democrat Seeks to Mandate AI Disclosure

As governments seek to wrap their heads (and hands) around AI, overreach and broad bills are becoming the norm. Here is the latest one.

For a copy of the proposed bill, click HERE.

Was this newsletter useful? Help me to improve!

With your feedback, I can improve the letter. Click on a link to vote:


DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.