117 | 😓🤷‍♂️ Boring Boilerplate AI Use Policies

Brainyacts #117

It's Tuesday. Chickens be gone!

In todayā€™s Brainyacts:

  1. Beyond boilerplate AI Use policies

  2. Microsoft is making generative AI everywhere all the time

  3. Judges get free access to my course

  4. OpenAI might go bankrupt and other AI model news

  5. A new Legal Ethics newsletter comes onto the scene and other news you can use

  6. A trends chart

👋 to new subscribers!

To read previous editions, click here.

Lead Story

😓🤷‍♂️ Beyond the Boilerplate AI Use Policy: 7 Things to Add

Crafting an effective AI Use policy is essential. And my bet is that most of you have either had a hand in writing yours or are part of an organization that has one. While having one is critical right now, I keep coming across many that are grossly oversimplified or incomplete. Based on my research and work, I want to share 7 things you should consider addressing but likely haven't yet.

Initiate the Policy Writing Process

But first, if your organization hasn't formulated an AI Use policy yet, now is the time to start. AI technologies are rapidly becoming integral to business operations. A well-defined policy isn't just a precaution—it's a necessity that can govern responsible AI usage and mitigate risks.

Move Beyond Broad or Narrow Policies

I've observed a recurring issue in AI Use policies, whether prepared by law firms or in-house teams—they're often either too vague or overly restrictive. A successful policy should strike a balance, providing clear guidelines without stifling innovation or absolving users and organizations of accountability.

Transcend the Boilerplate

While most policies include necessary clauses, such as data privacy, user responsibility, and compliance with laws, these alone are insufficient. A policy should be more than a defensive document; it should be a strategic guide that integrates with your organization's broader mission and values.

Know this: your people will use AI and are already using it. The question is to what extent you will guide their use of it for work purposes. You should want to help them use it wisely.

OK, here are 7 things you should definitely consider addressing explicitly in your AI Use Policy:

1. Employee Training and Education

A crucial yet overlooked component is the training and education of employees. They need to understand not only the 'how' but also the 'why' behind responsible AI usage. Regular, mandatory training should be instituted, aimed at various aspects of AI—ethical considerations, potential risks, and the specifics of your company's policies. This is not hard; it just requires that someone is responsible for making it happen.

Need a foundational course? I happen to have one, but regardless, get and keep people educated. This does not have to be an elaborate affair. And please don't make it the kind of horrible forced video-watching that plagues so many compliance and new-employee trainings!

2. Auditing and Accountability Mechanisms

Your policy should clearly outline how compliance will be monitored and enforced, and by whom. This should consider the use of regular audits, reporting mechanisms, and a clear delineation of consequences for violations - not that they must be punitive. In fact, except in cases of egregious misuse, corrective and supportive action is likely the better course.

Remember, we are all on a steep learning curve. These steps send a strong message: adherence to the AI use policy is non-negotiable, but people are supported, not trapped.

3. External Collaboration and Standardization

To maintain integrity and stay ahead of regulatory curves, consider collaborating with industry groups, regulatory bodies, or other entities. Such partnerships can help in crafting policies that are not just compliant but also contribute to shaping future regulations and industry standards. It also lends legitimacy and credibility to your own policy.

Check in with state bar associations. I have at least one conversation a week with various ethics commissions. Industry associations like the Association of Corporate Counsel or the Association of Legal Administrators might be good places to start. Another option is practice- or subject-area-focused legal associations, since AI use considerations will differ based on the types of services and clients one works with.

4. Impact Assessment and Reporting

Regular assessments of AI use are vital, examining impacts on data privacy, potential discrimination, and more. This process should be ongoing, transparent, and should involve both internal and, where appropriate, external reporting to regulators, clients/customers, and the public.

Environmental, Social, and Governance (ESG) considerations are becoming increasingly significant within the global corporate landscape, and AI plays a role across all three domains:

  • Environmental: AI can be a double-edged sword. On one hand, it can contribute to environmental goals by optimizing resource use, reducing waste, and enabling more sustainable operations (e.g., through smart energy management systems). On the other hand, large AI models require significant computing power, which has considerable energy demands and potential environmental impact (see vast water consumption). Assessments should evaluate and report on these aspects, aiming to minimize negative impacts.

  • Social: AI systems can inadvertently reinforce existing biases, leading to potential discrimination in various contexts such as hiring, lending, and law enforcement. Regular assessments should actively look for such biases and inaccuracies in the AI's outputs and take corrective actions as needed. This is essential to maintain the social license to operate and to meet increasing investor and consumer demands for socially responsible behavior.

  • Governance: The use of AI raises numerous governance issues, including accountability, data privacy, and ethical use of technology. Impact assessments should include an evaluation of how the organization's use of AI aligns with its governance principles and policies. For instance, who within the organization is responsible for AI? How are decisions about AI use made and reviewed?

5. Customer and Stakeholder Engagement

Involving external stakeholders, particularly customers, can yield important insights. It opens channels for feedback, which can be instrumental in refining your AI use policy. This engagement demonstrates that your organization values transparency and is willing to adjust its practices based on stakeholder input. 

6. Addressing Specific Risks of AI

AI is not a monolithic technology and comes with varied and complex risks. I have reviewed too many policies that refer to AI as just that – AI – a label that encompasses a vast range of technologies. Even AI experts cannot agree on how to define AI, so it is best to list the specific tools or services that your organization uses.

Also, be specific as to generative AI. There are conversational or text-based tools (ChatGPT, Bing Chat), image generators (Midjourney), and code generators (OpenAI's Code Interpreter). There are also voice and video generators. For example, if a member of your team is preparing a slide deck and uses an image generation tool, how does your AI Use Policy apply? Does it depend on how and where that slide deck will be used? Or does the simple act of prompting trigger something in your use policy?

Your policy should delve into specifics where it can by listing categories of generative tools, not necessarily proprietary tool names, as those change and new models are developed frequently.

7. Global Considerations

For internationally operating organizations and teams, AI Use policies must consider the intricate web of different countries' laws and regulations regarding AI. This includes data protection laws and regulations concerning fairness, transparency, and accountability.

Conclusion

Crafting an AI Use policy is a significant, necessary undertaking. It must be a living document, evolving as technology and societal norms change. Remember, a good policy not only protects your organization but also guides it—helping to harness the power of AI responsibly, ethically, and effectively.

Spotlight Story

Microsoft just did something that will make generative AI ubiquitous in many lawyers' lives.

Microsoft just launched Azure OpenAI on Your Data, together with Power Virtual Agents integration.

TL;DR

  1. Confirm whether you are a Microsoft Azure customer or not. If you are a Microsoft 365 customer, you can access Azure for free.

  2. You now have the ability to mesh OpenAI's models with your own business data in a secure and private manner.

  3. This is not a simple "plug-n-play" deployment and will take some technical capability.

  4. This is evidence that generative AI will creep (or surge) into our daily lives in many ways, considering how many global legal services organizations and teams are running Microsoft services.

Think of this as a private version of ChatGPT tailored for the enterprise that has absolute Data Privacy - Controlled Security - Seamless Integration with YOUR internal data - and the ability to create your team's own chatbots.

In today's digital age, engaging with data is more than reading charts and spreadsheets; it is about interactive and intuitive communication. Imagine being able to ask a computer system a question in plain English and receiving a detailed, specific answer based on your own company's data. That is the world that Azure OpenAI on Your Data and Power Virtual Agents are creating for businesses and legal teams.

A Chatbot Like No Other

Power Virtual Agents is a platform by Microsoft that enables you to create chatbots. But these aren't your ordinary chatbots; they are supercharged with Azure OpenAI capabilities - using either GPT-3.5 Turbo or GPT-4 (see pricing). This means that they don't just follow a script—they can understand and process natural language, responding intelligently to a wide variety of inquiries.

Imagine you are a part of a legal team, and you need to quickly find the most relevant clause from a past contract. Instead of trawling through files or databases, you could simply ask your chatbot, "Can you find the liability clause from the Johnson contract last year?" and receive an accurate and immediate response.

Tailored to Your Data

Here's where all of this becomes a game-changer: your chatbot doesn't just draw from general information—it uses your own data to generate responses. That means the insights and answers you receive are tailored, precise, and grounded in your business's reality. This is like having a super-smart assistant that knows your business inside and out. And it is using the underlying model (GPTs) to contextualize and respond in a natural and intuitive way!
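
For the technically curious, here is roughly what that looks like under the hood. Below is a minimal sketch, in Python, of a request to the Azure OpenAI "on your data" preview REST API, which grounds a chat completion in your own search index. All the resource names, the index name, and the environment variables are hypothetical placeholders, and the exact API version string and response shape may differ for your tenant:

```python
# Minimal sketch: asking Azure OpenAI a question grounded in your own
# indexed documents (the "on your data" preview). All endpoints, names,
# and keys below are hypothetical placeholders.
import os
import requests

RESOURCE = "https://my-firm.openai.azure.com"  # hypothetical Azure OpenAI endpoint
DEPLOYMENT = "gpt-35-turbo"                    # hypothetical model deployment name

url = (
    f"{RESOURCE}/openai/deployments/{DEPLOYMENT}"
    "/extensions/chat/completions?api-version=2023-06-01-preview"
)

payload = {
    "messages": [{
        "role": "user",
        "content": "Can you find the liability clause from the Johnson contract last year?",
    }],
    # This block is what grounds the answer in YOUR data: it points the
    # model at an Azure Cognitive Search index of your documents.
    "dataSources": [{
        "type": "AzureCognitiveSearch",
        "parameters": {
            "endpoint": "https://my-firm-search.search.windows.net",  # hypothetical
            "key": os.environ["AZURE_SEARCH_KEY"],
            "indexName": "contracts-index",  # hypothetical index of past contracts
        },
    }],
}

response = requests.post(
    url,
    headers={"api-key": os.environ["AZURE_OPENAI_KEY"]},
    json=payload,
    timeout=60,
)
response.raise_for_status()

data = response.json()
# Depending on the API version, the grounded answer appears either under
# choices[0]["message"] or (in some preview versions) choices[0]["messages"].
choice = data["choices"][0]
answer = choice.get("message") or choice["messages"][-1]
print(answer["content"])
```

To be clear, Power Virtual Agents hides all of this behind its low-code interface; the sketch simply illustrates the kind of grounded request being made on your behalf, and it is why answers come back rooted in your documents rather than the open internet.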

Seamless Deployment and Integration

Making this even more powerful is the ease with which it can be set up. This intelligent chatbot capability can be brought right into tools that your team already uses every day—like Microsoft Teams, websites, and various products and solutions.

Consider the legal team again: they could have this chatbot right in their Microsoft Teams channel, where they already communicate about projects or cases. When a new client query comes in, they can immediately ask the bot for relevant information and precedents, all without leaving their Teams window.

The Future: Generative AI in Our Daily Tools

The integration of services like Azure OpenAI on Your Data into Microsoft's product line is a likely sign that the future of our daily work tools will be closely intertwined with generative AI. Imagine a future where your word processor not only corrects your grammar but can also suggest whole sections of a legal brief based on case law in your firm's database, or where your spreadsheet can predict future budget spend based on your practice teams' current and past performance data. These are not far-off possibilities, but rather the emerging reality.

What About Our Private and Secure Data?

For more info, go here.

The Good News and Bad News

I hesitated to share this for one main reason: working in the Microsoft ecosystem is not for the faint of heart. It is not intuitive and presumes a certain level of technical know-how. That is the bad news.

The good news is that many of us are Microsoft 365 customers. If we are, we can open a free Azure OpenAI Service account, or we are part of a team that has IT support and know-how. Either way, we can begin testing, learning, and building with this.

Sharing this again for max distribution

Free Access to My Course for Any Sitting Judge

In light of invaluable discussions with state ethics committees and inspired by insights from Ralph Artigliere, a revered retired judge from within the Brainyacts community, I am excited to present a special offer: free access to the "Generative AI Blueprint" course for every sitting judge, not just in the U.S. but globally.

If you are an active judge or are acquainted with one, I encourage you to reach out. Simply send an email to [email protected], confirming your current judicial position. This is a decent opportunity to get up to speed and understand the power (and risks) of conversational generative AI in the legal realm.

AI Model Notables

• ChatGPT costs $700,000 daily to run, and OpenAI may go bankrupt in 2024

• Google says AI systems should be able to mine publishers' work unless companies opt out

• Will OpenAI's GPT-5 be released by end of year?

• Google-backed Anthropic (Claude) raises $100 million from South Korea's SK Telecom

• Anthropic also released Claude Instant 1.2 - a faster, lower-priced (but highly capable) model

News You Can Use:

➭ Judicial ethics powerhouse Prof. Renee Knake Jefferson launches the Legal Ethics Roundup newsletter. She is covering AI-related topics.

➭ The New York Times Updates Terms of Service to Prevent AI Scraping Its Content

➭ California Innocence Project harnesses generative AI for work to free wrongfully convicted

➭ US Department of Defense Launches Generative AI Task Force

➭ FTC Shows Willingness to Use Extreme Measures to Tame AI

➭ Professors want to 'ChatGPT-proof' assignments and are returning to paper exams and requesting editing history to curb AI cheating

➭ Google and Universal Music working on licensing voices for AI-generated songs

➭ The design of AI impacts human emotions.

➭ AI Empowers Legal Firms: New Thomson Reuters Reports Highlight Transformative Impact

➭ Elon Musk says Tesla cars now have a mind, figured out 'some aspects of AGI'

➭ AI will be at the center of the next financial crisis, SEC chair warns

🤔🧠 A smart chart

Searches for "AI Capability Controls" continue to surge. AI capability control encompasses the supervision and governance of AI systems, with a specific focus on establishing clear boundaries, constraints, and standards. This is aimed at ensuring the safe, ethical, and responsible operation of AI technologies.

Was this newsletter useful? Help me to improve!


Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.