
109 | ☣️ ⚖️ Harmful and dangerous technology work

Brainyacts #109

In today’s Brainyacts we:

  1. share a recent webinar on practical AI from Clio

  2. see the SEC signaling more AI regs

  3. listen in on a sales call between a real human and an AI

  4. unpack the emerging legal concept of “harmful and dangerous technology work”

  5. play with a new AI sketch tool

  6. unveil how a top AI image boss admits to using images without consent

  7. explore the landscape of AI regulation around the globe 

  8. investigate how more small firms are harnessing AI to compete

  9. navigate through the Indian legal profession's use of AI 

  10. spotlight recent AI regulatory efforts in key nations 

  11. reveal how courts, not politicians, are shaping the first AI regs 

  12. discover how ChatGPT can match the top 1% of creative thinkers

  13. learn that media titan Barry Diller is preparing to sue AI companies

  14. listen to Gene Simmons (KISS bassist) and other rockers' opinions on AI

  15. affirm OpenAI's stance that ChatGPT and GPT-4 have not gotten dumber

👋 to new subscribers!

To read previous posts, click here.

💻👩‍🏫 Over 800 lawyers joined this webinar on AI

Why? It was filled with practical and pragmatic uses rather than platitudes and future-telling.

Watch the replay for free here and get access to the slide deck.

Disclaimer: I was part of the panel.

💰 🛂 SEC signals maturing AI = maturing AI regulation

US SEC Chair Gary Gensler warns of the need for regulatory changes in the face of the rapid adoption of AI in the financial sector, hinting at a potential revision of existing rules and the possible introduction of new ones.

Key Points:

  • AI's proliferation could heighten financial instability due to herding behavior.

  • Current risk management models need updating to adapt to the new wave of data analytics.

  • The SEC is looking closely at how businesses use AI, potentially implicating securities rules.

  • Misleading corporate statements about AI-related risks and opportunities could lead to an SEC crackdown.

  • The SEC may leverage more AI tools for better surveillance, analysis, and enforcement.

  • A potential new rule on conflicts of interest when using AI to steer clients towards products is being considered.

Why This Matters: Companies need to be mindful not only of how AI technology can advance their business and of the new regulatory implications that might be in play, but also of what they are saying about their AI capabilities. This call for stronger oversight indicates a maturing perspective on AI's role in the economy, aligning AI growth with the broader goal of fair and efficient markets.

🏷️ 🤖 Remarkable live AI sales call

This shows how advanced things are and how fast they are moving. Take a listen to this call between a real person and an AI. The AI is fully responsive and drives the conversation.

Yes, there is a pause after each exchange, but in today's world I think most people would attribute it to a tech-related time delay, not to an AI taking time to process and respond.

☣️ 🧭 Navigating the Complex Terrain of AI Development

The Emerging Legal Paradigm of "Harmful and Dangerous Technology Work"

The recent petition from Kenyan workers who engage in content moderation for OpenAI's ChatGPT, via the intermediary company Sama, has opened a new discussion in the global legal market. This dialogue surrounds the concept of "harmful and dangerous technology work" and its implications for laws and regulations within the expansive field of AI development and deployment.

The petition, asking for investigations into the working conditions and operations of big tech companies outsourcing services in Kenya, is notable not just for its immediate context but also for the broader legal issues it raises. Central among these is the notion of "harmful and dangerous technology work," a term that encapsulates the uniquely modern form of labor involved in developing and ensuring the safety of AI systems.

The term resonates with our historical understanding of hazardous work but places it within the digital age's context. The content moderation undertaken by these workers entails exposure to graphic and psychologically damaging material, akin to the emotional and mental toll experienced by workers in physically hazardous industries. While the dangers in this digital arena may not be physical, they are no less real.

The economic underpinnings of this "harmful and dangerous technology work" provide a stark view of the disparities within the global AI industry. Contracts between OpenAI and Sama reviewed by TIME, signed in late 2021, were valued at about $200,000 in total for content moderation work that exposed workers to distressing and harmful content. Despite the psychologically taxing nature of the job, the wages these workers received were disproportionately low. The most junior data labelers, or agents, earned a basic salary of 21,000 Kenyan shillings ($170) per month, with monthly bonuses and commissions for meeting performance targets that could elevate their hourly rate to just $1.44, a far cry from the $12.50 hourly rate that OpenAI paid Sama for their work. This discrepancy raises crucial questions about the fair distribution of economic benefits in the AI value chain.

The legal challenges associated with this type of work are manifold. First, there's the question of defining the concept. What exactly constitutes "harmful and dangerous technology work?" Where do we draw the line in the continuum of discomfort, distress, and harm? Clarity on these questions is crucial for devising legal solutions and protective measures for workers.

Next comes the challenge of jurisdiction. With AI work being outsourced across borders, whose responsibility is it to ensure worker safety and welfare? Is it the outsourcing country, the country where the work is being done, or a cooperative global effort? The answer to this question will have far-reaching implications for legal accountability and the enactment of protective legislation.

Moreover, worker protection must extend beyond traditional mechanisms. Ensuring adequate remuneration, psychosocial support, and clear communication channels for airing grievances are just the starting points. The advent of "harmful and dangerous technology work" necessitates the development of new protective frameworks that recognize the unique stressors associated with this line of work.

With AI becoming increasingly integrated into our society, it is also pertinent to consider how this term can influence the development and use of generative AI. Algorithms that inadvertently generate harmful content, or those which demand a high level of human moderation, could potentially fall under the category of "harmful and dangerous technology," influencing how they are legally treated and controlled.

✍️ 🎨 Stable Doodle, an innovative tool for AI-enhanced visual creation

Developed by Stability AI, this tool allows for unprecedented control over image creation, challenging traditional text-only AI workflows with a smart and intuitive interface. And it is fun!

Stable Doodle converts simple sketches into vibrant, fully developed images. It's part of Stability AI's Clipdrop generative AI tool suite and is available for free. Whether you're sketching a basic object or assembling a professional-level pitch or presentation deck, Stable Doodle can turn your concept into a colorful image in seconds, mirroring the feeling of working alongside a professional art team.

Here's a quick guide on how to use it:

  1. Draw your desired object in black and white on the Stable Doodle canvas.

  2. Write a prompt that directs Stable Diffusion on how to interpret your drawing.

  3. Click the "generate" button.

Instantly, four different variations of your concept appear, impressively rendered and detailed. For example, drawing a basic chair and using the prompt "Revolving door - photorealism" brings forth unique, detailed renderings of your design.
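For the technically inclined, Stability AI also exposes Clipdrop features through a developer API, so a sketch-plus-prompt workflow like the one above can be scripted. Here is a minimal Python sketch of what such a call might look like; the endpoint URL, form-field names, and API-key header are assumptions modeled on Clipdrop's API conventions, not confirmed values, so check the official Clipdrop documentation before relying on them:

```python
# Hypothetical sketch-to-image call against a Clipdrop-style API.
# The endpoint, field names, and header below are illustrative assumptions;
# consult Stability AI's Clipdrop API docs for the real contract.
import requests

API_URL = "https://clipdrop-api.co/sketch-to-image/v1/sketch-to-image"  # assumed endpoint
API_KEY = "your-clipdrop-api-key"  # placeholder credential

def doodle_to_image(sketch_path: str, prompt: str, out_path: str) -> None:
    """Send a black-and-white sketch plus a text prompt; save the rendered image."""
    with open(sketch_path, "rb") as sketch:
        response = requests.post(
            API_URL,
            files={"sketch_file": ("sketch.png", sketch, "image/png")},  # assumed field name
            data={"prompt": prompt},
            headers={"x-api-key": API_KEY},
            timeout=60,
        )
    response.raise_for_status()
    with open(out_path, "wb") as out:
        out.write(response.content)  # assumes the API returns raw image bytes

if __name__ == "__main__":
    doodle_to_image("chair_sketch.png", "Revolving door - photorealism", "render.png")
```

Treat this purely as the shape of the call: authentication, rate limits, and the exact response format are defined by Stability AI's documentation.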

News you can Use: 

AI regulation around the world (useful resource)

Was this newsletter useful? Help me improve!

With your feedback, I can improve the letter. Click on a link to vote:


DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.