
Hello to all ~10,000 of you from around the globe. Thank you for reading and sharing!
TL;DR
Tools like Claude Cowork and OpenClaw are early examples of AI systems that update their own memory and refine how they operate over time. This is not just smarter output. It’s a self-improving workflow. That shifts your role from AI user to supervisor of an evolving system.
BREAKING: As I was finishing this edition, Anthropic rolled out enterprise updates to Cowork that explicitly included legal integrations with Thomson Reuters CoCounsel, Harvey, LegalZoom, and Lexis+. That is not a side experiment. It is a signal.
The operator model I dig into below is being formalized for enterprise deployment. Private plugin marketplaces. Role-specific agents. Cross-application orchestration. Persistent memory.
Your organization may not adopt this tomorrow. But this is how these systems will be embedded when they arrive.
Welcome to My New Reality (yours soon)👇
Let me show you something.

Right now, on my machine, this is my master directory for Claude Cowork. Inside it are subfolders for memory files, workflow instructions, persistent notes, session logs, drafting templates, and task definitions. It looks nothing like an LLM chat tool. It’s more like I am managing a database.
Because that is what I am doing . . . sort of.
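To give you a feel for it, here is a simplified, hypothetical sketch of what that layout looks like. The folder and file names are my own illustration, not a structure Cowork requires:

```
cowork-master/
├── instructions/        # workflow instructions the agent follows
│   └── drafting-rules.md
├── memory/              # memory files the agent reads and updates
│   ├── preferences.md
│   └── session-logs/
├── notes/               # persistent notes that survive across sessions
├── templates/           # drafting templates
└── tasks/               # task definitions
    └── weekly-review.md
```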
Welcome to my new reality. I am no longer “chatting with” or “using” AI. I am operating it.
That means I can review what it writes about my preferences (yes, it does that). I can check what it decides to remember (yes, it does that too). I refine its working instructions (“we” do this together). I watch how it plans multi-step tasks before it executes them. I pay attention as it updates its own understanding of my context and work patterns. Think about all of this for a second . . .
I know that screenshot is not sexy. Nor is it elegant. Not yet. It requires organizational discipline. I am not naturally wired for that, which makes this a humbling exercise.
But here is the punchline. My productivity has easily 10x’ed, even with the added burden of managing files. My prompts no longer need me to re-add context every time. The model remembers. But it doesn’t keep just one memory the way Google Gemini or OpenAI’s ChatGPT does when you allow it to “remember” you. No, this is stratified memory. Controllable memories. Context-specific memories.
I can control whether it is working in my “professor” or “strategy advisor” contexts. And these contexts are not just information but instructions for how to behave, think, plan out its work, and sequence its thinking.
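To make that concrete, here is a hypothetical sketch of what one of those context files might contain. The format and contents are my illustration, not a required structure:

```
# contexts/strategy-advisor.md (illustrative)

## Role
Strategy advisor to legal industry leaders.

## How to work
- Lead with the business implication, then the legal detail.
- Plan multi-step tasks and show me the plan before executing.
- Keep memos under two pages unless I say otherwise.

## What to remember
- Active engagements and their current status.
- Phrasing and document structures I have approved before.
```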
If you are a Marvel Avengers fan, think of Jarvis, the all-knowing AI assistant Tony Stark talks to. Now, this is definitely NOT that Jarvis. But it is like having micro-baby Jarvises living in my different professional worlds: staying up to date on what I am working on, how I am thinking, and what I need to do, all without me having to tell it every time or check that it remembered. It just does.
The point is, this is the front edge of where AI is going. By the time this workflow reaches you in polished form, it will be smoother, smarter, and easier to manage. What feels like manual oversight today will become structured supervision tomorrow.
The important shift is not the interface. It is the identity.
You too will soon be moving from user to operator.
And that shift is happening because AI systems are learning to improve themselves.
What Is Actually Changing

We are entering a phase where AI does not just answer questions. It adjusts how it answers them. It refines its own processes. It compounds small improvements over time.
Along with Jimmy’s statement above, at Davos this year, Google DeepMind CEO Demis Hassabis publicly questioned whether the self-improvement loop they are building can eventually close without human involvement. That was not philosophical speculation. It was a description of an active engineering target across major labs.
Every frontier company is working toward some version of recursive self-improvement. AI that contributes to the design, evaluation, and refinement of the next generation.
That sounds abstract. Academic. Hype. It is not.
Let me break this down for you in a pragmatic way.
What is recursive self-improvement?
Best way to say it . . . Recursive self-improvement is when an AI system not only produces answers but also improves the way it produces answers, then repeats that cycle over time.
The Three Layers of Self-Improvement
First, models are learning to think harder, not just faster. Instead of simply producing a fast answer, they allocate compute (think “power”) dynamically. They reason step by step. They explore alternatives before committing to answering your prompt. A growing share of AI capability now happens at the moment of inference (when you prompt them), not just during training (when they are born). The system chooses how much effort to apply while it’s interacting with you!
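If you want the intuition in code, here is a deliberately toy Python sketch of that idea. Nothing here is how a real model works internally; the function names and the arithmetic “task” are my stand-ins. The point is the shape: the same question gets a better answer when the system is allowed to spend more effort checking itself.

```python
import random

def propose(question):
    """Toy stand-in for a model sampling one candidate answer."""
    x, y = question
    return x * y + random.choice([0, 0, 0, 1, -1])  # usually right, sometimes off

def verify(question, answer):
    """Toy self-check: re-derive the product a different way and compare."""
    x, y = question
    return answer == sum([x] * y)

def answer_with_effort(question, budget):
    """Spend more inference-time compute on the same question: keep
    sampling candidates until one passes the self-check or budget runs out."""
    candidate = propose(question)
    for _ in range(budget - 1):
        if verify(question, candidate):
            break
        candidate = propose(question)
    return candidate

print(answer_with_effort((7, 8), budget=1))   # quick answer, sometimes wrong
print(answer_with_effort((7, 8), budget=10))  # more deliberation, usually right
```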
Second, models are beginning to teach themselves. They generate their own practice problems, attempt solutions, score those solutions, and then adjust how they approach the next round. The system is not waiting for humans to label every example. It is creating feedback for itself.
In simple terms, it writes the test, takes the test, grades the test, and then studies the results.
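In the same toy spirit, here is a minimal Python sketch of that write-take-grade-study cycle. Again, everything is illustrative; real self-training pipelines are vastly more sophisticated, but the loop has this shape:

```python
import random

def take_test(x, y, skill):
    """Toy 'model': adds two numbers, noisier when skill is low."""
    noise = random.choice([0] * skill + [1, -1])
    return x + y + noise

def self_teach(rounds=6):
    skill = 1  # the thing the loop improves
    for r in range(rounds):
        # 1. Write the test: generate its own practice problems.
        problems = [(random.randint(1, 99), random.randint(1, 99)) for _ in range(200)]
        # 2. Take the test.
        answers = [take_test(x, y, skill) for x, y in problems]
        # 3. Grade the test against a checkable ground truth.
        score = sum(a == x + y for a, (x, y) in zip(answers, problems)) / len(problems)
        print(f"round {r}: skill={skill}, score={score:.0%}")
        # 4. Study the results: adjust the approach for the next round.
        if score < 0.99:
            skill += 2
    return skill

self_teach()
```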
Third, labs are openly pursuing systems that assist in building future systems. AI helping with AI research. AI accelerating AI development. That loop compounds capability at the foundation level.
Yes, AI is building AI. In fact, Anthropic built Cowork largely using its own AI coding tools. As Anthropic EVP Simon Smith posted publicly the day it launched: “Claude Code wrote all of Claude Cowork. Can we all agree that we’re in at least somewhat of a recursive improvement loop here?”
That’s not marketing copy. That’s this loop in action.
The Agent on Your Desktop
When I point Claude Cowork at a folder, it does not just summarize documents. It plans tasks. It creates files. It maintains persistent notes about how I prefer things structured. Those notes survive across sessions.
OpenClaw goes even further in transparency. Its memory lives in files too. It writes to them. It edits them. It builds a durable record of what it has learned about how to operate.
This is bounded self-improvement.
What is OpenClaw?
OpenClaw is an open-source AI agent framework that gained rapid attention in early 2026 because it demonstrated something tangible: a locally run agent that could modify its own memory and workflow instructions in plain sight.
Unlike most cloud tools, OpenClaw stores its memory, preferences, and skills as editable files on your machine. The agent reads them, updates them, and builds on them over time. You can inspect every change.
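For a sense of what that looks like in practice, a memory file in a setup like this might read something like the following. The file name and entries are my invention for illustration, not OpenClaw’s actual format:

```
# memory/2026-02-03.md (hypothetical)

- User prefers contract summaries as issue lists, not paragraphs.
- The PDF export step failed twice; generating HTML first and
  converting works reliably. Keep doing that.
- Next session: re-check citation format in the motion template.
```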
What made it important was not virality. It was visibility. It showed what bounded recursive self-improvement looks like when it leaves the lab and lands on a professional’s desktop.
That is the part lawyers should pay attention to.
All of this is to say: that is why my file structure above looks like I am managing a brain.
Because I am supervising a system that updates its own playbook.
Why This Matters for Legal Professionals
Soon, the tools in your workflow will no longer be static capability layers. They will be compounding systems.
The version you evaluate today will not behave identically six months from now. It will reason more effectively because upstream models improved. It will execute workflows more cleanly because product scaffolding matured. It may even refine its internal memory more intelligently based on your interactions.
If governed well, this compounds leverage. It remembers phrasing that worked. It internalizes jurisdiction preferences. It preserves structural choices you repeat. Small efficiencies stack into meaningful advantage.
If governed poorly, it compounds error. A flawed instruction persists. A careless preference becomes encoded. A drafting shortcut becomes habit.
This is why the operator mindset matters.
You are not just typing prompts. You are defining constraints, reviewing memory persistence, validating outputs, and intervening when the loop drifts.
Two lawyers using the same tool will produce very different results. The difference will not be subscription tier. It will be systems thinking and supervisory discipline.
The Bottom Line
The future of legal AI is not just smarter answers. It is smarter systems that refine how they produce those answers.
My folder structure today looks messy because I am operating the early version. It requires attention and oversight. It requires me to think like a systems designer, not just a lawyer.
By the time this reaches you in mature form, it will be easier. But the identity shift will remain.
You are moving from user to operator.
The loop is already turning. The only real question is whether you are directing it or letting it run unattended.
That is it for now. Talk soon again.

To read previous editions, click here.
Was this newsletter useful? Help me to improve!
Who is the author, Josh Kubicki?
I am a lawyer, entrepreneur, and teacher. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.
