AI Adoption's Real Problem: You're Thinking Too Small
Claude Cowork, healthcare's trust test, deepfake laws, and why your teams need permission to be more ambitious.
Friends,
Your weekly AI briefing is here - designed to help you respond to AI, not react to the noise. No curveballs. No chaos. Just clarity.
📰 This Was the Week That Was
Trust became the story. Whether we're learning to trust AI with our healthcare, our creative work, or simply catching escaped monkeys - this week laid bare the central challenge of AI adoption: can we learn to let go?
Healthcare made headlines as both OpenAI and Anthropic launched dedicated health platforms, each betting that trust in AI will unlock healthcare's most stubborn bottlenecks. Meanwhile, Anthropic released Claude Cowork - extending agentic AI to non-coders and asking a profound question: can business leaders trust AI to do actual work, unsupervised?
The flipside? Grok's deepfake scandal prompted the UK government to bring new criminal deepfake offences into force this week, while St. Louis officials hunting escaped monkeys were buried in AI-generated fake sightings. Trust works both ways - and this week showed us what happens when it breaks.
Let's get into it.
🔥 Urgent Priorities
✅ Nothing urgent this week - use the time to explore Claude Cowork if you're on a Max plan.
✅ If you're building AI strategy, this week's Anthropic controversy (see Strategic Insight) demands your attention.
🎯 Strategic Insight
Trust No One (Provider)
Tension: Anthropic gave us a masterclass in cognitive dissonance this week. On Sunday, they released Claude Cowork - a genuinely transformative tool that lets non-coders delegate real work to AI. In the same week, they blocked third-party coding tools from using Claude subscriptions, breaking workflows for thousands of developers overnight.
The irony is sharp. Claude is truly awesome. It's changing how we work. And yet even that is fragile - dependent on a provider's commercial decisions that can shift without warning.
The lesson: Your AI strategy cannot depend on a single provider. Trust no one provider completely.
The fallout was immediate. OpenAI responded within days, officially welcoming developers to use their subscriptions with competing tools. One analyst wrote that Anthropic "just found itself in a classic prisoner's dilemma with OpenAI - and OpenAI defected."
But here's the deeper strategic point: The real trust challenge isn't with providers. It's with your own teams.
I sat down with someone last week to show them Claude Code. Within minutes, their expression changed. Not confusion - recognition. They realised they hadn't been ambitious enough in what they were asking AI to do.
This is the adoption challenge nobody talks about. The technology is ready. The capability is there. But most teams are still thinking too small - asking AI to edit a paragraph when it could draft the entire strategy document.
Why it matters now: The organisations that pull ahead won't be the ones with the best AI tools. They'll be the ones whose people learn to trust AI with bigger tasks - and who build that trust on a foundation of provider independence, so no single vendor's terms-of-service change can crater their workflows.
✅ Takeaway: Two questions for your next leadership meeting: (1) How many of our critical AI workflows would break if our primary provider changed terms tomorrow? (2) Are we being ambitious enough in what we're delegating to AI - or are we still thinking like it's 2024?
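What does provider independence look like in practice? Here's a minimal sketch, assuming Python, the official `openai` and `anthropic` SDKs, and placeholder model names: route every call through one thin adapter, so a vendor change becomes a config flip rather than a migration.

```python
# Illustrative sketch only - model names are placeholders, not recommendations.
import os

from anthropic import Anthropic
from openai import OpenAI


def ask_anthropic(prompt: str) -> str:
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text


def ask_openai(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


# The only provider-specific code lives above this line. Everything else in
# the workflow calls ask() and never knows which vendor is behind it.
PROVIDERS = {"anthropic": ask_anthropic, "openai": ask_openai}


def ask(prompt: str) -> str:
    return PROVIDERS[os.environ.get("LLM_PROVIDER", "anthropic")](prompt)
```

The point isn't the wrapper itself - it's that if a provider changes its terms tomorrow, the blast radius is one environment variable, not every workflow you've built.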
🤓 Geek-Out Stories
1. Healthcare's Trust Test - OpenAI & Anthropic
Both AI giants launched healthcare platforms this week. OpenAI's ChatGPT Health connects to 2.2 million US healthcare providers via b.well, letting users sync medical records and wellness apps. Anthropic's Claude for Healthcare targets clinical workflows - prior authorisation, insurance appeals, patient messaging triage.
Why it matters: Healthcare is highly regulated and deeply personal. If consumers trust AI here, they'll trust it anywhere. Over 230 million people already ask ChatGPT health questions weekly. These launches signal that AI companies believe the trust threshold has been crossed.
2. Claude Cowork: The Trust Leap - Simon Willison
Simon Willison's first impressions of Claude Cowork cut to the heart of the matter: this is "Claude Code for the rest of your work." Point it at a folder, describe what you want done, walk away. Come back to finished work. The security model is impressive - a full Linux VM sandbox - but the psychological shift is bigger. As one early user put it: "The interaction model shifts from synchronous conversation to asynchronous delegation. This shift requires trust."
Why it matters: One of the hardest parts of using AI is simply trusting it can do the job. Cowork was built in just 14 days - using Claude Code. That recursive capability is either terrifying or thrilling, depending on your perspective.
3. AI Code Reviews Cut Incident Risk by 22% - AI News
Datadog ran AI code reviews against historical outages that had already bypassed human review. The result: in 22% of the incidents examined, their AI agent flagged the issue that would have prevented the failure. This is the kind of concrete evidence that shifts AI from "nice to have" to "operationally essential."
Why it matters: For those asking "where's the ROI in AI?" - here it is. Preventing incidents. Avoiding downtime. Building reliability. Trust is earned through outcomes, not promises.
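The article doesn't publish Datadog's implementation, but the basic mechanic of an AI code review is simple enough to sketch. A minimal illustration, assuming Python and the official `openai` SDK - the prompt and model name are placeholders, not Datadog's actual setup: hand the model the same diff a human reviewer would see, and surface whatever it flags.

```python
# Hypothetical sketch of an AI code-review pass - not Datadog's system.
import subprocess

from openai import OpenAI

REVIEW_PROMPT = """You are reviewing a code change before deployment.
List any defects that could plausibly cause a production incident
(error handling, concurrency, resource leaks, config mistakes).
If you find none, reply with exactly: NO ISSUES FOUND."""


def review_diff(base: str = "main") -> str:
    # Grab the pending change exactly as a human reviewer would see it.
    diff = subprocess.run(
        ["git", "diff", base],
        capture_output=True, text=True, check=True,
    ).stdout

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": diff},
        ],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    print(review_diff())
```

Even a crude version of this, wired into CI, is a second pair of eyes that never gets tired - which is exactly the gap Datadog's numbers suggest humans leave open.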
🎨 Weekend Playground
If you're on a Claude Max plan (macOS only for now), spend some time this weekend with Cowork. Start small - point it at your Downloads folder and ask it to organise and rename files intelligently. Or give it a folder of receipts and ask for an expense spreadsheet.
The real experiment? Notice how it feels to delegate work to AI and walk away. That discomfort is the trust barrier. The more you practise, the lower it gets.
If this briefing helped you think more clearly about AI, forward it to a colleague who's struggling to separate signal from noise.
Stay strategic, stay generous.
Hugo & Ben
