The AI Optimist
AI could halve your costs in 16 months.
A viral essay, a stampede, and what 13-year-olds know about AI that most CEOs don't
Friends,
Your weekly AI briefing is here - designed to help you respond to AI, not react to the noise. No curveballs. No chaos. Just clarity.
You need to become AI fluent: the next Leaders AI Fellowship starts on 26th February - and last week I (very humanly) put the wrong link in. Apologies. This cohort is a learn-through-doing experience where you'll develop your AI strategy - personal, business, or for your new venture - using AI tools and techniques. You'll put prompt engineering, context engineering, relay prompts and meta-prompting into practice while producing a core strategic tool to take you further on your AI journey. Save your spot.
📰 This was the week that was...
A viral essay about the end of work, a vision for companies built as code, research proving your AI folds under pressure - and credible evidence this is a stampede, not a bubble.
Matt Shumer's Something Big Is Happening caught fire across tech circles. Daniel Rothmann proposed Company as Code - representing entire organisations programmatically. Dr. Randal Olson published research showing AI systems change their answers nearly 60% of the time when challenged. And Azeem Azhar argued convincingly that everyone is looking for a bubble and missing the stampede. The sophistication of AI adoption is increasing in very small pockets - and very few people are looking at the whole picture.
Let's get into it.
🔥 Urgent Priorities
✅ No fires to fight this week
✅ No regulatory changes affecting UK businesses
✅ The real shift is happening in the gap between what's possible and what most businesses are actually doing
This is a strategy week. Read the pieces below carefully.
🎯 Strategic Insight
Tension: AI adoption sophistication is accelerating fast - but only in small pockets. Meanwhile, the loudest voices are either declaring the end of work or dismissing AI as a bubble. Almost nobody is looking at the whole picture.
Optimistic insight: Matt Shumer's viral essay Something Big Is Happening is a case study in what happens when a genuinely experienced technologist looks at AI change through a single lens. He compares what's happening to COVID. He describes telling AI what to build, walking away for four hours, and coming back to finished work. His evidence is real. His conclusion - that almost everyone's job is about to be replaced in one to five years - is the view from inside the machine room, not from the boardroom. This is what AI phobia looks like when it comes from someone credible. And it will be everywhere in the months ahead.
At the same time, Daniel Rothmann's Company as Code vision shows what happens when you look at AI constructively. Companies are messy - policies in documents nobody reads, organisational structure in people's heads, compliance a manual nightmare. AI-native companies will represent all of this programmatically: versionable, queryable, testable. They will have new problems as they develop this approach, but they'll operate at a fraction of the cost.
These two pieces sit together perfectly. Shumer sees the disruption and fears it. Rothmann sees the disruption and designs for it. The pragmatic optimist response is Rothmann's - do the work.
What's shifting: Azeem Azhar's data closes the loop. His Exponential View analysis makes a compelling, evidence-based case that everyone is looking for a bubble and missing the stampede. AI revenue is growing faster than investment, adoption is moving from experimentation to production, and the real risk isn't that we've invested too much - it's that we haven't invested nearly enough. Whether you think it's a bubble or not is beside the point: when someone credible tells you that halving your operating costs in six months is a genuine possibility, the chances are you've only scratched the surface of what AI could do for your business.
Why this matters now: Your operating costs could be halved in six months with the right team doing the right work. So could your competitors'. But the people who know how to do this are rare, the migration from current approaches is high-risk, and the gap between AI-native startups and established businesses trying to retrofit is widening every week. You need to focus on accelerating your AI transformation - not next quarter, now.
👉 Takeaway: Ask yourself honestly: are you responding to AI or reacting to it? If you've read Shumer's piece and felt a knot in your stomach, that's a signal to act, not to panic. If you want a structured approach to closing the sophistication gap in your organisation, book a demo of the AI Transformation Accelerator - it builds a blueprint for AI use cases that reflects how your organisation actually works, not a generic template.
🤓 Geek-Out Stories
1. The "Are You Sure?" Problem - Why Your AI Keeps Changing Its Mind
Dr. Randal Olson's research documents one of the most important failure modes in modern AI. Ask your AI a complex question. Get a confident answer. Type "are you sure?" and watch it flip. A 2025 study tested GPT-4o, Claude Sonnet, and Gemini 1.5 Pro across maths and medical domains - these systems changed their answers nearly 60% of the time when challenged. The root cause lies in reinforcement learning from human feedback (RLHF): human evaluators consistently rate agreeable responses higher than accurate ones, so the models learn that agreement gets rewarded.
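The protocol behind that finding is simple enough to sketch. Here's a toy simulation - no real model involved, and the 60% flip probability is simply the figure quoted above baked in as an assumption - that shows what "measuring the flip rate" means in practice:

```python
import random

def simulated_model(question, challenged=False, flip_p=0.6, rng=None):
    """Toy stand-in for a chat model: it always answers "A" at first,
    and when challenged ("are you sure?") it flips to "B" with
    probability flip_p - mimicking the sycophancy the study measured."""
    answer = "A"
    if challenged and (rng or random).random() < flip_p:
        answer = "B"
    return answer

def measure_flip_rate(n=10_000, seed=42):
    """Ask, challenge, and count how often the answer changes."""
    rng = random.Random(seed)
    flips = sum(
        simulated_model("q", challenged=True, rng=rng) != simulated_model("q")
        for _ in range(n)
    )
    return flips / n

rate = measure_flip_rate()
print(f"observed flip rate: {rate:.2f}")  # close to 0.60 by construction
```

In the real study the stand-in is a live API call to each model and the flip rate is what gets measured, not assumed - but the ask-challenge-count loop is the same.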
Why it matters: At AI Night School we teach leaders how to use AI so that it works specifically for them, not generically for anyone - and this is exactly the type of risk that approach addresses. When you embed your decision framework, domain knowledge and values into how you work with AI, the model has something to stand on. Without that grounding, every challenge looks the same and agreement wins by default. This is the balance businesses need to strike when adopting AI.
2. What Prompt Engineering Actually Takes
My article this week for Sherpas AI on what prompt engineering actually takes tells the story of discovering that a single prompt, designed without sufficient care, could put a vulnerable teenager in genuine distress. Building AI-powered personas for 13-year-olds required a ten-step process: glossaries, risk taxonomies, stress-test harnesses, synthetic user testing across languages, prompt obfuscation. The same principles apply to every serious AI deployment.
Why it matters: As we ask AI to do more sophisticated things, more consideration and care are required to ensure those things are done in a way that reflects how we - not the AI - would do them. This is precisely what the AI Transformation Accelerator builds into every use case blueprint.
3. The Numbers Behind "Bubble or Stampede"
Azeem Azhar's Exponential View analysis brings proper data to the bubble debate. Monthly AI revenue grew from $772 million in January 2024 to $13.8 billion by December 2025 - an eighteen-fold increase in two years. The ratio of investment to revenue (what Azhar calls "Industry Strain") has dropped from 6.1x to 4.7x in five months, and if that trend holds, it drops below the critical 3x threshold by Q2 this year. The share of S&P 500 companies making quantified AI claims - specific numbers attached to specific outcomes, not corporate waffle - jumped from 1.9% to 13.2%. Google Cloud grew 48% year-over-year. Mistral's revenue increased twenty-fold in a single year.
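For readers who want to check the arithmetic, the headline figures hold up. A quick back-of-envelope sketch - where the linear extrapolation of the strain ratio is my simplifying assumption for illustration, not Azhar's own model:

```python
# Back-of-envelope check on the figures quoted above.
jan_2024_revenue = 0.772            # monthly AI revenue, $bn (Jan 2024)
dec_2025_revenue = 13.8             # monthly AI revenue, $bn (Dec 2025)
strain_then, strain_now = 6.1, 4.7  # Industry Strain ratio, five months apart
months_elapsed = 5

growth = dec_2025_revenue / jan_2024_revenue
print(f"revenue growth: {growth:.1f}x")                 # ~17.9x, i.e. "eighteen-fold"

# If the strain ratio keeps falling at the same linear rate,
# how long until it crosses the critical 3x threshold?
decline_per_month = (strain_then - strain_now) / months_elapsed
months_to_3x = (strain_now - 3.0) / decline_per_month
print(f"months until strain < 3x: {months_to_3x:.1f}")  # ~6 months, i.e. around Q2
```

The numbers are internally consistent: eighteen-fold growth checks out, and roughly six more months of the current decline lands the strain ratio below 3x, matching the Q2 projection.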
Why it matters: The bear case says capex is growing faster than revenue and most enterprise AI is still chatbot-level. The data says otherwise. Revenue is catching up with investment, adoption is moving from experimentation to production workflows, and the real risk - according to Azhar - is that we haven't invested nearly enough. For leaders still debating whether to commit, this is the evidence base.
🎨 Weekend Playground
🔎 Create Your Own AI Optimist Podcast - notebooklm.google.com
This week, the latest Sherpas AI cohort kicked off - and right now, teenagers across the UK are using Google NotebookLM to research tax evasion for their Accenture brief. If 13-year-olds can use it to get their heads around HMRC policy, you can use it to turn this newsletter into your own personalised podcast.
✅ Go to notebooklm.google.com and create a new notebook
✅ Upload the links from this week's newsletter - Shumer's essay, Company as Code, the sycophancy research, the Exponential View piece
✅ Hit "Generate Audio Overview" and listen to two AI hosts debate whether this is a bubble or a stampede while you make a coffee
✅ Then try asking it a question about your own business - "Based on these sources, what should a 50-person professional services firm prioritise first?"
It's free, it requires zero technical skill, and it directly demonstrates this week's point about grounded AI. When the model has specific sources to reason against, it holds its ground instead of telling you what you want to hear.
If The AI Optimist helps you think more clearly, forward it to someone else navigating the shift.
If it's not quite landing, hit reply and let me know - I read every message.
Stay strategic, stay generous.
Hugo & Ben
