2026 is the year our comfort ends

Alex Smith on ego death, my research on the 94 building blocks of knowledge work, and why 43% of what we do is irreducibly human.

Friends,

your weekly AI briefing is here - designed to help you respond to AI, not react to the noise. No curveballs. No chaos. Just clarity.

🎬 This Week's Promo

If you have a teenager: get them to learn AI skills, gain work experience, and get paid

If you know a teenager who's curious about AI, Katherine Walker at Sherpas AI is running paid AI work experience placements in February. They work on a real AI project with Accenture, learn real AI skills, and get paid real money. Only a couple of places are left, so if there's a teenager in your life who you want to thrive in our new AI world of work, get on this ASAP.

I recorded a quick video explaining why this matters - especially for parents navigating what "career readiness" looks like now.

*Sherpas is the social enterprise I founded to teach teenagers the skills they need. We've already trained over 5,000 teenagers. Anyone who knows about it will rave about it... For transparency, I'm still on the board and have an interest in the business.

📰 This was the week that was...

Congratulations. You survived Blue Monday. And just as the most depressing week of the year ended, Alex Smith published a piece that stopped me in my tracks. His argument: 2026 is the year our comfort ends.

Not because AI is taking jobs. But because it's demanding something harder: ego death. As Alex puts it, we think we're adaptable, but what we really mean is "I like travelling to new places" or "I like redecorating my home." That's not actual change. Real change means admitting we were wrong, that our current capabilities are sub-adequate, that we need to unravel our ways of doing things. And that's painful.

To balance the weight of that, my friend Henry Coutinho-Mason (author of The Future Normal) sent me something that made me smile. He's vibe-coded a glossary of new words for the AI age. "Draftgravitas" - the unearned authority of whatever the AI wrote first. "Pilotpurgatorium" - forever testing, never deploying.

Let's get into it.

🔥 Urgent Priorities

✅ No fires to fight this week
✅ Strategic foundations shifting - Alex is right: coasting is becoming more dangerous with every passing day
✅ Time to look at what you scorn - that's where change must start

This isn't a week for panic. It's a week for honest self-examination.

🎯 Strategic Insight

Tension: Alex nailed it: "For decades, this natural human tendency has been fine - because things have been stable enough for most of us to 'find our groove' and stick to it." We could coast on a certain qualification, a certain skill, a certain customer base, a certain market position. Not anymore. The opportunities to be proven wrong are exponentially more numerous than they were before.

The research behind this: I've spent the past month building something I call the Knowledge Work Primitives Ontology - a systematic map of every atomic unit of knowledge work. It identifies 94 primitives (the irreducible building blocks of cognitive, relational, and accountability work) across 12 categories.

The methodology was rigorous: validated against 500 real-world jobs to be done across 12 professional contexts - from millisecond hedge fund trading decisions to multi-year artistic development, from solitary cheesemaking to holding space for the dying in hospice chaplaincy.

Here's what the data shows:

Actor              Primitives   Share
AI-Dominant            18        19%
Hybrid                 36        38%
Human-Dominant         32        34%
Human-Exclusive         8         9%

Read that carelessly and you might conclude "AI can do 57% of knowledge work." That's exactly the wrong takeaway.

The AI Optimist lens: Here's what's actually happening: knowledge work today is heavily weighted towards doing. Classifying, extracting, summarising, scoring, monitoring, restructuring - these are the tasks that fill our days. AI is brilliant at them. And that's wonderful news.

Because the 43% that remains human-dominant or human-exclusive? That's where the meaning lives.
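Both headline numbers fall straight out of the table. A quick sketch of the arithmetic (the labels mirror the table; the grouping into "AI can touch it" vs. "human core" is my shorthand, not a term from the ontology):

```python
# Share of the 94 knowledge-work primitives by dominant actor,
# using the counts from the table above.
counts = {
    "AI-Dominant": 18,
    "Hybrid": 36,
    "Human-Dominant": 32,
    "Human-Exclusive": 8,
}
total = sum(counts.values())  # 94 primitives in all
shares = {actor: round(100 * n / total) for actor, n in counts.items()}

# The careless reading: AI-Dominant + Hybrid -> "AI can do 57%"
ai_touch = shares["AI-Dominant"] + shares["Hybrid"]                  # 57
# The human core: Human-Dominant + Human-Exclusive
human_core = shares["Human-Dominant"] + shares["Human-Exclusive"]    # 43
```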

The human primitives include: Mentoring. Negotiating. Storytelling. Confronting. Nurturing. Witnessing. Celebrating. Grieving. Discerning. Trusting.

Not a single one of them is about doing. Every single one is about being.

I've also mapped 67 human capabilities across eight domains - the attributes we uniquely bring to work. Many are flagged as "Protected" (stable 10+ years from AI encroachment): Physical Vulnerability. Embodied Empathy. Moral Courage. Dignity Recognition. Meaning Making. Hope Maintenance. Resilience Building.

What's shifting: The cost of doing is collapsing to near zero. When everyone can do, doing no longer differentiates. The value of being - judgement, taste, connection, context, care, courage - is rising.

Work will be much more enjoyable when your value comes from the truly human traits rather than from grinding through cognitive tasks that drain you.

Why this matters now: Alex's advice is perfect: "Look to what you scorn. What do you look on with disdain? What do you think is stupid? What do you wish would just go away? That's the ego-defence. That's where your change must start."

As he puts it: "We are living through a winter, not a spring." But here's the optimistic reframe: the gate of the cage is open. And as we stretch our limbs and step out into the sunlight, the instinct will spark again.

👉 Takeaway: This week, audit your work through the doing/being lens:

  • List your top 10 weekly activities

  • For each, ask: "Is this doing or being?"

  • For the "doing" tasks: Could AI handle 80% of this? (Probably yes)

  • For the "being" tasks: Am I investing enough time here? (Probably no)

  • Ask yourself: What do I scorn? That's your starting point for growth.

If you'd like help mapping your team's work through this framework, reply and we'll set up a call.

🤔 Geek-Out Stories

This week's theme: AI Optimism - three perspectives on why the future might be brighter than the headlines suggest.

Andrew McAfee, the MIT researcher and author of More from Less, argues that AI is the most powerful tool we've ever built for fighting climate change. His point: the efficiencies AI enables vastly outweigh its energy consumption. One nuclear fusion company told him AI has improved their reactor simulations by 31 orders of magnitude - not 31%, not 310%, but 31 orders of magnitude.

Why it matters: If you're worried about AI's environmental footprint, McAfee offers a different lens. The question isn't "how much energy does AI use?" but "what does AI enable us to stop doing?"

👉 Action: Next time AI's energy consumption comes up in conversation, reframe it: "What inefficiencies could AI eliminate that would dwarf its own consumption?"

The former OpenAI exec makes a compelling case: AI could finally give us back our most precious resource - time. His "Happiness Function" research shows that once basic needs are met, happiness tracks with relationships, purpose, creativity, and community. Not productivity.

Why it matters: We've conflated doing with being. As AI takes on more cognitive drudgery, we'll face a choice: fill the reclaimed time with more distractions, or invest it in what actually makes life meaningful.

👉 Action: Block 30 minutes this week for something that falls firmly in the "being" category - a proper conversation, a creative project with no deliverable, a walk without a podcast.

For the security-conscious, cost-focused, or sovereignty-minded: you can now run Claude Code with open-source models on your own machine. Ollama v0.14.0 is compatible with the Anthropic Messages API, meaning you get the Claude Code workflow without sending data to external servers.

Why it matters: This is frugal AI in action. Not everyone needs GPT-5 for every task. Sometimes a well-configured local model is exactly right - cheaper, private, and good enough.

👉 Action: If you've been curious about Claude Code but hesitant about cloud dependencies, this is your entry point. Install Ollama, configure the environment variables, and try it on a small project.
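A sketch of what that setup can look like. The model tag and variable names here are assumptions (ANTHROPIC_BASE_URL is a commonly documented Claude Code override, but confirm the exact configuration against the Ollama v0.14.0 release notes and the Claude Code docs for your versions):

```shell
# Pull an open-source coding model and start the local server
# (qwen2.5-coder is one example tag from the Ollama library).
ollama pull qwen2.5-coder
ollama serve &

# Point Claude Code at the local Anthropic-compatible endpoint.
export ANTHROPIC_BASE_URL="http://localhost:11434"
export ANTHROPIC_AUTH_TOKEN="ollama"  # placeholder; no real key needed locally

claude
```

Nothing leaves your machine: Claude Code talks to localhost, and Ollama answers in the Anthropic Messages API format.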

🎨 Weekend Playground

This weekend, try Perplexity's Agentic Browsing - if you haven't experienced an AI that browses the web for you, this is the moment.

Open the assistant, give it a task that requires research across multiple sites, and watch what happens. Then compare it to Claude for Chrome.

Why this matters: Agentic browsing is shifting from "AI that answers questions" to "AI that does research." Understanding the difference will help you spot opportunities in your own workflows.

👉 Mission:

  • Ask Perplexity to research something you'd normally spend 30 minutes Googling

  • Note what it gets right and what it misses

  • Try the same task in Claude for Chrome

  • Ask yourself: "Where else in my work am I doing research that an agent could handle?"

📢 Share the Optimism

If The AI Optimist helps you think more clearly, forward it to someone else navigating the shift. If it's not quite landing, hit reply and let me know - I read every message.

Stay strategic, stay generous.

Hugo & Ben