The Monk Outlasts the Empire

More riffs on the thing that shouldn't exist but does - and why it might be the most important signal in the age of AI

I’ve been writing about MiroFish - a prediction engine built by a 20-year-old in ten days that simulates entire societies to forecast human behaviour.

I connected it to Donald Hoffman's Fitness Beats Truth theorem: the idea that evolution shaped our senses not to see reality but to see fitness payoffs. That what we experience as the world is an interface - really useful, but not really true.

I said the thesis was one of the most compelling I'd encountered. I also said it didn't sit completely right with me.

This is my attempt to say why (and forgive me if I don't quite grab it yet, as I try to give voice to my instincts here).

The case Hoffman makes

To be clear: Hoffman's work isn't handwaving. It's grounded in evolutionary game theory and backed by computer simulations. His team ran competition after competition between organisms with different perceptual strategies - those that see truth versus those that see only fitness payoffs - and the result was consistent. Fitness wins. Truth goes extinct. Every time (which I initially found super disturbing).
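To make the flavour of those competitions concrete, here's a toy sketch of my own - far simpler than Hoffman's actual simulations, and the payoff function and parameters are entirely my invention. The trick is that payoff is deliberately non-monotonic in the true resource quantity, so an agent that perceives the truth is systematically outscored by one that perceives only the payoff:

```python
import random

# Toy illustration only - my own sketch, not Hoffman's actual simulations.
# Payoff is non-monotonic in the true resource quantity (too little OR
# too much is bad), so perceiving "truth" becomes a handicap.
def payoff(quantity):
    return max(0.0, 100 - abs(quantity - 50) * 2)  # peaks at quantity = 50

def compete(rounds=1000, seed=42):
    rng = random.Random(seed)
    truth_score = fitness_score = 0.0
    for _ in range(rounds):
        territories = [rng.uniform(0, 100) for _ in range(3)]
        # Truth strategy: sees real quantities, grabs the biggest.
        truth_score += payoff(max(territories))
        # Fitness strategy: sees only payoffs, grabs the best-paying.
        fitness_score += max(payoff(q) for q in territories)
    return truth_score, fitness_score

truth, fitness = compete()
print(fitness > truth)  # True: the fitness-seer wins
```

The fitness strategy can never do worse than the truth strategy here (it picks the best payoff directly), and whenever the biggest territory isn't the best-paying one, truth falls behind.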

The implications cascade. If our perceptions are an interface shaped for survival, then physical objects are icons. Space is a desktop. Colour, shape, texture - all compressed symbols designed to help us act, not to describe what's actually there. We don't see reality. We see what evolution decided we needed to see.

It's elegant. It's provocative. And when you look at the world through this lens, it explains a lot.

The tech founders optimising for scale at any cost? Fitness payoffs.

The platforms engineered to capture attention regardless of what that does to human wellbeing? Fitness payoffs.

The venture logic that rewards extraction speed over depth? Fitness payoffs.

The entire machinery of late capitalism - move fast, ship product, capture value, repeat - maps almost perfectly onto Hoffman's framework.

The organisms that see fitness win. The ones that pause to ask whether any of it is true get outcompeted.

Which is exactly where my discomfort starts.

The thing that shouldn't exist

If Hoffman is right - if evolution ruthlessly selects for fitness and drives truth to extinction - then there are some features of human experience that have no business being here.

Contemplative practice.
Mystical experience.
The felt sense that there's something deeper behind the interface.
The drive toward meaning that has no reproductive payoff.
The pull to sit in silence, to go inward, to ask questions about the nature of consciousness itself.

These aren't marginal phenomena.

They show up in every culture, in every era, across every geography. Indigenous knowledge systems. Buddhist meditation. Sufi poetry. Christian mysticism. Jungian depth psychology. Plant medicine traditions that are thousands of years old.

On pure fitness logic, all of it should have been selected out long ago.

The person in the cave meditating is burning calories and not reproducing. The shaman going deep into ceremony isn't gathering resources or defending territory. The mystic contemplating the nature of the self is doing precisely nothing that evolutionary game theory would predict or reward.

And yet these traditions haven't just survived. They've persisted with extraordinary resilience - often outlasting the empires and systems that tried to suppress them.

That persistence is a signal.

And Hoffman's model, for all its rigour - at least for me - doesn't explain it.

Two possible readings

I've been mulling this over and I keep landing on two possibilities.

The first is that Hoffman's model is simply incomplete. Fitness-for-reproduction may be a powerful selection pressure, but maybe it isn't the only one. There may be something else operating - a pull toward coherence, integration, wholeness - that runs deeper than survival logic.

Something that doesn't show up in evolutionary game theory because it operates on a different axis entirely. Not fitness. Not even truth. Something more like… directionality.

A current that runs beneath the interface.

The second possibility is stranger and, I think, more interesting.

What if the wisdom traditions are a fitness strategy - just one that operates on a timescale Hoffman's simulations can't capture?

Think about it.

The extractors move fast.

They dominate in the short term. They accumulate resources, capture markets, build empires.

But they also burn through everything in their path. The fitness-maximising organism consumes its environment and then collapses.

Rome, anyone?

Or maybe the USA of today?!

The contemplative traditions play a different game.
Slower.
Deeper.
Less visible.
But they're still here.

The monks - they’re still here.
The indigenous knowledge systems - battered, suppressed, colonised - are still here.
The meditation lineages that started two and a half thousand years ago are still here.

The people who see deeper, not just faster, turn out to be the ones still standing when the extractors have eaten everything in sight.

Maybe meaning is a fitness payoff.

Just not one that shows up in a fifty-round simulation.

Maybe it shows up across centuries.

Maybe the organisms that develop the capacity for depth, for inner coherence, for seeing through the interface rather than just reacting to its icons - maybe they're the ones that actually persist.

Maybe the monk really does outlast the empire.
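That intuition about timescales can be sketched too. What follows is a deliberately crude model of my own - the harvest rates and the logistic-regrowth assumption are mine, not anything from Hoffman or evolutionary game theory proper. An "extractor" taking 30% of a renewable stock each round beats a "steward" taking 2% over fifty rounds, and loses badly over two thousand:

```python
# Toy model (my own assumptions): two strategies each farm their own
# renewable stock, which regrows logistically after every harvest.
def lifetime_payoff(harvest_rate, rounds, stock=100.0, capacity=100.0, regen=0.05):
    total = 0.0
    for _ in range(rounds):
        take = stock * harvest_rate
        total += take
        stock -= take
        stock += regen * stock * (1 - stock / capacity)  # logistic regrowth
    return total

extractor = lambda rounds: lifetime_payoff(0.30, rounds)  # take 30% per round
steward = lambda rounds: lifetime_payoff(0.02, rounds)    # take 2% per round

print(extractor(50) > steward(50))      # True: the short game favours extraction
print(extractor(2000) < steward(2000))  # True: the long game favours restraint
```

The extractor's stock collapses to nothing, so its lifetime payoff converges; the steward's stock settles at a sustainable level and keeps paying out indefinitely. Which strategy "wins" depends entirely on how many rounds you let the simulation run.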

Why this matters now

I'm not writing this as abstract philosophy. It's all part of my doctoral thesis-in-progress.

And I think it has direct, practical implications for anyone building, creating, or leading in the age of AI.

The dominant narrative right now is pure Hoffman. Fitness payoffs everywhere.
Optimise.
Automate.
Scale.
Move faster than the next person.

The tools are free, the models are commodities, the barriers to building are approaching zero.

And the people winning - visibly, loudly, measurably - are the ones playing the fitness game hardest.

But I keep coming back to MiroFish - a beautiful piece of engineering. Open source. Built in ten days. Capable of simulating entire populations.

And completely dependent on the human who decides what question to ask it.

The simulation doesn't have purpose. It doesn't have meaning. It doesn't have the felt sense that one question matters more than another. It doesn't know why you're asking. It processes seed material and runs agents and produces outputs. Brilliantly. Efficiently. At scale.

The part that can't be automated - the part that makes the whole thing useful - is the person who brings something the simulation doesn't have. Judgement born from experience. Meaning born from the kind of inner work that has no fitness payoff on paper but somehow produces the questions nobody else is asking.

That's not software. That's not hardware.

That's the thing that shouldn't exist but does.

The deeper game

I've spent the last twenty-five years working in creativity and innovation - building frameworks, running programmes, helping organisations and individuals develop their creative capabilities.

And I've spent those years in a parallel process of deep personal work - consciousness, plant medicine, practices that sit firmly outside any professional playbook.

I used to think those were separate tracks.
The professional and the personal.
The strategic and the spiritual.
The part of my life that made money and the part that made meaning.

I don't think that anymore.

I think they're the same track. I think the capacity to ask better questions - to direct simulations, to see through interfaces, to bring something genuinely new to a world that's drowning in optimisation - comes from exactly the kind of inner development that Hoffman's model says shouldn't exist.

The becoming is the competitive advantage.

Not because it makes you faster.

Because it makes you deeper.

And depth is what generates the ideas that no model can produce on its own.

Hoffman argues that we live inside an interface. Fair enough. But someone had to look at that interface and ask: what's behind it?

That question didn't come from fitness.

It came from something older, stranger, and more persistent than anything natural selection can account for.

And I think that same impulse - the refusal to accept the interface as the whole story - is exactly what the age of AI is going to demand of us.

Not more simulation. More depth.

Not faster tools. Deeper humans.

I don't have this figured out.

I'm writing my way toward it — in public, in practice, in a PhD that I've titled Story as Method.

The research goes on.

The questions are live.

And I'm increasingly suspicious that the most important capability we can develop right now isn't technical at all.

It's the willingness to go deeper than the interface wants us to go.

What do you think - is meaning a fitness payoff on a longer timescale?

Or is there something else going on entirely?

I'd love to hear from anyone who's sitting with these questions too (and you’re my hero if you made it this far down the scroll :)

I've been writing about Humanware - why mapped human creative capabilities are the irreplaceable edge in the age of AI. The download is here if you’re yet to see it.

Next

The Simulation Inside the Simulation