Matt Hart

The Monk Outlasts the Empire

Today it’s less about the idea you can build, and more about who has the idea and the question it’s solving.

More riffs on the thing that shouldn't exist but does - and why it might be the most important signal in the age of AI

I’ve been writing about MiroFish - a prediction engine built by a 20-year-old in ten days that simulates entire societies to forecast human behaviour.

I connected it to Donald Hoffman's Fitness Beats Truth theorem: the idea that evolution shaped our senses not to see reality but to see fitness payoffs. That what we experience as the world is an interface - really useful, but not really true.

I said the thesis was one of the most compelling I'd encountered. I also said it didn't sit with me completely.

This is my attempt to say why (and forgive me if I don't quite grab it yet, as I try to give voice to my instincts here).

The case Hoffman makes

To be clear: Hoffman's work isn't handwaving. It's grounded in evolutionary game theory and backed by computer simulations. His team ran competition after competition between organisms with different perceptual strategies - those that see truth versus those that see only fitness payoffs - and the result was consistent. Fitness wins. Truth goes extinct. Every time. (which I initially found super disturbing).
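For readers who want the shape of those competitions, here's a toy version in Python. To be clear, this is my own illustrative sketch, not Hoffman's actual simulation code: it assumes a made-up payoff curve where a mid-range resource level is optimal, a 'truth' strategy that perceives raw resource quantity, and a 'fitness' strategy that perceives only the payoff, with population shares updating replicator-style in proportion to payoff earned.

```python
import math
import random

def payoff(resource):
    # Assumed non-monotonic payoff: a mid-range resource level is best
    # (too little starves you, too much attracts competition or toxicity).
    return math.exp(-((resource - 50) ** 2) / (2 * 15 ** 2))

def play_round(strategy, rng):
    # Each round the organism chooses one of three random territories.
    territories = [rng.uniform(0, 100) for _ in range(3)]
    if strategy == "truth":
        choice = max(territories)              # sees resource quantity, picks the most
    else:  # "fitness"
        choice = max(territories, key=payoff)  # sees only the payoff, picks the best
    return payoff(choice)

def simulate(rounds=10_000, seed=0):
    rng = random.Random(seed)
    truth_share = 0.5  # start with equal populations
    for _ in range(rounds):
        t = play_round("truth", rng)
        f = play_round("fitness", rng)
        # Replicator-style update: each share grows in proportion to payoff earned.
        total = truth_share * t + (1 - truth_share) * f
        if total > 0:
            truth_share = truth_share * t / total
    return truth_share

print(f"truth-seer population share after 10,000 rounds: {simulate():.6f}")
```

Run it and the truth-seeing share collapses toward zero - not because truth is penalised directly, but because seeing the resource accurately and seeing what the resource is *worth* are different things, and selection only pays for the second.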

The implications cascade. If our perceptions are an interface shaped for survival, then physical objects are icons. Space is a desktop. Colour, shape, texture - all compressed symbols designed to help us act, not to describe what's actually there. We don't see reality. We see what evolution decided we needed to see.

It's elegant. It's provocative. And when you look at the world through this lens, it explains a lot.

The tech founders optimising for scale at any cost? Fitness payoffs.

The platforms engineered to capture attention regardless of what that does to human wellbeing? Fitness payoffs.

The venture logic that rewards extraction speed over depth? Fitness payoffs.

The entire machinery of late capitalism - move fast, ship product, capture value, repeat - maps almost perfectly onto Hoffman's framework.

The organisms that see fitness win. The ones that pause to ask whether any of it is true get outcompeted.

Which is exactly where my discomfort starts.

The thing that shouldn't exist

If Hoffman is right - if evolution ruthlessly selects for fitness and drives truth to extinction - then there are some features of human experience that have no business being here.

Contemplative practice.
Mystical experience.
The felt sense that there's something deeper behind the interface.
The drive toward meaning that has no reproductive payoff.
The pull to sit in silence, to go inward, to ask questions about the nature of consciousness itself.

These aren't marginal phenomena.

They show up in every culture, in every era, across every geography. Indigenous knowledge systems. Buddhist meditation. Sufi poetry. Christian mysticism. Jungian depth psychology. Plant medicine traditions that are thousands of years old.

On pure fitness logic, all of it should have been selected out long ago.

The person in the cave meditating is burning calories and not reproducing. The shaman going deep into ceremony isn't gathering resources or defending territory. The mystic contemplating the nature of the self is doing precisely nothing that evolutionary game theory would predict or reward.

And yet these traditions haven't just survived. They've persisted with extraordinary resilience - often outlasting the empires and systems that tried to suppress them.

That persistence is a signal.

And Hoffman's model, for all its rigour - at least for me - doesn't explain it.

Two possible readings

I've been mulling this over and I keep landing on two possibilities.

The first is that Hoffman's model is simply incomplete. Fitness-for-reproduction may be a powerful selection pressure, but maybe it isn't the only one. There may be something else operating - a pull toward coherence, integration, wholeness - that runs deeper than survival logic.

Something that doesn't show up in evolutionary game theory because it operates on a different axis entirely. Not fitness. Not even truth. Something more like… directionality.

A current that runs beneath the interface.

The second possibility is stranger and, I think, more interesting.

What if the wisdom traditions are a fitness strategy - just one that operates on a timescale Hoffman's simulations can't capture?

Think about it.

The extractors move fast.

They dominate in the short term. They accumulate resources, capture markets, build empires.

But they also burn through everything in their path. The fitness-maximising organism consumes its environment and then collapses.

Rome, anyone?

Or the USA of today maybe?!

The contemplative traditions play a different game.
Slower.
Deeper.
Less visible.
But they're still here.

The monks - they’re still here.
The indigenous knowledge systems — battered, suppressed, colonised — are still here.
The meditation lineages that started two and a half thousand years ago are still here.

The people who see deeper, not just faster, turn out to be the ones still standing when the extractors have eaten everything in sight.

Maybe meaning is a fitness payoff.

Just not one that shows up in a fifty-round simulation.

Maybe it shows up across centuries.

Maybe the organisms that develop the capacity for depth, for inner coherence, for seeing through the interface rather than just reacting to its icons - maybe they're the ones that actually persist.

Maybe the monk really does outlast the empire.

Why this matters now

I'm not writing this as philosophy. It's all part of my doctoral thesis-in-progress.

And I think it has direct, practical implications for anyone building, creating, or leading in the age of AI.

The dominant narrative right now is pure Hoffman. Fitness payoffs everywhere.
Optimise.
Automate.
Scale.
Move faster than the next person.

The tools are free, the models are commodity, the barriers to building are approaching zero.

And the people winning - visibly, loudly, measurably - are the ones playing the fitness game hardest.

But I keep coming back to MiroFish - a beautiful piece of engineering. Open source. Built in ten days. Capable of simulating entire populations.

And completely dependent on the human who decides what question to ask it.

The simulation doesn't have purpose. It doesn't have meaning. It doesn't have the felt sense that one question matters more than another. It doesn't know why you're asking. It processes seed material and runs agents and produces outputs. Brilliantly. Efficiently. At scale.

The part that can't be automated - the part that makes the whole thing useful - is the person who brings something the simulation doesn't have. Judgement born from experience. Meaning born from the kind of inner work that has no fitness payoff on paper but somehow produces the questions nobody else is asking.

That's not software. That's not hardware.

That's the thing that shouldn't exist but does.

The deeper game

I've spent the last twenty-five years working in creativity and innovation - building frameworks, running programmes, helping organisations and individuals develop their creative capabilities.

And I've spent those years in a parallel process of deep personal work - consciousness, plant medicine, practices that sit firmly outside any professional playbook.

I used to think those were separate tracks.
The professional and the personal.
The strategic and the spiritual.
The part of my life that made money and the part that made meaning.

I don't think that anymore.

I think they're the same track. I think the capacity to ask better questions - to direct simulations, to see through interfaces, to bring something genuinely new to a world that's drowning in optimisation - comes from exactly the kind of inner development that Hoffman's model says shouldn't exist.

The becoming is the competitive advantage.

Not because it makes you faster.

Because it makes you deeper.

And depth is what generates the ideas that no model can produce on its own.

Hoffman proved that we live inside an interface. Fair enough. But someone had to look at that interface and ask: what's behind it?

That question didn't come from fitness.

It came from something older, stranger, and more persistent than anything natural selection can account for.

And I think that same impulse - the refusal to accept the interface as the whole story - is exactly what the age of AI is going to demand of us.

Not more simulation. More depth.

Not faster tools. Deeper humans.

I don't have this figured out.

I'm writing my way toward it — in public, in practice, in a PhD that I've titled Story as Method.

The research goes on.

The questions are live.

And I'm increasingly suspicious that the most important capability we can develop right now isn't technical at all.

It's the willingness to go deeper than the interface wants us to go.

What do you think - is meaning a fitness payoff on a longer timescale?

Or is there something else going on entirely?

I'd love to hear from anyone who's sitting with these questions too (and you’re my hero if you made it this far down the scroll :)

I've been writing about Humanware - why mapped human creative capabilities are the irreplaceable edge in the age of AI. The download is here if you’re yet to see it.

Matt Hart

The Simulation Inside the Simulation

Today it’s less about the idea you can build, and more about who has the idea and the question it’s solving.

What a 20-year-old's prediction engine and a cognitive scientist's radical theory tell us about the real bottleneck in AI

Last week I posted on LinkedIn about this thing called MiroFish (mirofish.homes) - a next-gen prediction engine that topped GitHub's global trending list, racked up 28,000+ stars, and landed its creator a $4 million investment within days of a rough demo video.

The creator is a 20-year-old undergraduate in Beijing. The build time was ten days. The tools were AI coding assistants and a laptop.

MiroFish inspired me to go deeper - because I think there's something happening here that goes well beyond a cool open-source project.

First, what MiroFish actually does

Most prediction tools crunch numbers. MiroFish builds societies.

You feed it seed material - news articles, social data, policy documents, financial reports, even fiction - and describe your prediction question in plain language. Something like: ‘How might public sentiment shift over the next three months?’ or ‘What happens if we launch this product into that market?’

From there, the system constructs a knowledge graph from your inputs, then generates hundreds or thousands of AI agents - each with distinct personalities, backgrounds, attitudes, and decision logic. It drops them into a simulated environment and lets them interact. They argue. They influence each other. They form factions, shift positions, change their minds. Their memories persist across rounds.

The output isn't a probability score. It's an emergent forecast - what the simulated population collectively arrived at through social dynamics playing out over time.

And here's the part that matters: you can step into the simulation after it runs. Query individual agents. Introduce new variables. Test counterfactual scenarios. It's not a black box. It's an interactive world you can navigate.
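To make that pipeline concrete, here's a heavily simplified sketch of an agent-based opinion simulation in Python. This is my own illustration, not MiroFish's code: the real system reportedly builds a knowledge graph and drives each agent with an LLM persona, whereas here a 'personality' is reduced to a single stubbornness number and an opinion is a scalar between -1 and +1.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    opinion: float                              # stance, -1 (against) to +1 (for)
    stubbornness: float                         # 0 = fully persuadable, 1 = immovable
    memory: list = field(default_factory=list)  # persists across rounds

def run_simulation(n_agents=500, rounds=50, seed=42):
    rng = random.Random(seed)
    agents = [Agent(rng.uniform(-1, 1), rng.uniform(0, 1)) for _ in range(n_agents)]
    for _ in range(rounds):
        # Each round, every agent meets a random peer, argues, and shifts position.
        for a in agents:
            peer = rng.choice(agents)
            pull = (peer.opinion - a.opinion) * (1 - a.stubbornness) * 0.1
            a.opinion = max(-1.0, min(1.0, a.opinion + pull))
            a.memory.append(a.opinion)  # memory carries over between rounds
    # The 'forecast' is emergent: whatever the population collectively settled into.
    mean = sum(a.opinion for a in agents) / n_agents
    return mean, agents

mean, agents = run_simulation()
print(f"emergent mean sentiment: {mean:+.3f}")
```

Even at this toy scale the key property shows through: the forecast isn't computed from a formula, it's the distribution the interacting population settles into - and the interesting work is deciding what question the opinion axis represents in the first place.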

One kid.
Ten days.
Open source.
A $4 million cheque -

from one of China's original internet billionaires.

It immediately made me think:

‘Today is no longer about what you can build - but more who has the ideas - and more, who has the best question.’

I've been sitting with it ever since.

Now here's where it gets strange

There's a cognitive scientist at UC Irvine named Donald Hoffman who has spent decades developing a thesis that should, by rights, unsettle anyone working in innovation, technology, or creative strategy.

His argument - backed by evolutionary game theory and computer simulations - is that our senses did not evolve to show us reality. They evolved to keep us alive. Evolution doesn't reward truth. It rewards fitness. And in every simulation Hoffman and his team have run, organisms that perceive the world accurately go extinct when competing against organisms whose perceptions are tuned purely for survival utility.

He calls this the Fitness Beats Truth theorem - fitness being the 4 f’s: fighting, fleeing, feeding, and well… fcking (mating :).

The implication is radical: what we experience as reality - space, objects, colour, time - is not the world as it is. It's an interface. As he puts it: ‘a species-specific operating system shaped by natural selection to give us useful icons, not accurate representations.’ The same way a file icon on your desktop is a helpful symbol but bears no resemblance to the voltage patterns on a circuit board, the ‘reality’ we navigate is a compressed, simplified dashboard designed for one purpose: keeping us in the game long enough to reproduce (you know, mate).

Evolution, in Hoffman's framing, purposefully hides the truth from us. Not as a flaw. As a feature.

Two illusions, stacked

Now hold both ideas at the same time.

Hoffman says we're already operating inside an interface that masks objective reality. Our entire experience of the world is, in a meaningful sense, a simulation — shaped not for accuracy but for utility.

MiroFish builds a second simulation on top of that. Artificial agents with synthetic personalities, running through social dynamics in a digital environment, producing emergent predictions about how people - who are themselves navigating a perceptual interface that hides the truth - will behave.

It's a simulation inside a simulation.

An illusory world modelling an illusory world.

And it works. The investor wrote the cheque. The demos produced coherent, structured forecasts. The use cases - market sentiment, election outcomes, policy impact, product launches, competitor response - are real and multiplying.

So what does that tell us?

Maybe this: that "truth" was never the point

If Hoffman is right, we've never had access to objective reality anyway. What we've had is a set of perceptual tools that are good enough - good enough to find food, avoid predators, raise children, build civilisations. Not true. But very bloody useful.

And now we're building a new layer of tools - digital twins, agent-based simulations, swarm intelligence engines - that do exactly the same thing at a different scale. They don't need to model ‘truth’ to be valuable. They need to model fitness. Utility. What's likely to happen when a complex population of actors with different motivations responds to a change in conditions.

This isn't a departure from how we've always operated. It's an extension of it. Evolution gave us a perceptual interface to navigate a reality we can't see. Now we're building computational interfaces to navigate complexities our perceptual interface can't handle.

Layer on layer on layer.

The bottleneck doesn't move

Here's what keeps landing for me.

MiroFish is open source. The code is free. The AI tools that helped build it are available to anyone. The LLM backends it runs on are commodity infrastructure. A 20-year-old built it in ten days.

Hoffman's work tells us that even our experience of the world is constructed - a user interface, not a window.

So where does advantage sit?

Not in the software.
Not in the hardware.
Not in access to tools,
because access is approaching zero cost.

It sits in the human who knows which question to ask.

Get it? The human who knows the best question to ask?!

The one who selects the right seed material. Who reads the simulation output and can distinguish signal from noise. Who understands that no model - perceptual or digital - shows the truth, and navigates anyway.

The tools are going to keep getting cheaper, faster, and more powerful. We're heading toward a world where anyone can spin up a thousand-agent simulation of market behaviour or cultural dynamics for the cost of a coffee. The architecture of prediction is being democratised in real time.

But the ability to direct it? To bring twenty years of lived experience to bear on which scenario to test? To ask the question nobody else is asking? To look at the emergent output and see what it actually means for a brand, a market, a culture?

That's not in the code. That's in the person.

The question I can't shake

If we're already living inside one interface - a perceptual operating system shaped by evolution for fitness, not truth - and we're now building second-order interfaces to simulate behaviour within that first one…

What happens when the second layer gets good enough to feed back into the first?

When the simulated world starts shaping the real one? When prediction engines don't just forecast sentiment but influence it? When the model and the territory start to merge?

Are we building better tools for navigating reality - or are we adding another layer to the illusion?

I don't have a clean answer.

Does anyone yet?

But I'm increasingly convinced that the people who'll matter most in this next era aren't the ones building the simulations. They're the ones who understand what simulations can and can't tell you - and who bring something to the table that no agent, no model, and no algorithm can generate on its own.

Lived experience.
Judgement.
The ability to ask a question that hasn't been asked before.

That's Humanware.

And I think it's the only durable advantage left.

If you've been following this thread, I've been writing about why mapped human creative capabilities are the irreplaceable edge in the age of AI. Grab the Humanware download here
