The Simulation Inside the Simulation
What a 20-year-old's prediction engine and a cognitive scientist's radical theory tell us about the real bottleneck in AI
Last week I posted on LinkedIn about this thing called MiroFish (mirofish.homes) - a next-gen prediction engine that topped GitHub's global trending list, racked up 28,000+ stars, and landed its creator a $4 million investment within days of a rough demo video.
The creator is a 20-year-old undergraduate in Beijing. The build time was ten days. The tools were AI coding assistants and a laptop.
MiroFish inspired me to go deeper - because I think there's something happening here that goes well beyond a cool open-source project.
First, what MiroFish actually does
Most prediction tools crunch numbers. MiroFish builds societies.
You feed it seed material - news articles, social data, policy documents, financial reports, even fiction - and describe your prediction question in plain language. Something like: ‘How might public sentiment shift over the next three months?’ or ‘What happens if we launch this product into that market?’
From there, the system constructs a knowledge graph from your inputs, then generates hundreds or thousands of AI agents - each with distinct personalities, backgrounds, attitudes, and decision logic. It drops them into a simulated environment and lets them interact. They argue. They influence each other. They form factions, shift positions, change their minds. Their memories persist across rounds.
The output isn't a probability score. It's an emergent forecast - what the simulated population collectively arrived at through social dynamics playing out over time.
And here's the part that matters: you can step into the simulation after it runs. Query individual agents. Introduce new variables. Test counterfactual scenarios. It's not a black box. It's an interactive world you can navigate.
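To make that concrete, here's a rough sketch of the general pattern - not MiroFish's actual code, which I haven't read line by line, but the kind of agent loop the description implies. It assumes an OpenAI-compatible chat API; the class names, prompts, and model name are placeholders of my own.

```python
# A minimal sketch of the agent-based pattern described above, not MiroFish's
# actual implementation. Assumes the OpenAI Python client and an API key in
# the environment; model name, prompts, and class names are placeholders.
from dataclasses import dataclass, field
from openai import OpenAI

client = OpenAI()          # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"      # placeholder model name

@dataclass
class Agent:
    name: str
    persona: str                                      # background, attitudes, decision logic
    memory: list[str] = field(default_factory=list)   # persists across rounds

    def respond(self, question: str, recent_discussion: list[str]) -> str:
        """Ask the agent to react, given its persona, memory, and what others said."""
        prompt = (
            f"You are {self.name}. Persona: {self.persona}\n"
            f"Your memories so far: {self.memory[-5:]}\n"
            f"Recent discussion: {recent_discussion[-5:]}\n"
            f"Question under debate: {question}\n"
            "State your current position in one or two sentences."
        )
        reply = client.chat.completions.create(
            model=MODEL, messages=[{"role": "user", "content": prompt}]
        ).choices[0].message.content
        self.memory.append(reply)   # the agent remembers its own position
        return reply

def run_simulation(agents: list[Agent], question: str, rounds: int = 3) -> list[str]:
    """Let agents react to the question and to each other over several rounds."""
    discussion: list[str] = []
    for _ in range(rounds):
        for agent in agents:
            discussion.append(f"{agent.name}: {agent.respond(question, discussion)}")
    return discussion

# Usage: seed a tiny 'population' and ask a prediction question.
population = [
    Agent("Dana", "small-business owner, price-sensitive, sceptical of new tech"),
    Agent("Ravi", "early adopter, works in fintech, optimistic about automation"),
]
transcript = run_simulation(
    population, "How might sentiment toward AI tools shift over three months?"
)
print("\n".join(transcript))
```

Because each agent carries its memory forward, you can keep calling respond() after the rounds finish - which is roughly the 'step back into the simulation and query individual agents' part, scaled down to a toy.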
One kid.
Ten days.
Open source.
A $4 million cheque -
from one of China's original internet billionaires.
It immediately made me think:
‘Today is no longer about what you can build - it's about who has the ideas, and, more than that, who has the best question.’
I've been sitting with that ever since.
Now here's where it gets strange
There's a cognitive scientist at UC Irvine named Donald Hoffman who has spent decades developing a thesis that should, by rights, unsettle anyone working in innovation, technology, or creative strategy.
His argument - backed by evolutionary game theory and computer simulations - is that our senses did not evolve to show us reality. They evolved to keep us alive. Evolution doesn't reward truth. It rewards fitness. And in every simulation Hoffman and his team have run, organisms that perceive the world accurately go extinct when competing against organisms whose perceptions are tuned purely for survival utility.
He calls this the Fitness Beats Truth theorem - fitness being the four Fs: fighting, fleeing, feeding, and, well… f*cking (mating :).
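If you want to feel the intuition rather than take the theorem on faith, here's a deliberately crude toy - nothing like the evolutionary game simulations Hoffman's team actually ran, just a sketch of the core idea. Two foragers choose between resources whose payoff peaks at a moderate quantity (too little starves you, too much is toxic); one ranks options by true quantity, the other only by usefulness. The payoff curve and every number in it are invented for illustration.

```python
# A toy illustration of the Fitness-Beats-Truth intuition, NOT Hoffman's model.
# 'Truth' sees the real quantity and assumes more is better; 'fitness' sees only
# how useful each option is. All numbers are made up for illustration.
import random

def payoff(quantity: float) -> float:
    """Fitness payoff peaks at quantity 50 and falls off on either side."""
    return max(0.0, 100.0 - abs(quantity - 50.0) * 2.0)

def trial() -> tuple[float, float]:
    """One choice between two random resources; returns (truth_payoff, fitness_payoff)."""
    a, b = random.uniform(0, 100), random.uniform(0, 100)
    truth_pick = a if a > b else b                      # ranks by true quantity
    fitness_pick = a if payoff(a) > payoff(b) else b    # ranks by usefulness only
    return payoff(truth_pick), payoff(fitness_pick)

random.seed(0)
results = [trial() for _ in range(100_000)]
truth_avg = sum(t for t, _ in results) / len(results)
fitness_avg = sum(f for _, f in results) / len(results)
print(f"truth-tracking forager:   {truth_avg:.1f} average payoff")
print(f"fitness-tracking forager: {fitness_avg:.1f} average payoff")
```

Run it and the fitness-tracking forager reliably comes out ahead on average; compound that gap over generations competing for the same resources and you get the flavour of the extinction result Hoffman describes.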
The implication is radical: what we experience as reality - space, objects, colour, time - is not the world as it is. It's an interface. As he puts it: ‘a species-specific operating system shaped by natural selection to give us useful icons, not accurate representations.’ The same way a file icon on your desktop is a helpful symbol but bears no resemblance to the voltage patterns on a circuit board, the ‘reality’ we navigate is a compressed, simplified dashboard designed for one purpose: keeping us in the game long enough to reproduce (you know, mate).
Evolution, in Hoffman's framing, purposefully hides the truth from us. Not as a flaw. As a feature.
Two illusions, stacked
Now hold both ideas at the same time.
Hoffman says we're already operating inside an interface that masks objective reality. Our entire experience of the world is, in a meaningful sense, a simulation - shaped not for accuracy but for utility.
MiroFish builds a second simulation on top of that. Artificial agents with synthetic personalities, running through social dynamics in a digital environment, producing emergent predictions about how people - who are themselves navigating a perceptual interface that hides the truth - will behave.
It's a simulation inside a simulation.
An illusory world modelling an illusory world.
And it works. The investor wrote the cheque. The demos produced coherent, structured forecasts. The use cases - market sentiment, election outcomes, policy impact, product launches, competitor response - are real and multiplying.
So what does that tell us?
Maybe this: that "truth" was never the point
If Hoffman is right, we've never had access to objective reality anyway. What we've had is a set of perceptual tools that are good enough - good enough to find food, avoid predators, raise children, build civilisations. Not true. But very bloody useful.
And now we're building a new layer of tools - digital twins, agent-based simulations, swarm intelligence engines - that do exactly the same thing at a different scale. They don't need to model ‘truth’ to be valuable. They need to model fitness. Utility. What's likely to happen when a complex population of actors with different motivations responds to a change in conditions.
This isn't a departure from how we've always operated. It's an extension of it. Evolution gave us a perceptual interface to navigate a reality we can't see. Now we're building computational interfaces to navigate complexities our perceptual interface can't handle.
Layer on layer on layer.
The bottleneck doesn't move
Here's what keeps landing for me.
MiroFish is open source. The code is free. The AI tools that helped build it are available to anyone. The LLM backends it runs on are commodity infrastructure. A 20-year-old built it in ten days.
Hoffman's work tells us that even our experience of the world is constructed - a user interface, not a window.
So where does advantage sit?
Not in the software.
Not in the hardware.
Not in access to tools,
because access is approaching zero cost.
It sits in the human who knows which question to ask.
Get it? The human who knows the best question to ask?!
The one who selects the right seed material. Who reads the simulation output and can distinguish signal from noise. Who understands that no model - perceptual or digital - shows the truth, and navigates anyway.
The tools are going to keep getting cheaper, faster, and more powerful. We're heading toward a world where anyone can spin up a thousand-agent simulation of market behaviour or cultural dynamics for the cost of a coffee. The architecture of prediction is being democratised in real time.
But the ability to direct it? To bring twenty years of lived experience to bear on which scenario to test? To ask the question nobody else is asking? To look at the emergent output and see what it actually means for a brand, a market, a culture?
That's not in the code. That's in the person.
The question I can't shake
If we're already living inside one interface - a perceptual operating system shaped by evolution for fitness, not truth - and we're now building second-order interfaces to simulate behaviour within that first one…
What happens when the second layer gets good enough to feed back into the first?
When the simulated world starts shaping the real one? When prediction engines don't just forecast sentiment but influence it? When the model and the territory start to merge?
Are we building better tools for navigating reality - or are we adding another layer to the illusion?
I don't have a clean answer.
I don't think anyone does yet.
But I'm increasingly convinced that the people who'll matter most in this next era aren't the ones building the simulations. They're the ones who understand what simulations can and can't tell you - and who bring something to the table that no agent, no model, and no algorithm can generate on its own.
Lived experience.
Judgement.
The ability to ask a question that hasn't been asked before.
That's Humanware.
And I think it's the only durable advantage left.
If you've been following this thread, I've been writing about why mapped human creative capabilities are the irreplaceable edge in the age of AI. Grab the Humanware download here