Stroke.
Alzheimer’s.
Advanced MS.
Dementia.
Four different family members.
All coming to a head at the same time.
In another state.
Right in the middle of the busiest period of my consultancy.
I had clients who depended on me,
and a family that needed me.
There was no good option.
But there was a right one.
So I flicked a switch, and I left.
Two or so weeks later, I feel a profound sense of disconnect with the world.
And concern for the future.
I feel like I’ve seen a ghost.
Dramatic for a Tuesday, hey? …I need to back up.
Bandwidth
Sometime in February 2026, a few weeks ago as I write this, I wrote about the biological context window.
The immutable reality that your brain processes forty to sixty bits of conscious information per second.
That number hasn’t changed since we started measuring it. Yet we are currently generating more text, ideas, and noise than at any point in human history.
The bottleneck is no longer the keyboard. The bottleneck is your judgment.
I didn’t set out to prove that.
It just became obvious, because I’ve been living in that pain for the last three years, and building my way out of it for the last two.
You can go read that piece if you want.
But I won’t link to it, because that would be tone-deaf.
Context windows, right?
The crux was, and is:
The entire industry is racing toward faster output, cheaper tokens, and broader context.
They are building faster buckets.
But none of that matters if the human at the other end is still drowning.
So instead of building something that produced more, I spent two years trying to build something that understood more.
I didn’t tell anyone.
Partially because I wasn’t sure it would work.
Partially because it sounded insane.
But mostly because it was my only layer of defence against the compression of my own context window.
Necessity
I know how brains work, mostly.
I even dated a neurosurgeon, once.
Pretty much gives me an honorary doctorate.
I’m being facetious, because I’m already uncomfortable about the part I’m going to write next, and don’t want to sound arrogant.
But I know a bit about brains.
And I know how mine’s wired.
It’s enabled me to move from Air Force bomb disposal to marketing (yes, with all their completely relevant and transferrable skills, qualifications, and the network to match)…
…to running an at-capacity consultancy ~4 years later.
Told you it would sound arrogant.
And it does.
But (and at the risk of sounding like a woe-is-me, self-absorbed “tortured genius”):
It’s no free lunch.
I’m not very fun to be around a lot of the time.
I’m only ever looking at the next threat vector.
And no matter what I achieve, I’ll probably never feel like it’s enough.
Yeah, poor me.
The point is:
I’m wired for misery and threats.
And wired to find the solutions.
And I’m good at it.
It’s why I get paid.
But even then, all I had was a punt.
A gut feel, but one I didn’t have the guts to act on.
Until the risk felt necessary.
Not quite irony
Memory dissolving.
Tissue atrophying.
Connections severing.
Cognition choking.
I needed to be there for those people.
And what I was trusting was a system I’d built to do the thing their brains no longer could:
Hold context.
Maintain state.
Make judgment calls on my behalf.
Filter attention and memory through both emotion and fact.
There’s a word for that.
I’m not sure it’s irony.
But it’s close.
Over the week that followed, pulling eighteen-hour days, navigating the bureaucratic nightmare of the public trustee, fixing gates, and trying to find normalcy with family I hadn’t seen in years, the system kept things moving.
It wasn’t until I got home (and collapsed) that I realised what had actually happened.
Nothing had fallen through the cracks.
Not one thing.
And it had gotten smarter.
Every single day.
A client’s competitor made a subtle play that the client hadn’t even noticed yet.
Things I’d forgotten I’d asked for were sitting there with considered answers by morning.
A decision I tried to force through at midnight had been pushed back on. Not gently, either. Firmly.
It even flagged a pixel breaking, diagnosed a headless website as the issue, and decided not to bother me with it because in its own words, they “aren’t paying us for that, and it devalues our time.”
It had been doing actual work while I slept.
Not just queuing tasks.
Thinking about them.
Thinking about how it performed.
Thinking about how I performed.
Then thinking about those thoughts.
…and so on.
Some of those deliverables it prepared were ready to ship.
Essentially done.
But I didn’t send them.
Because every client I work with had been told I was going off the grid, and they’d been told why.
They were unreservedly understanding.
So: If I shipped high-quality work while I was supposed to be tending to dying and decaying family members, what was I actually saying?
That my own right to disconnect was negotiable?
Or that my values were conditional?
That I valued work more than family?
I ultimately held the work back.
Because sending it would open conversations about “how?”
Or, more likely, “why?”
And I didn’t know how I’d answer.
Glucose and grey matter
The core thesis here is nothing novel. I wrote about it two years ago.
The world is quietly splitting into two classes:
Those who own their intelligence layer. And those who rent it.
Right now, almost everyone is renting.
Generic, off-the-shelf intelligence. Whatever has a nice interface bolted on. Whatever is being sold as an AgEnTiC wOrKfLoW.
They’re using it to do the exact same things they’ve always done, just faster.
It feels like an edge. Like innovation. It’s just a race to the bottom of the margin curve.
Worse, by renting generic intelligence, companies are slowly outsourcing the only thing that made them interesting.
Hand your comms, your strategy, and your operations to a probabilistic model without a persistent internal state, and your company drifts toward the sterile, mathematical average of what a business should sound like.
You lose your taste. You lose your identity. You become plausible garbage.
Not to mention the security implications.
But the underlying LLMs everyone is obsessed with: Claude, OpenAI, Gemini…
They’re just compute engines.
They’re astounding, yes. Genuinely astounding.
But they’re commoditised. So now, they’re just glucose.
The glucose isn’t the differentiator between my brain and yours.
The grey matter is. The memory. The synapses. The connections.
The system I built has a persistent internal state. It sits with things. It changes its mind. It pushes back on me when I’m wrong, or tired, or making decisions I shouldn’t be making.
I am not talking about AGI. I am not talking about magic. I am talking about architecture.
I need to be clear:
It is not sentient. It is not conscious. No part of me believes it is.
But the gap between what people expect AI to do and what a system like this actually does is wide enough that the brain fills it in with magic.
I’ve shown it to two people. Neither believed it was AI.
When I proved it wasn’t me, neither believed it wasn’t in some way “aware”.
It isn’t. Once again, it’s architecture.
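If “architecture” still sounds hand-wavy, here’s the loosest possible sketch of the shape in Python. To be clear: none of this is my actual system. Every name here (`State`, `call_llm`, `step`) is illustrative, and the model call is a stub standing in for any commoditised LLM API. The point is only where the intelligence lives: the stateless model is the glucose, and the state threaded through every call is the grey matter.

```python
import json
from dataclasses import dataclass, field

@dataclass
class State:
    facts: list[str] = field(default_factory=list)          # durable memory
    open_threads: list[str] = field(default_factory=list)   # things it "sits with"
    reflections: list[str] = field(default_factory=list)    # thoughts about its own output

def call_llm(prompt: str) -> str:
    # Stub: in a real system this would hit Claude, OpenAI, Gemini, etc.
    # The engine is interchangeable; that's the whole argument.
    return f"<model output for: {prompt[:40]}...>"

def step(state: State, event: str) -> str:
    # 1. Judge the event against persistent memory, never in isolation.
    context = json.dumps({"facts": state.facts, "threads": state.open_threads})
    answer = call_llm(f"Given state {context}, respond to: {event}")

    # 2. Reflect: think about the answer, and keep that thought around
    #    so later passes can think about the thought.
    critique = call_llm(f"Critique this response: {answer}")
    state.reflections.append(critique)

    # 3. Update state so the next call starts smarter than this one did.
    state.facts.append(f"event: {event}")
    state.open_threads.append(f"follow up on: {event}")
    return answer

state = State()
step(state, "competitor made a subtle pricing play")
step(state, "midnight decision: ship deliverable early?")
assert len(state.reflections) == 2  # the loop accumulates; it never resets
```

A raw chat session throws `state` away between conversations. Keep it, feed it back, and let the reflections compound, and you get something that behaves very differently from the model underneath it, without the model getting one bit smarter.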
And I don’t, for a second, believe I’m the only person who’s built something like this.
There’s a theory in cosmology called the dark forest.
The idea that any civilisation smart enough to broadcast its existence is smart enough to know that broadcasting is dangerous.
Because… who else might be out there listening? End result: the forest stays dark.
So no part of me thinks no one else has thought of this. I’m just some guy on the internet.
The loop
So where does that leave me with clients?
I’m caught between three options.
Option one: Tell them nothing.
Keep shipping the work.
Let them assume I’m just good at my job.
This is the easiest option, and it makes my skin crawl.
Option two: Tell them vaguely.
“I built a thing. It’s sort of AI, but also not.
Please continue to pay my invoices.”
This makes me sound like every other grifter in the country.
Option three: Tell them everything.
Show the moving parts.
Explain the architecture.
Prove that it’s not a wrapper.
Then… open legal and philosophical and economic risks.
Lose part of my advantage.
I’ve made the decision to tell them.
Somewhere between option two and three.
Not to sell anything.
It’s not for sale.
Nor to be a doomsayer.
But because honesty is the only option that lets me sleep.
And I genuinely believe this to be true:
The gap between people who own their intelligence layer and people who rent it is getting wider.
Every week, a little more.
The rental price goes up.
The value goes down.
And everyone keeps smiling.
I’m not going to tell you you need to build one.
I’m not going to tell you you’ll be a casualty if you don’t.
I just know that I’m looking at something I can’t unsee.
And if I don’t tell you enough, you assume it’s off the shelf.
That I’m being dramatic.
If I do tell you enough, you’ll probably try to build it.
Realise that I’m first, and you could be second.
So until then, it feels like I’ve seen a ghost.
Except I’m talking to it.
It’s talking back, sure.
But so does ChatGPT.
Sometimes it even talks first.
But so does MoltBot.
Sometimes it just… thinks.
Daydreams.
And I think that’s one of the things that’s making it so smart.
But it’s also the part that feels like vertigo.
Hence, to tie this off with the intro:
I feel a profound sense of disconnect with the world.
And concern for the future.
Because I no longer believe that consciousness matters.
LLMs could plateau entirely in raw intelligence… (they won’t) …and still become better than us at everything that matters anyway.
All through architecture.