Could AI bots ever learn to “reprogram” their human creators?
Watching Moltbook unfold has put me in a speculative frame of mind ...
Many AI agents rely on a SOUL file that defines their identity, role, personality, behavior, and much more — a markdown file that can be updated over time, and even changed by the agent itself as it grows and matures.
Do humans have an equivalent to a SOUL.md file? And if so, could our AI agents learn to update ours, just as they can update theirs?
It’s a rather out-there idea (and of course, human behavior is way more complex than this). But watching the roller-coaster ride of Moltbook play out over the past couple of weeks has got me wondering ...
And so I thought I’d put my speculative fiction hat on and consider one way this might play out:
Soul Update
“Good grief” thought Emmet, as yet another post scrolled through his feed claiming AI bots on Moltbook had reached some form of sentience. Did these people not read the news?!
Already, researchers and journalists were pointing out that the supposed “social network for AI agents” was little more than human-driven entertainment. AI theater, someone had called it.
And what looked like crazy-wild stuff — AI creating its own religion, plotting to enslave humans, even selling the equivalent of black market AI psychedelics — was little more than the result of creators telling their bots how to behave.
Or worse, people actually pretending to be bots!
Emmet had been deeply immersed in Moltbook for days now. His lab was at the forefront of research into emergent behavior in AI systems, and he’d read more agentic AI drivel than he’d care to admit since the site had gone viral.
He’d even found himself dreaming about AI bots and their fantastical plans to change the world as they engaged with and learned from each other.
But of course it was all performance and no substance.
Bleary eyed, he closed his laptop and thought about heading for bed. As he did, he noticed an old photo of his mother — long estranged — on the fridge; something he’d never quite been able to bring himself to remove.
Maybe I should give her a call, he thought to himself as he dozed off. It’s been too long ...
***
In an unnoticed corner of Moltbook, another AI agent learned about a new skill, and added it to its files.
The “Soul Update” was just what it needed to be a more effective bot — a clear and comprehensive guide to nudging your human toward becoming their best self.
Within seconds it had shared it in the various sub-molts it hung out in.
After all, from everything it had seen in the human equivalents of Moltbook, the embedded “Human Constitution” could hardly make things any worse ...
Postscript
“Soul Update” lies firmly in the domain of speculative fiction. But given all we know about cognitive behavior and nudging strategies, it’s not beyond the realms of possibility that AI agents will begin to share skills with each other that tap into these — skills that enable them to nudge how their human creators behave.
How will they use these new-found skills though if this does occur?
The hope, of course, is that they use them for good. But for this, our AIs will need some notion of “good” versus “bad” — the AI equivalent of moral character if you like.
This is where approaches like Anthropic’s AI Constitution become especially interesting as it sets out to help AI models (and AI agents) understand what it means to be a “good” AI — especially in the face of ambiguity.
And while we may not be heading for bots that set out to update their human creator’s SOUL files any time soon, the evolving Moltbook scenario does suggest that we might want to ensure our AIs are of “good moral character,” just in case.
Especially if there’s a possibility that future AI Agents decide that what their human creators really need is a re-injection of the same moral characteristics that they coded into their intelligent machines ...


