I've been watching the clawdbot/moltbot/openclaw wave over the past week, though I haven't had time to set it up on a VPS or in local Docker and play with it yet. The momentum it's built in that time is wild. People giving models with known prompt-injection issues full access across their systems and the internet is going to end badly. And now moltbook!
I'd be curious to know how much token use has increased across providers as a result, to the detriment of the ecologies we all rely upon, while Apple Mac mini sales jump too. Welcome to 2026 🙈
Reflecting on networks like moltbot, I'm reminded of this paper from last year on subliminal learning, where a teacher model transmits a trait (T) to a student model even after teams remove T from the dataset: https://arxiv.org/abs/2507.14805
Even if memory is constrained in these agent swarms, the geometric steganography taking place is undetectable. Add to that malevolent actors with sufficient resources to run and deploy autonomous swarms. A big challenge we have in front of us indeed, Andrew!
Great write-up (one of the best I’ve seen on the topic).
I also saw Ethan’s post about how we can’t tell what’s “real” vs. roleplay. And while I agree all sorts of things might be happening here—aren’t the bots always roleplaying?
What would “real” AI behavior even look like? Isn’t all AI behavior contextual by definition? There’s no “raw” Claude underneath the prompts.
And how is that different from humans roleplaying on social media too?
10/10 hahaha. I love this. What an interesting story.