Discussion about this post

Peter Buck:

I suggest a better title: "The Trojan Horse That Didn’t Need to Sneak." Systems announce themselves; humans need arrival rituals.

AI didn’t sneak past our defenses. It arrived fluently, helpfully, and right on time.

I appreciate Maynard’s concern that AI bypasses our epistemic vigilance and that the “text” feels true because it is warm and approachable. Where I diverge is in treating AI as a “cognitive Trojan Horse.” That framing treats deception as the primary threat. But the real issue isn’t what AI is doing *to* us. It’s what we’ve stopped doing *as leaders*.

A better perspective is: Human judgment isn’t broken. It’s rate-limited.

We evolved to think with pauses, context, and friction. When information arrives continuously, in a clear, modulated, rapid tone, vigilance doesn’t fail. It never gets a chance to engage.

-> AI can feel like a thought you meant to double-check—but never quite did.

This isn’t a technology problem. It’s an arrival problem. We announce change impeccably. We rarely ritualize it.

Without metaphor, pause, and shared orientation, change still happens, but it doesn’t land. -> Transition is the horse we should be riding.

---

A leadership takeaway:

We don’t need AI to be the villain for there to be real risk. The risk is what happens when fluent systems plug into institutions that never learned how to slow down and arrive together

The problem isn’t the horse. It’s that no one designed the ride. Humans need arrival rituals.

Rob Sica:

I don't think this is an accurate account of the design and functioning of our epistemic vigilance mechanisms. The greater worry is rather that AI will increase misplaced mistrust, not misplaced trust:

"We have evolved from a situation of extreme conservatism, a situation in which we let only a restricted set of signals affect us, toward a situation in which we are more vigilant but also more open to different forms and contents of communication... If our more recent and sophisticated cognitive machinery is disrupted, we revert to our conservative core, becoming not more gullible but more stubborn."

https://reason.com/2020/02/09/people-are-less-gullible-than-you-think/

"our open vigilance mechanisms are organized in such a way that the most basic of these mechanisms, those that are always active, are the most conservative, rejecting much of the information they encounter"

https://hal.science/ijn_03451156v1
