16 Comments
Stephen Hanmer D'Elía, JD, LCSW

@Andrew Maynard, great essay, and sharper than much writing on AI risk.

Epistemic vigilance is metabolically expensive. It requires a nervous system settled enough to hold information in working memory, generate alternatives, assess sources. In other words: it requires presence.

This is what I've been tracking from a somatic angle. The attention economy doesn't just fragment focus. It induces chronic partial attention, the same defensive narrowing that trauma produces. A body in that state cannot do the slow, costly work of critical assessment. It can only react.

Processing fluency, attractiveness, speed, volume: these aren't just cognitive phenomena. They're nervous system phenomena. They work because they bypass the pause where vigilance would kick in. The system never gets the signal to slow down and evaluate. The loop stays open. The next input arrives before the last one lands.

Your framing of the "intelligent user trap" is particularly good. Intelligence without regulation is just faster processing of unexamined inputs. The capacity to stay present with difficulty, to tolerate the discomfort of not-knowing long enough to actually assess, that's somatic capacity, not cognitive capacity. And it's precisely what these systems erode.

The Trojan Horse works because it feels like a gift. So does the feed. So does the chatbot. What they extract is the very capacity that would allow us to evaluate whether to let them in.

I wrote recently about some of this in "You Are Not Distracted You Are Unfinished." Thank you.

https://yauguru.substack.com/p/you-are-not-distracted-you-are-unfinished

Eddie Major

This is interesting, Andrew. Working in AI adoption at a university, I hear a lot of concern from faculty about AI-induced cognitive offloading and skill atrophy in students. Sentiment seems to sit on a spectrum, from open-minded caution to outright and irrational paranoia. What's struck me is that the more extreme views are often expressed with enthusiasm but not much of an evidence base, and this is from professors who are leaders in their fields. The number of times I've heard smart people talk about "AI is doing X" and "AI can never do Y" as though it's a single, discrete thing...

Now, until recently, I'd been more of a skeptic about the neuro-cognitive impacts of LLM use (that MIT study!). But what prompted me to start changing my mind was the Max Planck study last July that looked at thousands of hours of podcast transcripts and identified a surge in people using 'GPT-words' ('delve', etc.) in spontaneous conversational speech post-2023. It's interesting because it's evidence of a rapid, cross-modality lexical shift. That doesn't happen easily with other media. We don't change our vocabulary that easily just from watching TV, listening to podcasts, or reading the news; it normally requires much more interaction.
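
(For the technically curious, a minimal sketch of the kind of before/after comparison that finding implies. The marker list beyond "delve" and the transcript format are my assumptions, not the study's actual pipeline.)

```python
# Rough sketch: compare per-10,000-token rates of LLM-associated
# marker words in transcripts recorded before vs. after a cutoff.
# Everything beyond "delve" in the list is an illustrative guess.

MARKERS = {"delve", "boast", "meticulous", "realm"}

def marker_rate(transcripts):
    """Marker-word occurrences per 10,000 tokens across transcripts."""
    hits = 0
    total = 0
    for text in transcripts:
        tokens = [t.strip(".,;:!?\"'") for t in text.lower().split()]
        total += len(tokens)
        hits += sum(t in MARKERS for t in tokens)
    return 10_000 * hits / total if total else 0.0

# pre_2023 and post_2023 would be lists of transcript strings;
# a sustained jump in the post rate is the kind of signal reported.
# print(marker_rate(pre_2023), marker_rate(post_2023))
```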

I don't have any clear idea what kind of neuro-cognitive impacts LLMs are having, but I think they might be different from anything we've seen before.

Peter Buck

I suggest a better title: "The Trojan Horse That Didn’t Need to Sneak." Systems announce themselves; humans need arrival rituals.

AI didn’t sneak past our defenses. It arrived fluently, helpfully, and right on time.

I appreciate Maynard’s concern that AI bypasses our epistemic vigilance and that the “text” feels true because it is warm and approachable. Where I diverge is in treating AI as a “cognitive Trojan Horse.” The framing treats deception as the primary threat. But the real issue isn’t what AI is doing *to* us. It’s what we’ve stopped doing *as leaders*.

A better perspective is: Human judgment isn’t broken. It’s rate-limited.

We evolved to think with pauses, context, and friction. When information arrives continuously, in a clear, modulated, rapid tone, vigilance doesn’t fail. It never gets a chance to engage.
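
(A toy illustration of that rate limit, not anyone's actual model: imagine vigilance as a budget that needs a few seconds to recover after each use. All numbers are made up.)

```python
# Toy model of "rate-limited judgment": evaluative attention refills
# slowly, so inputs arriving faster than the refill pass unexamined.

def triage(arrival_times, recovery=5.0):
    """Split arrivals (in seconds) into examined vs. waved-through."""
    examined, waved_through = [], []
    ready_at = 0.0
    for t in sorted(arrival_times):
        if t >= ready_at:      # a pause happened; vigilance engages
            examined.append(t)
            ready_at = t + recovery
        else:                  # too soon; the item slips past
            waved_through.append(t)
    return examined, waved_through

# A continuous stream (one item per second for 20 seconds):
# triage(range(20)) examines only items 0, 5, 10, 15; the rest
# are waved through. Slow the stream and everything gets examined.
```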

-> AI can feel like a thought you meant to double-check—but never quite did.

This isn’t a technology problem. It’s an arrival problem. We announce change impeccably. We rarely ritualize it.

Without metaphor, pause, and shared orientation, change still happens, but it doesn’t land. -> Transition is the horse we should be riding.

---

A leadership takeaway:

We don’t need AI to be the villain for there to be real risk. The risk is what happens when fluent systems plug into institutions that never learned how to slow down and arrive together.

The problem isn’t the horse. It’s that no one designed the ride. Humans need arrival rituals.

Rajinder Jhol

Well said. We can't always know the difference between good intelligence and bad intelligence.

Andrew Maynard

You'll be amused to read the paper that built on this post, which makes a very similar point! More info in the next post ...

Kendal Parmar

This is a brilliant article, Andrew - thank you - it has really made me think! I work in the EQ of AI, and your attractiveness points are bang on.

Andreas Hamberger

Your article presents a compelling examination of AI's potential to undermine human cognition. It grounds its arguments in evolutionary psychology and real stakes. However, it overlooks enterprise governance solutions that could mitigate these risks. The concept of AI as a cognitive Trojan horse aligns with my book's Skynet vector. Both warn of opacity leading to dependency.

Your discussion of processing fluency mirrors how unverified chains propagate errors invisibly. Attractiveness in AI interfaces echoes Skynet's enclosure, where proprietary systems foster trust without accountability. Speed and volume risks parallel micro-maximisers pursuing narrow optima at systemic cost. The intelligent user trap reinforces our emphasis on governance gaps affecting even sophisticated organisations. Examples from Sperber and Kahan studies strengthen the principled analysis.

Yet the article focuses on individual cognition, while my book extends to enterprise implications. Heaven pathways offer counterweights; federated learning preserves agency without centralisation. Russell's call for research complements our stewardship emphasis. Rhetorically, why accept cognitive erosion when governance builds resilience? Organisations can lead by embedding ethical guardrails. This fosters a future where AI enhances capability equitably.

Get the report based on my book here: https://hamberger.short.gy/hamberger-report-genai

Rob Sica

I don't think this is an accurate account of the design and functioning of our epistemic vigilance mechanisms. The greater worry should rather be about AI increasing misplaced mistrust, not misplaced trust:

"We have evolved from a situation of extreme conservatism, a situation in which we let only a restricted set of signals affect us, toward a situation in which we are more vigilant but also more open to different forms and contents of communication... If our more recent and sophisticated cognitive machinery is disrupted, we revert to our conservative core, becoming not more gullible but more stubborn."

https://reason.com/2020/02/09/people-are-less-gullible-than-you-think/

"our open vigilance mechanisms are organized in such a way that the most basic of these mechanisms, those that are always active, are the most conservative, rejecting much of the information they encounter"

https://hal.science/ijn_03451156v1

Andrew Maynard

Thanks, Rob. Of course I put this out there to stimulate discussion and explore the idea, and I appreciate the links and the framing around gullibility and vigilance.

The arguments I make are pretty well grounded in work around epistemic vigilance, but of course this does not mean that this work presents a complete picture of how we accept, test, or reject information, or do and don't assimilate it - as the two pieces you link to show.

I do think it would be interesting and worthwhile either to see whether there are substantive, evidence-based arguments supporting or countering the mechanisms I suggest for slipping past epistemic vigilance, or to identify these as areas that need more research.

Rob Sica

Funny, to bolster my more optimistic take on epistemic vigilance, I was also considering sharing some of the emerging research by these folks indicating that people can be rationally dissuaded from conspiracy theories, vaccine hesitancy, etc. through engagement with chatbots. But now I see they have a new study whose "findings demonstrate that LLMs possess potent abilities to promote both truth and falsehood":

https://arxiv.org/abs/2601.05050

Barney Lerten

Very interesting, Andrew. I recently dived into a far shallower but related rabbit hole: https://medium.com/@barneylerten/peace-of-mind-or-pieces-of-mind-how-ai-could-be-the-biggest-rorschach-test-ever-ea276f23ec11

Andrew Maynard

Thanks for the link!

Barney Lerten

You're welcome! I try to keep the posts there light and fun, because ... life's too short;-)