This is a brilliant article Andrew - thank you - it has really made me think! I work in the EQ of AI and your attractiveness points are bang on.
Your article presents a compelling examination of AI's potential to undermine human cognition, grounding its arguments in evolutionary psychology and real stakes, though it overlooks the enterprise governance solutions that could mitigate these risks. The concept of AI as a cognitive Trojan horse aligns with my book's Skynet vector: both warn of opacity leading to dependency. Your discussion of processing fluency mirrors how unverified chains propagate errors invisibly, and attractiveness in AI interfaces echoes Skynet's enclosure, where proprietary systems foster trust without accountability. Speed and volume risks parallel micro-maximisers pursuing narrow optima at systemic cost, while the intelligent-user trap reinforces our emphasis on governance gaps affecting even sophisticated organisations. The examples from the Sperber and Kahan studies strengthen the principled analysis. Yet the article focuses on individual cognition, whereas my book extends the argument to enterprise implications: Heaven pathways offer counterweights, and federated learning preserves agency without centralisation. Russell's call for research complements our stewardship emphasis. Why accept cognitive erosion when governance can build resilience? Organisations can lead by embedding ethical guardrails, fostering a future where AI enhances capability equitably. Get the report based on my book here: https://hamberger.short.gy/hamberger-report-genai
I don't think this is an accurate account of the design and functioning of our epistemic vigilance mechanisms. The greater worry should rather be about AI increasing misplaced mistrust, not misplaced trust:
"We have evolved from a situation of extreme conservatism, a situation in which we let only a restricted set of signals affect us, toward a situation in which we are more vigilant but also more open to different forms and contents of communication... If our more recent and sophisticated cognitive machinery is disrupted, we revert to our conservative core, becoming not more gullible but more stubborn."
https://reason.com/2020/02/09/people-are-less-gullible-than-you-think/
"our open vigilance mechanisms are organized in such a way that the most basic of these mechanisms, those that are always active, are the most conservative, rejecting much of the information they encounter"
https://hal.science/ijn_03451156v1
Thanks Rob. Of course I put this out there to stimulate discussion and explore the idea, and appreciate the links and the framing around gullibility and vigilance.
The arguments I make are fairly well grounded in the work on epistemic vigilance, but of course that does not mean this work presents a complete picture of how we accept, test or reject information, or do or don't assimilate it - as the two pieces you link to show.
I do think it would be interesting and worthwhile to see whether there are substantive, evidence-based arguments supporting or countering the mechanisms I suggest for slipping past epistemic vigilance, or whether these are areas that need more research.
Funny, to bolster my more optimistic take on epistemic vigilance, I was also considering sharing some of the emerging research by these folks indicating that people can be rationally dissuaded from conspiracy theories, vaccine hesitancy, etc. through engagement with chatbots. But now I see they have a new study whose "findings demonstrate that LLMs possess potent abilities to promote both truth and falsehood":
https://arxiv.org/abs/2601.05050
Very interesting, Andrew. I recently dived into a far shallower but related rabbit hole: https://medium.com/@barneylerten/peace-of-mind-or-pieces-of-mind-how-ai-could-be-the-biggest-rorschach-test-ever-ea276f23ec11
Thanks for the link!
You're welcome! I try to keep the posts there light and fun, because ... life's too short;-)