Discussion about this post

Rob Sica:
I don't think this is an accurate account of the design and functioning of our epistemic vigilance mechanisms. The greater worry should rather be about AI increasing misplaced mistrust, not misplaced trust:

"We have evolved from a situation of extreme conservatism, a situation in which we let only a restricted set of signals affect us, toward a situation in which we are more vigilant but also more open to different forms and contents of communication... If our more recent and sophisticated cognitive machinery is disrupted, we revert to our conservative core, becoming not more gullible but more stubborn."

https://reason.com/2020/02/09/people-are-less-gullible-than-you-think/

"our open vigilance mechanisms are organized in such a way that the most basic of these mechanisms, those that are always active, are the most conservative, rejecting much of the information they encounter"

https://hal.science/ijn_03451156v1

Barney Lerten:
Very interesting, Andrew. I recently dove into a far shallower but related rabbit hole: https://medium.com/@barneylerten/peace-of-mind-or-pieces-of-mind-how-ai-could-be-the-biggest-rorschach-test-ever-ea276f23ec11

