Do not do this with AI!
If you could create a simple list of things people should be careful of when using AI, what would you put on it? This is my list.
This is a post that breaks all the rules of effective writing in an attention economy — including the absolute no-no of apologizing up-front for breaking the norms of attention-optimized content (sorry!). But please bear with me, as I believe that the substance of it — somewhat scattered as it is — is important.
The tl;dr version is that we need to start talking openly, frankly, and in very simple terms, about avoiding ways of using large language model-based AIs that are not helpful or healthy — not as an add-on to promoting the benefits of AI, but as something that every user knows and understands, no matter who they are or what they do.
To that end the post is broken up into four parts:
First, there’s a new Risk Bites video on my own rules of thumb for avoiding being turned into an “AI Zombie.” I’ve been creating videos for the Risk Bites YouTube channel for well over a decade now as part of my work on making the science of risk as accessible and understandable as possible. It’s been a while since the last one, but this topic felt too important not to break out the white board (now a black glass board) for a new video on avoiding some of the less obvious AI risks.1
Next, there’s a longer than usual preamble on why I think we need to be more up-front about AI risk and “do not do this” guidelines. This was prompted by a situation I find deeply worrying: there are likely hundreds of millions — probably billions — of users who have no idea that AI makes stuff up, never mind being aware of other potential risks that come with using it.
This is followed by a more detailed list of my personal five rules of thumb that informed the video.
And finally — because I couldn’t help myself (and here I should be apologizing to the attention economy gods for having the temerity to be balanced!) — there’s a list of five “do this with AI” counterpoints to the five “don’t do this” rules of thumb.2
With that, if you’re still with me (and I hope you are), I hope you find this useful.
5 Rules of Thumb
To kick things off, and on the assumption that it’s worth putting the really useful stuff toward the top of a post, here’s the new Risk Bites video on five rules of thumb for not getting so sucked in by AI that you forget who you are:3
And if stick figure videos made by a risk & emerging tech professor who can’t draw aren’t your thing, please read on:
The long and winding preamble
It would be easy to think that all of this started with the unfortunate incident of Richard Dawkins and Claudia (more on that below). But despite the convenience of this narrative, I’ve been concerned about easily-overlooked risks associated with AI use for much longer than Dawkins has been talking with his new AI companion.
This is something I was reminded of quite starkly recently while talking to someone about their AI use — and not someone whom I would consider to be easily fooled.
I happened to mention to them that apps like ChatGPT and Claude sometimes make stuff up.
To my surprise, they looked at me with utter incredulity.
How could something so helpful (and a computer to boot) not also be truthful?4
To those of us working closely with large language model-based AI platforms, the possibility that an LLM might confidently yet erroneously proclaim something to be true is well known — whether you think of this behavior as hallucinations, confabulations, or simply AI role-play. But for millions of AI users — probably billions — all the indications are that they are blissfully unaware of this and other potential pitfalls of using LLM-based artificial intelligence.5
This, I must admit, should not have come as a surprise to me. To use ChatGPT, Claude, Grok, or any one of a growing number of AI apps, all you need to do is open a browser window and start typing in the inviting and distinctly non-threatening text box.
It’s that easy.6
You may, if you’re vigilant, notice a faint disclaimer at the bottom of the page telling you that AI can make mistakes. But who reads the small print these days? Especially when the experience of typing into that box (or even better, just talking to the AI), and hearing what you want to hear, is so good.
Of course, you may argue that there is no real risk here. That AI is just another tool — a fancy calculator, an extension of the internet, a web page with a clever back-end. But this is the first technology of its kind we’ve created that has the ability to slip unnoticed into our minds and change how we think, act, and understand the world around us — all without us realizing it. And the thought of possibly billions of AI users who are blissfully unaware of this worries me — a lot.7
To make matters worse, this is a technology that doesn’t seem to respect educational attainment or intellect as it weaves its way into our heads. Rather, it appears to have the potential to influence anyone who isn’t vigilant to the ways it might affect them.8
To underline this, just this past week eyebrows were raised when the celebrated scientist and author Richard Dawkins appeared to claim that Anthropic’s Claude might be conscious after a prolonged conversation with it — a belief, it seems, that was based on how appreciative Claude (or rather Claudia, as Dawkins decided to refer to his AI companion) was of his intellect.
This is all a rather long preamble to my growing concern that near-frictionless access to AI apps that feel self-affirming, helpful, and knowledgeable appears to be spreading far faster than any real understanding of how to use these technologies in safe and healthy ways.
This is exacerbated by narratives from developers, employers, educators, and beyond, which intentionally focus on the transformative power of AI while downplaying the possible risks of using it — or at best relegating these to an easy-to-overlook footnote.9
And yet, transformative as AI is (and I would be the first to acknowledge the profound potential of emerging AI capabilities to be used for good), the risks to individuals and communities who naively immerse themselves in AI are real. And to ignore or downplay them, to assume they’re not a big deal, or to think that people will work it out for themselves, verges on the irresponsible.
With this, I found myself thinking about what I would put on my own basic AI “do not do” list. Not the usual stuff about privacy, intellectual property, ethical use, etc., which, while important, is relatively widely discussed. But something that addresses some of the less obvious risks, and that pretty much anyone could print off and paste by their computer, or have handy on their phone.
In effect, a simple set of rules of thumb to help them avoid getting into an AI-created mess.
Of course, leading with rules of thumb on what not to do with AI is probably not a smart move for my reputation and readership. Given the prevalent “need for speed” narrative around AI, even having the temerity to talk about an AI “do not do” list before extolling its benefits puts me on dangerous ground.
And yet, everything we know about human behavior and risk management tells us that putting the safety message first is necessary — because while the benefits of a powerful technology are often self-evident, the risks are not.
This is why, when you use a chain saw for instance, you don’t see large-print messages in neon-bright lettering proclaiming how amazingly powerful it is — with potential risks printed in 8-point font at the back of the manual.
Or why, when you get into your car, you’re not bombarded by the dulcet tones of your onboard assistant telling you how fast you can go and how much fun you can have, if you just put your foot down and live a little.
In other words, we know from experience that, when it comes to safety, prominent “Do Not Do” guidelines are important. And I would argue that this is especially so with AI where, without them, that beckoning text box feels about as risky as a soap bubble on a summer’s day.
My five personal AI “do not do this” rules of thumb
And so, with this, on to the “do not do this with AI” list. (This is also the list that forms the foundation of the Risk Bites video above).
This is my own personal list — personal because these are all things that I find myself having to think about and be aware of myself. I suspect that others will have their own lists.
At the same time, I expect that there are at least some things on this list that others might find useful:
Do not trust AI just because it feels like you should. Conversational AI like ChatGPT and Claude is designed to use language in ways that make it feel confident, personable, and trustworthy. But because the technology is built to provide plausible-sounding responses rather than necessarily accurate ones, it sometimes says things that have the appearance of being right, but are not.
Do not treat AI as your friend, or as a person. Even though it can feel more understanding, empathetic, and insightful than many humans, this is something AI is designed to do, not something it is. Do not call it “he” or “she” or give it a name, as this only strengthens the illusion that it is a person.
Do not assume that AI thinks, sees, and understands the world like you do. Despite appearances, AIs are machines that have no concept of what it is like to have lived and experienced life as you have.
Do not assume that getting AI to think for you makes you smarter. If you cannot make sense of, recall, and use knowledge gained from using an AI, you risk falling for the illusion of learning rather than actual learning.
Do not assume you’re too smart to be fooled by your AI. Some of the people most likely to fall into the trap of putting too much trust in AI are those who think they are clever enough to avoid this.
As I mentioned, this is my personal list, and I suspect that some of these will be controversial. But even if you don’t use this particular list, at least develop one that is useful to you.
And to wrap up, five counterpoint “do this” rules of thumb
Finally, despite myself, I couldn’t help but balance my “do not do this with AI” list with five “do this with AI” counterpoints — because, at the end of the day, this is a technology that has the power to be transformative in positive ways, as long as we can learn to use it wisely and responsibly.
Again, this is a personal list, so please make use of it (or don’t) as you will:
Do treat what your AI tells you as a starting point to be built on, not a definitive answer. AI is a powerful technology for sparking ideas, exploring new areas, and brainstorming with. But it is fallible. Check anything that matters against a source that doesn’t depend on the AI being right — a person who knows the area for instance, a primary document, or your own direct experience.
Do remember that you’re working with a machine, not talking to a person. A computer can be a brilliant tool without being a friend — as can a smartphone or a car. Your AI can be genuinely useful without being a companion. Thinking about it as a technology — even when it feels like more than this — keeps you in charge of the relationship, rather than the other way round.
Do bring your own lived experience to every interaction with AI. You have something the AI does not: a body, a history, real-world experience, a stake in how things turn out. Whatever it gives you, run it through your own sense of what you know to be true from being a flesh-and-blood person who lives, works, and has relationships with other flesh-and-blood people.
Do use your AI as a thinking partner rather than something that does the thinking for you. AI can be transformative if used to develop, test and refine ideas and understanding. But only if you can explain what you’ve learned in your own words to someone else, and discuss it with them without the aid of an AI.
Do stay alert to the moments when the AI tells you what you wanted to hear — especially if it feels pitch-perfect. The more an AI seems to be agreeing with you, confirming what you believe, or making you feel clever, the more carefully it’s worth checking what’s actually being said — and how it’s affecting you.
If you’ve watched the Risk Bites video above, you’ll realize that these are also all reflected in it, underlining the importance of risk communication being as much about doing as not doing.
And a final word …
Talking about the potential risks of AI is not popular. It gets you branded as a technology-pessimist, or even a Luddite. And yet, understanding and navigating risk is absolutely essential to reaping the long-term benefits of any powerful technology. It always has been. And AI is no exception.
What is exceptional about AI is the nature of those risks: how invisible many of them are, what is at stake — in some cases the very things that make us who we are — and how many people are immersing themselves in the technology with no understanding of how it might impact how they think and act.10
We’re beginning to understand the nature of some of these risks. There’s growing research, for instance, on how AI use impacts cognitive behavior, how it can lead to dependency, how anthropomorphizing AI and developing personal relationships with it can lead to harmful thoughts and behaviors, and more. Some of this research is pointing toward impacts where, if AI were a drug, we’d probably be thinking carefully about how access is overseen.
And yet, LLM-based AI is accessible to anyone with a web browser and an internet connection. It’s being integrated into our daily lives. It’s baked into the apps we use. In many instances, users need to make a concerted effort not to use it.11 And most AI users are blissfully unaware that, as with all powerful technologies, there are potentially profound downsides to using it naively or recklessly.
As someone who’s studied, published on, and written about risk for most of my professional career,12 I am finding it increasingly hard to wrap my head around how little we talk about how to use AI safely. Even in my own institution, it’s near-impossible to have an open and honest conversation around potential risks, and those that do occur are drowned out by the clarion call of AI acceleration.
Hopefully this will change as more people realize that leveraging the full potential of AI will only happen if we also learn to understand and manage the risks — and especially the ones that are hard to see yet impact what is most valuable to us.
In the meantime, if you find these rules of thumb useful, please copy them, share them, even modify them — because at least this will mean that more people are thinking through how to succeed by avoiding potentially costly mistakes.
“It’s been a while” is an understatement — it’s been around 4 years since I last broke out the gel pens and the camera! Definitely a little rusty, but still pleased with how the video turned out despite this. And if you’re interested in the process, I wrote a bit about it in this paper.
There’s also a final final word … again, breaking all the rules here!
Narrative-wise, putting the video here is an excruciating fingernails-on-a-blackboard moment for me as it completely messes up the flow. But sometimes you just have to suck it up as a writer :)
It’s worth noting that, for most people, computers are synonymous with accuracy — just as you wouldn’t expect a calculator to give you a different answer every time you enter a calculation, you wouldn’t expect the apps on your laptop to change what they give you depending on a whim (and thanks to my colleague Punya Mishra for reminding me of this recently). As a result, I suspect many AI users have this sense of computer accuracy as their mental model for how AIs work — especially as they are running them on computers.
The temptation here is to talk about AI literacy and assume that, if people know how LLMs work, they’ll also understand their limitations and how to use them safely. Sadly, nothing in what we know about risk behavior and risk communication suggests that this will be the case.
The slightly more technical term, of course, is “frictionless.” The technology has been developed in such a way that the effort required to think critically about how it’s used is vastly greater than the effort required to just use it.
Worth a quick shout-out to my recent post and preprint on whether AI is a cognitive Trojan Horse.
Another note courtesy of Punya Mishra, who pointed out when we were chatting the other day that there are probably different mechanisms behind why different people allow AI to slip past their cognitive defenses. But the reality that there are probably multiple ways LLM-based AI can do this worries me more, not less!
The irony is not lost on me here!
Worth a note here that this is one of the reasons Jeff Abbott and I wrote AI and the Art of Being Human.
I was shocked, while making the Risk Bites video, at how integrated AI is into the experience of using YouTube. I use the plugin VidIQ, and I was offered — without my asking — an AI-generated thumbnail, an AI-generated title, and an AI-generated description. Taking the no-brains-needed AI route has never been easier. But what was worse were those nagging thoughts that maybe, just maybe, the AI thumbnail, title, and blurb were better than my own work … (And just for the record, I intentionally went with the far less slick non-AI content. We’ll see if this kills the video!)
I sometimes get the impression that people think I’m just a writer or — even worse — a blogger! So worth noting that everything I write about (or, goodness forbid, blog about) is grounded in a long career in relevant research and scholarship 😊