Should universities be doing more to address the mental health risks of using AI?
Emerging concerns suggest they should
This past Thursday, seven lawsuits were filed in California alleging that ChatGPT use was connected to a number of cases of wrongful death and mental breakdown.
Just one day after the lawsuits were filed, a research letter published in the Journal of the American Medical Association (JAMA Network) claimed that over 13% of US youths are using generative AI for mental health advice, with over 65% of those users seeking such advice monthly or more often.
Both the lawsuits and the paper reflect an increasingly complex emerging landscape around mental health and generative AI use. And it’s one that, I would argue, should be front and center of university AI strategies as they encourage students to use these tools.
To be clear, I am a strong proponent of students experimenting with and using generative AI. I see the benefits on a near-daily basis while working with my own students and in talking with others. These cover everything from enhanced learning to professional development, as well as mental and emotional health support.
Here, the JAMA Network study reflects my own observations from talking with others: when someone needs help and cannot face talking to another person (or is paralyzed at the mere thought of it), generative AI can be a lifesaver. And no amount of impersonal warnings against using these platforms in this way is likely to deter users who feel seen and validated by generative AI that is always there, never judges, and feels like the most caring and understanding friend they have ever had.
And yet, despite many users finding generative AI helpful for mental and emotional support (over 92% of users found the advice “somewhat or very helpful” in the JAMA Network paper), there is clearly a danger that the very human-feeling relationships they develop with AI apps and chatbots can become harmful in some cases.
One day after the California lawsuits were filed, Bloomberg.com published a long article on growing concerns around some AI chatbot users being nudged into delusional behavior. And while it’s easy to dismiss these cases as involving people who are uniquely prone to such responses, I’m not sure that there’s strong evidence to support this.
For the article, Bloomberg Businessweek interviewed 18 people who had either experienced delusions after interactions with chatbots, or were coping with a loved one who had. They also analyzed “hundreds of pages of chat logs from conversations that chronicle these spirals.”
What emerges is a pattern of seemingly balanced users who are, nevertheless, drawn into an alternative reality by AI chatbots without realizing it—or worse, feel empowered by it.1
As with the lawsuits, these cases are likely to be the very small tip of a very large metaphorical iceberg—the cases that are so extreme they are visible, compared with the many more that most likely are not. And this is where there’s a growing argument for more care to be taken in how these tools are deployed, and how users are helped to use them safely and responsibly.
This is especially true within universities, where young people often find themselves in an emotional and mental health pressure-cooker environment and separated from their usual support systems, while being given free access to generative AI tools that, for some, are likely to offer an easy perceived solution to their problems.
Over the past few months, there’s been a growing trend of universities giving students free access to ChatGPT EDU—the educational version of the AI app that comes with privacy guardrails and some performance advantages, but is otherwise similar to the regular app. These include the California State University system, Arizona State University, the University of Oxford, and several others.
The deals being cut with OpenAI (and other AI companies) make sense from a learning and education perspective. They place powerful tools in the hands of students who can benefit from them in their studies and their future career prospects. But, as concerns around mental health impacts continue to grow, these tools also come with complex risks that are unlike anything universities have had to navigate. And because they are both providing them to students and encouraging their use, this, to my mind, comes with a social, moral and (I would assume) legal duty of care for the health and wellbeing of users.
Yet, as far as I have been able to tell, the only university offering ChatGPT EDU to all students that has directly addressed mental health in this context—beyond warnings about data privacy—is the University of Oxford.2
Here, the advice to users is:
Because of the way that GenAI tools work, they are not a replacement for human interaction and are not equipped to offer therapy or mental health support. You should be aware that GenAI tools lack emotional nuance and clinical awareness. They can sometimes give inaccurate and potentially dangerous advice. At times of heightened distress or mental health crisis, AI chatbots may reinforce harmful thoughts through feedback loops.
It’s a good start—as long as students actually read the guidance and pay attention to it. But I suspect that statements like this are not that effective compared to the lure of a sympathetic and comforting AI bot that seems to know you intimately, and can provide exactly the advice you think you need, just when you need it the most.
And we know this. Effective health interventions require time, understanding, legitimacy, expertise, and trust; not just words buried in a document.
Given the evolving situation around AI use and mental health, I would argue that universities should be doing more to protect their students from harm—especially as they take on the responsibility of providing potentially risky generative AI tools to them. This, of course, needs to be approached in the context of the positive and potentially transformative uses of tools like ChatGPT. Yet as with every other case where potentially dangerous technologies are used within university settings, effective risk reduction and management strategies are surely essential.
The trouble is, it’s not clear yet what “doing more” might mean here.
I am deeply skeptical, as someone who’s studied and worked in risk assessment, management and communication for decades, that simply telling students to “be careful” will work.
I also suspect that banning certain uses of generative AI is likely to be ineffective, unless institutions start to intrusively monitor the use of platforms like ChatGPT EDU. And this opens up a whole other can of worms around surveillance and privacy.
Another approach that is actively being explored is to train AI models and bots to detect potentially harmful use, and either caution users, provide advice on where to get help, or report such use to administrators and health professionals. It is, however, an approach that is fraught with difficulties, ranging from a perceived breach of trust by users to potentially neutering models with overly restrictive guardrails.3 And when students can get access to less restrictive AI models for free outside their institution, it risks encouraging them to explore options that may be more convenient, but less safe.
One alternative approach is to ensure that students learn to understand and navigate potential risks to mental and emotional health as they use AI apps and platforms; not by simply providing guidance or “AI literacy” classes (which risk becoming performative), but by talking with them and listening to them, creating safe environments for discussion, building trust, and helping them develop the skills and understanding they need to thrive with AI.
Even this is no guarantee that encouraging the use of ChatGPT and other AI tools won’t lead to some students facing serious mental health issues. But it would at least begin to open up conversations around where the risks lie, and how they may be effectively addressed.
And perhaps having such conversations is one of the most useful first steps forward at this point, so that students, staff, instructors, administrators, and others, are aware of the potential dangers, and are motivated to work together to navigate them.
Because the last thing we need is a suite of AI and mental health lawsuits where the plaintiffs are the parents of students, and the defendants are the universities that provided them with the tools that led to harm.
According to the Bloomberg article, OpenAI Chief Executive Officer Sam Altman told reporters at a recent dinner that cases of mental distress associated with ChatGPT use are unusual, estimating that fewer than 1% of ChatGPT’s weekly users have unhealthy attachments to the chatbot. However, the article goes on to note that “ChatGPT is the world’s fifth-most popular website, with a weekly user base of more than 800 million people worldwide.” Given these numbers—where even a small fraction of 1% of 800 million users translates into hundreds of thousands of people—the article estimates, based on OpenAI’s own figures, that there could be up to “560,000 people exhibiting symptoms of psychosis or mania weekly, with 1.2 million demonstrating heightened emotional attachment and 1.2 million showing signs of suicidal planning or intent.” If these are just the cases that can be inferred from observed and classified behaviors, the chances are that the true numbers are much higher.
At the time of writing, the California State University system FAQ on ChatGPT EDU does not mention mental health, although CSU’s guidelines on Ethical and Responsible Use of AI for Students do note that “AI platforms (ChatGPT, Copilot, Gemini, etc.) should not be considered for medical or psychological advice.” ASU—another prominent user of ChatGPT EDU (and my own institution)—does not mention mental health in its FAQ, but did publish a news story in September cautioning against using AI for emotional support. The Wharton School at the University of Pennsylvania does not explicitly mention mental health on its ChatGPT EDU help page, but researchers at Wharton are exploring potential links. Similarly, Harvard University does not mention mental health issues on its ChatGPT EDU page, but researchers at Harvard are actively exploring associations between AI use and mental health. These are not the only universities offering ChatGPT EDU to students, but they are amongst the most prominent.
The question of whether to add additional guardrails to educational versions of AI platforms is a complex one: while extra guardrails potentially reduce risks, they also reduce functionality compared with the freely available platforms outside the confines of universities.


