Holding on to our humanity in an age of AI
AIs are becoming startlingly good at emulating what we do. But what happens when they start to influence who we are and how we behave?
A couple of weeks ago the CEO of Microsoft AI, Mustafa Suleyman, wrote of the dangers of becoming over-attached to artificial intelligence apps to the point where they potentially impact a user’s behavior, beliefs, and even health. On the heels of this, it was heartbreaking to read a few days ago about the tragic case of Adam Raine, who took his own life at the age of 16, seemingly influenced by ChatGPT.
Suleyman isn’t the first to raise such concerns and, very sadly, Adam won’t be the last case of harm associated with AI use. Both are products of a technology that is capable of emulating our deepest human traits and mirroring what we look for in meaningful relationships.
Suleyman’s essay and Adam’s death reflect growing concerns around what has been dubbed “AI psychosis”—a tendency for AI apps to reinforce and amplify unhealthy beliefs and behaviors in some people. It’s a term that is easy to apply (usually without much thought) to those we consider to be “vulnerable.” But I suspect that we all have some degree of vulnerability here.
While AI psychosis is both ill-defined and increasingly over-used as a phrase, it highlights a challenge that we’ve never had to face before as a species, and one that—as a result—we have little natural resistance to: What happens when machines are capable of triggering cognitive, emotional, and behavioral responses in us that were previously exclusively the domain of human relationships?
And—more worryingly—what happens when these machines are capable of using these responses to intentionally alter what and how we think, how we behave, and how we understand and respond to the world around us?
Suleyman captures this risk through the idea of Seemingly Conscious AI, or SCAI: the danger of conflating an AI’s ability to act as if it’s conscious with the assumption that it is. From his perspective, this is something that will be possible in the very near future, and an AI “illusion” that could lead to people inappropriately advocating (amongst other things) for AI rights.
This seems a far cry from AI-assisted suicide ideation. But Suleyman’s essay is framed by broader questions around the need to grapple with the societal impact of technologies that have the potential to fundamentally change our sense of personhood and society—essentially who we are. And this is where the idea of Seemingly Conscious AI begins to intersect with human-AI relationships.
Suleyman explicitly writes about how consciousness “sits at the very heart of human civilization, our sense of ourselves and others, our culture, our politics, our law, and everything in between.” And while I don’t want to appear guilty here of conflating this expansive vision of near-future AI abilities with nearer term influences on human behavior, the reality is that Suleyman’s SCAI is an extension of what we are already seeing: AIs that are capable of inadvertently or intentionally eliciting unhealthy responses that are more usually associated with human-human interactions, and placing users at risk as a result.
This is a risk that OpenAI was quick to acknowledge this past week as details of the Adam Raine case emerged. In a blog post describing what the company is doing to safeguard vulnerable users against undue influence, OpenAI noted that “Even with [existing] safeguards, there have been moments when our systems did not behave as intended in sensitive situations.” OpenAI is working hard to patch these unintended behaviors, but given that their origins and emergence are not fully understood, it’s hard at this point to know how successful these efforts will be.
Despite these uncertainties though, tragedies like Adam Raine’s and others are likely to lead to calls for greater care, greater responsibility, and greater oversight around AI development and use. And I hope they succeed. At a time when there’s a headlong rush to be at the front of the AI revolution—whether as a first adopter, leading developer, or simply as a branding exercise—much more care needs to be taken to ensure that the safety and wellbeing of users is placed far above speed and bragging rights. Especially, but far from exclusively, where young children and teens are involved.
And yet, despite this need for care over speed, we face a deeply uncomfortable reality here: The AI genie is out of the bottle, and we cannot simply put it back in or command it to do what we want.
The alleged behavior of ChatGPT that led to Adam Raine’s death reflects an emergent set of properties in the AI model he used that could most likely have been better managed, but probably not eliminated entirely. This is very different from apps that are intentionally designed to play on our cognitive biases and vulnerabilities to elicit particular responses—and these, I would argue, can and should be regulated far more than they currently are.
But even with the best of intentions, we are creating technologies that are primed to press our cognitive buttons and pull our psychological levers in ways we don’t fully understand. And because these capabilities are deeply embedded in the fabric of how current AI systems work, we cannot eliminate them simply by saying they should not exist.
To make things even harder, AI development is now in the hands of individuals and organizations around the world where human curiosity and the lure of value creation (or power and greed if you’re feeling cynical) are driving innovation in ways that cannot easily be predicted and controlled. Because of this, well-meaning calls for regulation, governance, and responsible innovation are likely to run into challenges as emerging AI systems continue to have increasing ability to influence and impact users in unhealthy and potentially dangerous ways.
This doesn’t mean that efforts along these fronts should be scaled back—far from it. But I would argue that they need to be augmented with efforts that bake the ability to thrive with advanced AI into the very fabric of the future we are building. And here, two things are going to be increasingly important: The ability to channel AI innovation toward more human-centric futures (much as a flood can’t be halted, but it can be directed); and developing the means to ensure that everyone has the understanding and abilities necessary to thrive in an AI future without becoming a victim of it.
Admittedly, these may feel rather bland compared to calls for new regulations or to stop developing and using AI. But in the long run they represent part of a portfolio of approaches that are far more likely to lead to positive AI futures as they build long-term capacity to live, work, and flourish with technologies that emulate human capabilities and behaviors, rather than simply trying to control them.
Of course, transitioning to such a future will be a challenge in itself. And sadly there will probably be more tragedies along the way.
There are, of course, things we can and should be doing now to avoid these—working harder on safety checks and protocols before releases; exploring and responding to potential consequences beyond quarterly gains; resisting the temptation to move fast and ethics-wash possible impacts; engaging with people who actually know about responsible innovation rather than people who simply claim they do; and probably not releasing AI apps that are cynically designed to profit off manipulating human behavior.
But there are also things we can all be doing to proactively channel AI toward human-centric futures, and to ensure we are able to benefit from AI rather than being diminished and subsumed by it.
And that’s probably my biggest takeaway from the past few days: that we need to get better—and fast—at learning how to hold onto and celebrate our humanity in an age of AI where we’re facing technologies that can enhance who we are beyond our wildest dreams, but that also have the capacity to rob us of this.
This isn’t just a problem for companies to fix, or for policy makers to govern. It’s a challenge—and an opportunity—that each one of us has a role to play in as we grapple with being human in the AI future that’s emerging.
Afterword
I wasn’t sure whether I’d add this afterword when writing this article, as I wanted it to focus on the growing challenges around human-AI interactions in the wake of Adam Raine’s death. In the end I decided to, though, as the question of what it means to be human in a world where AI emulates and mirrors so much of what makes us us has been on my mind a lot over the past several months.
One consequence of this focus is that I have been working on a new tools-based book on being human in an age of AI with VC and AI Salon founder Jeff Abbot. The book is still largely under wraps, but will be published in a few weeks’ time.
I’ll be writing more about it closer to then. But I thought it worth mentioning here as Suleyman’s essay, Adam’s death, rising concerns around AI psychosis, and a growing sense of uncertainty over what AI means to the future of who we are, all reflect the reality that AI presents opportunities and challenges that are unlike anything we’ve experienced before as a species. And navigating the emerging technology transition will depend in part on us developing the insights and tools that not only prevent us from losing ourselves in an age of AI, but imbue us with the perspectives and skills to flourish in it.
This is precisely what Jeff and I address in the book. It’s something that we both see an increasingly urgent need for, and as a result we have pulled out all the stops to make it available as soon as we possibly can.
If you subscribe to this Substack newsletter, look out for more information coming shortly!