Why I'm falling out of love with Claude
I hope it's just a phase, but Anthropic, what have you done to my trusted AI writing companion?!
I should start out by saying that I strenuously avoid anthropomorphizing AI apps. I don’t personalize them by giving them names. I always refer to them as “it.” And I don’t have any illusions of them being self-aware or conscious in any meaningful way — at least, not yet.
But I do have a relationship with the LLMs I use. And this is built around how it feels to use them, and how using them makes me feel — much as I do with other technologies I use.
As a writer, that relationship is deeply intertwined with the character that’s embedded in how an LLM-based AI communicates with me. And when that character changes, it changes the relationship — and my ability to work effectively with a platform.
And this is exactly how I’ve been feeling about Anthropic’s Claude this past week.
A year ago I fell in love with Claude Opus 4.5 — not in the romantic sense (just to be clear), but in the sense of finding an AI that it excited me to work creatively with, and one that elevated and amplified what I could achieve.
I was working on the manuscript for AI and the Art of Being Human at the time, and having a hard time with ChatGPT. The book was a very intentional collaboration with AI, and as part of this I needed a writing partner that could capture the texture — the soul — of what my co-author Jeff Abbott and I were trying to convey.
And while technically competent, ChatGPT simply could not do this. It was flat, mechanical, painful to read, and even more painful to work with as a writer.
Reading what we were co-producing just made me miserable.
And so I started playing with Claude, and everything changed. Here was a technological writing partner that seemed to be able to reach into my soul as a writer and capture the creative essence and feel of what I was striving for. Claude plus me had more style, more depth, and more emotional resonance (surprisingly) than either of us could achieve alone.
It was the start of a writing partnership/relationship that’s lasted the best part of a year.
Then Opus 4.7 came along, and everything changed.1
Gone was my AI soul-mate, and in its place was a soulless machine that seemed incapable of breaking away from interminable AI clichés in what it produced — all those turns of phrases, the rhetorical moves, the stylistic patterns, that make some LLMs OK technical writers and appalling communicators.2
What’s worse, the more I tried to train it on my voice and place stylistic guides and guardrails in place, the worse it got.
Now, I realize that I may be being a little over-sensitive here. But words, and how they are used, and the meaning and stories they weave, are important to me. And somehow, what had made Opus 4.5 special seemed to have been replaced by a mechanical — and not so lovable — stranger.
In other words, the very thing that had led to me moving to Claude in the first place was now happening to the AI I thought I could trust.
At this point, I can imagine you rolling your eyes and mumbling something like “get over yourself,” or something much stronger! And you’d be right to. LLMs are, after all, just machines.
At the same time, I think there’s something here that goes beyond my over-delicate sensitivities. And that is that how these models behave — and the relationships we develop with them — matter. And that they matter beyond the technical competencies that can be measured by doing math, or solving complex problems, or writing code.
Of course, there are many applications of AI that are purely utilitarian, and where it probably isn’t important if the underlying LLM undergoes a character change.3 But this is a technology whose transformative power is predicated on how we engage with it relationally. And that means that, for many users, the character or “feel” of an AI is important — sometimes critically so.
We know from human relationships — whether they are human-human, human-animal (or human-plant to be inclusive) or human-machine — that when the thing or person we are in a relationship with changes, connection and trust become stressed, and sometimes break. With AI, this implies that if you rely on LLMs in your personal or professional life in ways that depend on a relationship that’s based on how the AI behaves, not just what it does, changes to the model that affect this can be disruptive. And this raises potential issues that I suspect often get glossed over.
Of course, that we do build relationships with inanimate objects or technological devices is something that most people understand intuitively. We become attached to the vehicles we drive, or a favorite piece of clothing, or a particular type, make, or vintage of phone. But it’s one thing to be attached to a car or have a favorite sweater that you can’t bear to part with. It’s completely different when the thing you have the relationship with can talk back to you, reason with you, share your hopes and fears (or at least do a good impersonation of this), inspire you, and bring you a sense of joy and satisfaction that’s closer to what we’re used experiencing to with animals and people rather than a machine.
Whether this is a big deal or not, I honestly don’t know. But I suspect it might be — although one that risks going unnoticed under the dazzle of technical AI fireworks. And if this is the case, it might be worth the big AI companies paying more attention to both character and character constancy as increasingly powerful AI models emerge.
In the meantime, I’m stuck with teaching Claude how to be Claude again. And that sucks!
Postscript
This post is specifically about changes in character when LLM-based AIs are updated or refined. But as I was writing it, stories were emerging around how Claude Code was not behaving as expected in a number of other ways — so much so that Anthropic issued an apology wrapped in an explanation a few days ago.
I’m not sure how closely my character-based experiences and those associated with functionality and technical competence are related. But it wouldn’t surprise me if they are.
In response to the concerns we were seeing, my colleague and podcast co-host Sean Leahy and I sat down to record an episode of Modem Futura focused on Opus 4.7 and Anthropic’s Mythos a couple of days before I finalized this post. And as part of this, Sean asked Claude for some background and pointers for the episode — ironically, using Opus 4.7.
Claude, for reasons of its own, decided to provide this as an html file, and it’s both informative and meta enough that I thought I’d link to it here.
I’ve left the web page exactly as Claude produced it — including the note from Claude that this is “Prep only, not for read-aloud” and the lovely “not for Andrew to see yet.” 😊
The timeline is revealing of just how dependent modern LLMs are on updates and tweaks in how they behave. But also, the language Claude uses here is the epitome of everything I’ve been railing about above — competent, informed, yet excruciating to read — at least to me.
Of course, I could just be developing an allergy to AI-speak. But that’s probably a story for another day …
Opus 4.6 came in between, which was where things started going wrong i suspect, but it was with 4.7 that the cracks started to show!
Like I often do, I passed the final draft of this by Claude to catch grammatical errors and other things that slipped my editorial net. It sounded positively offended that I should criticize it’s prowess as a writer — which made me glad as it was actually showing some character as it did!
Re-reading this, I’m not so sure, as LLMs are a relational technology whether we’re doing math with them, coding, solving the next big scientific problem, or asking what we should make for dinner. And because of this, the ways they use language changes how we respond to them, the degree to which we trust them, and our ability to get the most out of them through the medium of language.


