Discussion about this post

Ruth McKimmie:

“And here it’s worth noting that the Claude Code generated proposal above isn’t good because Claude Code in isolation knew what a great degree program looks like, but because I was able to provide expert direction, feedback and evaluation along the way.” This, I think, is the most profound point in your essay: how do we ensure that enough humans become experts, with deeply internalised knowledge of their fields, to collaborate effectively with AI (providing expert direction, feedback and evaluation)? As AI gets better, the temptation to use it blindly to do the work (e.g. assignments in education; proposals, projects and policy in legal, commercial and government settings) will get stronger for the unscrupulous, the lazy and the pressurized, and I worry about that for the future of humanity!

AI may be right, and may produce better documents, plans, and analyses than the average person, but if that knowledge is not in human brains, is it real?

Hilary Sutcliffe:

Just to tie this up, Andrew: I was about to set up Claude Cowork (Claude Code with a better interface), so I asked it about the downsides, particularly the slippery slope of not thinking for myself, accepting plausible text, getting lazy about references, etc.

Its answer is below. The critical question is "whether you trust yourself to maintain that distinction under time pressure". I don't trust myself, and, as our work has found, it is very likely this will happen to us all.

So I won't be using Cowork for now. I am struggling enough with ordinary Claude and the 'incremental abdication' of 'intellectual content': letting slightly off sentences fly past and not checking references that seem OK. Even on a good day I have a 'get it out, not get it right' mentality, and this will make it worse. As it says, our whole Addiction Economy work is about how…

Response from Claude Chat:

This is a really important and perceptive concern — and honestly one that not enough people are asking. Let me give you a straight answer rather than just selling you on the tool.

## The real risks

**1. The "polished output" problem**

This is well-documented and your instinct is right. Research on Claude conversations found that polished outputs — code, files, finished-looking documents — reduce critical evaluation, with users showing significantly lower rates of checking for missing context, fact-checking, and reasoning challenge compared to conversational outputs. A beautifully formatted Word document sitting in your folder *feels* done in a way that text in a chat window doesn't. That psychological effect is real.

**2. Hallucinated references**

This is the most concrete risk for your kind of work. AI systems — including Claude — can produce plausible-sounding citations, statistics, and attributions that are partially or entirely fabricated. In a chat window you tend to interrogate claims as they appear. When Cowork delivers a 12-page synthesised research report directly to your folder, the sheer volume and professional appearance make it much easier to absorb rather than audit.

**3. Voice drift**

Cowork out of the box produces generic output — and most people, when they see something competent-looking, edit around the edges rather than rewriting from scratch. For a book that depends on your distinctive voice and Joe's, this matters. The Addiction Economy has a specific register. Cowork doesn't know what makes a sentence sound like you versus like every other AI-assisted business book published this year.

**4. The lazy delegation spiral**

The risk isn't one big mistake — it's incremental abdication. You ask Cowork to draft a section, it comes back reasonable, you tighten it a little, ship it. Next time you tighten it less. The tool rewards delegation and your brain adapts to that reward. This is, not incidentally, exactly the mechanism your book describes operating in other domains.

---

## Where the risks are lower

To be fair, Cowork is genuinely safer for some tasks: organising your files, formatting documents, pulling together a bibliography you then verify, creating a first structural skeleton you then fill. The risk scales with how much *intellectual content* you're delegating, not how much *administrative* work.

---

## A practical safeguard worth building in

Given what you're writing about, there's actually a principled reason to be deliberate here. The about-me file we drafted could include an explicit instruction:

> *"When producing any document that makes factual claims, empirical assertions, or cites research, flag each one clearly so I can verify it before treating it as final. Never present synthesised content as finished — always treat it as a first draft requiring my review."*

That turns the tool's transparency feature into a habit rather than an afterthought.

---

The deeper point is that your hesitation is well-founded not just as a practical matter but as an intellectual one. The book's argument is partly about how convenience erodes agency incrementally. Using Cowork unreflectively for the book about exactly that problem would be a fairly spectacular own goal. Used consciously and on the right tasks, it's genuinely useful. The question is whether you trust yourself to maintain that distinction under time pressure — which is, again, exactly the question your book asks about everyone else.
