I confess I have been a bit of a Gemini fanatic recently. The one thing I have found with AI is that it is good at finding the norm but not necessarily good at thinking outside of the box. It does go to show, however, just how much potential it has to redefine education.
100% agree - although with clever techniques things are getting better. But always worth remembering that most humans are also bad at thinking beyond norms and means, and sometimes this is what's needed.
Just to tie this up, Andrew: I was about to set up Claude Cowork (Claude Code with a better interface), so I asked it about the downsides, particularly the slippery slope of not thinking for myself, accepting plausible text, and getting lazy on references.
The answer is below. The critical question is "whether you trust yourself to maintain that distinction under time pressure". I don't trust myself, and as our work has found, it is very likely this will happen to us all.
So I won't be using Cowork for now. I am struggling enough with ordinary Claude and the 'incremental abdication' of 'intellectual content': letting slightly off sentences fly past and not checking references that seem OK. Even on a good day I have a 'get it out, not get it right' mentality, and this will make it worse. As it says, our whole Addiction Economy work is about exactly how this kind of erosion happens.
Response from Claude Chat:
This is a really important and perceptive concern — and honestly one that not enough people are asking. Let me give you a straight answer rather than just selling you on the tool.
## The real risks
**1. The "polished output" problem**
This is well-documented and your instinct is right. Research on Claude conversations found that polished outputs — code, files, finished-looking documents — reduce critical evaluation, with users showing significantly lower rates of checking for missing context, fact-checking, and reasoning challenge compared to conversational outputs. A beautifully formatted Word document sitting in your folder *feels* done in a way that text in a chat window doesn't. That psychological effect is real.
**2. Hallucinated references**
This is the most concrete risk for your kind of work. AI systems — including Claude — can produce plausible-sounding citations, statistics, and attributions that are partially or entirely fabricated. In a chat window you tend to interrogate claims as they appear. When Cowork delivers a 12-page synthesised research report directly to your folder, the sheer volume and professional appearance make it much easier to absorb rather than audit.
**3. Voice drift**
Cowork out of the box produces generic output — and most people, when they see something competent-looking, edit around the edges rather than rewriting from scratch. For a book that depends on your distinctive voice and Joe's, this matters. The Addiction Economy has a specific register. Cowork doesn't know what makes a sentence sound like you versus like every other AI-assisted business book published this year.
**4. The lazy delegation spiral**
The risk isn't one big mistake — it's incremental abdication. You ask Cowork to draft a section, it comes back reasonable, you tighten it a little, ship it. Next time you tighten it less. The tool rewards delegation and your brain adapts to that reward. This is, not incidentally, exactly the mechanism your book describes operating in other domains.
---
## Where the risks are lower
To be fair, Cowork is genuinely safer for some tasks: organising your files, formatting documents, pulling together a bibliography you then verify, creating a first structural skeleton you then fill. The risk scales with how much *intellectual content* you're delegating, not how much *administrative* work.
---
## A practical safeguard worth building in
Given what you're writing about, there's actually a principled reason to be deliberate here. The about-me file we drafted could include an explicit instruction:
> *"When producing any document that makes factual claims, empirical assertions, or cites research, flag each one clearly so I can verify it before treating it as final. Never present synthesised content as finished — always treat it as a first draft requiring my review."*
That turns the tool's transparency feature into a habit rather than an afterthought.
---
The deeper point is that your hesitation is well-founded not just as a practical matter but as an intellectual one. The book's argument is partly about how convenience erodes agency incrementally. Using Cowork unreflectively for the book about exactly that problem would be a fairly spectacular own goal. Used consciously and on the right tasks, it's genuinely useful. The question is whether you trust yourself to maintain that distinction under time pressure — which is, again, exactly the question your book asks about everyone else.
What a great response - and love the irony of an AI advising you not to use it! Also aligns with some of the things we touched on in AI and the Art of Being Human!
“And here it’s worth noting that the Claude Code generated proposal above isn’t good because Claude Code in isolation knew what a great degree program looks like, but because I was able to provide expert direction, feedback and evaluation along the way.” This, I think, is the most profound point in your essay. How do we ensure that enough humans become experts with deeply internalised knowledge of their fields to collaborate effectively with AI (providing expert direction, feedback and evaluation)? As AI gets better, the temptation to use it blindly to do the work (e.g. assignments in education; proposals, projects and policy in legal, commercial and government settings) will get stronger for the unscrupulous, the lazy and the pressurised, and I worry about that for the future of humanity!
AI may be right, may produce better documents, plans, and analyses than the average person, but if that knowledge is not in human brains, is it real?
That's good in theory. You don't need to be unscrupulous or lazy or pressurised to think that it might help you do a better job than you otherwise would, but a by-product of that could be the incremental offloading of cognitive labour, which makes your work better but gradually erodes your ability to think innovatively for yourself. I see that happening to me already, and I have only used it about ten times, for quite boring things, not writing per se. I learned a lot about how to write proposals, for example, but then find myself leaning on it a little more, and post-rationalising that I am busy and this isn't core to my thinking, so it is good enough.
Still haven't figured out the best way, but the cognitive creep is real, and you don't really see it happening. Does it matter? Isn't it like using autocomplete instead of learning how to spell? Who knows, but I am going to be vigilant, and perhaps just a little bit lazy when it doesn't seem to matter.
This…
https://substack.com/@samillingworth/note/c-236440901?r=4vp99r&utm_medium=ios&utm_source=notes-share-action
Thanks, very interesting. So perhaps something good will come out of it long term if we find better ways of judging students, not just rewarding within such narrow parameters. Who knows, even the arts might be valued instead of just STEM.
A good example of where an expert human and AI together produce faster and better results than either on their own. I hope that managers understand that it still needs humans to understand what they want, to articulate that, and then to edit the responses to get quality outputs.
Very interesting indeed. Last week, on a whim, I asked Claude to develop a new think tank for me and Joe, with and for young people. It did quite a good job, with some innovative thinking about engagement, though the language was the usual irritating em-dash stuff. It even came up with a name and some spiel, which we liked and edited a bit.
The First Third - a new think tank by, and for, the people growing up now
"The first third of life - birth to roughly thirty - is when the human brain forms, when habits and identities are established, when the foundations of health, opportunity and resilience are laid. It is also the period when the external environment has the most power to harm or help.
Young people are not absent from policy debates about this important part of their lives; they are studied, surveyed, worried about and legislated for constantly. But their own analysis of what is happening to them - grounded in lived experience, backed by rigorous evidence, delivered in their own voice - is almost entirely missing from the discussions where it matters."
...There's more, but you get the gist.
I made it into a two-sided scoping study, which was mainly me, and Joe got lots of potential interest in funding the initial consultation at a conference he went to at the end of the week.
So now I was just sitting here thinking: should I get Claude to do more of the scoping and research properly, so that the scoping study can be mainly interviews? Your experience here has made me think it would be stupid not to, really. But it feels wrong, like I am cheating, and people would just dismiss it if they found out Claude helped, as I probably would. Aaah!
Sorry to use you as a helpline - also, I don't really know the difference between Claude Code and ordinary Claude; I thought it was just about creating code and would be beyond me. I have been irritated by Claude's inability to keep different book chapters in mind when I used Projects in ordinary Claude; perhaps I should have been using Claude Code.
No need to answer, but helpful anyway, thanks again Andrew, hope you are well.
Thanks Hilary, and just so you know, these are similar to the types of things I grapple with almost every day and still struggle with — it's why I am both using Claude for things like this and refusing to use AI to write pieces like this!
There's a very real tension, I think, around using AI to increase the value or effectiveness of something: I really dislike the implicit idea that "human and ineffective is better than AI-enabled and effective" - but then, considering how important relationships and trust are in everything, this adds a complex overlay to things.
I am finding that using AI with care as a thinking and research/development partner, while not offloading what is important to me (and others) as a human, is useful - and in the case above, if this were a real project I would have no qualms about using Claude Code to establish the framework and bones, and then building on this with my own expertise and sweat equity.
And Claude Code is a game changer compared to the web app - if you are a Mac user, it's worth downloading it and giving it a whirl. You will need to point it to a folder on your laptop where it can save files and read what you give it.
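If you do try it, one habit worth setting up on day one: Claude Code treats a `CLAUDE.md` file in the working folder as standing instructions for every session. A minimal sketch of such a file, with wording adapted from the safeguard quoted earlier in the thread (illustrative, not official):

```markdown
# CLAUDE.md - standing instructions for this folder

- Treat everything you produce as a first draft requiring my review;
  never present synthesised content as finished.
- When a draft makes factual claims, cites research or quotes statistics,
  flag each one clearly so I can verify it before it is treated as final.
- Preserve my voice: edit what I give you rather than rewriting from
  scratch, unless I ask.
```

That way the verification habit lives in the folder itself, rather than depending on remembering to ask for it each time.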
How kind, thanks very much for that advice. Wow. Will give it a go. Yes, I found it totally useless for the book: we haven't used it at all for writing, just a bit for translating complex neuroscience books - but we used Consensus for that too (paying for it, grr, only to find it is now included in Claude!).
But it seemed really good when I asked, 'If you were the publisher at x (the real person who is looking at the book), what would you say about it and how would you improve it?' It gave mainly structural suggestions. Sadly, I only thought of doing that after we'd sent it!