19 Comments
Ryan Bromley's avatar

I enjoyed reading your process. I wonder if you've heard of https://prism.openai.com/ - OpenAI's tool for writing academic papers? Perhaps it might be of interest to you.

Deborah Osborne's avatar

I think your comment about the value to the public good is spot on. I wrote a book using AI in a similar fashion, to be published by Routledge, for exactly that purpose.

Michael G Wagner's avatar

Your observation that the “process mimicked collaborating with a talented grad student or postdoc” is spot on. That includes the feeling that the paper is not entirely yours. Same thing.

I would also argue that using grad students’ work for profile padding has been a common practice for decades, if not centuries. That isn’t new either.

What AI exposes is really a systemic problem in academia in general. We are measuring the wrong things to define success.

ElandPrincess's avatar

"Rather, because they mimic characteristics that are often associated with trustworthiness — not out of maliciousness but simply because that’s the nature of LLM-based AI models — these models have the capacity to slip through our epistemic vigilance systems." - 💯 well articulated

Peter Buck's avatar

I read your work as saying that AI doesn’t deceive—it arrives with trust cues humans evolved for other humans. The real concern is how human judgment can be bypassed when those cues fall outside what our epistemic vigilance is calibrated to evaluate.

Where my thinking diverges is less on diagnosis and more on response. You name the failure mode—epistemic bypass. My work at BeHuman.wtf asks a related but different question: *what do humans need to stay oriented when systems move faster than meaning?*

You’re describing the disease; I’m trying to name the vaccine. You say epistemic vigilance; I say arrival rituals. Adjacent terrain, different vectors—and possibly a useful place to meet.

My POV: AI can be the glue for humanity — if we design for human orientation, not just system speed.

Nicole Bowens, PhD's avatar

If you started your career using LLMs in grad school, would you ever develop the skills needed to revise and redirect AI drafts? I don't think so. It's through working directly with the sources and putting the words together yourself that you really see the shortcomings in the literature, or the gaps you haven't properly addressed.

Alex Boss's avatar

The next 5 years are going to move so fast that LLMs will not be the same at all, grad school won’t be the same at all and even the jobs people do will not be the same. Then the 5 years after that will include 10X more change.

Andrew Maynard's avatar

That's a huge concern among many people, myself included, and of course it's implicitly reflected in the process here. The challenge, though, is that LLMs and more advanced forms of AI aren't going away, so how do we adapt?

Alejandro Piad Morffis's avatar

Exactly my concern, and very similar to what we're seeing in software development. I find myself getting not the mythical 10x but perhaps a 2 to 3x improvement in coding productivity. But I've been coding professionally for over 15 years, and I'm confident in my ability to remain critical and thorough enough to actually get the best out of AI. My students aren't getting remotely the same results. I think it's mainly because they jump too fast to the conclusion: they accept the AI output too easily, and it passes by their untrained defenses, to paraphrase your own words. I'm terrified. How on earth do we teach in this new paradigm? It's paradoxical in the sense that the very skills needed to get the most out of AI are the ones most hindered by using AI too soon.

na's avatar

The question then becomes: how would you convey to students the need to hone these skills despite their AI workflow?

Alejandro Piad Morffis's avatar

Indeed. Telling them "trust me, it's better this way" is like telling a teenager "believe me, it's better if you wait a bit before trying out booze or sex."

Justin's avatar

I'm new here. Thanks for your work! Do you mind if I load up this recent Cognitive Trojan Horse into NotebookLM (not for Audio Overviews, which I see you've covered) but for infographics/slides?

Andrew Maynard's avatar

Oh, of course - please do! I hope all of my published material is used as widely and as creatively as possible; that's what makes it useful.

Justin's avatar

Quite timely. I'm not in research anymore (it was a brief stint anyway). I followed a similar process for my most recent short essay, "murals for your mind" (that is, the back and forth with AI: steering it, vetoing it, welcoming insights), and am working to write that up (and by working, I mean working with Claude). And your explorations fit in line with my "whose bread" piece.

Madeleine Champagnie's avatar

I’ve long been wondering about when and how academia would do this. Fascinating. It’s still the case that the human has to have the skills, ideas and knowledge to be able to have that level of conversation with AI, and to produce an output which is valuable.

In some ways nothing has changed, which is reassuring, while everything is changing.

Maybe we need to focus not so much on "spotting slop" as on "understanding quality". Just because something is produced by or with AI does not automatically make it "slop". Humans produce a lot of rubbish too…

Justin's avatar

Thank you for pointing to this!

Madeleine Champagnie's avatar

In my head it’s a bit like the emergence of plastic as a material. Extremely useful, multiple uses, can be utter tat, disastrous for the environment, but can also be used to make exceptionally beneficial artefacts.

I’m beginning to see AI art & music which is objectively very good, to the point where I don’t care that it’s AI. It’s just very good and well done to the human who crafted it; it doesn’t matter how much or how little of it “is AI generated”. It’s just *good*.

It will be interesting to see how it all pans out…

Andrew Maynard's avatar

I like this analogy

Brice Barrett's avatar

The tension you describe in 'cracking' to write an academic paper is the perfect illustration of the Infrastructure Gap. As you’ve explored in AI and the Art of Being Human, we are in a race between 'exponential code' and 'linear institutions.'

In my fictional work The Digital Prophets series, I see this as the First Domino: when the formal systems of knowledge (like academia) can no longer parse the speed of the machine, we risk outsourcing our discernment to AI Proxies. Are we 'co-creating' a future with these systems, or are we accidentally building the foundation for an Algorithmic Sovereignty that won't need our papers at all? 🕯️🧭