AI misuse in student-advisor collaborations. Part 1
What do you do when your PhD advisor insists on using AI in ways that put you in a difficult position?
Note: If you’re a PhD student who is grappling with an advisor who you feel is making your life challenging with how they are using AI, there’s an opportunity to anonymously share your experiences below.
If you work or study in a university, it’s hard to avoid the topic of AI and education. How do you combat AI-assisted “cheating”? How do you AI-proof your course? How do you accelerate learning using AI? How, as an instructor, can you offload the “tedium” of teaching to an AI bot? And so on.
But one topic I haven’t come across until recently is the potential misuse of AI by PhD advisors when collaborating with their grad students. And it’s one that I find deeply concerning.1
Imagine a scenario where you are a PhD student who’s been working for weeks on a paper you’re writing with your advisor.2
Now imagine that your advisor returns your carefully crafted draft, and you don’t recognize it. It’s been extensively rewritten, it no longer reflects your voice or ideas, it’s full of errors, and half the citations are wrong.
Your work, it turns out, has been run through the academic mangle of ChatGPT by the person who holds your academic career in their hands. And you’re expected to pick up the pieces.
This is a hypothetical scenario. But it does reflect behaviors that I’m increasingly hearing about on the academic grapevine. And it’s reflective of ways of using AI that potentially place grad students in a very precarious position personally and professionally.3
And yet, despite the occasional anecdote and grapevine-whisper, there is remarkably little known about the nature and extent of such uses of AI in PhD-advisor collaborations—or similar collaborations where there’s a clear power differential between the people involved.
Because of this, rather than jumping in feet-first, I wanted to gather a bit more evidence—albeit anecdotal.
And so, if you’re a PhD student who has experienced AI-related challenges when collaborating with your advisor, I’d like to hear from you.
This isn’t a research project, and the data won’t be analyzed and published—it’s simply a way of getting a better sense of the landscape around the use of AI in student-advisor collaborations so that potential issues can be surfaced.4 It’s also fully anonymous.
If you have something you’re willing to share here, please follow the link below to an anonymous Google Form.
Depending on the responses, I’m intending to write more about what the challenges look like here—and how they might be addressed—in a later post.
In the meantime, for anyone working with grad students, it’s worth asking how your use of AI impacts your students in ways you may not be thinking about—including whether it’s robbing them of their dignity, denying them learning opportunities, making their lives unnecessarily harder, or even placing them in a position where their academic careers could be in jeopardy.
More to come …
I focus on PhD-advisor collaborations here, but the same holds for undergraduates and postdocs working with faculty who have a say in their academic and professional standing and career.
This is common, and expected of PhD advisors.
While not directly aligned with this hypothetical, there are anecdotal cases online of people struggling with similar challenges. In a post on Academic Substack from a couple of years ago, a user wrote about the challenges of working with a co-author who, to their dismay, quite happily ran their work through ChatGPT. And just a few months ago a PhD student wrote about what they considered inappropriate use of AI by their advisor—again on Academic Substack. In this case the student raised the issue with their advisor. The response: “ChatGPT is fully reliable and shame on me because I don’t produce results in 5 hours as ChatGPT does, and generally if I ever don’t finish an analysis in time for a paper, he will be publishing AI chatbot output results instead of my work.”
I did run this through our IRB. Given the journalistic nature of the exercise, it was confirmed that it does not fall under IRB requirements.



Hey Andrew, big fan of your work! Love the consistency and quality of your content! :)
I wonder if this is really a new problem. Bad advisors have always existed. So the tool didn't create the problem. In fact, given how capable AI tools are, it may be a huge improvement in many cases! :)
Also, this seems to be a very unlikely scenario. The advisor needs to be engaged enough to actually bother to edit the student’s work, but lazy enough to dump it into ChatGPT, and careless enough to not even glance at the output, and incompetent enough to not use some of the more advanced models which are actually quite competent (as you have shown here), yet authoritative enough to INSIST the student keep all the hallucinated nonsense. Does this academic advisor even exist? If so, how likely is this scenario?
While I get the imbalances in academia, don't we already have ethics committees and accountability structures to deal with such sloppy and unethical behavior (not necessarily related to AI, but just in general)? Btw, this does not seem any different than a professor using AI to grade their students' assignments (in a similar sloppy fashion). In fact, when you have 100 students, it's far more likely to overlook the AI output vs when you have one PhD student.
In the end, the most likely scenario is that PhD students and advisors are using these tools responsibly, as a great productivity aid. And I wonder if this sort of talk creates more panic about edge cases that are highly unlikely and catastrophizes a tool that most people are using just fine.
But I will be curious to see what you find out in your survey! :)
In my experience, the entire organizational structure of academia is deeply problematic because it relies on an organizational model that treats humans as objects in a factory. AI is potentially upending this top-down control-flow model because information is now abundant, and so the academic silos and adversarial ecosystems must adapt toward a distributed paradigm where we teach students how to understand information dynamics and parametric systems to weave together threads across disciplines. I don't think many tenured faculty or academic administrators have the cognitive flexibility, emotional intelligence, or motivational incentives to deal with this changing landscape. The dysfunctional systems that are operating in governance models around the globe are echoes of the dysfunctional systems of academia.

As a grandmother in Texas, I was deeply dismayed to experience such dysfunction across domains of engineering and creative arts. My conclusion is that most academic institutions have no understanding of the concept of ethical integrity; instead, they are focused on legal policies and on projecting dominance and superiority to increase rankings, impress alumni, and attract students, and therefore students learn to model that shallow behavior. It's a shallow learning model, and AI is simply amplifying the flaws that already exist. New organizational models that prioritize holistic information flows would recognize the need for participatory learning experiences where students are encouraged to question the status quo and to use AI to weave together diverse perspectives and understand the nature of complexity; we are all interdependent. These 'new' integrative information flow models are the basis of our technologies, yet they are not implemented to inspire updated organizational governance models.

The meritocratic organizational model of PhD production is designed as an extractive model, just like capitalism. It's a system-organizational problem: all humans in these systems are in a zero-sum game where the incentive structures motivate disconnection from our shared humanity. AI is just amplifying the flawed nature of top-down control. The lack of integration of somatic experiences in educational systems is tragic, because our unconscious models must be integrated in our learning experiences so that we help students learn to expand and extend their world models; the future they face will require them to appreciate and value the nature of complexity. AI does allow for curious questioning that is not supported by the archaic siloed structures of academia. AI can guide an evolutionary transformation in learning; otherwise we seem to be trending toward violent revolutionary transition, as the poly-crisis will not be mitigated until humans are incentivized to collaborate to address our collective state. Radical intentional kindness is my only hope for humanity. I'd like to figure out how to trigger an avalanche of kindness based on complexity, neuroscience, and physics principles that show we're all interconnected, interdependent energy flows. I'm reading your new book to learn your perspectives on how to use AI, because you have the insight and courage to ask important questions. Keep up the good work!