In my experience, the entire organizational structure of academia is deeply problematic because it relies on a model that treats humans as objects in a factory. AI is potentially upending this top-down, control-flow model: now that information is abundant, academic silos and adversarial ecosystems must adapt toward a distributed paradigm in which we teach students how to understand information dynamics and parametric systems so they can weave together threads across disciplines. I don't think many tenured faculty or academic administrators have the cognitive flexibility, emotional intelligence, or motivational incentives to deal with this changing landscape. The dysfunctional systems operating in governance models around the globe are echoes of the dysfunctional systems of academia.

As a grandmother in Texas, I was deeply dismayed to experience such dysfunction across the domains of engineering and the creative arts. My conclusion is that most academic institutions have no understanding of the concept of ethical integrity; instead, they focus on legal policies and on projecting dominance and superiority to increase rankings, impress alumni, and attract students, and so students learn to model that shallow behavior. It's a shallow learning model, and AI is simply amplifying the flaws that already exist.

New organizational models that prioritize holistic information flows would recognize the need for participatory learning experiences in which students are encouraged to question the status quo and to use AI to weave together diverse perspectives and understand the nature of complexity: we are all interdependent. These 'new' integrative information-flow models are the basis of our technologies, yet they are not implemented to inspire updated organizational governance models. The meritocratic model of PhD production is designed as an extractive model, just like capitalism. It's a systemic, organizational problem... all humans in these systems are caught in a zero-sum game whose incentive structures motivate disconnection from our shared humanity. AI is just amplifying the flawed nature of top-down control... and the lack of integration of somatic experiences in educational systems is tragic, because our unconscious models must be integrated into our learning experiences if we are to help students expand and extend their world models; the future they face will require them to appreciate and value the nature of complexity.

AI does allow for the kind of curious questioning that is not supported by the archaic, siloed structures of academia. AI can guide an evolutionary transformation in learning; otherwise, we seem to be trending toward a violent, revolutionary transition, because the poly-crisis will not be mitigated until humans are incentivized to collaborate on our collective state... radical, intentional kindness is my only hope for humanity. I'd like to figure out how to trigger an avalanche of kindness based on principles from complexity science, neuroscience, and physics that show we're all interconnected, interdependent energy flows. I'm reading your new book to learn your perspectives on how to use AI, because you have the insight and courage to ask important questions... keep up the good work!
Hey Andrew, big fan of your work! Love the consistency and quality of your content! :)
I wonder if this is really a new problem. Bad advisors have always existed. So the tool didn't create the problem. In fact, given how capable AI tools are, it may be a huge improvement in many cases! :)
Also, this seems to be a very unlikely scenario. The advisor needs to be engaged enough to actually bother editing the student's work, but lazy enough to dump it into ChatGPT, careless enough to not even glance at the output, incompetent enough to not use one of the more advanced models, which are actually quite capable (as you have shown here), yet authoritative enough to INSIST the student keep all the hallucinated nonsense. Does this academic advisor even exist? If so, how likely is this scenario?
While I get the imbalances in academia, don't we already have ethics committees and accountability structures to deal with such sloppy and unethical behavior (not necessarily related to AI, just in general)? Btw, this doesn't seem any different from a professor using AI to grade their students' assignments in a similarly sloppy fashion. In fact, when you have 100 students, sloppy AI output is far more likely to slip through than when you have one PhD student.
In the end, the most likely scenario is that PhD students and advisors are using these tools responsibly, as a great productivity aid. And I wonder if this sort of talk creates panic about edge cases that are highly unlikely, and catastrophizes a tool that most people are using just fine.
But I will be curious to see what you find out in your survey! :)
Oh, completely agree that AI tools have not created the problem of poor mentorship. However, what I am seeing firsthand is that naive use of AI is placing some grad students in a very challenging position. I'll be interested in seeing what sort of responses come in, though.
Clearly, I am giving more credit to academics than they deserve, then 😂 Looking forward to what you discover!