In my experience, the entire organizational structure of academia is deeply problematic because it relies on an organizational model that treats humans as objects in a factory. AI is potentially upending this top-down control-flow model: now that information is abundant, academic silos and adversarial ecosystems must adapt toward a distributed paradigm where we teach students how to understand information dynamics and parametric systems in order to weave together threads across disciplines. I don't think many tenured faculty or academic administrators have the cognitive flexibility, emotional intelligence, or motivational incentives to deal with this changing landscape. The dysfunctional systems operating in governance models around the globe are echoes of the dysfunctional systems of academia.

As a grandmother in Texas, I was deeply dismayed to experience such dysfunction across the domains of engineering and the creative arts. My conclusion is that most academic institutions have no understanding of the concept of ethical integrity; instead, they focus on legal policies and on projecting dominance and superiority to raise rankings, impress alumni, and attract students, and so students learn to model that shallow behavior. It's a shallow learning model, and AI is simply amplifying the flaws that already exist. New organizational models that prioritize holistic information flows would recognize the need for participatory learning experiences where students are encouraged to question the status quo and to use AI to weave together diverse perspectives and understand the nature of complexity: we are all interdependent. These 'new' integrative information flow models are the basis of our technologies, yet they are not implemented to inspire updated organizational governance models.

The meritocratic organizational model of PhD production is designed as an extractive model, just like capitalism. It's a system-organizational problem... all humans in these systems are in a zero-sum game where the incentive structures motivate disconnection from our shared humanity. AI is just amplifying the flawed nature of top-down control... and the lack of integration of somatic experiences in educational systems is tragic, because our unconscious models must be integrated into our learning experiences so that we help students learn to expand and extend their world models; the future they face will require them to appreciate and value the nature of complexity.

AI does allow for the kind of curious questioning that is not supported by the archaic siloed structures of academia. AI can guide an evolutionary transformation in learning; otherwise, we seem to be trending toward a violent revolutionary transition, since the poly-crisis will not be mitigated until humans are incentivized to collaborate to address our collective state... radical intentional kindness is my only hope for humanity. I'd like to figure out how to trigger an avalanche of kindness based on principles from complexity science, neuroscience, and physics that show we're all interconnected, interdependent energy flows. I'm reading your new book to learn your perspectives on how to use AI, because you have the insight and courage to ask important questions... keep up the good work!
I want to apologize for my negative tone; it was completely unprofessional and harmful. It was very poor communication.
I appreciate your efforts to protect PhD students.
I believe most people in academia have sincerely good intentions.
I had mentors who had my wellbeing in mind, and it made a huge difference in how I perceived the challenges I faced.
However, the important idea is that the patriarchal structures of these systems are the source of the problems: everyone becomes adversarial and siloed because of the nature of the reward and incentive structures, and the structures of human organizational systems need to evolve ASAP. There are engineering approaches aligned with holistic information flow models that provide superior governance models, and current AI systems are able to help explain how to transition toward better structures to support learning and decision making. Your work is very helpful in highlighting that AI systems will need to be integrated thoughtfully into educational systems, but the real solutions are structural in nature, so that they align with an understanding of humans as integrated in a self-learning universe with an organic holarchy structure.
Ha - that's fine, Karen. I'd rather have authentic rants to read than well-crafted platitudes 😊
And yes, while I would argue that there may be more good intentions than you suggest, and more mentors who do an amazing job, this is a system that is deeply mired in power relationships, where PhD students are almost always dependent on the whims of their advisors -- which is OK if your advisor is conscientious, but not so good if they are not (or are simply unaware of what they are doing).
And of course all of this is reflected -- and even potentially amplified -- as AI enters the mix, which is why I'm interested in evidence that helps identify some of the emerging issues.
Hey Andrew, big fan of your work! Love the consistency and quality of your content! :)
I wonder if this is really a new problem. Bad advisors have always existed. So the tool didn't create the problem. In fact, given how capable AI tools are, it may be a huge improvement in many cases! :)
Also, this seems to be a very unlikely scenario. The advisor needs to be engaged enough to actually bother to edit the student's work, but lazy enough to dump it into ChatGPT, careless enough to not even glance at the output, incompetent enough to not use one of the more advanced models, which are actually quite capable (as you have shown here), yet authoritative enough to INSIST the student keep all the hallucinated nonsense. Does this academic advisor even exist? If so, how likely is this scenario?
While I get the imbalances in academia, don't we already have ethics committees and accountability structures to deal with such sloppy and unethical behavior (not necessarily related to AI, but just in general)? Btw, this does not seem any different from a professor using AI to grade their students' assignments (in a similarly sloppy fashion). In fact, when you have 100 students, it's far more likely that the AI output will be overlooked than when you have one PhD student.
In the end, the most likely scenario is that PhD students and advisors are using these tools responsibly, as a great productivity aid. And I wonder if this sort of talk creates panic about edge cases that are highly unlikely and catastrophizes a tool that most people are using just fine.
But I will be curious to see what you find out in your survey! :)
Oh, completely agree that AI tools have not created the problem of poor mentorship. However, what I am seeing -- and this is just based on what I've observed -- is that naive use of AI is placing some grad students in a very challenging position. I'll be interested in seeing what sort of responses come in, though.
Clearly, I am giving academics more credit than they deserve, then 😂 Looking forward to what you discover!