6 Comments
Jacob Carr

I really enjoyed this today. I like that you took a balanced approach, recognizing the legitimate concerns in the space while also recognizing the legitimate possibilities.

Andrew Maynard

Thanks Jacob — I have this odd belief that balance and nuance in the face of complexity are sort of important 😊 Appreciate the comment!

Michael Woudenberg

This strikes me as a correlation vs. causation problem. Because AI was used AND a person committed suicide, we blame the AI. However, most people who commit suicide have also used a human therapist and still killed themselves... We don't get to read those conversations, so the human is not blamed. Did the AI cause the suicide? How much agency do we accept that the human in the loop has? And if they don't have enough agency due to their condition, why aren't they institutionalized?

Bette A. Ludwig, PhD 🌱

This is a valid concern. But unless we start being honest about who is going to take this on and how we're going to fund and train them, we're just adding another expectation to already exhausted staff. The question we should be starting with is: who owns this?

Andrew Maynard

This is such an important question, thanks! And one that I think university leadership should be addressing at the same time as they develop partnerships with AI companies.

Bette A. Ludwig, PhD 🌱

Honestly Andrew, I don't know how universities are going to manage all of this. I worked in higher ed for 20 years before leaving a few years ago. Decision-making in higher ed is not built for the rapid pace of technological change we're seeing, or for what's still coming.
