9 Comments
Vitor Zenaide:

Thanks for sharing with us!

I believe this issue can be fixed if we build the right incentives into AI systems. They should be human-centric and embedded with our core values. But above all, we need to rescue human-to-human trust. Every time we take an opportunity to reinforce this kind of trust, we move toward a better world of human-AI interaction.

Hilary Sutcliffe:

So the Bros finally get wise to what academics and activists have been telling them for many years, and which they ignored because they were so excited about the tech and the money. Same old, same old. This is why I ditched my work of 15 years on 'responsible innovation': I finally got wise to the fact that they were never going to create tech in a precautionary way; they were always going to put people second to money. So now we focus only on regulation and on helping the public see their practices for what they are and protest. This means that, in the absence of precaution, harm unfortunately has to occur before politicians are able to act. But systems are systems: unless you change the incentives, you keep getting what you always got.

Annalie Killian:

Thank you for continuing to bring your thoughtful voice, based on vast experience and deep immersion in these technologies, to reflect, guide, caution, and inspire the rest of us, and to agitate for guardrails among technologists, boards, investors, policymakers, legislators, and the broad public.

Andrew Maynard:

Thanks Annalie!

Grant Castillou:

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar discussing some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow

Karen Doore:

Thank you for your thoughtful essay. We are at a critical threshold, and the understanding you express about the opportunity to use wisdom in guiding AI development toward human-centric goals is a logically valid assessment of the current global state. Unfortunately, wisdom is context-sensitive, and we lack systems that nurture metacognitive growth toward wisdom as a life-long learning process.

When we use a systems lens that views intelligence as a process of holistic oscillating energy patterns and complex dynamic adaptive systems, and that also integrates a lens of lifelong learning and healing from legacy traumas, new opportunities can emerge. These multidimensional perspectives can show that collective humanity represents a hidden energy for transformation: designing and using information technologies like AI to address climate change and other facets of the polycrisis. In contrast, capitalism and accelerationism are shallow ideologies that justify profit as a goal.

Models of consciousness and AI systems can show that metacognition is a developmental skill requiring reflection on events and behaviors, with incentives that reward collective friction to expand narrow world models, aiming to increase the impact of communities that nourish creativity, attachment, and belonging. Currently, algorithms whose goal is extraction for profit are socially and culturally effective at targeting attention and engagement through information technologies, which carries massive risk of harm when aimed at human nervous systems during vulnerable developmental trajectories. AI literacy can guide humans to develop skills in counterfactual thinking and behavioral patterns, using thoughtful narratives to set boundaries that limit harms and help expand world models. Your work is inspiring...thank you.

Andrew Maynard:

Thanks Karen!

Michael Woudenberg:

This essay is really good, and it's one of the reasons I think so highly of the HSD program at ASU.

Andrew Maynard:

Thanks Michael!
