18 Comments
Intel for the Quantum Info Age:

It's really challenging to reconcile the motives of a for-profit company that sees a marketing benefit in calling itself a "public benefit corporation," presumably for its own enrichment, without delivering any appreciable public benefit. OpenAI is guilty of this as well, and one day I might wake up and see the game for what it is: get money, then do something useful with it.

Intel for the Quantum Info Age:

Full disclosure: I find the "reasoning" and context windows of Anthropic's models valuable, and I use them extensively. But just within the past 24 hours, I think it has censored and refused to respond to 2-3 prompts via its own API (while nevertheless charging for the queries). This seems symptomatic of greater societal ills: someone unknown to me decided what was good and bad for me, and I get no voice in my own interest other than to use a different model, which will give me that information.

Russian Record:

Thanks for the review. When you invent the ship, you invent the shipwreck. When people start using AI across a sufficient number of interlinked systems, this will produce normal accidents. I just love Charles Perrow.

Also, as a professional in the field of mental health with a background in general medicine, I am very, very skeptical about AI being able to "fix" anything. Few people even recognise what is broken in health itself, and in healthcare as a system, so hoping to fix something is just as naive as techbros' attempts to cure cancer by messing with the "human code." Big hopes, big disappointments.

AI can be quite useful, though. A well-trained AI could listen to a therapy session and provide live supervision. But the client has to consent to his very, very personal feelings and thoughts being fed to a machine he has no way to control. And that's just one possible application.

Andrew Maynard:

Great call-out to Perrow! And it's interesting, of course, that he considered unexpected adverse outcomes part of the landscape around innovation -- which we should probably take as a warning that bad stuff's going to happen!

Tyger AC:

Thank you for a wonderful exposition and critique (and, of course, all futurists of the humane, pro-civilization, pro-culture brand see Banks as their guiding light).

A few issues I see relate to the main topic of human messiness. The question of 'solving' mental (emotional/cultural/religious, etc.) issues is fundamentally problematic -- obviously, it's not about educating (though the benefits of an educated mind are underestimated). It is fascinating to read how many of the well-networked minds in the tech world think along parallel lines (the latest from Vinod Khosla, "AI Dystopia or Utopia" from Sept 20, is an amazing piece and worthwhile reading). https://www.khoslaventures.com/ai-dystopia-or-utopia-summary/

To different degrees, we can see that all of them carry the idea of ‘re-inventing or re-defining’ what a human is.

How (and more importantly, why) will such a human be, and with what values and direction of evolution? That is the primary question we need to address. That technology, and AI in particular, is disruptive is not in question; what is in question is what kind of human we desire to become, and what the roadmap is to reach that desired goal. (Does this future include all humans? All sentiency?) See my latest, Vast Minds.

https://tygerac.substack.com/p/minds-vast-minds-we-need?r=qirvq

Andrew Maynard:

Thanks Tyger - and thanks for the link!

I'm not sure this is that popular a perspective (at least in some circles), but I think there's a need to create space for more positive, exploratory thinking about what it means to be human in a future where conventional boundaries and constraints are removed, rather than fighting to preserve what we currently assume to be immutable.

It's the creative space of possibilities that I often hanker after -- not because I think we should, or even can, change what fundamentally makes us us, but because it's dangerously myopic to treat what we experience now, and how we define "human" as a result, as something that should never be questioned.

Shon Pan:

His opt-out problem was chilling to me, despite my generally liking Dario. It echoes the idea that if anyone rejects a highly nonhuman AI world, they are a problem and must be "solved."

He means well, but it's a concerning dogma when it echoes into a final conclusion of "surely most good visions must be similar to this."

It basically jumps to the idea that if the Amish exist and refuse to use vaccines, they have to be stopped.

Andrew Maynard:

As you'll see, I also had issues here -- but this, to me, is a point where discussions and thinking need to be opened up in a constructive way, and I think that this is what Dario is beginning to do here.

Shon Pan:

Does it matter, though? I'll email you, but it feels like a bigger issue: I am not sure our opinions matter at all now, beyond scraping and pleading that the new technolords might have sympathy for us.

It feels like, at some point, power and autonomy have been conspicuously robbed from the individual.

Andrew Maynard:

Looking forward to it -- society desperately needs research, teaching, and thought leadership that transforms the landscape here, and we stand ready to take a lead if the funding were there. But it's near-impossible to get funding for what's needed, rather than for what narrowly focused agencies, foundations, etc. will invest in.

Intel for the Quantum Info Age:

I'm not sure the evidence agrees with you, because that's exactly what I've been doing for the better part of a year through my nonprofit and its website, q08.org. And because I don't have a marketing team, I'm not getting views. Reasoned thought leadership is out there for anyone who searches, but it's not well funded -- it doesn't have a VC check behind it pushing for a profitable product in return.

Michael Woudenberg:

It's good to read something balanced. There is a ton of opportunity for advancement; it just takes good, honest critique to make sure we don't lose our way.

Arjun Basu:

I want someone to start talking about the energy all this AI is going to require and where it's going to come from. And I would like to know that these AI techdudes are thinking about it, because you can't enjoy your utopia when the natural world is out of control.

Michael Woudenberg:

Nuclear energy is a great place to start, especially with today's evolved technology and micro nuclear reactors.

https://www.polymathicbeing.com/p/nuclear-meltdown

Andrew Maynard:

Yep -- I think the energy-versus-progress conversation is complex, but there are not enough people (or organizations) in the energy transition and sustainability/climate change communities talking about this seriously. I touched on it briefly last week: https://futureofbeinghuman.com/p/the-double-or-nothing-bet-on-ai-fixing-the-climate

Arjun Basu:

I think the tech “visionaries” ignore this subject at their own peril.

Mark Daley:

Well said.

I, too, loved the shout-out to Banks. When people ask me what a future with "powerful" AI could look like, Banks's Culture is my go-to fictional utopian exemplar. (As much as I enjoyed reading cyberpunk literature in my youth, I never thought, "I'd love to actually inhabit this universe that has so clearly been written as a cautionary example.")

George Pór:

For another take on "Machines of Loving Grace," check out https://medium.com/@technoshaman/machine-love-is-coming-to-a-screen-near-to-you-e1fd13fd08b2
