Discussion about this post

Mark Daley:

Well said! As a computer scientist, Ilya knows full well that absolute safety is *mathematically impossible* for any nontrivial definition of safety (https://noeticengines.substack.com/p/the-hard-problem-of-hard-alignment). I respect him enormously, but it leaves a strange taste in my mouth to read a proclamation that, on its face, rejects the Silicon Valley "productize and ship everything" mentality in favour of a pure research mentality, yet cannot be read by anyone with a background in theoretical computer science as anything other than marketing copy.

Your position that this very important, but complex and nuanced, matter should be approached with humility, and in the context of the full breadth of existing intellectual frameworks on safety, is one with which I wholly agree.

Luke K:

Thanks for that eye-opening article on safety!

I interpreted the proclamations differently, though. IMO, the SSI founders know their goals are nuanced and aspirational. They would agree wholly with this article.

They did not mean to undersell the challenges of pursuing safe AI, not the least of which is defining "safety." If anything, they wanted to do the opposite: proclaim that pursuing safe AI is too important a goal to be burdened by the pressure to ship products tomorrow.

The SSI tweet concluded with a recruitment pitch. When recruiting or fundraising, one needs to communicate a clear aspirational purpose, a big moonshot goal. One has to assume that those "in the game" know the devil is in the details, and that nuance is unnecessary in attention-grabbing 140-character soundbites.

The measured use of marketing copy can be strategic, especially when competing with Silicon Valley for the best talent.

