A new statement from the Center for AI Safety claims that the extinction risk presented by AI should be a global priority. I argue that potential catastrophic risks short of extinction should be a higher priority.
Great perspective, and I am one of the signatories who shares your view that the key threats lie well down the chain from total annihilation. I particularly appreciate this point of yours - that there are "potential catastrophic risks of not developing new AI-based technologies." Would love to know what you think of my call for an IPAI - an Intergovernmental Panel on AI, akin to the IPCC for climate change: https://revkin.substack.com/p/im-with-the-experts-warning-that
Thanks Andy -- I'll leave a comment over on your post as well, but my sense at this point is that a) we need international coordination and collaboration; b) this needs to have some form/degree of authority; c) I have no idea what this will look like, and am not yet convinced that existing models will work; and d) this means that we need a rich pool of ideas and dialogue (and fast) -- the IPAI concept may well be one that works, but I also worry that it will be too slow, not sufficiently grounded in multi-stakeholder dialogue and decision-making, and too prone to pushback from various groups. But the bottom line is that we need something!