Responsible AI: Lessons from Nanotechnology
20 years ago we were learning how to navigate the risks and benefits of nanotechnology. Two decades on, are we applying those hard-won lessons effectively to artificial intelligence?
At first blush nanotechnology and artificial intelligence may not seem to have that much in common. And yet there are surprising similarities when it comes to avoiding failures in a society where the success of transformative technologies depends on far more than technical knowhow alone.
Despite this, it’s not at all clear that we’re learning from the past as we rush headlong into an AI future.
My colleague Sean Dudley and I explore this further in a new commentary in the journal Nature Nanotechnology and an accompanying article in The Conversation, concluding that there's a lot to be learned from the transdisciplinary initiatives and broad stakeholder engagement that underpinned nanotechnology.
In the articles we draw on the early days of nanotech development — something I was at the heart of as I co-chaired the interagency Nanotechnology Environmental and Health Implications working group, and later served as science advisor to the highly influential Project on Emerging Nanotechnologies. And we make the case for greater investment in understanding and navigating advanced technology transitions in ways that “bridge disciplines and sectors, and bring together people, communities, and organizations with diverse expertise and perspectives to investigate emerging landscapes and drive toward a more equitable, sustainable, and promise-filled future.”
Plenty of mistakes have been made in the development and use of nanotech over the past two decades, but we've also learned a lot about the importance of working with experts from the arts, humanities, and social sciences, in addition to those at the forefront of nano-specific science and technology. And we've learned that broad stakeholder and public engagement is absolutely critical to success.
As artificial intelligence development gathers pace, though, these lessons don't seem to be getting through. Development is still being driven by a small group of experts and companies who believe they have all the understanding they need. And while there's growing urgency around ensuring the safe and responsible development of AI, there's still a reluctance to bring a diversity of voices and perspectives into these conversations.
This is a serious mistake, and one that needs to be corrected as soon as possible. We've learned a lot from previous advanced technology transitions like nanotechnology. I'd also include the development of genetically modified organisms here, which was a masterclass in how naivety, hubris, greed, and a lack of broad engagement can create near-insurmountable roadblocks to progress. In fact, early investment in responsible nanotech drew heavily on lessons learned from the GMO debacle.
As AI development continues to accelerate, we cannot afford to get things wrong; if anything, the stakes here are far higher than they were with either nanotechnology or GMOs. AI needs to learn from the lessons of the past if it's to lead to a better future, and fast.
Read more in:
Navigating Advanced Technology Transitions Using Lessons from Nanotechnology. Andrew D. Maynard and Sean M. Dudley. Nature Nanotechnology, October 2, 2023.
Navigating the risks and benefits of AI: Lessons from nanotechnology on ensuring emerging technologies are safe as well as successful. Andrew D. Maynard and Sean M. Dudley. The Conversation, October 2, 2023.