Are design principles for responsible and beneficial AI useful?
Yes, but only if they're actually used
In 2024, I was part of an AI Ethics committee that helped craft a set of AI design principles for Arizona State University.
It’s been a while since I’ve read the principles (they were published in August 2025). But I found myself revisiting them this past week as a new AI product from ASU started attracting headlines suggesting that the principles may not have been as useful as we’d hoped.
The six principles for responsible and beneficial AI were created to guide daily decision-making about the creation and implementation of generative AI experiences, as well as to serve as a resource and an accountability framework for the ASU community.1
They address:
Amplifying possibilities in service of respecting human autonomy and empowering individuals and communities;
Bringing the best of what technology has to offer to the ASU community while being aware of potential risks;
Rigorously evaluating AI tools, platforms, models, and experiences for possible impacts and potential harm before their release;
Designing for equity by committing to measuring impact, protecting privacy, and prioritizing access for all;
Protecting privacy by developing and deploying AI models and applications with attention to individuals’ rights to privacy and agency in the use of their data, individually and in aggregate; and
Committing to a shared responsibility between individuals and the enterprise for the responsible and beneficial development and use of AI.
In theory, the principles should be reflected in new ASU AI products and initiatives, including how they are released and what happens next. And so I was intrigued to see reports appearing on social media and in news outlets suggesting that something might be amiss.
The product in question is ASU Atomic — a subscription service that scrapes ASU’s catalog of online courses and uses AI to custom-create learning modules from patched-together content, including short segments from instructor videos.
The idea makes sense on paper — especially if you buy into the transmission model of education that focuses on optimizing content-transfer, rather than models where learning emerges through experience, dialogue, and reflection.2
ASU Atomic approaches education by breaking down content into its constituent “atoms” (hence the name) and then building up — educational atom by educational atom — modules that are tailored to what the student is looking for.
As ASU President Michael Crow described it to the Arizona Board of Regents earlier this year,
“Imagine that we have thousands and thousands and thousands of courses. And you can break these courses down into tens of thousands or hundreds of thousands of sub-component parts. Then you build a program in which you can ask the computer, ‘I want to learn about this.’ And then it takes some component of all these different things and then organizes what you need to learn.”
I first heard of Atomic a little over a week ago as it was being pushed out to a select group of former students. I was intrigued by the concept, but didn’t have the chance at the time to dig into it.
Then the news website 404 Media broke a story highlighting faculty concerns about the app. And other outlets followed suit, including the Chronicle of Higher Education and Inside Higher Ed.
The issue, it seemed, wasn’t so much the underlying concept as its execution. And this is what led me to take a closer look, and to go back to ASU’s AI Design Principles to see how things squared up.
ASU Atomic was launched as a “beta” — a platform undergoing public testing, where users would normally be given access in exchange for feedback on what’s working and what is not. Users pay $5 per month (with the first 14 days free), tell the platform’s AI what they want to learn, and answer several questions that help the AI narrow things down. Then, after a few minutes, they are presented with a customized educational module — complete with slides, a narrator, video clips, and assignments — which they can work through at their leisure.
However, the generated modules give no indication of where the content comes from, how it’s been validated, or what is scraped directly from courses versus generated by AI. And for video clips extracted from courses, there’s no indication of which courses they are from, who the instructor in the clips is, what the context of the clip is (essential if it needs to be interpreted within a broader learning context), whether the content is still relevant, whether it was intended for highly contextual or broader public consumption, or how it fits the pedagogy of the AI-generated module.3
On top of this, few if any of the instructors, it seems, were aware that their material was being used before the 404 Media article was published. And there is currently no mechanism on the website for feedback — either on course content or platform behavior.
To be clear, if a transmission model of education is assumed to be fit for purpose, the idea of ASU Atomic makes sense — as long as the creators of the content it draws on use the same model in how they teach. If they do not, though, the chances of mismatches that undermine the value of the AI-generated modules are pretty high.
This is concerning. But it’s not what drove the media coverage. What did was the use of course material without consulting faculty first.4
This is, in principle, perfectly allowable under the terms of use of the learning management system ASU uses for online courses, as well as the terms and conditions faculty operate under — where developed educational material belongs to the university. But, of course, there’s often a gap between what is legally allowed, and what is good practice for an enterprise and the people who work for it.5
And this is where the AI Design Principles come in.
Measuring ASU Atomic against them, though, suggests that something might have been missed in the development and deployment process.
For instance, it’s not clear if and how Atomic was designed to amplify possibilities in service of respecting human autonomy. Or whether possible impacts and potential harm were rigorously evaluated before its release. Or how it demonstrates a commitment to a shared responsibility between individuals and the enterprise for the responsible and beneficial development and use of AI.
Atomic may well have undergone a rigorous internal process that addressed these and the rest of the AI principles, although given this week’s media coverage, this seems doubtful.
Or it may just be, as is so often the case in a large, complex, and fast-moving organization, that despite the best of intentions, steps in the process were inadvertently overlooked.
In either case, the bigger question to me is whether the AI Design Principles could have helped avoid a situation where media coverage and faculty frustrations could potentially undermine ASU’s use of AI.
If they could have — and re-reading them, I think they could — this suggests that such principles are useful as a tool for aligning AI use with institutional ambitions while avoiding unnecessary missteps.
But only, of course, if they are actually used.
This, I have to confess, is not a model of education that I use in my own teaching — preferring instead to develop learning environments that are more relational and experience-based than purely transactional while still having concrete learning goals. At the same time, there are legitimately different theories of learning that different institutions lean toward.
ASU’s Alex Halavais has an excellent essay on the “atomized professor” and Atomic on his website, a thaumaturgical compendium.
Just as an aside, my work some years ago on risk innovation was motivated in part by the challenge of introducing complex technologies into an equally complex stakeholder landscape, and aimed to provide organizations with simple tools for identifying and navigating potential threats to value.
As anyone in management or leadership knows, trust and good will within an organization have a massive impact on its ability to operate and achieve its goals.


