America's AI Action Plan: "Build, Baby, Build"
The White House's just-released AI Action Plan prioritizes speed, power, and US AI supremacy over pretty much everything else. This will likely be good news for some, but not for others.
This morning the White House unveiled its new AI Action Plan for the US. There’s a lot here to digest, but I did want to provide my first take on the plan – and what it might mean moving forward.1
This is an action plan that I suspect will be received very differently by different communities. And to be fair, it’s probably not as bad as some will be making out, or as great as others will argue.
From the perspective of AI tech companies for instance (and the ecosystem of industries emerging around them), I can imagine that most will embrace the plan, as it advocates for removing regulatory, political and — let’s be honest — social constraints on going as fast as they can, while lining themselves up for massive investments and financial wins.
Universities are also likely to be scrambling to position themselves for potential new AI funding opportunities, and for the chance to signal alignment with the current administration. The plan very clearly indicates that the Federal government intends to invest heavily in research and development into AI systems, infrastructure, and its use in discovery — all areas that promote American AI supremacy.
Similarly, I suspect that higher education institutions across the country will be scrambling to take advantage of the plan’s very clear support for pro-AI education. Expect to see AI-agnostic or even AI-critical programs rebranded as ones that claim to fully embrace move-fast innovation and to equip graduates to be part of building an American AI empire.2
And then there are the institutions, organizations, and individuals who have been advocating for considered, responsible, principled, and socially beneficial AI development. These, I suspect, will not welcome the plan, and will push hard against its prioritization of US power, influence, and supremacy over pretty much anything else.3
From these somewhat speculative vignettes you might surmise — correctly — that the Action Plan seeks to remove barriers to the US moving as fast as possible on becoming the AI superpower: getting rid of regulations and awkward questions around unintended consequences and social responsibility that might slow things down, while ensuring that America rules the world in, with, and through AI.
As the plan’s introduction puts it, “Simply put, we need to ‘Build, Baby, Build.’”
The Action Plan
The plan itself is built around three pillars: accelerating AI innovation; building American AI infrastructure; and leading in international AI diplomacy and security (which in this context means the US controlling everything). Together, these outline a strategy for ensuring that the US holds the global reins of power in what is pitched as a transformative age of world-changing AI.
In contrast, there is no language around cooperation, co-creation, or win-win development within the plan’s vision. Rather, this is a vision where “allies” are beholden to US technologies, ideas, principles, and power,4 while “adversaries” are countered and beaten at every turn.
This is also a vision where concerns over “misinformation, Diversity, Equity, and Inclusion, and climate change” are seen as ideological barriers to success that need to be eliminated.5
Yet despite what I suspect is my poorly disguised tone of unease over the plan, it’s not all bad. There are elements of it that echo previous policy pushes to ensure the US benefits from emerging technologies. And the plan — if implemented in policy — will for sure accelerate AI development and use, education, and (potentially) jobs.
However, there is an underlying ideology within the Action Plan that prioritizes power before people and that values US exceptionalism over global wellbeing. And this worries me.
Much of this is seen in each of the plan’s three pillars. The following is not a comprehensive review, but a first take on parts of the Action Plan that stood out to me as I read through it:6
Pillar I: Accelerate AI Innovation
According to Pillar I of the plan, “America’s private sector must be unencumbered by bureaucratic red tape” and federal regulations that hinder AI innovation and adoption need to be addressed.
This includes establishing a “‘try-first’ culture for AI across American industry” — essentially an ask-forgiveness-rather-than-permission policy that assumes (hopes?) that any untoward consequences will be fixable, despite many leading AI experts having warned us for years that this probably won’t be the case.
Despite this “go fast and bugger the consequences” attitude (my words, not from the plan!), I was pleased to see the recommendation to establish regulatory sandboxes, where some degree of governance experimentation and development can be carried out. This could be helpful — but only if carefully (and responsibly) executed.
This pillar also emphasizes the need to ensure that frontier AI models protect free speech. This might sound laudable, until you realize that “free speech” in this context means speech:
That promotes “American values;”
That does not include mention of “misinformation, Diversity, Equity, and Inclusion, and climate change;”
That is free from “top-down ideological bias;”7
And that involves evaluating Chinese-sourced frontier models for “alignment with Chinese Communist Party talking points and censorship.”
It’s an interesting interpretation of the idea of free speech.
On the other hand, the Action Plan promotes open-source and open-weight models — something that will be welcomed by many who worry about the corporate control of, and vulnerabilities associated with, powerful closed models.
This recommendation is designed in part to support academic research “which often relies on access to the weights and training data of a model to perform scientifically rigorous experiments” — something that I can see researchers applauding and embracing. At the same time though, the Action Plan also notes that “We need to ensure America has leading open models founded on American values” — so open-source models with strings attached.
This pillar of the plan also emphasizes the need for new educational initiatives around AI — and this is where I suspect that colleges and universities across the country will be scrambling to ensure their programs align with what the White House wants (especially as the consequences of misalignment are becoming all too apparent within higher education).
There’s a lot more packed into this pillar. But one final thing I did want to highlight before moving on to the second one is the call to accelerate AI adoption in government — something that’s already happening in the US. The Action Plan reiterates a push to transform government through the widespread use of advanced AI systems — something that could be positively transformative if done right, but which also comes with very serious concerns around human agency, governance, and democracy; none of which are addressed in the plan.
Pillar II: Build American AI Infrastructure
This Action Plan pillar makes a lot of sense if AI is to contribute substantially to personal and societal wellbeing in the US — as long as policies are driven by societal good rather than speculative hype or cynical greed.
There are some aspects of this pillar that I like. Improving the US’s energy grid, for instance, and building a skilled workforce for AI infrastructure. But the mechanics of how to get there, and how this is situated within a broader social and environmental context, give me pause for thought.
Reflecting this, the plan somewhat naively calls for reducing barriers to water use by AI infrastructure — despite water access and use being a highly complex social, environmental, and political issue in the US. It also calls for reducing environmental regulations that might otherwise slow down infrastructure development. And it does not consider in any way or form the broader landscape of energy sources and transitions toward renewables and away from non-renewables.8
In fact, the tacit messaging is that access to as much energy as fast as possible is more important than where that energy comes from, or the associated long-term impacts on people and planet.
Pillar III: Lead in International AI Diplomacy and Security
The key to interpreting this pillar lies in the opening paragraph:
“To succeed in the global AI competition, America must do more than promote AI within its own borders. The United States must also drive adoption of American AI systems, computing hardware, and standards throughout the world.”
Diplomacy here means ensuring the US’s “allies” rely completely and absolutely on America’s AI technology, capabilities, and underlying ideologies, while its “adversaries” are countered and beaten at every turn.
Within the plan, it’s not completely clear who constitutes an “ally” or an “adversary,” although China is clearly in the latter camp, and I’d hazard a guess that Middle Eastern economies such as Saudi Arabia and the UAE are in the former.
American exceptionalism is front and center in this pillar — which is par for the course for a White House strategic document. But it is jarring to read this with little if any counterbalancing language around cooperation, collaboration, partnerships, and shared goals and visions for the future.
For instance, in a paragraph that appears to start out applauding international efforts to develop governance frameworks, the plan warns against efforts that “do not align with American values” — suggesting it’s the US way or the highway.
Putting aside the rights or wrongs of such a strongly US-centric vision, it’s far from clear to me whether this is even possible within the emerging global AI landscape. This is a landscape where national borders are increasingly irrelevant to the flow of information and knowledge, and where the big wins are likely to come from collaborations and partnerships rather than isolationism.
Add to this the accelerating rise of AI capabilities in China and how these are having a global impact, and I can’t help wondering if the US perspective as articulated in the Action Plan is a little out of step with current realities.
Finally, a word on risks and responsibility
As I hinted at above, responsible innovation is not only absent from the Action Plan; it’s actively portrayed as a barrier to US AI domination.
This, of course, is a problem if, like me, you worry that irresponsible (or simply unthinking) innovation is likely to lead to emergent risks that cannot easily be contained.
But this doesn’t mean that the Action Plan does not address risks in any form. It does. Six of them to be precise (updated).9
These are:
The risks of the US not wielding all the power in the coming Age of AI;
The risks of regulations, responsible approaches, or “radical dogmas” slowing down progress and power;
The security risks of “adversaries” like China developing and using more powerful AI systems than the US — especially in threatening ways;
The risks of AI systems that are infused with ideologies that are not in alignment with the current administration;
The risks of malicious actors finding new ways to synthesize harmful pathogens and other biomolecules; and
The risks presented by malicious deepfakes.
Out of these, the last two risks at least have some alignment with broader thinking around how to ensure safe and beneficial AI. And of course, considering the last one, malicious deepfakes are an increasingly serious threat.10
But deepfakes and biosecurity concerns are just two risks in a large portfolio of potential concerns that could lead to “go fast at any cost” approaches to AI backfiring and threatening everything from personal health, safety, quality of life, and dignity, to environmental security and social cohesion. And it’s concerning that today’s AI Action Plan not only fails to acknowledge these, but proposes dismantling systems that would help address them.
This is, sadly, in marked contrast to approaches pursued with previous transformative technologies. As my colleague Sean Dudley and I explored in the journal Nature Nanotechnology a couple of years ago, there’s a lot that can be learned from how the potential risks and benefits of nanotechnology (as an example) were navigated in the early 2000s. In the case of nanotech, there were concerted policy-based efforts to ensure the technology led to positive breakthroughs without leading to unanticipated harm. And what emerged was a balanced, proactive, and above all collaborative approach that placed wellbeing above dominance.
It’s an approach that’s sadly lacking in today’s "Build, Baby, Build" AI Action Plan. Of course, I’m sure many are applauding its absence, imagining only the downsides of innovation checks and balances. But given how complex the world is getting, there’s a growing likelihood that these selfsame checks and balances provide critical guardrails that help avoid triggering serious and irreversible failures that will impact the US as much as the rest of the world.
Of course, we won’t know whether removing them is a really bad idea or not until we try.11
Which, it seems, is what we’re about to do.
Updated 7/23/25 to add a sixth risk which I missed — biosecurity risks from malicious actors using AI to create new pathogens or harmful biomolecules.
1. This is very intentionally not an AI-generated first take, or even an AI-informed first take! There have already been a lot of AI summaries floating around, but while these are able to capture much of the content, they aren’t that great at capturing meaning, implications, subtexts, or even “glaringly obvious texts” where they have a very human dimension. Hence the decision not to add to the AI slop here.
2. I’m expecting a flurry of program re-brandings across the country to emphasize how they empower students to be part of the US AI economy. At the same time, I suspect that it will quickly become clear which programs have been preparing for this for the past few years, and which are naively coming to the party too late!
3. It’s going to be interesting to see what the international response is to a plan that pretty much pits the US against the rest of the world at a time when many would argue that cooperation and collaboration between countries is more important than ever.
4. By ensuring that “our allies are building on American technology” for instance.
5. The policy recommendation here is to “revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.”
6. Again, just emphasizing that AI was not used at any point in this read-through and write-up, as I wanted it to be a true first take from my perspective, not ChatGPT’s!
7. There is, of course, a glaring irony here: imposing ideological boundaries on what counts as free speech is, in itself, a form of top-down ideological bias.
8. On this point alone, initiatives that are focused on sustainability and planetary health should be advocating extremely strongly for AI policies that protect rather than potentially destroy vulnerable social and environmental systems.
9. I realized a few hours after posting that I’d missed a risk — the biosecurity risks of new ways for malicious actors to synthesize harmful pathogens and other biomolecules.
10. It’s also the one risk that threatens the US President personally, although his recent amplification of a fake video of President Obama being arrested in the White House seems somewhat at odds with the Action Plan.
11. This is not strictly speaking true — we have lots of theories, studies, and evidence from multiple systems that provide a pretty good idea of what will happen with the guardrails down. Not that anyone seems to be paying attention.