They Called It “Winning the Race” — But Who’s Invited to the Finish Line?
In July 2025, the White House released Winning the Race: America’s AI Action Plan — a 28-page policy blueprint with over 90 federal directives designed to cement America’s dominance in artificial intelligence. The document is confident, sweeping, and ambitious. And if you read it carefully, it is also incomplete in ways that matter enormously to Black citizens, small business owners, and anyone who believes that governance without accountability isn’t governance at all.
I read it so you don’t have to. More importantly, I’m giving you the analysis your news feed won’t — because the question isn’t just what the plan says. It’s what the plan does not say, and why that silence has consequences.
What the Plan Actually Says
The Action Plan is organized around three pillars: Accelerating AI Innovation, Building American AI Infrastructure, and Leading in International AI Diplomacy and Security. Framed as a race against China, it reads like a national security document as much as a technology policy — and in many ways, that’s exactly what it is.
On innovation, the plan calls for sweeping deregulation — stripping what it calls “burdensome” rules at the federal and state levels, and redirecting agencies like the FTC and FCC to roll back oversight that the administration believes slows private-sector AI development. It also calls for protecting “free speech” in AI systems and eliminating what it characterizes as “ideological bias” in government-procured AI tools.
On infrastructure, the plan promises massive investment in data centers, semiconductor manufacturing, and energy infrastructure — along with workforce training programs to help Americans transition through AI-driven labor market changes.
On its face, much of this is reasonable. America does need a coherent AI strategy. Workforce development is genuinely important. And infrastructure investment creates real jobs. But the framework has critical omissions that every Black business owner, community leader, and AI practitioner needs to understand.
“The removal of DEI from federal AI risk standards does not make bias disappear. It removes the requirement to look for it.”
What the Plan Deliberately Left Out
Civil rights protections are gone. The prior administration’s AI framework explicitly required civil rights impact assessments, protections against algorithmic discrimination, and equity considerations in AI development. None of that language survived into this plan. There is no mention of algorithmic bias. No mention of disparate impact. No mention of how AI tools — in hiring, lending, healthcare, or law enforcement — have historically harmed Black communities at disproportionate rates.
DEI has been explicitly removed from risk standards. The plan directs the National Institute of Standards and Technology (NIST) to revise the AI Risk Management Framework to eliminate all references to Diversity, Equity, and Inclusion. This is not a passive omission. It is an active policy decision to strip accountability language from the framework that AI practitioners across the country — including me — rely on to build responsible governance architectures. When the national risk standard no longer requires organizations to consider bias, many organizations will stop looking for it.
State preemption threatens our strongest guardrails. The plan pushes aggressively to prevent states from passing their own AI regulations, arguing that a “patchwork” of laws would slow innovation. But the truth is that state-level protections have often been the most meaningful protections available to marginalized communities. Preempting them — before adequate federal replacements exist — creates a governance vacuum.
Small business access is mentioned but not designed for us. The plan does reference regulatory sandboxes and small business AI adoption, and the March 2026 National Policy Framework includes language about supporting small business uptake of AI tools. But there is no targeted pipeline, no equity lens, and no intentional design to ensure underrepresented founders are included. Access without intentionality is not inclusion — it is a checkbox.
What This Means for Black Business Owners
AI tools used in hiring, lending, or customer decisioning carry bias risk — with no federal mandate to audit for it
State protections that currently guard against algorithmic discrimination are under threat
Workforce displacement will hit Black workers hardest without race-conscious targeting in retraining programs
Access to AI infrastructure and sandboxes exists — but without equity provisions, the same players who dominate today will dominate tomorrow
Your AI governance posture is now a competitive differentiator, not just a compliance function
What This Means for AI Governance Practitioners
Here is the strategic reality: when the federal government retreats from bias accountability, the market demand for independent governance expertise grows — not shrinks. Organizations still face legal exposure under existing employment law, consumer protection statutes, and state-level frameworks. Clients still need someone who can assess their AI risk landscape with precision, rigor, and an understanding of what the federal standards used to require and what fills the gap now.
The Burks AI Governance Model™ was built on a five-layer decision architecture — Strategic Intent, Decision Rights, Governance Flow, Control Environment, and Impact and Trust — precisely because good governance doesn’t wait for a federal mandate to do the right thing. The plan’s silence on equity and bias is a business case, not a barrier.
For practitioners: the removal of DEI language from the NIST AI RMF does not eliminate bias risk. It eliminates the requirement to name it. Your clients still carry it. Your job is to help them see it, measure it, and govern it — whether Washington tells them to or not.
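Measuring bias risk does not require a federal mandate or specialized tooling. One long-standing yardstick that predates the AI RMF entirely is the four-fifths rule from the EEOC’s Uniform Guidelines, which flags any group whose selection rate falls below 80% of the highest group’s rate. The sketch below shows how a practitioner might screen a hiring or lending tool’s outcomes against that rule; the function names, data shape, and default threshold are illustrative choices, not part of any standard or proprietary model.

```python
# Illustrative adverse-impact screen for a hiring or lending tool's outcomes.
# The four-fifths rule (EEOC Uniform Guidelines) flags any group whose
# selection rate is below 80% of the highest group's selection rate.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs, where selected is a bool.
    Returns each group's selection rate (selected / total)."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def adverse_impact_ratios(records):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

def flag_four_fifths(records, threshold=0.8):
    """Groups whose adverse-impact ratio falls below the threshold."""
    return {g: r for g, r in adverse_impact_ratios(records).items()
            if r < threshold}

# Hypothetical data: group A selected 50 of 100, group B selected 30 of 100.
records = ([("A", True)] * 50 + [("A", False)] * 50 +
           [("B", True)] * 30 + [("B", False)] * 70)
print(adverse_impact_ratios(records))  # B's ratio is 0.3 / 0.5 = 0.6
print(flag_four_fifths(records))       # B falls below 0.8 and is flagged
```

A ratio below 0.8 is a screening signal, not a legal conclusion, but it is exactly the kind of measurement that still carries weight under existing employment law even after the federal risk framework stops asking for it.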
“The governance vacuum left by this plan is not a problem to wait out. It is a market to lead.”
The Bottom Line
Winning the Race is a real document with real consequences. It will shape how federal agencies buy AI, how states can regulate it, and how the national risk framework defines responsibility. Every small business owner, HR leader, and community organization that uses AI — or plans to — will operate inside this framework whether they know it or not.
Knowing is strategy. Governance is protection. And the communities most likely to be harmed by ungoverned AI are the same ones this plan never mentions.
That is not an accident. And your response to it shouldn’t be either.
Not sure where your business stands on AI risk?
Download the free AI Governance Readiness Guide™
burksstrategicholdings.com/ai-governance
About the Author: Tamara Burks is the Managing Partner and Chief Strategist of Burks Strategic Holdings, Inc. and founder of Small Business Whisperer LLC. She holds an AI Automation Certificate from USF, an AI Fluency Certification from Anthropic, and is a Certified Neurodiversity Professional. She is the creator of the proprietary Burks AI Governance Model™ and a sought-after voice on AI governance for small business founders and underrepresented entrepreneurs.
Sovereignty · Systems · Strategy · Soul™