The Perils of Self-Regulation: Can AI Govern Itself Without Federal Oversight?

By: David Aguilera

Edited by: Donald Edwards


Introduction

          Artificial intelligence (AI) is rapidly transforming economic structures, civic engagement, and government operations. As algorithms are embedded in critical decision-making processes across sectors, questions of oversight have moved to the forefront of policy discourse. The Biden administration’s Executive Order 14110 established a framework for ethical AI governance, balancing innovation with transparency, safety, equity, and civil rights protections.1,2

          The subsequent repeal of this executive order under the Trump administration,3 replaced by a directive prioritizing deregulation,4 has reignited debates about the viability of corporate self-regulation.5 This shift creates a divide between innovation and public interest protections. Meanwhile, as the European Union and China implement robust AI regulatory regimes, the lack of clear U.S. standards may undermine international confidence in American technologies.6,7

          This article questions the feasibility of corporate AI self-regulation, analyzing its limitations and historical parallels. It further examines the broader geopolitical and economic implications of fragmented oversight, emphasizing how regulatory gaps may exacerbate inequalities, deepen public mistrust, and diminish U.S. leadership in setting global norms. The article concludes by offering a series of policy recommendations to ensure that AI advances in alignment with democratic values and the collective public good.

 

The Illusion of Self-Regulation in High-Risk Innovation

          The notion that industry can regulate itself is appealing in high-tech sectors marked by rapid innovation. Advocates argue that companies have incentives to maintain public trust and avoid reputational damage. However, this optimistic view underestimates the structural pressures that prioritize short-term profits over long-term responsibility. AI companies operate amid fierce competition and minimal oversight, where voluntary ethics initiatives function more as reputational insurance than genuine safeguards.

          AI systems present unique governance challenges. Many models operate as “black boxes,” with decision processes opaque even to their own developers.8 This lack of transparency limits oversight and accountability. AI systems also evolve as they interact with data, introducing risks that static internal policies cannot anticipate. Market pressures often disincentivize restraint, even when companies recognize potential harms.

 

Lessons From History: When Self-Regulation Falls Short

          History offers ample evidence of the limits of self-regulation. The 2008 financial crisis exemplifies this pattern. Financial institutions assured regulators that complex products such as credit default swaps were well-managed and internally regulated. Regulatory bodies, swayed by the rhetoric of innovation and market efficiency, were slow to intervene. The result was a global market collapse. The U.S. Securities and Exchange Commission later acknowledged that the self-regulation of investment banks contributed to the crisis, as institutions took on opaque, risky investments without sufficient oversight.

          Risks extend beyond finance. In 1983, a Soviet early-warning system mistakenly reported incoming U.S. missiles after its primitive, AI-driven detection software misinterpreted satellite data. A nuclear disaster was avoided only because a human operator questioned the system’s output and refused to escalate the situation.9 This near-disaster, now part of the AI Incident Database,10 highlights the dangers of unregulated AI even in its early stages. Today, far more sophisticated technologies make decisions that affect millions, often without human oversight. In the 2010s, unregulated social media algorithms amplified misinformation and contributed to democratic destabilization. A 2021 Pew Research study found that experts widely believe these algorithmic systems exploited user vulnerabilities and exacerbated societal division,11 with some describing the digital public sphere as a “dumpster fire” of misinformation, rage, and manipulation.12 Whether in biased facial recognition, faulty healthcare algorithms, or exploitative generative AI, the harms are no longer theoretical—they are real, growing, and disproportionately impacting vulnerable populations.13

 

Geopolitical Consequences: From Rule Maker to Bystander

          The geopolitical consequences of weak AI oversight are profound. Historically, the United States has played a pivotal role in setting the global framework for emerging technologies, from the development of nuclear technology to the establishment of international standards for the internet.14 This leadership position has allowed the U.S. to shape global norms, safeguard democratic values, and secure its technological advantage. Without assertive federal leadership, however, other nations and blocs are filling the void. The European Union’s AI Act,15 for example, seeks to create a risk-based framework for AI systems, placing strict limitations on technologies deemed to pose unacceptable societal threats.

          Meanwhile, the U.S. approach remains fragmented and deferential to corporate interests, weakening the country’s ability to shape global AI norms and risking the concession of moral and technological leadership to competitors. Without federal coordination, subnational efforts form a patchwork of inconsistent regulations, often vulnerable to legal preemption and corporate lobbying. China presents a particularly stark contrast with U.S. values, having consolidated a centralized, state-driven AI governance framework that prioritizes state control over individual freedoms. This model is actively being exported across the Global South, raising serious concerns about the spread of authoritarian practices. The divergence between Europe and China highlights the urgency for the United States to assert a clear, values-based regulatory approach. Without it, the United States risks becoming a bystander in shaping the future of AI.

 

Economic Implications of Inadequate Oversight

          The assumption that deregulation naturally fosters innovation oversimplifies the complex economic dynamics of AI markets. Without clear federal standards, companies may deploy unproven AI products, undermining public trust. A 2023 Pew study found that seventy percent of Americans lack confidence in companies to use AI responsibly16—a warning sign for market stability. Inadequate oversight also threatens global competitiveness. The EU AI Act requires strict conformity assessments before high-risk AI products can be sold.17 U.S. companies failing these requirements may lose access to critical markets, creating costly barriers to international trade. Deregulation also deepens market concentration, as dominant firms absorb risks while smaller innovators are edged out. Without intervention, tech monopolies consolidate power at the expense of broader public benefit.

 

Recommendations

    1. Create an independent AI oversight agency with rulemaking and enforcement authority, empowered to evaluate systems, conduct audits, and intervene when necessary.18

    2. Require transparency and explainability, especially in high-risk domains like healthcare and criminal justice. Developers should document training data, algorithmic logic, and system performance. The White House Blueprint for an AI Bill of Rights highlighted the importance of “notice and explanation” to safeguard individual rights.19

    3. Implement rigorous pre-deployment risk assessments conducted by third parties, evaluating social, economic, and civil rights implications before systems enter real-world environments. Drawing from the EU’s tiered risk-based approach, these evaluations would enhance accountability while reducing downstream harm.20

    4. Align U.S. regulatory approaches with international standards, particularly frameworks like the EU AI Act,21 to create mutual trust and simplify cross-border compliance.

    5. Fund open-source AI research and prioritize investment in socially beneficial projects, supporting underrepresented academic institutions and initiatives addressing inequities in healthcare, education, labor, and environmental justice.22

    6. Update labor laws and consumer rights to protect workers from algorithmic harms and ensure individuals are notified when AI systems make consequential decisions affecting employment, credit, healthcare, or surveillance.23 Legal frameworks must provide avenues to challenge such decisions through human review and independent redress.

    7. Create a national AI audit framework for ongoing, post-deployment evaluation to ensure AI systems perform safely and equitably over time.24

Conclusion

          AI has transformative potential, but without safeguards, it can deepen inequality and undermine democratic norms. History shows markets alone cannot prevent systemic failures. The United States now stands at a crossroads where inaction risks ceding leadership to regimes with vastly different values. Federal oversight is not a barrier to innovation but a prerequisite for responsible progress. Through enforceable standards and global partnerships, the U.S. can guide AI development in a way that serves both innovation and the public good.

 


Works Cited

1. Federal Register. 2025. “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Federal Register. https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence

2. Mackowski, Martin J. 2025. “Key Insights on President Trump’s New AI Executive Order and Policy & Regulatory Implications.” Legal News & Business Law News, February 10. https://natlawreview.com/article/key-insights-president-trumps-new-ai-executive-order-and-policy-regulatory#google_vignette

3. The White House. 2025. “Removing Barriers to American Leadership in Artificial Intelligence.” January 23. https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/

4. Ibid.

5. Cuéllar, Mariano-Florentino, Benjamin Cedric Larsen, Hong Luo, Alberto Galasso, Matt Perault, J. Scott Babwah Brennen, Mark MacCarthy, Tom Wheeler, Nicol Turner Lee, and Ben Kereopa-Yorke. 2024. “Balancing Market Innovation Incentives and Regulation in AI: Challenges and Opportunities.” Brookings, September 25. https://www.brookings.edu/articles/balancing-market-innovation-incentives-and-regulation-in-ai-challenges-and-opportunities/

6. European Parliament. 2025. “EU AI Act: First Regulation on Artificial Intelligence.” Topics | European Parliament. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

7. Ye, Josh. 2024. “China Issues Draft Guidelines for Standardizing AI Industry.” Reuters, January 18. https://www.reuters.com/technology/china-issues-draft-guidelines-standardising-ai-industry-2024-01-17/

8. Kosinski, Matthew. 2025. “What Is Black Box AI and How Does It Work?” IBM, January 15. https://www.ibm.com/think/topics/black-box-ai

9. Wirtschafter, Valerie, Michael E. O’Hanlon, Cameron F. Kerry, Brooke Tanner, Ian Seyal, Eduardo Levy Yeyati, and Mark MacCarthy. 2025. “How Unchecked AI Could Trigger a Nuclear War.” Brookings, February 28. https://www.brookings.edu/articles/how-unchecked-ai-could-trigger-a-nuclear-war/

10. AI Incident Database. 2025. “Artificial Intelligence Incident Database – Discover.” https://incidentdatabase.ai/apps/discover/?epoch_incident_date_max=433468800&hideDuplicates=1&is_incident_report=true

11. Anderson, Janna, and Lee Rainie. 2021. “The Future of Digital Spaces and Their Role in Democracy.” Pew Research Center. http://www.jstor.org/stable/resrep57316

12. Ibid.

13. Perault, Matt, J. Scott Babwah Brennen, Landry Signé, Nicol Turner Lee, Darrell M. West, Robin J. Lewis, Stephanie Siemek, Manann Donoghoe, Xavier de Souza Briggs, Cameron F. Kerry, and Aaron Klein. 2024. “Making AI More Explainable to Protect the Public from Individual and Community Harms.” Brookings, July 24. https://www.brookings.edu/articles/making-ai-more-explainable-to-protect-the-public-from-individual-and-community-harms/

14. White House. 2024. “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence.” National Archives and Records Administration. https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/

15. European Union. 2025. “Regulation – EU – 2024/1689 – En – EUR-Lex.” https://eur-lex.europa.eu/eli/reg/2024/1689

16. Faverio, Michelle. 2023. “Key Findings about Americans and Data Privacy.” Pew Research Center, October 18. https://www.pewresearch.org/short-reads/2023/10/18/key-findings-about-americans-and-data-privacy/

17. Browne, Ryan. 2024. “World’s First Major AI Law Enters into Force – Here’s What It Means for U.S. Tech Giants.” CNBC, August 1. https://www.cnbc.com/2024/08/01/eu-ai-act-goes-into-effect-heres-what-it-means-for-us-tech-firms.html

18. GovInfo. 2025. “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” https://www.govinfo.gov/metadata/granule/FR-2023-11-01/2023-24283/mods.xml

19. White House. 2025. “Blueprint for an AI Bill of Rights | OSTP.” National Archives and Records Administration. https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/

20. European Union. 2025. “Regulation – EU – 2024/1689 – En – EUR-Lex.” https://eur-lex.europa.eu/eli/reg/2024/1689

21. Ibid.

22. National Science and Technology Council. 2025. National Artificial Intelligence Research and Development Strategic Plan: 2023 Update. https://www.nitrd.gov/pubs/National-Artificial-Intelligence-Research-and-Development-Strategic-Plan-2023-Update.pdf

23. White House. 2025. “Blueprint for an AI Bill of Rights | OSTP.” National Archives and Records Administration. Accessed March 21, 2025. https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/

24. Ibid.


David Aguilera

David Aguilera is a Nashville, TN-based attorney and policy advisor with over a decade of experience at the Tennessee General Assembly. He currently serves as Policy Advisor to the Tennessee Senate Democratic Leader, Raumesh Akbari, where he provides strategic counsel on legislative priorities. David earned his J.D. from the Nashville School of Law and holds a B.A. in Political Science from the University of Tennessee. He is currently completing an Executive Master of Public Administration at Cornell University’s Jeb E. Brooks School of Public Policy and is a contributing writer for the Cornell Policy Review. Deeply committed to public service, David has held civic leadership roles on local nonprofit boards and within the Davidson County Democratic Party. His academic and professional work reflects a sustained commitment to strengthening democratic institutions and expanding equitable access to civic participation, particularly among historically underserved communities.