Shaping our AI Future: A Policy Perspective on Ensuring Innovation and Safeguarding Democracy

By: Sarah J. Buszka

Edited by: Andrew Bongiovanni

Graphic by: Arsh Naseer


Abstract

This article critically investigates the impacts of artificial intelligence (AI) on democratic principles, addresses the multifaceted challenges in the spheres of surveillance, misinformation, geopolitical tensions, liability assignment, and market regulation, and presents a comprehensive set of policy recommendations. It emphasizes the need for revisiting and updating antitrust laws, establishing AI-specific regulatory frameworks, and promoting transparency and accountability. Recognizing the key role of major tech firms in AI’s development and deployment, it suggests their collaborative involvement in regulatory efforts. Furthermore, it calls for international coordination in the global regulatory landscape for AI. The aim is to navigate the adoption of AI technologies safely, ensuring their wide-ranging benefits are realized without undermining democratic values.

 

Introduction

In the ever-evolving technological landscape of the twenty-first century, few concepts have sparked as much intrigue, excitement, and apprehension as artificial intelligence (AI). AI’s transformative capacity is already driving paradigm shifts in technology, government, security, and many other economic sectors, delivering efficiency gains and accelerating innovation. However, AI is a dual-use technology that can pose significant challenges and risks to the principles of democracy worldwide.

 

Surveillance, Misinformation, and Geopolitical Tensions

Domestically, one of the primary applications of AI that raises alarm is its role in surveillance.1,2,3 While governmental and private entities have drawn on AI technologies for extensive surveillance operations in the interests of national security and/or advancing economic objectives since the early 2010s,4,5 public awareness and scrutiny of these practices surged most notably with Edward Snowden’s whistleblowing on the extent of the surveillance conducted by the National Security Agency (NSA).6 While such operations can have legitimate security aims, they open a Pandora’s box of potential abuse, encroaching on privacy and civil liberties. At the heart of this issue is the risk of AI being employed to foster a culture of control and even exploitation rather than one of freedom.7,8 Thus, it is imperative to strike a balance between harnessing the potential of AI surveillance for enhancing security and preventing the transgression of civil liberties.

Further complicating this balance is the reality of AI-enabled misinformation and deepfake technologies.9,10 These sophisticated tools have the capability to distort public discourse, manipulate perceptions, and even influence political decisions, a phenomenon witnessed during the 2016 US elections.11 In a study conducted by MIT researchers and published in Science, false news was found to be 70% more likely to be retweeted than the truth.12 According to Richard Painter, “bills are pending in Congress to address the problem, but some of these bills are overbroad and rely on criminal sanctions, exacerbating constitutional problems. No bill addressing deepfakes in elections has passed either house.”13 In advance of the upcoming 2024 election, the Federal Election Commission (FEC) remains deadlocked on whether to act on deepfakes.14 The FEC’s inaction underscores the delicate balance and difficult task of protecting the digital information space from harmful exploitation while ensuring that freedoms of speech and expression remain uncompromised.

AI technology is not confined within national borders, and its implications on the international stage are equally profound. With AI’s proliferation, there are growing concerns that pre-existing geopolitical tensions will be exacerbated by the rise of this technology, potentially paving the way for new arenas of competition, inequality, and exploitation.15,16,17 A useful lens through which to examine this dynamic is the military. The current and future transformation in military AI is prominently positioned within the broader geopolitical rivalry between the United States and China.18 Ironically, President Vladimir Putin summarized this situation best in 2017 by asserting, “the one who becomes the leader in this [AI] sphere will be the ruler of the world” and warning that “it would be strongly undesirable if someone wins a monopolist position.”19 With the US, China, and seemingly the rest of the world rushing to adopt AI technologies and create AI-enabled military systems, the threat to democracy could not be greater. Moreover, this international race in AI technology occurs within a vacuum of regulatory control, producing dual-use AI that poses vast ethical and legal conundrums in addition to disrupting international cooperation and governance.20 With national regulations and standards varying widely, the challenge lies in establishing a consensus and crafting a robust, universally accepted legislative framework for AI development and deployment.

 

Liability and Market Regulation

Unsurprisingly, the proliferation of AI technologies has led to legal and regulatory challenges concerning liability assignment and market dominance,21,22 raising fundamental questions about accountability and responsibility.23,24,25 Unlike traditional products, AI technologies can operate independently and evolve, making it challenging to trace causality in incidents involving AI-related harms.26 With multiple stakeholders involved, including hardware manufacturers, software developers, service providers, and end-users, liability attribution is further complicated in the AI ecosystem.27 In the event of AI-driven errors or accidents, should liability be attributed to the machine itself, the underlying algorithms, the developers, the end-users, or someone else entirely? Recognizing these challenges, the European Union (EU) has proposed revisions to liability frameworks to better accommodate AI technologies, aiming to clarify roles and responsibilities and ensure adequate user protection.28 We can begin to elucidate AI liability issues by understanding the current legal frameworks and precedents surrounding product liability and negligence, which can help determine appropriate regulatory responses.29

 

Economic Analysis

From an economic standpoint, liability rules function as mechanisms to internalize harmful externalities and encourage investment in safety measures.30,31 The choice between fault-based (negligence) and strict liability regimes for AI depends on factors such as information costs, activity and innovation levels, and the types of risks involved.32 Fault-based liability holds parties responsible for failing to meet the standard of care, but its efficacy in the realm of AI is stymied by the inherent unpredictability and autonomy of AI technologies.33,34,35

On the other hand, strict liability regimes can potentially deter beneficial AI innovation by shifting the burden entirely to the injurer, potentially reducing activity levels below the socially optimal threshold.36 Nevertheless, such rules could also incentivize firms to develop risk-mitigating technologies and enhance product design to make products safer, serving as a catalyst for AI technology innovation.37 This brings us back to the pivotal question: who should bear the responsibility, producers or operators of AI technologies? While producers control the safety features of AI products, operators play a significant role in their deployment, maintenance, and oversight. Holding both parties accountable incentivizes them to take appropriate precautions and ensures accountability throughout the AI lifecycle.
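To make this trade-off concrete, consider a minimal version of the standard law-and-economics model of accident precaution. This is a stylized sketch in the spirit of the liability literature cited above; the notation and functional assumptions are illustrative rather than drawn from any single source. Let $x \geq 0$ be the injurer’s spending on care and $p(x)H$ the expected harm to victims, with $p'(x) < 0$ and $p''(x) > 0$. Total social cost is

$$SC(x) = x + p(x)H, \qquad \text{minimized at } x^* \text{ where } -p'(x^*)H = 1,$$

that is, the last dollar spent on care prevents exactly one dollar of expected harm. Under a negligence rule with due-care standard $\bar{x} = x^*$, the injurer escapes liability by meeting the standard and so chooses efficient care, but victims bear the residual harm $p(x^*)H$. Under strict liability, the injurer minimizes $x + p(x)H$ directly, again choosing $x^*$, but now also bears the residual harm; if uncertain causation or unforeseeable autonomous behavior leads courts to assess perceived damages $\tilde{H} > H$, the injurer overinvests in care and may scale back activity below the social optimum, which is the innovation-chilling effect noted above.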

Against this economic backdrop, examining how the US government regulates monopolies and natural monopolies offers a useful lens for identifying potential regulatory approaches and policy recommendations for addressing market dominance in the AI sector. The primary motivations for US government intervention in AI regulation revolve around preserving competition and preventing monopolistic practices.38 The legal tradition of natural monopoly regulation has been applied to industries like railways, telecommunications, and utilities, where economies of scale make it more efficient to have a single provider rather than competing firms.39 Some scholars argue that at least certain AI applications exhibit characteristics of natural monopolies due to factors like data network effects, high fixed costs, and low marginal costs of serving additional users.40,41,42 Historically, the US has regulated natural monopolies through tools like price regulation, non-discrimination requirements, universal service obligations, and structural separations to mitigate harms from monopoly power such as high prices, poor service, and exclusion of rivals.43,44 These regulatory approaches could potentially be adapted for dominant AI technology providers.
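The economic logic behind the natural monopoly analogy can be stated in one line. As an illustrative example (the cost structure here is a simplification, not an empirical claim about any particular AI provider), suppose serving a market requires a large fixed cost $F$, such as training a frontier model, and a small constant marginal cost $c$ per user, such as inference. Average cost is then

$$AC(q) = \frac{F}{q} + c,$$

which declines in the number of users $q$ at every output level: one firm serving the whole market at total cost $F + cq$ is always cheaper than two firms duplicating the fixed cost at $2F + cq$. When $F$ is very large relative to $c$, as with state-of-the-art AI systems, unregulated competition tends to collapse toward a single dominant provider, which is precisely the condition that historically triggered price regulation and related tools.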

As AI technologies permeate ever more sectors of the US economy, ensuring fair competition and guarding against market concentration is necessary and urgent. However, the AI industry is currently led primarily by private companies rather than regulated utilities. The major tech firms (Google, Microsoft, Amazon, Meta, Apple) have unique advantages in data, computing power, and entrenched platforms that cement their dominance and allow them to shape AI development.46,47 Their monopolistic control over key inputs like data raises concerns beyond high prices alone, including bias amplification, systemic risk, and stifled innovation.48,49 Moreover, the outsized influence wielded by these dominant tech firms, coupled with their significant lobbying power, raises concerns about the potential manipulation of regulatory frameworks to serve their own interests, potentially stifling competition and innovation. One example of this threat is the March 2023 open letter, signed by technology leaders such as Elon Musk and Steve Wozniak, calling for a six-month moratorium on giant AI experiments.50 While regulation can benefit the economy and business, it can also burden them. The potential costs and compliance obligations associated with regulation raise questions about its impact on market dynamics and technological innovation, as smaller companies may lack the resources to comply and risk being outpaced by the tech giants. While AI does share some characteristics with past natural monopolies, its current industry structure, led by a few private gatekeepers, poses additional challenges. Policymakers can draw lessons from traditional monopoly and natural monopoly regulation while developing new approaches tailored to AI’s unique dynamics and societal impacts.51

 

Policy Recommendations

Emerging research suggests that AI can potentially be regulated through a combination of updates to existing antitrust laws, the establishment of new regulatory guidelines specific to AI development and deployment, and disclosure requirements to ensure transparency and accountability in algorithmic decision-making processes.

 

Updating Antitrust Laws for Digital Markets

Several sources highlight the need to adapt and apply existing antitrust laws to address potential anti-competitive effects emerging from the use of AI technologies and algorithmic pricing tools in digital markets:

  • U.S. antitrust regulators such as the Federal Trade Commission (FTC) and the Department of Justice (DOJ) Antitrust Division, together with the Equal Employment Opportunity Commission and the Consumer Financial Protection Bureau, have affirmed their intent to apply antitrust laws to AI-facilitated violations such as collusion, price-fixing, and market allocation.52
  • There is increasing scrutiny and private litigation alleging antitrust violations by providers of algorithmic pricing tools and their users.53
  • Regulators may need to revisit guidelines and enforcement standards around information sharing, benchmarking, and data exchanges in light of AI capabilities.54
 
Establishing AI-Specific Regulatory Guidelines

The literature also points to the need for new regulatory frameworks and guidelines tailored specifically to the unique challenges posed by AI technologies:

  • The UK’s Competition and Markets Authority (CMA) is developing guidance on potential antitrust issues and enforcement approaches for the rapidly evolving AI sector.55 
  • There are calls for AI governance frameworks, sectoral risk assessments, and supportive policies to foster responsible AI ecosystems and public-private partnerships.56,57 

Based on these findings, policymakers should apply tools like price regulation, non-discrimination requirements, universal service obligations, and structural separations to dominant AI providers, similar to how natural monopolies like utilities have been regulated.58,59 To level the playing field, policymakers should also encourage or mandate data sharing, interoperability standards, and non-discrimination requirements for dominant AI providers,60,61 in addition to supporting digital trade frameworks that facilitate the cross-border data flows needed for AI training.62 Furthermore, policymakers should consider setting up public-private partnerships and enabling sectoral AI ecosystems involving companies of varying sizes, research institutions, and the public sector to drive innovation and adoption.63

 

Promoting Transparency and Accountability

Finally, and perhaps most importantly, to ensure transparency and accountability in algorithmic decision-making, the literature suggests measures like:

  • Requirements for AI technologies to have adequate documentation, audit trails, and explainability capabilities to determine root causes of errors or harms.64,65 
  • Disclosure guidelines or “model cards” that provide information on an AI system’s intended use, performance characteristics, safety considerations, and limitations (see the illustrative sketch after this list).66
  • Staff training, reporting protocols, and independent oversight mechanisms for AI risk management practices within regulatory agencies themselves.67 
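
For concreteness, a minimal model-card-style disclosure might look like the following sketch. Every field name, value, and the system described are hypothetical, loosely following the model-card idea cited above rather than any mandated regulatory format:

```python
# A minimal, hypothetical "model card" disclosure for an AI system.
# All field names and values below are illustrative, not a regulatory standard.
model_card = {
    "model_name": "example-credit-screening-v1",  # hypothetical system
    "intended_use": "Pre-screening of consumer credit applications",
    "out_of_scope_uses": ["employment decisions", "law enforcement"],
    "performance": {
        "overall_accuracy": 0.91,            # illustrative benchmark figure
        "max_subgroup_accuracy_gap": 0.04,   # largest gap across demographic groups
    },
    "safety_considerations": [
        "May underperform for applicants with thin credit histories",
        "Adverse decisions require human review",
    ],
    "limitations": "Not validated outside the US market",
    "audit_trail": "Predictions logged with model version and input hash",
}

def render_summary(card: dict) -> str:
    """Render the disclosure as a short plain-text summary for users or auditors."""
    return "\n".join([
        f"Model: {card['model_name']}",
        f"Intended use: {card['intended_use']}",
        f"Out of scope: {', '.join(card['out_of_scope_uses'])}",
        f"Limitations: {card['limitations']}",
    ])

print(render_summary(model_card))
```

Even a disclosure this simple supports the accountability goals above: documentation and an audit trail make root-cause analysis of errors tractable, and explicit out-of-scope uses give regulators and courts a baseline against which to assess misuse.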

Thus, a multi-pronged approach is needed to tackle the intertwined challenges of market dominance by major tech firms and anti-competitive behavior that stifles innovation, thereby ensuring the responsible development of artificial intelligence. This holistic approach must involve updating antitrust enforcement mechanisms while establishing new AI-specific regulations that mandate transparency and accountability for algorithms. In parallel, it necessitates constructive engagement with dominant AI tech firms to address thorny issues surrounding data sharing, interoperability standards, and open ecosystems, while leveraging existing competition laws to promote market diversity. Crucially, effective solutions will require extensive cooperation: fostering public-private partnerships between government and industry alongside international coordination to align AI governance frameworks across borders. Only through this multifaceted strategy of modernized rules, collaborative reform efforts with key players, and robust cooperation spanning sectors and nations can we cultivate an AI landscape that unlocks innovation while upholding ethical principles and fair competition on a global scale.68

 

Conclusion

In an environment where a handful of large private companies dominate AI research and development, a level playing field is imperative. This is where governments across the globe must step up and leverage legislative tools to prevent undue concentration of AI-infused power, including by reassessing and modernizing existing antitrust laws and drafting new AI-specific regulations. The end goal? A diverse, healthy market that fosters innovation and assures the safe advancement of AI technologies for the good of society. Developers, legal experts, tech companies, and policy architects are all necessary participants in this discussion and in the broader conversation about AI’s future implications for society. In summary, the transformational power of AI requires an intentional, comprehensive, and flexible policy response. As the AI evolution continues, we must incorporate accountability, transparency, and fair competition into every regulatory discussion to safeguard democratic systems and freedoms while optimizing the benefits of AI.

 


Works Cited

[1] Colón Vargas, N. 2024. “Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups.” AI and Ethics. https://doi.org/10.1007/s43681-024-00502-w.

[2] Molnar, P. 2019. “Technology on the Margins: AI and Global Migration from a Human Rights Perspective.” Cambridge International Law Journal 8 (2): 305–330. https://heinonline.org/HOL/P?h=hein.journals/cajoincla8&i=305.

[3] Sharma, M., A. Dixit, and P. Rawat. 2024. “The Use of AI in Surveillance to Identify the Potential Threat of Terrorist Attacks.” 2024 IEEE 1st Karachi Section Humanitarian Technology Conference (KHI-HTC), 1–7. https://doi.org/10.1109/KHI-HTC60760.2024.10482367.

[4] Molnar, P. 2019. “Technology on the Margins.” 305–330.

[5] Sharma, M., A. Dixit, and P. Rawat. 2024. “The Use of AI in Surveillance.” 1–7.

[6] Snowden, E.J. 2019. Permanent Record. First edition. New York: Metropolitan Books/Henry Holt and Company.

[7] Colón Vargas, N. 2024. “Exploiting the Margin.”

[8] Molnar, P. 2019. “Technology on the Margins.” 305–330.

[9] Painter, R.W. 2024. “Deepfake 2024: Will Citizens United and Artificial Intelligence Together Destroy Representative Democracy?” Journal of National Security Law & Policy 14 (1): 121–149.

[10] Tomassi, A., A. Falegnami, and E. Romano. 2024. “Mapping Automatic Social Media Information Disorder. The Role of Bots and AI in Spreading Misleading Information in Society.” PLoS ONE 19 (5): 1–54. https://doi.org/10.1371/journal.pone.0303183.

[11] Painter, R.W. 2024. “Deepfake 2024.” 121–149.

[12] Vosoughi, S., D. Roy, and S. Aral. 2018. “The Spread of True and False News Online.” Science 359 (6380): 1146–1151. https://doi.org/10.1126/science.aap9559.

[13] Painter, R.W. 2024. “Deepfake 2024.” 121–149.

[14] Painter, R.W. 2024. “Deepfake 2024.” 121–149.

[15] Colón Vargas, N. 2024. “Exploiting the Margin.”

[16] Elliott, L., and G. Wearden. 2024. “Geopolitical Tensions and AI Dominate Start of World Economic Forum; Ukraine, Middle East and Taiwan Overshadow Annual Meeting at Davos, with Artificial Intelligence Also High on Agenda.” The Guardian (London, England), January. https://go.gale.com/ps/i.do?p=AONE&sw=w&issn=02613077&v=2.1&it=r&id=GALE%7CA779440331&sid=googleScholar&linkaccess=abs.

[17] Molnar, P. 2019. “Technology on the Margins.” 305–330.

[18] Glonek, J. 2024. “The Coming Military AI Revolution.” Military Review 104 (3): 88–99.

[19] “Putin: Leader in Artificial Intelligence Will Rule World.” 2017. CNBC, September 4. https://www.cnbc.com/2017/09/04/putin-leader-in-artificial-intelligence-will-rule-world.html.

[20] Glonek, J. 2024. “The Coming Military AI Revolution.” 88–99.

[21] Grynbaum, M.M., and R. Mac. 2023. “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work.” The New York Times, December 27. https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html.

[22] “Pause Giant AI Experiments: An Open Letter.” 2023. Future of Life Institute. March. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

[23] Buiten, M., A. de Streel, and M. Peitz. 2023. “The Law and Economics of AI Liability.” Computer Law & Security Review 48: 105794. https://doi.org/10.1016/j.clsr.2023.105794.

[24] Buiten, M.C. 2024. “Product Liability for Defective AI.” European Journal of Law and Economics. https://doi.org/10.1007/s10657-024-09794-z.

[25] Zech, H. 2021. “Liability for AI: Public Policy Considerations.” ERA Forum 22 (1): 147–158. https://doi.org/10.1007/s12027-020-00648-0.

[26] Davidson, S. 2024. “The Economic Institutions of Artificial Intelligence.” Journal of Institutional Economics 20: e20. https://doi.org/10.1017/S1744137423000395.

[27] Buiten, M.C. 2024. “Product Liability for Defective AI.”

[28] European Commission. 2021. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.

[29] Buszka, S.J. 2024. “Navigating Liability and Market Dominance in AI: Implications for US Government Regulation.” Unpublished paper, Brooks School of Public Policy, Cornell University.

[30] Buiten, M., A. de Streel, and M. Peitz. 2023. “The Law and Economics of AI Liability.”

[31] Davidson, S. 2024. “The Economic Institutions of Artificial Intelligence.”

[32] Buiten, M., A. de Streel, and M. Peitz. 2023. “The Law and Economics of AI Liability.”

[33] Buiten, M., A. de Streel, and M. Peitz. 2023. “The Law and Economics of AI Liability.”

[34] Buiten, M.C. 2024. “Product Liability for Defective AI.”

[35] Zech, H. 2021. “Liability for AI: Public Policy Considerations.” 147–158.

[36] Zech, H. 2021. “Liability for AI.” 147–158.

[37] Zech, H. 2021. “Liability for AI.” 147–158.

[38] Narechania, T.N. 2021. “Machine Learning as Natural Monopoly.” Iowa Law Review 107 (4): 1543–1614. https://heinonline.org/HOL/P?h=hein.journals/ilr107&i=1595.

[39] Narechania, T.N. 2021. “Machine Learning as Natural Monopoly.” 1543–1614.

[40] Chakravorti, B. 2024. “What if Regulation Makes the AI Monopoly Worse?” Foreign Policy, May 16. https://foreignpolicy.com/2024/01/25/ai-regulation-monopoly-chatgpt/.

[41] Mulligan, C.E.A., and P. Godsiff. 2023. “Datalism and Data Monopolies in the Era of A.I.: A Research Agenda.” arXiv:2307.08049. https://doi.org/10.48550/arXiv.2307.08049.

[42] Narechania, T.N. 2021. “Machine Learning as Natural Monopoly.” 1543–1614.

[43] Kak, A., S. Glickman, and S. West. 2024. “AI Nationalism(s): Global Industrial Policy Approaches to AI.” AI Now Institute. https://ainowinstitute.org/general/ai-nationalisms-executive-summary.

[44] Narechania, T.N. 2021. “Machine Learning as Natural Monopoly.” 1543–1614.

[45] Kak, A., S. Glickman, and S. West. 2024. “AI Nationalism(s).”

[46] Lynn, B., M. von Thon, and K. Montoya. 2023. “Report | AI in the Public Interest: Confronting the Monopoly Threat.” Open Markets Institute. https://www.openmarketsinstitute.org/publications/report-ai-in-the-public-interest-confronting-the-monopoly-threat.

[47] Mulligan, C.E.A., and P. Godsiff. 2023. “Datalism and Data Monopolies in the Era of A.I.”

[48] Chakravorti, B. 2024. “What if Regulation Makes the AI Monopoly Worse?”

[49] Lynn, B., M. von Thon, and K. Montoya. 2023. “AI in the Public Interest.”

[50] “Pause Giant AI Experiments: An Open Letter.” 2023.

[51] Buszka, S.J. 2024. “Navigating Liability and Market Dominance in AI.”

[52] Hoffman Kent, K., and K. Schwartz. 2023. “Antitrust Implications of AI Technology Remain Key Focus of U.S. Regulators.” New York Law Journal, November 13. https://www.law.com/newyorklawjournal/2023/11/13/antitrust-implications-of-ai-technology-remain-key-focus-of-u-s-regulators/.

[53] Goodman, J.M., M.L. Naranjo, and R.P. Satia. 2024. “AI and Algorithmic Pricing: Current Issues and Compliance Considerations.” Morgan, Lewis & Bockius Law Firm, April 29. https://www.morganlewis.com/pubs/2024/04/ai-and-algorithmic-pricing-current-issues-and-compliance-considerations.

[54] Goodman, J.M., M.L. Naranjo, and R.P. Satia. 2024. “AI and Algorithmic Pricing.”

[55] CMA AI Strategic Update. 2024. Competition & Markets Authority. https://www.gov.uk/government/publications/cma-ai-strategic-update/cma-ai-strategic-update.

[56] Coglianese, C., and A. Lai. 2022. “Antitrust by Algorithm.” SSRN Scholarly Paper 3985553. https://papers.ssrn.com/abstract=3985553.

[57] Hoffman Kent, K., and K. Schwartz. 2023. “Antitrust Implications of AI Technology.”

[58] Comunale, M., and A. Manera. 2024. “The Economic Impacts and the Regulation of AI: A Review of the Academic Literature and Policy Actions.” IMF Working Paper 2024/65. https://www.imf.org/en/Publications/WP/Issues/2024/03/22/The-Economic-Impacts-and-the-Regulation-of-AI-A-Review-of-the-Academic-Literature-and-546645.

[59] Business Roundtable. n.d. Business Roundtable Policy Recommendations for Responsible Artificial Intelligence. Accessed May 9, 2024. https://www.businessroundtable.org//policy-perspectives/technology/ai.

[60] Comunale, M., and A. Manera. 2024. “The Economic Impacts and the Regulation of AI.”

[61] Lynn, B., M. von Thon, and K. Montoya. 2023. “AI in the Public Interest.”

[62] Business Roundtable. n.d. Business Roundtable Policy Recommendations.

[63] High-Level Expert Group on AI (AI HLEG). 2019. Policy and Investment Recommendations for Trustworthy Artificial Intelligence. European Commission. https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence.

[64] Coglianese, C., and A. Lai. 2022. “Antitrust by Algorithm.”

[65] Hoffman Kent, K., and K. Schwartz. 2023. “Antitrust Implications of AI Technology.”

[66] Coglianese, C., and A. Lai. 2022. “Antitrust by Algorithm.”

[67] Coglianese, C., and A. Lai. 2022. “Antitrust by Algorithm.”

[68] Buszka, S.J. 2024. “Navigating Liability and Market Dominance in AI.”


Sarah J. Buszka

Sarah J. Buszka is an Executive MPA candidate and Brooks Public Policy Fellow at Cornell University's Jeb E. Brooks School of Public Policy. With over a decade of experience supporting higher education across multiple roles and institutions, she enjoys exploring the intersections of leadership, education, technology, policy, and organizational culture.