Artificial Intelligence & Power Politics: A Political Tool in Modern Bureaucracy

By: Michelle Lee

Edited by: Ava LaGressa

Graphic by: Arsh Naseer 


“Bureaucracy develops the more perfectly, the more it is ‘dehumanized,’ the more completely it succeeds in eliminating from official business love, hatred, and all purely personal, irrational, and emotional elements which escape calculation.”
– Max Weber1

Combining human expertise with computational strength, government agencies are working faster and smarter. Artificial intelligence (AI) is making its mark among the top trends shaping the government landscape in 2024, an emerging form of cyber dominance exemplified by the AI executive order the Biden administration signed in late 2023.2 The U.S. government is now leveraging AI to respond to global issues relating to health care, infrastructure, climate change, and research, transforming strategies and reinforcing bureaucratic tendencies and structures in public-sector administration. Incorporating AI into government agencies aligns, to some extent, with Max Weber’s concept of bureaucracy. Weber defined modern bureaucracy as a hierarchical organization characterized by specific rules and procedures for decision-making and administration.3 By implementing AI, government agencies are moving bureaucratic processes toward simpler, more cost-efficient routines, with enhanced efficiency and productivity in handling overwhelming caseloads. This paper engages with the intersection of Weberian bureaucratic theory and advances in AI-driven bureaucratic decision-making, with a primary focus on the political consequences of technical and economic considerations in administrative tasks. First, we examine Max Weber’s concept of modern bureaucracy and how it aligns with the current mechanization of bureaucracy. Then, we explore how AI serves as an embodiment, perpetuation, and political extension of Weber’s principles and ideals of bureaucracy. We finish by identifying and discussing specific actions federal agencies have taken to adopt AI on technical and economic justifications in pursuit of their intended objectives of efficiency and effectiveness.

 

I. WEBERIAN BUREAUCRACY IN THE AGE OF ARTIFICIAL INTELLIGENCE

Bureaucracies are deeply hierarchical and political, both in culture and in nature. Under Max Weber’s principles of bureaucracy, bureaucratic organizations follow a hierarchical, monocratically organized office structure, with each jurisdiction abiding by a set of general and formal rules and duties. To ensure efficiency and legal order, bureaucratic organizations and political officials are expected to operate strictly on impersonal and functional grounds.4 In other words, Weber’s concept of bureaucracy stems from rational law, with a focus on objectivity and calculation: “Bureaucracy develops the more perfectly, the more it is ‘dehumanized,’ the more completely it succeeds in eliminating from official business love, hatred, and all purely personal, irrational, and emotional elements which escape calculation”.5 Like bureaucracies, AI prioritizes the elimination of personal, emotional elements in favor of standardized, data-driven approaches to decision-making. AI systems embody the dehumanizing aspect of Weberian ideals in that they emphasize objectivity and efficiency over human judgment. With AI, the government can achieve a “personally detached” and “strictly objective expert” established on the basis of “statutes” and rational law.6 This is not to argue for the compatibility or effectiveness of AI under Weberian principles, but to discern certain overlaps with existing contemporary discussions. Nevertheless, the juxtaposition underscores the Weberian ideals of maintaining legal certainty, control, rationality, and impersonality within hierarchies.

 

This discussion is furthered by the argument that AI serves as an implicit manifestation and political extension of Weber’s bureaucratic values, and that ideological motives thus lie behind its adoption and implementation in government operations. As outlined in an announcement from the White House, “AI professionals provide valuable expertise to regulators who need to ensure that algorithmic tools do not introduce unfair bias into hiring processes”.7 It is worth questioning whether this is true or feasible, given the inherently political implications of leveraging government-owned technological innovations: “Just like with any other tool, technological innovations, their adoption and implementation are not the product of natural evolution or inevitable. They are the product of political decisions, and both their design and implementation are policy choices”.8 This insight invites us to ask whether such adoption is a deliberate policy choice rather than a natural progression, underscoring the need for careful consideration and strategic decision-making in navigating the intersection of technology and governance.

 

II. FEDERAL AGENCIES EMBRACE AI: TECHNICAL AND ECONOMIC INSIGHTS INTO INNOVATION AND EFFICIENCY

In recent years, government agencies have increasingly turned to AI solutions to address societal challenges. Approximately 45% of federal agencies have integrated AI and machine learning (ML) tools into their decision-making processes, with law enforcement, health, and financial regulation ranking as the top three policy areas of use.9 Although it may appear that AI is sculpting a new reality, in practice data science consists of expert technicians and programmers translating problems into code and algorithms that surface predictive patterns. This conforms to Weber’s theory on the significance of technical foundations and expertise in establishing bureaucratic organizations: “The decisive reason for the advance of bureaucratic organization has always been its purely technical superiority over any other form of organization”,10 a point further supported by Mulligan and Hsiang’s observation that “effective AI regulation and policy development requires technical expertise”.11

 

As defined under Max Weber’s terms, the basis of bureaucracy consists of “[p]recision, speed, unambiguity, knowledge of the files, continuity, discretion, unity, strict subordination, reduction of friction and of material and personal costs […]”.12 This theory resonates with the capabilities of AI in that it strives for high precision, speed, resource optimization, and predictability in tackling complex governance tasks. However, bureaucracies are not immune to political influence, and the integration of AI into bureaucratic processes does not eliminate this reality; in fact, it solidifies it. The values embedded in the hierarchies that government agencies artificially construct can be theorized as a means of maintaining power politics.

 

Political biases may manifest in dataset selection, with certain groups or variables deliberately included or excluded to serve specific agendas. A case in point is the vulnerability of the Department of Homeland Security’s facial recognition system to exploitation, where manipulated technical specifications could falsely match innocent individuals against the no-fly list or allow certain individuals to evade detection.13 The standardization procedure, technical in nature, is susceptible to subjectivity at multiple stages, from data selection and representation to outcome interpretation. Moreover, the interpretation and categorization of variables within standardized datasets may reflect political perspectives, potentially leading to biased outcomes that reinforce existing political dynamics. This stresses the importance of ensuring the accountability, transparency, and fairness of AI systems and models. Though the White House’s executive order meets the “technical superiority” characteristic of Weberian bureaucracy, it offers limited insight into the power and capacity of new algorithmic tools in advancing agency missions. The standardization of data is not entirely devoid of political biases, despite federal agencies’ efforts to mitigate them. Yet neutralizing efforts appear limited primarily to “hiring processes” rather than to the design of AI models themselves: “…software engineers, designers, and AI professionals provide valuable expertise to regulators who need to ensure that algorithmic tools do not introduce unfair bias into hiring processes”.14 Moreover, establishing accountability in democratic governance involves limiting the degree of autonomy possible within the public sector. This entails hierarchical organizations monitoring and authorizing the actions of their subordinates to facilitate oversight and accountability.
As far as bureaucratic decisions are delegated to a machine, the choices made by the machine are equally political and will mirror the political ideologies of both the designers and the individuals who have chosen to adopt and implement them within public administration.
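The point that technical specifications encode policy choices can be made concrete. The sketch below is purely illustrative (the scores, thresholds, and function names are hypothetical, not drawn from any real agency system): in a screening system, the match threshold is a “technical” parameter, yet choosing it decides how many innocent people are falsely flagged versus how many true matches slip through.

```python
# Illustrative sketch: a match threshold is a technical specification,
# but selecting its value is a policy decision. All values are invented.

def match_decisions(similarity_scores, threshold):
    """Flag a candidate as a 'match' when its similarity score
    meets or exceeds the chosen threshold."""
    return [score >= threshold for score in similarity_scores]

# Similarity scores a face-matching model might assign to innocent
# travelers compared against a watchlist entry (hypothetical values).
scores = [0.41, 0.58, 0.63, 0.72, 0.88]

# A permissive threshold flags more innocent people (false matches)...
permissive = match_decisions(scores, threshold=0.55)  # 4 of 5 flagged
# ...while a strict threshold lets more true matches evade detection.
strict = match_decisions(scores, threshold=0.85)      # 1 of 5 flagged

print(sum(permissive), sum(strict))  # prints: 4 1
```

Neither threshold is “neutral”: each trades one kind of error for another, and that trade-off is exactly the kind of design choice the text argues carries political weight.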

 

Federal agencies’ adoption of AI systems is driven by the twin objectives of cost efficiency and accuracy. Modernized federal programs are designed, from an economic standpoint, to reduce costs, aligning with the claim that “AI [has] the potential to reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data, thereby making government performance more efficient and effective,” which in turn encapsulates Weber’s view that administrative tasks are, to a large extent, “economically determined” and motivated.15 Nonetheless, the administrative state’s increasing reliance on AI tools poses a significant risk of compounding biases against vulnerable or marginalized groups, potentially provoking significant political consequences and violating anti-discrimination laws. These concerns stem from evidence of algorithmic bias and disparities across law enforcement agencies. For example, criminal risk assessment and facial recognition algorithms employed by the FBI and NYPD have exhibited higher false positive rates for African-American individuals: because facial recognition algorithms were trained predominantly on lighter skin tones, they perform poorly on darker-skinned faces and on low-quality database images of Black individuals, contributing to an overrepresentation of Black individuals in arrests.16,17 This is further amplified by the disproportionate rating of low-income individuals as “high risk” in child welfare assessments.18 These biases shed light on severe imbalances in the allocation of benefits among demographic groups, which are problematic and controversial for federal agencies established on legal and ethical grounds, leading not only to reduced trust and rising skepticism but also to denial of access to healthcare and welfare.
As AI systems become integral to various domains such as healthcare, finance, and governance, there is a corresponding rise in the need for bureaucratic mechanisms to govern their oversight, development, deployment, and ethical use.
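The disparity described above is typically measured as a gap in false positive rates, the share of people who did not reoffend but were nonetheless flagged as high risk, computed separately for each group. The sketch below shows one minimal way such an audit could be run; the records and group labels are invented for illustration and do not represent any real agency data.

```python
# Hypothetical disparate-impact audit: compare false positive rates
# of a risk-scoring tool across demographic groups. Data is invented.

from collections import defaultdict

def false_positive_rates(records):
    """records: list of (group, predicted_high_risk, actually_reoffended).
    Returns, per group, the share of non-reoffenders flagged high risk."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:                # only true non-reoffenders count
            negatives[group] += 1
            if predicted:             # flagged despite not reoffending
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

records = [
    ("A", True, False), ("A", False, False), ("A", True, False), ("A", False, False),
    ("B", False, False), ("B", False, False), ("B", True, False), ("B", False, False),
]
rates = false_positive_rates(records)
print(rates)  # group A is wrongly flagged at 0.5, group B at 0.25
```

A gap like this, one group bearing twice the rate of wrongful flags, is the statistical signature behind the legal and ethical concerns raised in the text.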

 

By examining Weberian bureaucracy through the lens of technical and economic considerations in the context of AI, we gain insight into how traditional bureaucratic principles intersect with modern technological advancements and their impact on organizational efficiency, productivity, and economic performance. Weber’s theory provides a valuable framework for deciphering the interplay between technological innovation, bureaucratic organization, and societal development. The relationship between Weber’s ideal bureaucracy and its actual practice is complex, and the objective of prioritizing economic efficiency over social welfare becomes untenable when the costs and consequences fall short of that ideal. As AI becomes increasingly mainstream in government operations and agencies, questions and concerns surrounding ethics, transparency, accountability, and stronger regulation have emerged as prevailing topics of discussion. The rapid proliferation of AI calls for comprehensive scrutiny of internal agency capacity, ensuring that AI is designed and implemented in a manner that adheres to legal frameworks, policies, and accountability standards.

 


Works Cited

1. Weber, M. 1968. Economy and Society. (G. Roth & C. Wittich, Eds. & Trans.). Los Angeles: University of California Press. pp. 956–1005. https://ia600305.us.archive.org/25/items/MaxWeberEconomyAndSociety/MaxWeberEconomyAndSociety.pdf

2. The United States Government. 2023. Fact sheet: President Biden issues executive order on safe, secure, and trustworthy artificial intelligence. The White House, October 30. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence

3. Weber, 1968, Economy, pp. 956-1005.

4. Weber, 1968, Economy, p. 975.

5. Weber, 1968, Economy, p. 975.

6. Weber, 1968, Economy, p. 975.

7. Mulligan, Deirdre K., and Mina Hsiang. 2024. “A Call to Service for AI Talent in the Federal Government.” The White House. The United States Government. January 29. https://www.whitehouse.gov/ostp/news-updates/2024/01/29/a-call-to-service-for-ai-talent-in-the-federal-government/.

8. Cetina Presuel, Rodrigo, and Jose M. Martinez Sierra. 2024. “The Adoption of Artificial Intelligence in Bureaucratic Decision-Making: A Weberian Perspective.” Digital Government: Research and Practice 5 (1): 1–20. doi:10.1145/3609861.

9. Freeman Engstrom, David, Daniel E. Ho, Catherine M. Sharkey, and Mariano-Florentino Cuéllar. 2020. “Artificial Intelligence in Federal Administrative Agencies.” Stanford Law. February. https://law.stanford.edu/wp-content/uploads/2020/02/ACUS-AI-Report.pdf.

10. Weber, 1968, Economy, p. 973.

11. Mulligan and Hsiang, 2024, “Call to Service.”

12. Weber, 1968, Economy, p. 973.

13. Engstrom et al., 2020, “Artificial Intelligence.”

14. Mulligan and Hsiang, 2024, “Call to Service.”

15. Engstrom et al., 2020, “Artificial Intelligence.”

16. Najibi, Alex. 2020. “Racial Discrimination in Face Recognition Technology.” Science in the News. October 26. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/.

17. Engstrom et al., 2020, “Artificial Intelligence.”

18. Engstrom et al., 2020, “Artificial Intelligence.”
