Artificial intelligence and neoliberalism: insights into a symbiotic relationship
Ponti
2025
Abstract
This article examines the relationship between artificial intelligence (AI) and neoliberalism, arguing that their connection is neither incidental nor merely contextual, but rooted in a shared foundational logic: instrumental rationality. Neoliberalism is understood here as an ordering paradigm that extends market-based, quantitative, and efficiency-oriented reasoning to all domains of social life. AI, as it is currently conceived, developed, and deployed, operates according to the same logic, relying on datafication, quantification, correlation, and optimization as exclusive criteria for decision-making and evaluation. Starting from a conceptual analogy between market rationality and computational problem-solving, the article shows how AI systems embody and reinforce a flattened notion of “the best outcome,” defined solely in measurable and calculable terms. This convergence becomes particularly visible in the public sector, where AI-based solutions are increasingly adopted to support or replace administrative decision-making. Because AI technologies are developed and supplied almost entirely by private actors operating within a neoliberal economic framework, their integration into public functions necessarily involves outsourcing, informational asymmetries, and a significant agenda-setting power exercised by technology providers. The analysis explores several hypotheses explaining the AI–neoliberalism symbiosis. First, the outsourcing of AI aligns with neoliberal doctrines of state downsizing and market primacy, limiting the autonomy of public authorities. Second, AI adoption tends to import business models centered on large-scale data extraction and surveillance, thereby reshaping public functions according to logics associated with surveillance capitalism. 
Third, from a critical political economy perspective, AI can be interpreted as a contingent but powerful vehicle for neoliberal capitalism, reinforcing techno-solutionism and the dominance of instrumental rationality in governance. The article also highlights the consequences of these dynamics for human actors, particularly public officials. Training humans to work effectively with AI systems often entails the internalization of the systems’ implicit assumptions, narrowing discretionary reasoning and transforming human oversight into a largely legitimizing function. This raises doubts about the effectiveness of “human-in-the-loop” safeguards in countering structural biases and ideological preconfigurations embedded in AI. Finally, the article addresses the question of whether an alternative, non-neoliberal AI is possible. Through historical case studies such as Chile’s Project Cybersyn and Japan’s Fifth Generation Computer Systems project, it shows that alternative approaches have existed but failed to consolidate due to broader geopolitical and ideological shifts favoring neoliberalism. The conclusion argues that the feasibility of alternative AI systems ultimately depends on transforming the ideological and institutional conditions that currently shape AI development, rather than on technical redesign alone.


