In Resisting AI: An Anti-fascist Approach to Artificial Intelligence, Dan McQuillan calls for a restructuring of artificial intelligence (AI) which prioritises the common good over an algorithmic optimisation that reinforces the marginalisation of vulnerable groups. Though this book raises important concerns about the inbuilt biases and misuse of a rapidly developing technology, Milan Stürmer and Mark Carrigan find it lacks a detailed technical analysis to support its claims about AI’s social impacts.
This blogpost originally appeared on LSE Review of Books. If you would like to contribute to the series, please contact the managing editor at firstname.lastname@example.org.
Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Dan McQuillan. Bristol University Press. 2022.
Since the release of ChatGPT (then powered by GPT-3.5) in November 2022, public discourse, both critical and affirmative, has focused mainly on the promises of generative Artificial Intelligence (AI). There is an immediacy to regular use of conversational agents like ChatGPT and Claude that makes it feel as if they offer real capabilities with significant consequences for the individuals who choose to use them, as well as for the organisations they will come to be embedded within if the much-predicted next wave of automation comes to pass. Though the release of Dan McQuillan’s Resisting AI preceded these events by a matter of months, its language differs from that of the current hype, yet it remains relevant. In fact, it is in contrast with the current hype-cycle that McQuillan’s book unfolds its critical force, countering what he, with a cheeky nod to Mark Fisher, calls “AI Realism” (44). He has more recently described conversational AI systems like ChatGPT as ‘bullshit engines’, in an argument with clear roots in this book, even if the earlier generation of large language models was relatively peripheral within it.
Even though it offers brilliantly clear explanations of what machine learning does on a technical level, the book’s core argument is not itself technical and feels, perhaps necessarily, deflating: “Rather than heralding an alternative sci-fi future, AI can be more plausibly understood as an upgrade to the existing bureaucratic order” (60). In line with familiar critiques of bureaucratic solutionism, McQuillan paints the trajectory of AI as deeply aligned with neoliberalism and susceptible to fascist ends. He surfaces real concerns about how AI, with its inherent tendency to form positive feedback loops, can further marginalise vulnerable groups if deployed without care, for example by optimising welfare distribution based on pre-existing injustices. He also highlights the environmental footprint of large neural networks, vividly reminding us that “if artificial intelligence has a soundtrack, it’s the deafening whir of cooling fans in the server farms” (22). It is a powerful critique which could not be more relevant to our present situation, in which generative AI is widely claimed to be the next wave of disruptive innovation, following conveniently forgotten disappointments like web 3.0 and the metaverse.
Yet the linkage of fascism and AI specifically is more tenuous. McQuillan rarely goes beyond the anti-fascist critique of bureaucracy familiar from twentieth-century social theory and political struggles. The upside of this approach is that we do not have to reinvent the wheel, but “can build on the long history of community solidarity generated by people’s resistance to exclusion and enclosure” (135). The downside is that it loses some of the specificity of the current technological condition, which is exactly what McQuillan’s expertise would enable him to speak to powerfully. He gestures briefly at a complex and original connection between AI and fascism, remarking that the “social contradictions that are amplified by AI, and so starkly highlighted by the disparities of COVID-19 and climate change, are the social contradictions that fascism will claim to solve” (99, our emphasis). The speculative and highly mediated temporal figuration of “will claim to solve” deserves much more attention, for it implies a historically dynamic, yet external relation between fascism and technology that goes beyond claims of mutual reinforcement. This is a fascinating and significant proposition, but one which remains incompletely developed, reflecting a broader tendency for the compelling thematics of the book to swamp the detailed analysis.
So, to what extent is this book actually about AI? There seems to be a disconnect between the technical aspects of AI and the socio-political analysis McQuillan provides. Resisting AI can be read as reflecting broader tensions between technical and social perspectives on emerging technologies, rationality and bureaucracy. As such, AI becomes less a concrete object and more “an organizing idea – a framework that is used to make sense of the world in a particular way” (48).
McQuillan is more concerned with the social impacts of AI than with its technical details. When he turns, towards the end of the book, to the political alternatives of a “new apparatus” (145) and possible reappropriations, the language becomes frustratingly metaphysical. We read, for example, that the “framing of a new apparatus accepts that the diversity, variety and complexity of experience overflows representation and is therefore immune to abstraction” (148), a formulation that sounds appealing but remains nebulous. In sharp contrast to the admirable clarity of the majority of the book, these descriptions of alternatives are unconvincing. Rather than exploring how the technology could also enable, say, decentralised coordination, equitable resource allocation and, with the right governance, sustainability solutions, the ultimate advice for an anti-fascist approach to AI seems to be: don’t do it. There is little to no emancipatory potential in AI, so it is something best resisted outright. He might be right on this one, and the book’s message is a timely contribution in the current climate, but it does feel intellectually unsatisfying.
The content generated on this blog is for information purposes only. This article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns before posting a comment below.