How the hype around AI helps to legitimise the militarisation of European academia
‘We are in an era of rearmament and Europe is ready to massively boost its defence spending,’ claimed President of the European Commission, Ursula von der Leyen, as she announced the ReArm Europe plan, which, by her own estimate, could mobilise up to 800 billion euro for military investment. European military spending has been steadily rising for the past decade, data from the Stockholm International Peace Research Institute (SIPRI) shows, and has increased by as much as 30% since Russia’s attack on Ukraine. The Trump administration’s rapprochement with Putin’s Russia, and its threats of a possible US retreat from NATO, have further fuelled the EU drive towards rearmament. After decades of austerity and fiscal discipline, funds that could go into public schemes to tackle the housing crisis or provide high-quality public health and affordable education are now being funnelled into the arms industry.
Proponents of rearmament argue that, in addition to allowing the EU to protect itself and its allies, these investments will boost its industrial base and economic competitiveness. Militarisation, then, is presented as a quick fix for the problems of de-industrialisation. For now, however, EU member states remain largely dependent on the US for the purchase of military technologies, and it seems unlikely that Europe’s domestic industry will be able to absorb this sudden influx of capital. For this reason, a significant part of these investments is directed to the research and development (R&D) of cutting-edge military technologies, including but not limited to artificial intelligence (AI) and its security and warfare applications.
Historically, the EU has taken a cautious approach to the funding of defence R&D, in stark contrast with the US, where public agencies have long subsidised research collaborations between universities and industrial partners to advance military technoscientific innovation. This changed in 2017, with the establishment of the Preparatory Action on Defence Research (PADR), which had a budget of EUR 90 million for the period 2017-2019, followed by the European Defence Industrial Development Programme (EDIDP), which ran between 2019 and 2020, with a budget of EUR 500 million. In 2021, the EDIDP was replaced by the European Defence Fund (EDF), with a budget of EUR 7.9 billion for 2021-2027. This exponentially growing budget is administered by the European Defence Agency, an institution demonstrably influenced by the arms industry through aggressive lobbying and revolving-door practices.
The definition of AI and, for some, its very existence, is controversial: artificial intelligence is less a technical term designating a specific set of technologies than a buzzword heavily pushed by the tech industry. Most commonly, the AI label refers to neural-network models of machine learning that can be used to generate or analyse text and visual content, or to automate decision-making. In other words, when we talk about AI, we are talking about the development of mathematical models, and the sociotechnical infrastructure necessary for those models to operate. For instance, in project ASGARD (involving, among other partners, the University of Amsterdam, several European ministries and IBM) such methods are used to analyse Big Data for surveillance and counter-terrorism purposes. Another application is the operation of unmanned vehicles on battlefields and in border areas, which was the focus of COMPASS2020, a collaboration between research institutions (for example TNO, the Nederlandse Organisatie voor toegepast-natuurwetenschappelijk onderzoek), government agencies (notably the UK Home Office, Montenegro’s Administration for Maritime Safety and Port Management, and the Portuguese Coast Guard) and private companies, among which are major arms producers Airbus and Naval Group SA.
It is no secret that we are in the midst of an AI hype, affecting academia, funding bodies, governments and the private sector, with significant social and environmental costs. This is true of all sorts of AI projects, yet this label has distinctive effects when it is applied to defence settings. To begin with, tech and corporate jargon often obscures what these projects are really about, using convoluted phrases such as ‘high value asset utilization’, ‘risk-based open smart spaces security management’ or ‘management network system for automatic restoration and intelligence reconfiguration of the SCGS network’. Increased surveillance becomes ‘progress in the processing of seized data’ and the lowering of standards is re-branded as the development of technologies ‘built under the maxim of “It works” over “It’s the best”’.
What is more, the push to use AI serves to justify collaborations among a broad range of research partners, which in turn can lend legitimacy to projects and, by extension, their products and outcomes. On the academic side, computer scientists take the lion’s share of such projects, but the AI label also opens up possibilities for the participation of legal scholars, philosophers and social scientists. As militarisation drains funds from other realms, including education and research, participating in these sorts of projects becomes increasingly attractive for researchers. AI is viewed here as an autonomous force whose development is unstoppable: since ‘AI wars’ will happen no matter what we think of them, the thinking goes, it is important for these experts to weigh in and make the technology better. But this is a self-fulfilling prophecy: despite all the rhetoric surrounding autonomous machines, it is not robots who are developing AI weapons, it is humans, backed by ever more generous private and public funding.
Guest blog by Valentina Carraro