AWS in the Cross-Hairs: Data Centres as Targets

[Professor Luke Moffet is chair of human rights and international humanitarian law at Queen’s University Belfast and author of Algorithms of War (BUP 2026).

Alannah Travers is a PhD student at Queen’s University Belfast (QUB) School of Law in collaboration with the Ceasefire Centre for Civilian Rights, working on Algorithmic Warfare and Civilian Harm.]

On 3 March 2026 it was reported that Iran had hit two Amazon Web Services (AWS) data centres in the United Arab Emirates with drones and had damaged another in Bahrain – believed to be the world’s first direct military attack on a US hyperscaler, following reports the day before of debris from a strike forcing an AWS centre to shut down. In response, on 5 March, US and Israeli forces bombed at least two data centres in Tehran. AWS is an on-demand cloud-computing and application programming interface (API) provider that enables businesses to deliver IT services without investing in expensive hardware, like server farms. There are over 12,000 data centres globally, 45% of them in the US, and they are critical to supporting our digital world. Data centres support everything from digital financial services and online gaming to cloud storage, machine learning and AI. While much attention has been paid to the use of AI in targeting, with the likes of the Maven and Lavender AI decision-support systems, there has been little attention on attacks against the physical infrastructure that enables such systems.

AWS is one of the largest cloud-computing providers, with over 900 data centres in more than 50 countries serving millions of companies and governments. AWS provides businesses and governments with compute power to enable AI development, database storage, and content-delivery services that reduce latency in accessing content. When such data centres are damaged, the effects can ripple across the digital world. In 2021 a fire at an OVHcloud data centre in Strasbourg, France knocked out over 3.6 million webpages it was hosting.

The extent of the damage from the AWS attacks in the UAE remains unclear. Some banking operators in the UAE have reported outages, and Amazon advised ‘customers to back up critical data and shift operations to servers in unaffected AWS regions.’ These attacks demonstrate that the often incorporeal, ethereal, seemingly magical quality of AI is colliding with the physical world it depends on, risking serious disruption to online services, unseen civilian harm and disruption to military use of cloud computing.

This post explores the legality of targeting data centres in armed conflict under international humanitarian law (IHL), as well as the impact of such kinetic attacks on civilians. It highlights the growing centrality of AI in the military and the physical infrastructure it depends on, which is increasingly interwoven into the everyday digital lives of civilians. At the outset we should say that the Iranian attacks on data centres likely amount to the war crime of attacking civilian objects. As for the responding US and Israeli strikes on Tehran, there is insufficient information to make that determination, but our analysis below indicates that such strikes risk being indiscriminate and unlawful under IHL.

Military Use of Cloud Computing

The ‘Cloud’ is a physical network of remote computing resources, such as data centres, largely owned by private, predominantly US hyperscalers like Amazon (AWS), Microsoft (Azure) and Google. Today’s militaries are increasingly outsourcing their most sensitive operational logic to distributed cloud-computing structures. For instance, the US military’s Joint All-Domain Command and Control (JADC2) has been dubbed the ‘kill cloud’ by Lisa Laing and Cian Westmoreland, former technical members of the US drone programme, to describe how militaries leverage physical and digital networks of data, algorithms and cloud-based systems to target and kill enemies.

By using these private hyperscalers, militaries can now fuse vast streams of intelligence, from satellite imagery to signals data, at what the Pentagon terms “the speed of relevance”, effectively turning the digital backbone of a nation into its most potent weapon. Because modern militaries produce more sensor data than they could ever process on their own servers, they have been forced to rely on commercial cloud providers. Militaries no longer own their infrastructure but rent processing power from the private sector, effectively privatising their conflict architecture.

In the context of data centre targeting, this integration has created a dual-use trap: the same data centres providing the digital infrastructure for a country’s hospitals might also host its military targeting algorithms, dangerously entangling civilian infrastructure with legitimate military objectives – and exposing it to drone strikes. The targeting of this infrastructure is possible only because civilian systems are hosted on the same backbone as military ones.

Readers will be familiar with the Israeli use of AI decision-support systems (AI-DSS) like Lavender and Gospel, deployed in Gaza in recent years, as well as the US Maven. Behind the AI-DSS user interface are large cloud servers that store and transmit data so that algorithms can quickly sort through vast amounts of different kinds of information, from intelligence, surveillance and reconnaissance (ISR) and multi-spectral intelligence to open-source data such as social media posts. These cloud servers are supported by linked data centres and connected to compute (the calculation process): the AI hardware using graphics processing units, storage and memory needed to train and operate AI models.

Since 2021 Microsoft and AWS have been working with the British Ministry of Defence to develop ‘StormCloud’, a mesh-networked cloud-computing system that can connect soldiers on the ground with commanders and assets, like missile frigates and reconnaissance drones, alongside AI-DSS programmes to identify targets. In November 2025 NATO signed a multimillion-dollar deal with Google Cloud to provide a similar secure cloud service. In 2021 Google and Amazon signed a lucrative $3.3 billion deal with Israel to provide cloud-computing servers to the military and the financial sector as part of Project Nimbus/Selenite, building server farms in Israel. Francesca Albanese, UN Special Rapporteur on Palestine, argues that international companies are complicit in an ‘economy of genocide’, including through the provision of cloud services to the IDF for use in Gaza by Microsoft, Amazon Web Services and Google, as well as Palantir, whose Artificial Intelligence Platform provides real-time battlefield data integration for the predictive identification of Palestinians to be detained.

Lawfulness of Targeting Data Centres

Concern over digital spaces in IHL has traditionally centred on cyberwarfare and hacking. Indeed, the Tallinn Manuals have made great strides in considering the application of IHL to cyberwarfare, alongside work by NATO and others on the impact of hybrid warfare on such infrastructure. The attacks on the AWS data centres through kinetic means raise more acute issues that may render attacking them legally complicated due to their potential dual civilian-military use. A civilian object is defined in the negative under IHL, as something that is not a military objective. A military objective is defined in the positive as objects that

‘by their nature, location, purpose or use make an effective contribution to military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage.’

The principle of distinction obliges belligerents to distinguish civilian objects from military objectives and to abstain from intentionally directing attacks against the former. When it comes to a data centre, there is clearly an onus on the attacker to determine whether its ‘location’, ‘purpose’ or ‘use’ makes it a military objective. While there remains debate over whether data itself constitutes an object, a data centre may be a military objective by its location, for example where it is enclosed in a secure military area (proximate for edge computing, so as to effectively contribute to military action). The destruction or neutralisation of such a proximate military data centre would offer a definite military advantage.

More difficult cases, as with commercial cloud providers like AWS or Google, arise when assessing whether a facility becomes a target by its purpose – such as being explicitly contracted to host future military AI or cyber-offensive capabilities – or by its present use, where armed forces are actively leveraging normally civilian infrastructure to process targeting data, run command-and-control networks, or store military intelligence. With the UAE and Bahraini data centres, there is no indication that they were being used for military AI or had such a future purpose. Iran has stated that the data centres were legitimate military objectives as they were supporting the ‘enemy’s military and intelligence activities’. If a civilian data centre is actively used to process data that makes an effective contribution to military action, then it may be a military objective. However, there is no public information to support such claims.

Even if military AI, applications and storage were being actively used by the US and Israel in such data centres, their dual-use nature may make any attack on them indiscriminate. Parties to an armed conflict are prohibited from indiscriminate attacks: those not directed at a specific military objective, or whose effects cannot be limited. The Tallinn Manual 2.0 (Rule 112) outlines that, under the principle of distinction, targeting a data centre in a cyber-attack in order to damage military servers held in a centre that is mostly for civilian use would be indiscriminate and disproportionate when a more focused cyber weapon could be used. Kinetic attacks cannot be so surgical in targeting particular servers, unless those are housed separately from civilian ones. It would be impossible to physically target digital data in a data centre, as it is physically indivisible from other data and may be intermingled with civilian data and distributed across different networks.

The proportionality calculus is difficult to satisfy when attacking a data centre that, by its purpose or dual use, makes an ‘effective contribution to military action’. Proportionality turns on whether an attack is expected to cause incidental civilian harm that would be ‘excessive in relation to the concrete and direct military advantage anticipated.’ A data centre that runs public infrastructure, such as software for transport, water supply/desalination and electricity, is connected to a range of digital tenants whose disruption will cause civilian harm. Indeed, medical units such as hospitals, and cultural property, may rely on such data centres for storing patient or archival data. Such attacks may also run afoul of the prohibition on ‘attack, destroy, remove or render useless objects indispensable to the survival of the civilian population’. While traditionally this was considered to include food, livestock and water, in our digital era the management of such indispensable objects has been digitised through software and algorithmic management. Indeed, Dubai uses AI for a range of public service provision, which depends on data centres, of which there are 23 around the city. In Tehran there are 15 data centres and it is unclear which have been destroyed or damaged. Such attacks are part of a wider problem of AI in the military domain, which implicates vast networks of global civilian digital and physical infrastructure.

Reverberating Effects and Terror of Data Centre Targeting

On the eve of the strikes on Iran, the Centre for Strategic and International Studies warned that the significant investment in compute by Gulf States, as part of the global “AI scramble”, had created an infrastructural vulnerability. These sovereign AI hubs, intended to diversify economies and secure the future, have instead become high-value targets in a new phase of warfare. Infrastructure is increasingly becoming a strategic target for bombing in war understood, in Clausewitz’s terms, as violence to ‘compel our opponent to fulfil our will’. Yet the civilian harm caused by such strikes will have extensive reverberating effects.

The ICRC’s connectivity reports set out how being connected is an urgent need in times of conflict. Targeting a data centre also, perversely, disconnects humanitarian agencies from the cloud-based logistics and communications they need to cope with the dire consequences of the kill cloud. Such an attack disrupts the collective cognitive capacity of a population to respond to its own suffering. Within the first week following the outbreak of war, for instance, the World Health Organisation was forced to suspend its Dubai logistics hub due to the physical closure of airspace and sea lanes, stranding tens of millions in medical supplies, including medicines destined for Gaza. The invisible harm of data centre strikes, which we do not yet fully understand, could be even more profound.

On 2 March 2026, US Defense Secretary Pete Hegseth decried what he called the “stupid rules of engagement”, rhetoric that supports the deliberate dismantling of civilian protection mechanisms the US administration has already been removing. When the most powerful military in the world signals that it will not be restrained by legal distinctions, the already fragile protection of data centres globally as dual-use objects will suffer – and civilians even more so.

Conclusion

Needless to say, attacking data centres is generally prohibited under IHL. However, there is a more important point to be made on the obligations of precaution and constant care: States should physically separate their civilian data centres from those used by the military. This is part of parties’ obligations to take precautions against the effects of attacks by protecting the civilian population from the dangers of military operations, including by avoiding locating military objectives near densely populated areas.

The current pursuit of AI – and unregulated reliance on data centres – has created a dangerous physical and logical entanglement that demands a more rigorous application of the obligation of precaution and constant care. As militaries increasingly subscribe to the same cloud infrastructures that may power a nation’s hospitals and schools, are they at risk of using the civilian population as a digital shield? This entanglement not only creates the opportunity for such targeting but arguably invites attacks on a society’s digital backbone under the guise of striking a dual-use military objective – which, in part, it is.

Just as a State is prohibited from locating a munitions factory in residential areas, it should be held to the same standard here. States have a precautionary duty to physically separate civilian data centres from those utilised by the military kill-chain, as part of the obligation to protect the civilian population from the effects of attacks. Most data centres are located in less populated industrial zones, given their size, power and water needs. Building regulations governing the location of data centres are unlikely to contemplate such zoning, so it is up to States to consider how they use cloud-computing services and the implications for precautions.

By embedding military logic into a digital backbone hosted by private hyperscalers in neutral third countries, we may also be witnessing an even more total deterritorialisation of the battlefield, where private technology companies, rather than states, hold the responsibility of deciding where the borders of modern conflict begin and end. It entails a disturbing reliance on the moral guardrails of CEOs, such as Anthropic’s recent refusal, seemingly to little effect, to grant the US Department of War unrestricted use of its models. To meaningfully prevent this threat from developing further than it already has, there must be a physical separation of civilian data centres from those used by military kill-chains globally. Without this separation, strikes on a military’s digital sensors or shooters will only result in more catastrophic reverberating effects that could collapse the medical and humanitarian safety nets of entire regions that depend on them even more in times of war.
