The Cost of Aid: When Humanitarian Data Collection Becomes a Method of Warfare
[Safia Southey is an independent researcher and consultant specializing in post-conflict justice and international human rights law, with current projects at the International Center for Transitional Justice and the Quincy Institute]
In Gaza, starving Palestinians face a brutal choice: surrender your personal data, or go without food. Operating outside established UN coordination frameworks, the Gaza Humanitarian Foundation (GHF) has been accused of transforming aid distribution into a system that requires Palestinians to hand over personal data through “voluntary” reservation tools and to submit to biometric screening at aid sites.
Some data collection has long been part of aid delivery, for example to manage queues or prevent duplicate registrations. The problem arises, however, when “humanitarian data management” shifts into a security function: shaping who receives relief, generating risk labels that resemble civilian-combatant determinations, or feeding targeting decisions. Once that line is crossed, international humanitarian law (IHL), private military and security company (PMSC) regulation, and data-protection regimes impose more exacting obligations than those designed for routine beneficiary registration.
When contractors working at GHF sites have reportedly used live ammunition against aid recipients, and UN investigators have documented hundreds of deaths near these facilities, the convergence of armed protection and data-driven access control reveals that existing regulatory frameworks were built for an earlier era of guards and guns.
GHF’s Model and the Problem of “Voluntary” Data
GHF, established in 2024 with U.S. government backing, operates large-scale aid facilities in Gaza outside UN deconfliction channels. Unlike traditional humanitarian organizations, GHF functions as a private entity with limited transparency regarding funding, procedures, or accountability.
In August 2025, GHF launched a pilot program allowing Palestinians to reserve aid packages online by creating accounts linked to personal information and photo ID. The system was framed as a way to help those who struggle in first-come, first-served distribution. In practice, it created a two-tiered access structure: those who provided data could secure aid; those who did not risked exclusion.
Reports also suggest that GHF has experimented with biometric collection, including potentially mandatory facial-recognition scans at distribution points. The legal question, however, is not whether data is collected, but whether collection and downstream use assume a security role – whether (1) the purpose extends beyond fair distribution to risk-scoring or watchlist checks; (2) access and sharing encompass armed actors or security systems; and (3) the effects predictably alter how people are treated as potential detainees or targets. When the answers trend in that direction, the governing legal regime, and the operator’s duties, shift accordingly.
Legal Consequences of the Security Turn
Relief and Starvation
International humanitarian law requires parties to “allow and facilitate rapid and unimpeded passage” of impartial relief operations and prohibits starvation of civilians as a method of warfare. Conditioning access to food on opaque biometric systems risks creating an arbitrary impediment to relief. If “voluntary” participation is illusory in practice, operators and donors must be able to show why identity capture at that granularity is strictly necessary to distribute food safely and why less intrusive measures would not suffice. If the condition predictably deters protected persons from seeking aid, it risks contravening the prohibition on starvation as a method of warfare.
Distinction, Verification, and Targeting
These risks intensify if data generated at aid sites feed into targeting or detention decisions. IHL requires parties to do “everything feasible” to verify that targets are military objectives and to distinguish civilians from combatants, with a presumption of civilian status in cases of doubt. Those standards are technology-neutral, but not technology-blind. Reliance on bulk analytics or error-prone facial-recognition systems to generate “threat” labels sits uneasily with the verification duty, especially when the underlying data are noisy, biased, or coerced.
Multiple investigations describe Israeli reliance on AI-assisted targeting in Gaza. Reports describe how the system known as Lavender flagged some 37,000 Palestinians as suspected militants for elimination, how the Gospel generated building targets, and how “Where’s Daddy?” tracked individuals through phone surveillance to cue strikes on their homes. In parallel, the IDF has acknowledged using facial recognition tools, including Google Photos and software from the private vendor Corsight, to identify suspects from low-quality feeds, and has acknowledged high error rates and misidentifications, including during hostage searches. These systems depend on continuous inflows of personal data. If humanitarian-site data collected by GHF feed into the same targeting pipelines, the boundary between aid distribution and battlefield intelligence effectively collapses, and civilians risk being killed or detained based on data they were compelled to provide in order to access food.
State Responsibility and Article 36 Review
If donor states know or should know that humanitarian-site data are routed into security or targeting functions, questions arise under the law of state responsibility. Even short of attribution, responsibility for aiding or assisting may be engaged where a state knows the circumstances of an internationally wrongful act and contributes materially to it. Because the same systems used for targeting can be used to structure access to aid, states should also treat facial recognition and watchlist analytics used in targeting as a method of warfare subject to legal review under Article 36 of Additional Protocol I.
Governance and Safeguards: PMSC Regulation in a Data-Driven Environment
Once the legal stakes are clear, the governance gap comes into focus: private contractors remain civilians, but if they use aid-site data for targeting, they risk being deemed to directly participate in hostilities, underscoring the need for strict regulatory boundaries. Existing PMSC regulations were drafted with traditional guarding functions in mind and thus struggle to address GHF’s operational model. The Montreux Document outlines state responsibilities for PMSCs, but it does not squarely encompass intelligence-style data operations. The International Code of Conduct for Private Security Service Providers (ICoC) similarly defines “security services” in terms that may not reach biometric enrollment or watchlist analytics. The ICoC Association has taken steps to integrate technology and human rights into its monitoring, but its enforcement tools – certification and reputational sanctions – are poorly matched to entities whose leverage derives from information flows and technological coercion.
Filling that gap requires not only expanding the scope of PMSC regulation but also embedding operational data-protection safeguards. The operative standards are legality, necessity, proportionality, purpose-limitation, data minimization, safeguards, and remedies – principles reflected in Convention 108+, General Data Protection Regulation-style regimes, and UN “data responsibility” guidance. Applied to humanitarian sites, these principles mean biometric data should be collected only if strictly necessary for distribution, proportionate to a defined operational risk, insulated from belligerent access by enforceable non-disclosure, and subject to independent audit and deletion schedules.
Policy Recommendations
Current regulatory frameworks were designed for armed guards, not biometric enrollment and algorithmic screening. To close that gap, donors and regulators need actionable guardrails that (1) keep life-saving aid accessible without coercive data demands, (2) build enforceable standards into procurement, and (3) adapt international oversight to new forms of privatized security.
Immediate Safeguards
Donors and regulators should prohibit mandatory biometric data collection as a condition of humanitarian aid and require transparency in data-governance procedures for any aid distribution involving personal information. Procurement policies should condition funding on ICoC certification and require independent audits of technology systems to ensure compliance with international humanitarian and human rights law. Any departure should require a written, program-specific necessity finding showing (1) the concrete distribution risk addressed, (2) why non-biometric alternatives are insufficient, (3) the minimum data elements and shortest retention period, and (4) a sunset date and re-review trigger.
Procurement and Oversight Standards
Existing definitions of private military and security companies should be expanded to include surveillance-technology vendors and data processors whose services constitute security operations. Procurement policies can embed accountability ex ante by requiring independent technical audits of biometric and surveillance systems, with funding tranches contingent on passing those controls. Contracts should hard-wire purpose-limitation and segregation by (1) prohibiting onward transfer to belligerents, (2) requiring role-based access and breach notification, and (3) mandating provable deletion at close-out. Adapting such standards to humanitarian security contexts would constrain abusive deployments without waiting for downstream misconduct.
International Governance
Closing regulatory gaps requires coordinated action at the multilateral level. The UN Working Group on PMSCs’ Revised Fourth Draft Instrument attempts to address some gaps by identifying “prohibited activities,” including intelligence collection and detention. Yet earlier drafts’ broader conception of “inherently state functions” better captured the range of activities now privatized through technology platforms. Reviving that framing would recognize biometric enrollment, database management, and surveillance analytics as sovereign functions that should not be outsourced without strict legal limits. In parallel, states that deploy or fund targeting-support analytics (including facial recognition and watchlist matching) should treat them as a means or method of warfare subject to Article 36-style review, defining lawful use cases (authentication versus population-wide identification) and excluding humanitarian-origin data from belligerent access.
Conclusion
GHF illustrates how quickly humanitarian technology can migrate from distribution to security, and how this evolution in the PMSC sector has outpaced regulatory frameworks. When those functions influence relief access and inform status determinations or targeting, IHL’s relief and distinction rules, doctrines of state responsibility, and updated PMSC standards must apply with real force. Given pressure to “streamline” aid delivery, aspects of this model are likely to spread. Enforcing the boundary will now determine whether humanitarian assistance remains a protected lifeline or becomes a transaction contingent on security-grade data surrender.
