
Symposium on PMSCs: AI Broke the Code – The International Code of Conduct for Private Security Service Providers and Emerging Technologies
[Dr Asaf Lubin is an associate professor at Indiana University Maurer School of Law, an affiliated faculty at the Hamilton Lugar School of Global and International Studies, a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University, and an affiliated fellow at Yale Law School’s Information Society Project.]
This post has been cross-posted on Private Security Conversations.
Introduction
Twenty years ago, the phrase “private military and security contractor” (PMSC) summoned a particular kind of visual. A sand-choked boulevard in Baghdad, the midday sun sharp against concrete blast walls, the hum of idling armored SUVs thick in the air, and a convoy of men in tactical black. These men were not soldiers. They wore no flags. Answered to no brigadier. Their faces obscured by wraparound sunglasses. Weapons slung low. Radios crackling. They moved with authority—yet not the kind conferred by oath or insignia.
The name on everyone’s lips—the most notorious emblem of that world—was Blackwater. Founded by former Navy SEAL Erik Prince and flush with government contracts, the company quickly became the go-to provider for a wide range of outsourced military functions: ferrying diplomats through insurgent strongholds; securing the perimeters of embassies with ex-special forces operatives; training Iraqi police units in small arms tactics and counterinsurgency.
Then came Nisour Square. In September 2007, four Blackwater guards opened fire, killing seventeen Iraqi civilians, including women and children, and injuring many others, in what witnesses described as an unprovoked assault. The incident sent diplomatic shockwaves. Civil and criminal lawsuits followed. Congressional hearings were convened. Years later, convictions were handed down, then partially erased by presidential pardon from Trump at the end of his first term. But something deeper had already been exposed: that the post-9/11 era had quietly birthed a privatized architecture of violence, operating in the penumbra of law. As Harvard Professor Martha Minow described it, this marked “a new degree of privatization” and a “dangerous challenge to the aspirations of order in the world.”
That challenge is even more acute today, as the PMSC sector has undergone a profound technological metamorphosis over the past twenty years. Firms once known for boots-on-the-ground operations now offer end-to-end security ecosystems—integrating AI-powered surveillance, facial recognition, behavioral analytics, predictive policing algorithms, and biometric identification tools into their service portfolios. One need only look to Anduril Industries, a U.S.-based defense technology firm founded by Silicon Valley engineers and former military operatives, to grasp the scale of this shift. The company develops autonomous surveillance towers, sensor-laden autonomous drones and underwater systems, and is now even helping integrate augmented-reality headsets for frontline troops. At the heart of its operations lies the Lattice platform—a powerful dual-use AI-enabled operating system that fuses sensor inputs across domains, enabling autonomous threat detection, intelligence analysis, and response and mitigation. Anduril has also expanded into cybersecurity, partnering with Riverside Research—under a DARPA initiative—to harden critical systems against digital threats, reflecting the sector’s growing convergence of kinetic and cyber defense. Together, these shifts mark new categorical challenges for international human rights law (IHRL) and international humanitarian law (IHL) protections.
The International Code of Conduct as a Model of Institutional Imagination
The International Code of Conduct for Private Security Service Providers (ICoC or the Code) emerged in November 2010 as a landmark attempt to impose normative order on a chaotic sector. Its seventy provisions addressed the conduct of personnel, the use of force and firearms, detention practices, incident reporting, and internal grievance procedures—grounding many of its requirements in the language of both human rights law and humanitarian law. But the ICoC was more than a checklist of operational safeguards. It marked an attempt to reassert law’s relevance in a space where contractual relationships had long displaced public obligations. Cedric Ryngaert once described the Code as an experiment in “the re-entry of the state” into a domain of “stateless law,” one achieved by way of public procurement policies that aim to reward human rights-respecting business initiatives.
Yet the very features that made the ICoC possible—its voluntarism, its multi-stakeholder architecture, its reliance on reputational enforcement—also circumscribed its authority. The Code gave rise to the ICoC Association (ICoCA), an oversight body tasked with certifying companies, monitoring compliance, and hearing grievances. But the Association’s mandate is limited: it lacks investigatory subpoena power, offers no binding dispute resolution mechanism, and remains dependent on states to integrate its standards into domestic regulatory frameworks. Critics have rightly questioned whether the ICoC can produce more than symbolic accountability in the absence of legal coercion or market incentives robust enough to discipline noncompliance. Indeed, the regime has seen a significant decline in the number of participating states and companies.
Still, to dismiss the ICoC as ineffectual is to overlook its deeper significance. Indeed, as I have written elsewhere, “[t]he lessons learned from regulating PMSCs through international standards, oversight mechanisms, and multistakeholder engagement can be adapted and applied” to address other evolving commercial and technological security concerns. In a geopolitical moment increasingly defined by the retrenchment of rights-based multilateralism, the ICoC remains one of the few surviving examples of pluralistic norm entrepreneurship—a testament to what can be achieved when states, corporations, and civil society aspire to act in concert.
In other words, for all its varied limitations, the ICoC endures as a model of institutional imagination. Or does it? As Vincent Bernard, Senior Policy Advisor at ICoCA, writes, now is the time “to revisit, interpret, and perhaps adapt the existing instruments of regulation and governance of private security.” Indeed, Strategic Goal 4 of ICoCA’s Strategic Plan for 2024-2030 calls for integrating into the Code human rights standards relating to the incorporation of new technologies. This short blog post identifies three crucial steps that must be taken to achieve such an ambitious goal.
Step 1: Reconceptualizing Security Services
The current definition of “security services” under the Code is increasingly misaligned with the technological realities of the sector it purports to regulate. While the Code commendably encompasses “operational and logistical support for armed or security forces,” including “intelligence, surveillance, and reconnaissance activities,” it remains largely tethered to a kinetic paradigm of risk—one centered on the physical presence of armed personnel. This framing no longer captures the breadth of commercial actors whose products and services shape security outcomes in the digital age. Today, with the datafication of armed conflict and humanitarian response, infrastructural providers—cloud platforms, satellite operators, data centers, and encryption firms—construct the technological scaffolding necessary for modern military surveillance and targeting regimes. Cybersecurity firms, too, now play both offensive and defensive roles in intelligence-gathering and information operations, sometimes with direct implications for the conduct of hostilities. The result is an expanding perimeter of corporate and commercial actors whose participation in armed conflict is indirect, but no less consequential.
Consider Anduril again. Even if some of its business lines may be considered “operational and logistical support”—and thus fall within the Code’s existing ambit—a substantial grey zone remains, particularly around dual-use technologies, databases, and services that transition seamlessly between commercial, law enforcement, and military contexts. The Code must therefore move beyond its legacy understanding of “security services” and embrace a functional definition rooted in effects rather than form (the definitional model of the U.S. Department of State’s 2020 Guidance on the implementation of the UN Guiding Principles for transactions relating to products or services with surveillance capabilities is a potential starting point).
Step 2: Introducing Obligations at the Design Phase
Article 25 of the Code requires Member and Affiliate Companies “to take reasonable steps to ensure that the goods and services they provide are not used to violate” IHRL or IHL, and that “such goods and services are not derived from such violations.” (emphasis added). But this use-based framing presumes a linear chain of causation between a deployed technology and a subsequent legal breach. That presumption fails to account for the layered and cumulative nature of digital systems, where critical decisions are made not at the moment of use, but at the point of design. As I have argued elsewhere, surveillance, cyber, and AI tools “inevitably involve thousands of design choices, both minor and significant, that hardcode policy rationales, legal interpretations, and value judgments into their hardware, software, and user interfaces.” These embedded decisions shape how a tool will operate under battlefield conditions—and, more troublingly, whether it can be audited or constrained when it veers off course.
If, as Rebecca Crootof and BJ Ard have suggested, technology “regulates through its ‘architecture’,” then the Code must shift upstream. It should impose obligations not merely on how technologies are used, but on how they are conceived, developed, and trained throughout the lifecycle of a product or service. The existing text already offers a subtle foothold for such an expansion. By prohibiting goods and services “derived from” violations, Article 25 of the Code leaves open the possibility of regulating tools whose algorithms evolve through data gathered in unlawful ways. In other words, the Code might already justify algorithmic and model disgorgement as a remedy. In a world where machine learning systems continuously refine themselves mid-conflict, waiting until the moment of use may be too late. The Code must be reinterpreted to account for the reality that harms can be hardwired before a product ever ships.
Step 3: Reinvigorating Digital Rights Protection in Times of Armed Conflict
Article 25 of the Code presupposes a level of doctrinal clarity as to what constitutes a violation of IHRL and IHL. Yet in the context of digital operations—automated decision making, data extraction, algorithmic targeting, biometric surveillance, and information warfare—such clarity remains elusive. As I have long argued, the treatises of IHL are mostly silent on issues of informational privacy, data protection, or cybersecurity. Key IHL concepts such as “attacks”, “military operations”, or “means and methods of warfare” struggle to accommodate the sprawling, distributed architecture of digital conflict, particularly when commercial services, rather than products, are in question. Article 36 weapons reviews—to the extent that they constitute customary international law—further struggle to keep up with dual-use software and autonomous decision-support systems.
Human rights law, meanwhile, offers limited refuge. Many of its core protections—against arbitrary interference with privacy, for example—are subject to national security limitations or emergency derogations. Of note, Article 23 of the Code forbids invoking such exceptions to justify violations of the UN Charter or to commit domestic or international crimes. But the Article stops short of introducing a broader prohibition. In other words, national security exceptionalism continues to serve as a legitimate justification for corporate activity that ultimately harms digital rights. In a landscape where data has become both a target and a weapon, we urgently need to reconceptualize what digital rights protection entails and what the outer limits of existing IHRL are—not only for states and their militaries, but also for the private actors they enlist. The Code could equally benefit from such doctrinal elucidation.
Conclusions: A Return to the Age-Old Question of Privatization
The International Code of Conduct begins from the premise that the privatization of security is a reality to be managed, not resisted. It assumes that military and security outsourcing is inevitable and seeks to constrain harm through industry standards, procurement policies, and reputation-oriented oversight mechanisms.
But that premise itself deserves interrogation. Not all functions of the state are delegable—nor should they be. The March 2025 Revised Fourth Draft Instrument of the UN open-ended intergovernmental working group on PMSCs attempts to draw that line. It identifies as “prohibited activities” the contracting out of core sovereign powers, including engagement in combat operations, detention, and interrogation. Yet earlier articulations of these prohibited activities—then called “inherently state functions”—went further, encompassing intelligence collection, the wielding of police powers, and the transfer of military knowledge as non-delegable acts.
Today, those very activities are increasingly mediated by code—designed, maintained, and sometimes even deployed by PMSCs whose incentives and accountabilities differ radically from those of the state. As commercial actors move deeper into the heart of military decision-making and battlefield awareness, the time has come to ask not only how such activities are to be regulated within a possible Code of Conduct, but whether they should be outsourced at all.