International Law for a Fragile World: When Risk Outpaces Control – AI, Bioengineering, and Cyber-Autonomy

[Dr Sergey Sayapin is Professor of Law at KIMEP University (Almaty, Kazakhstan) and Distinguished Visiting Global Scholar at the NUS Centre for International Law (2025)]

If climate change exposes the limits of consent-based governance in ecological systems, technological disruption reveals a parallel fragility in the architecture of international law. Artificial intelligence, bioengineering, and cyber-autonomous systems do not simply pose new regulatory challenges – they transform the underlying conditions under which harm occurs, responsibility can be attributed, and control can be exercised. Like ecological risk, technological risk is transboundary, cumulative, and potentially irreversible. Yet it introduces a further structural feature – autonomy. In this context, autonomy does not merely denote technical sophistication but a loosening of the link between human decision, operational control, and legally attributable outcomes. International law thus increasingly confronts systems operating at speeds, scales, and levels of complexity that strain – if not displace – traditional concepts of intention, foreseeability, and accountability.

International law was designed to regulate conduct by identifiable actors. Its core doctrines presuppose traceable decisions, attributable conduct, and legal consequences following breach. Technological systems unsettle these premises by redistributing agency across networks of designers, operators, infrastructures, and adaptive processes: development is distributed across jurisdictions, deployment is often both public and private, and operation may be partially or functionally autonomous. As a result, harm no longer appears as the linear consequence of a discrete act but emerges from interactions among code, data, institutional incentives, and interconnected infrastructures. Put differently, international law remains structured around actors, while technological risk arises from systems. In this respect, the mismatch identified in my earlier posts deepens: legality continues to track individualised conduct, whereas fragility is produced by systemic dynamics that resist reduction to any single decision-maker.

Artificial Intelligence and Diffused Agency

Artificial intelligence exemplifies this transformation in particularly acute form. AI systems are trained on vast datasets, refined through iterative learning, and deployed globally across finance, healthcare, security, migration control, and military operations. Their outputs are often probabilistic rather than deterministic, and even developers may not fully understand the internal logic of large-scale machine-learning models. In such contexts, autonomy manifests as a functional decoupling between design, operation, and outcome – systems generate effects that cannot be traced back to a single, intelligible decision. When predictive policing algorithms disproportionately target marginalised communities, when automated hiring systems filter candidates through opaque criteria, or when AI-assisted credit scoring embeds structural bias into financial systems, harm emerges from data architectures and optimisation logics rather than from identifiable discriminatory intent. Attribution becomes formally possible but substantively attenuated.

This diffusion of agency becomes even more visible in electoral contexts. Generative AI has already been used to produce deepfake audio of political candidates, synthetic campaign messaging, and automated bot amplification across social media platforms. Such tools can shape public discourse across multiple jurisdictions within hours, while attribution remains uncertain and regulatory responses fragmented. The vulnerability of democratic processes lies not in a single unlawful act but in the scalability and replicability of technological capacity itself. Here, the object of governance shifts from discrete conduct to systemic conditions of informational manipulation.

In military settings, the implications are more immediate and potentially irreversible. AI-enabled targeting systems are increasingly used to assist in identifying military objectives in armed conflicts. While states remain legally responsible for targeting decisions, reliance on algorithmic recommendation systems – often trained on historical operational data – compresses decision timeframes and shifts human involvement from substantive judgement to supervisory oversight. The debate within the framework of the Convention on Certain Conventional Weapons (CCW) concerning lethal autonomous weapons systems focuses on “meaningful human control”, yet the structural difficulty is deeper. International humanitarian law presumes human judgement capable of contextual interpretation and normative balancing. Where systems operate at speeds that exceed meaningful human deliberation, the legal architecture of distinction and proportionality is placed under strain. Responsibility persists in doctrine, but control becomes operationally diluted.

Even outside conflict, the systemic character of AI-driven risk is evident. Algorithmic trading systems have triggered flash crashes in global financial markets within minutes, generating cascading disruption without malicious intent or centralised decision-making. In such cases, responsibility remains formally traceable to institutions, yet the causal chain is distributed across interacting algorithms, infrastructures, and market dynamics. The result is a shift from individualised agency to systemic causation. International law, built around the attribution of conduct to actors, struggles to capture this transformation, as harm increasingly arises from the behaviour of systems rather than from the decisions of any single participant.

Bioengineering and Irreversible Intervention

Bioengineering introduces a distinct dimension of fragility – the possibility of permanent and transgenerational alteration of biological systems. If artificial intelligence diffuses agency, bioengineering challenges reversibility. Advances in CRISPR-Cas9 gene editing have already enabled targeted modification of human embryos, as illustrated by the widely criticised 2018 case of gene-edited infants in China. That episode not only provoked ethical controversy but also exposed a structural lag between technological capability and legal consensus – that is, intervention became possible before its normative boundaries had been collectively defined.

Synthetic biology further expands this horizon. The ability to reconstruct viral genomes from publicly available sequence data lowers technical barriers to pathogen research and disperses capacity beyond state-controlled environments. Dual-use experiments – such as gain-of-function research designed to increase transmissibility or virulence for scientific purposes – underscore the difficulty of distinguishing beneficial innovation from systemic risk. Existing legal frameworks, including the Biological Weapons Convention (BWC), were developed to regulate deliberate misuse by states. They are ill-suited to a decentralised landscape in which private laboratories, start-ups, and transnational research networks operate across fragmented regulatory regimes. Here again, risk arises not from a single prohibited act but from the cumulative effects of lawful and widely distributed activity.

Gene drive technologies provide an even clearer illustration of irreversibility. Designed to propagate specific genetic traits rapidly through wild populations, gene drives could eliminate malaria-carrying mosquitoes or control invasive species. Yet once released, they may permanently alter ecosystems in ways that cannot be confined territorially or temporally. Ecological systems do not recognise sovereign boundaries, and genetic interventions do not respect political jurisdiction. An intervention authorised by one state may transform biodiversity, agricultural systems, and ecological equilibria across entire regions for generations. In such cases, the premise that states can meaningfully consent to, or contain, the risks they generate becomes difficult to sustain. Consent operates at the level of decision, while consequences unfold at the level of planetary systems.

The COVID-19 pandemic further underscored the systemic nature of biological risk. Global supply chains fractured, healthcare systems were overwhelmed, and stark inequalities in vaccine distribution revealed deep structural asymmetries in resilience. Whether understood as zoonotic spillover or as implicating laboratory-related risk, the pandemic demonstrated that biological disruption propagates through interconnected infrastructures rather than remaining localised. The World Health Organization’s International Health Regulations (IHR) facilitated information-sharing and coordination, but they operated largely reactively, after systemic disruption had already begun. As with other domains of technological risk, legality lagged behind the dynamics it sought to manage, revealing a persistent orientation toward response rather than anticipation.

Cyber-Autonomy and the Erosion of Territorial Boundaries

Cyber operations reveal with particular clarity how technological risk destabilises the territorial and jurisdictional assumptions embedded in international law. Malicious code can be deployed remotely, routed through compromised servers across multiple jurisdictions, and designed to propagate autonomously without further human intervention. In this domain, autonomy manifests not only as distributed agency but as operational independence from continuous human control. The result is a structural decoupling between action, location, and responsibility. Attribution may require months of technical investigation, and even where technical attribution is possible, political consensus on responsibility may remain elusive.

Prominent cyber incidents over the past decade illustrate the scale and systemic nature of this challenge. The 2017 NotPetya malware attack, initially directed at Ukrainian infrastructure, rapidly spread across global networks, causing billions of dollars in damage to multinational corporations, shipping logistics, and pharmaceutical production. The 2020 SolarWinds intrusion exposed how vulnerabilities embedded in widely used software can provide access to government and corporate systems across continents. The 2021 ransomware attack on Colonial Pipeline demonstrated that cyber operations can generate immediate physical-world consequences, disrupting fuel supply chains and triggering economic and social effects far beyond the initial point of intrusion. In each case, the harm was not confined to a territorial theatre or a single target but propagated through interconnected systems.

These dynamics expose a deeper structural tension. Cyber harm unfolds across networks rather than within clearly bounded spaces, yet international law continues to rely on doctrines – sovereignty, non-intervention, and the prohibition of the use of force – developed in a physical, territorial context. While it is widely accepted that these principles apply in cyberspace, their operationalisation becomes uncertain where conduct is persistent, automated, and embedded in civilian infrastructure. The Tallinn Manual process has advanced interpretative clarity, but it also underscores the limits of doctrinal translation. State practice remains uneven, shaped as much by strategic ambiguity as by legal conviction.

Cyber-autonomy further intensifies escalation risk. Automated detection and response systems are increasingly deployed to counter perceived intrusions, sometimes with minimal human intervention. In highly networked environments, misinterpretation of signals, coding errors, or unintended system interactions may trigger defensive or retaliatory measures at machine speed. Under such conditions, escalation can occur before political authorities have the opportunity to assess intent, proportionality, or legality. Speed itself becomes a destabilising factor, compressing the space for deliberation on which international law traditionally depends.

From Regulation to Systemic Governance

The core problem is not that international law regulates too little but that it regulates the wrong object. It remains oriented toward discrete technologies and individual acts, while technological risk is generated by systems – interconnected infrastructures, adaptive processes, and cumulative interactions that no single actor controls. A risk-oriented approach therefore requires a decisive shift – from governing tools to governing systems, and from reacting to breaches to managing conditions under which harm becomes likely.

This shift has concrete legal consequences. Due diligence can no longer be understood as a duty to avoid clearly attributable harm once it becomes visible. It must be reformulated as a continuous obligation of anticipatory risk management. States should be required to establish and maintain institutional capacities capable of identifying, assessing, and mitigating systemic risks across the lifecycle of technological development. In practice, this entails mandatory ex ante risk assessments for high-impact AI systems, legally grounded biosafety and biosecurity regimes for advanced genetic research, and enforceable resilience standards for critical digital infrastructure. Due diligence, in this sense, becomes a duty of organised vigilance rather than episodic compliance.

At the same time, unilateral regulation is structurally insufficient. Because technological systems operate transnationally, their governance must be explicitly cooperative. This requires embedding transparency, information-sharing, and joint oversight into legal frameworks as primary obligations rather than optional measures. Coordinated cyber threat intelligence-sharing, multilateral supervision of high-risk biological experimentation, and common reporting standards for advanced AI deployment illustrate the direction of travel. More fundamentally, international law must move toward institutionalised forms of shared oversight in domains where risks are cumulative and irreversible. In such environments, reliance on ex post attribution is inadequate – by the time responsibility can be assigned, the harm may already have propagated across systems.

These adjustments do not eliminate uncertainty, nor do they displace responsibility – they redefine both. Responsibility must extend beyond the moment of breach to encompass the design, deployment, and governance of technological systems. The central legal question is thus no longer only who is accountable after harm occurs, but whether sufficient structures were in place to reduce exposure before critical thresholds were crossed. This marks a shift from reactive liability to proactive stewardship.

Autonomy, Fragility, and the Future of Legality

Technological risk ultimately poses a constitutional question: how much autonomy – machine, biological, or digital – is compatible with a fragile global order?

International law presumes intelligible agency and meaningful accountability. It is built on the idea that decisions can be traced, responsibility assigned, and conduct evaluated against legal standards. Yet this premise is increasingly strained: algorithmic systems execute financial trades in milliseconds, autonomous defence platforms may respond to perceived threats faster than human commanders can intervene, and biological interventions may permanently alter ecological systems across generations. The risk is not only material harm but a progressive displacement of legally consequential decision-making from accountable human institutions to opaque and adaptive technical architectures.

As autonomy expands, the connection between decision, control, and responsibility begins to loosen. Legal responsibility may remain formally intact, but its substantive foundation erodes where control becomes indirect, distributed, or retrospective. States may continue to be held accountable in doctrine while, in practice, the systems through which harm is generated operate beyond meaningful supervision. A legal order that insists on responsibility while tolerating diminishing control risks becoming internally incoherent – normatively demanding, yet operationally detached. This tension exposes the limits of a framework anchored in retrospective responsibility and consent-based obligation. In environments defined by cumulative innovation and compressed timeframes, legality that activates only after breach arrives too late. By the time catastrophic escalation, algorithmic failure, or biological release occurs, harm may already be irreversible. The task of international law is therefore not only to respond to violations, but to operate before critical thresholds are crossed – to anticipate, to structure, and, where necessary, to constrain. This requires confronting distributed agency, systemic interdependence, and the ethical boundaries of automation as central, rather than peripheral, legal concerns.

What is at issue is nothing less than the institutional and constitutional future of international law itself. If it does not evolve, autonomous technological systems will increasingly shape the structure of global order beyond meaningful collective oversight, while governance fragments into either insulated technocratic control or intensified geopolitical rivalry. If it does evolve, it must do so in concrete terms – by transforming due diligence into a genuinely anticipatory obligation grounded in continuous risk assessment; by institutionalising transparency and traceability requirements for high-impact technological systems; by developing shared oversight mechanisms for domains where risks are transboundary and irreversible; and by recognising that certain categories of technological intervention require collective constraint independent of unilateral consent. It must also recalibrate responsibility to account for distributed agency, embedding accountability not only at the point of breach but across the lifecycle of design, deployment, and operation. Through these shifts, international law can move from reactive adjudication to proactive stewardship and remain a central framework for organising authority under conditions of rapid transformation. Technological risk is therefore not merely another field of regulation. It is a defining moment – a test of whether international law can still claim normative relevance in a world where autonomy expands faster than the capacity to control it, and where fragility is no longer exceptional but permanent.
