Neurotechnology and Cognitive Warfare

[Carl Emilio Lewis and Jonathan Kwik are researchers in international law at the TMC Asser Institute]

The ‘application of neurotechnology raises [various] ethical, legal and societal issues and questions related to human dignity and human rights’, as UNESCO’s recent Recommendation on the Ethics of Neurotechnology warns. One of these risks is the weaponisation of neurotech to amplify disinformation and influence campaigns. In combination with dynamics surrounding social media and the rise of AI as powerful processing tools for big data, authors have warned that the commercial proliferation of neurotech ‘could further exacerbate the impact and scope of state-sponsored disinformation and influence campaigns’. (p. 21) 

In this post, we explore in greater detail the dynamics surrounding commercially marketed neurotech as a significant risk factor for enhanced state-sponsored disinformation and influence campaigns. We depart slightly here from many other works discussing the risks of neurotechnology, which often flag as most dangerous those neurotech devices with the ability to evoke (write) rather than just measure (read) brain activity, as these would be capable of ‘affect[ing] cognitive, emotional, and behavioural aspects’ of targets. In contrast, we focus on contemporary developments in non-medical, read-only, Brain Computer Interfaces (BCIs), broadly conceived as neurotechnological devices that connect the brain to an external digital device.

We highlight that it is conceivable that we will soon see the commercial proliferation of BCIs in the form of in-ear electroencephalogram (EEG) devices, and that even the most basic neural data and inferences drawn from these devices could increase the efficiency of disinformation and influence campaigns. We argue that such devices will therefore likely attract the attention of states seeking to amplify their cognitive warfare capabilities—a risk further amplified by the fact that the collection of such data is likely to remain in the margins of international law, giving states incentives to deliberately exploit these legal grey zones to their advantage.

The Potential Commercial Proliferation of BCIs

Recent advancements in neurotechnology, particularly implantable BCIs, have led to groundbreaking medical achievements. Mark Jackson, for instance, who was diagnosed with amyotrophic lateral sclerosis and had lost the ability to use his hands, recently spoke of how Synchron’s ‘Stentrode’ BCI implant gave him part of his life back, by enabling him to control an iPad through thought alone. Synchron’s vision is for ‘brain computer interfaces to become ubiquitous, like the keyboard and the mouse’. BCI implants promising the ability to control devices by thought or potentially engage in telepathy receive frequent media attention, but those interested in the near-future risks posed by the technology would do well to pay attention to a more humble, existing technology: in-ear EEG devices.  

In their 2023 US Patent application (p. 4), Apple presented various plans for equipping their flagship earbuds, the ‘AirPods’, with dynamic electrodes that would allow for EEG readings of users’ brain activity and inform wearers ‘for various biosignal-driven use cases, such as a sleep monitoring or other anomalies, such as seizures.’ Such a device would fall within a growing, global wearable technology market targeting ordinary citizens interested in monitoring their physical and mental health via wearables. It would also compete with similar devices that have been on the market for some time. Emotiv’s MN8, for instance, is a 2-channel Bluetooth in-ear EEG headset that has been available since 2018, and has been marketed as a non-medical device that ‘provides real-time insights into your mental state … so you can take control of your cognitive wellbeing.’

Millions of individuals already readily provide their biometric data to existing digital devices, such as smartwatches, rings and headphones. Through in-ear EEG technologies, companies offer their users real-time information about their mood patterns, stressors and (lack of) focus, as well as neurological insights into how they feel, think, and react to particular stimuli. Apple’s patent shows how easily an already globally successful product like AirPods can be updated to include in-ear EEG devices, which opens the door to the commercial proliferation of this technology. The technological and practical conceivability of such a scenario, in turn, allows us to conduct a more targeted evaluation of whether the commercial proliferation of this specific type of neurotechnology could exacerbate state-sponsored disinformation and influence campaigns.

The Value of Neural Data for Cognitive Warfare

Disinformation and influence campaigns fall under the broader concept of cognitive warfare. This is a relatively new designation for an age-old concept in war, namely that influencing perceptions and manipulating the flow of information can be just as important as winning on the battlefield. This form of ‘hybrid warfare’ leverages all technological means available (including cyber, artificial intelligence, neuroscience and psychology, social media, and information technologies) to collect data on targets, analyse their weaknesses, and deliver tailored messages to maximally impact the targets’ beliefs and cognition. What is more, this form of ‘warfare’ often remains ‘below the threshold of an armed conflict within the meaning of international law’—which raises unique problems, as we discuss below.

The aim of cognitive warfare is strategic dominance, and there are various ways of achieving this goal: eroding trust in public institutions, inducing polarisation, and plunging society into a post-truth mindset (where no one knows what information to believe) are all valid means of breaking an adversary from within. And in today’s information age, where a considerable amount of social activity across many societies is conducted in cyberspace, hybrid warfare utilising information technologies has thrived.

Digital influence operations, particularly influence campaigns conducted in cyberspace aimed at affecting the outcome of foreign elections, have drawn considerable attention (consider, for example, the recent Romanian presidential elections and the complex international legal issues they raised). Concerns that the commercial proliferation of neurotechnology could exacerbate the impact and scope of such campaigns are grounded on the idea that neural data provides ‘valuable personal insights’ that could boost efficiency by refining the psychological profiles generated by hostile actors seeking to benefit from micro-targeting (p. 21).

To understand the potentially transformative effect of (even currently-available) BCIs for cognitive warfare, consider the cognitive operation development chain—an iterative process which generally consists of three steps:

  1. Collection: Information is collected about a target audience (individual or group) to understand their beliefs, biases and psychological traits. Demographic data, online activity, physiological readings, and behavioural data from internet-of-things devices are all potential sources of such information. The more detailed the dataset, the better the next phase can be performed.
  2. Analysis: Based on this information, a profile of the target(s) is created or refined. AI can greatly assist in this process by making predictions and correlations on the target audience’s cognitive vulnerabilities, which potentially allows the author to construct tailored (even micro-targeted) cognitive attacks.
  3. Delivery: Finally, tailored material is delivered to the target audience to impact their perception and cognition. Any medium is conceivable, but social media and online platforms (like TikTok) are particularly effective conduits since they have become the primary source of information for many.

EEG technology ‘can’t read individual thoughts, but it can detect patterns of brain activity related to focus, fatigue, and even emotional states’.  Researchers have also recently shown the viability of using large language models (LLMs) to extract textual information from EEG readings. With developers working to increase the accuracy of the measurements these devices can achieve, Mahieu, Neurotechnology Director at the Centre for Future Generations, offers us a hypothetical scenario that captures well why even basic EEG data communicating real-time information on fatigue may be valuable to third parties. 

Imagine a future, she asks, where a social media platform pushes an advertisement for a product to a user because the platform knew, based on data from that user’s in-ear EEG device, that they were tired at that specific point in time and more vulnerable to the compulsion to buy. This (as yet hypothetical) scenario is grounded in our lived experience of the power asymmetry between big data corporations and the users of their services, within what Zuboff identifies as the ‘logic of surveillance capitalism’. However, it also pushes us to acknowledge how social media could refine user profiles, behavioural predictions and strategic actions by incorporating EEG neural data on fatigue so as to account for users’ vulnerability to heuristics.

If access to neural data benefits social media corporations within the logic of surveillance capitalism, the same logic applies to state actors interested in amplifying their cognitive operations. State-sponsored influence operations already take advantage of heuristics and human cognitive limitations (pp. 55-56), an approach that may be especially effective for disseminating disinformation if states focus on analysing the neural data of ‘superusers’: popular individuals who ‘have been shown to have an unprecedented level of control over the spread of information.’

The commercial proliferation of in-ear EEG devices, even if they do nothing more than read and record users’ electrical brain activity, would therefore provide adversaries with the opportunity to collect a qualitatively different dataset than has been available to them in the past, offering excellent additional fuel for the ‘analysis’ phase of their cognitive operation development chain. This could contribute to more effective and tailored cyber-enabled influence operations; a tempting prospect which, we argue, lends additional weight to calls for states to provide further clarity as to the application of international law to such activities in cyberspace.

Exploiting International Legal Grey Zones 

Neural activity measured by consumer in-ear EEG devices is not, and is unlikely to become, publicly available data. However, since existing and envisioned consumer in-ear EEG devices translate neural data into more readily understandable inferences (like graphs and charts) for users, we think it worthwhile to consider the possibility of hostile state actors remotely hacking these devices to access their neural data. Hacking personal devices for surveillance purposes would not be novel state practice. In 2023, the European Parliament found that EU and non-EU states had used Pegasus, a commercial spyware designed to infiltrate mobile phones and extract data, for both political and criminal purposes. The technology already exists for states to embark on exfiltration strategies for the collection of neural data from personal digital devices without their owners noticing. Nor should we forget the potential for hostile actors to gain access to these personal devices via classical phishing strategies.

A key issue, however, is that both hacking and cyber-enabled influence operations inhabit ‘grey zones’ of international law: areas of legal indeterminacy that states can exploit. There is, for instance, a lack of consensus amongst states and scholars ‘as to whether cyber operations aimed at influencing voters’ attitudes during an election constitute a violation of the principle of non-intervention.’ According to Roscini, for example, a state’s cyber operation designed to ‘remotely access non-public data stored in cyber infrastructure located in another state’s territory without its valid consent’ would itself be coercive and a violation of the principle of non-intervention (p. 395). But Roscini acknowledges that this goes against the mainstream position, which would consider such an operation an extraterritorial enforcement of the hacking state’s power, and thus a violation of the other state’s territorial sovereignty (p. 392). The UK and US, however, do not align with this position, as they do not consider cyber operations on another state’s territory to constitute violations of international law per se.

Further grey areas include ambiguity as to how cyber activities are to be considered under the principles of sovereignty, self-determination and states’ existing international human rights obligations, to name but a few. The value that the commercial proliferation of neurotech could provide for hostile state actors’ disinformation and influence campaigns may further incentivise their exploitation of these grey zones, potentially contributing to international instability and escalating conflicts between states.

States ought therefore to give serious consideration to the value neural data could provide for hostile actors seeking to further refine their cognitive warfare capabilities, a prospect that lends further urgency to clarifying the ‘grey zones’ states can currently exploit when developing their cognitive warfare strategies.
