‘The Time has Come for International Regulation on Artificial Intelligence’ – An Interview with Andrew Murray

[Dimitri van den Meerssche is a Researcher in the Dispute Settlement and Adjudication strand at the T.M.C. Asser Institute.]

On Thursday, 26 November, Prof. Andrew Murray will deliver the Sixth T.M.C. Asser Lecture – ‘Almost Human: Law and Human Agency in the Time of Artificial Intelligence’. Asser Institute researcher Dr. Dimitri van den Meerssche had the opportunity to speak with Professor Murray about his perspective on the challenges posed by Artificial Intelligence to our human agency and autonomy – the backbone of the modern rule of law. A conversation on algorithmic opacity, the peril of dehumanization, the illusory ideal of the ‘human in the loop’ and the urgent need to go beyond ‘ethics’ in the international regulation of AI.

One central observation in your Lecture is how Artificial Intelligence threatens human agency. Could you elaborate on your understanding of human agency and how it is being threatened?

In my Lecture I refer to the definition of agency by legal philosopher Joseph Raz. He argues that to be fully in control of one’s own agency and decisions you need to have capacity, the availability of options and the freedom to exercise that choice without interference.

My claim is that there are four ways in which the adoption and use of algorithms affect our autonomy, and particularly Raz’s third requirement: that we are to be free from coercion.

First, there is an internal and positive impact. This happens when an algorithm gives us choices that have been limited by pre-determined values – values that we cannot observe.

The second impact is internal and negative. In this scenario, choices are removed because of pre-selected values. This is about what the algorithm actively removes from consideration before anything is presented to us.

The third way in which algorithms impact our autonomy is external and positive: when an algorithm makes decisions about me based on information gathered through observation or supplied through data requests. Here, the computer is profiling us.

Finally, there is an external and negative way in which algorithms impact our choices: an algorithm makes decisions about me while ignoring other, perhaps more relevant, data that could have been available from other sources – either because it was not programmed to consider that data or because it was not allowed access to it.

To conclude: my Lecture argues that Artificial Intelligence interferes with human agency in these fundamental ways, and that we need to have this discussion on how algorithmic decision making is undermining our ability to exercise choice and freedom in Razian terms. The counterargument to this is, of course, that the world is so complex that the algorithm actually empowers us by giving us access to more choice architectures than we would otherwise have. I accept that may be objectively true but I think that we do not have sufficient framing to know what we are subjectively giving up, or giving over to algorithms.

STS studies have of course long argued that human agency is always technologically mediated and that our choice architectures are always structured by the tools we use. What distinguishes Artificial Intelligence in posing such a radical challenge to human agency and autonomy?

I think it revolves around what you might call knowability. You are right, humans are part of complex environments, and if you subscribe to actor-network theory, which sees the world as made up of active networks comprising technological as well as human architecture, then you can see that human autonomy is limited by all forms of technological architecture.

The challenge of AI, and machine learning in particular, compared to prior technologies, is that it is impossible to know how the algorithm is making its decisions. It is impossible for two reasons. First, with machine learning it is impossible to explain to a human what data is being used to make decisions, because that dataset changes with every decision it goes through. We would have to be constantly informed of how the dataset is learning and evolving – in essence, a constant stream of the information the dataset is using. This is Barocas and Nissenbaum’s transparency paradox.

The second reason is Pasquale’s black box. We can see data going in and we can see data coming out, and from that we can make an assumption as to what is going on inside the actual algorithm, but the algorithm itself is mysterious to us.

Now, prior forms of technology are explainable and explorable. We are aware of the limits of, let’s say, broadcast or radio. They are open to us and we can observe them. This makes it possible to take an informed decision about how much autonomy we can give up to that open book, if you will. AI is much more mysterious. With machine learning, we do not know exactly how the machine makes its decisions, which datasets it is using, or how it is combining them.

As you know, in machine learning you first program the system and give it a learning dataset, but after that it uses backpropagation to develop its own modes of pattern recognition. So, once you go beyond two or three iterations, even the programmers cannot say with certainty how it is calculating the likelihood of something being yes or no.
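
[As an illustration of the opacity Professor Murray describes, the following is a minimal, purely hypothetical sketch: a toy network trained by backpropagation on made-up data, not any deployed system. After training, the model’s behaviour lives only in numeric weight matrices, which offer no human-readable account of why a particular input is scored yes or no.]

```python
# Minimal illustrative sketch (toy data, not any real system): a tiny
# neural network trained by backpropagation. After training, its
# "reasoning" exists only as numeric weight matrices.
import numpy as np

rng = np.random.default_rng(0)

# Made-up training data: 200 examples, 4 features, one binary label.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer of 8 units, randomly initialised.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)
    # Backpropagation of the cross-entropy gradient.
    grad_out = (p - y) / len(X)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ grad_out)
    W1 -= 0.5 * (X.T @ grad_h)

# The trained model predicts well, but its only "explanation" is numbers:
print(W1.round(2))
print(W2.round(2))
```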

So, the difference between AI and other technologies – and this is what we have to be aware of – is that we are giving up our autonomy to something that we do not fully understand in the same way that we understand other forms of technology. I am not saying that this is bad, but I am saying that we need to have a debate about what we are happy with. If people are happy to give up choice autonomy to AI, then that is absolutely fine with me, as long as it is properly regulated, and as long as people are aware of what they are doing.

How do these challenges of autonomy and knowability intersect with your ideal of the rule of law?

Law is a human construct. Any legal philosopher, whether an interpretivist, a realist or a positivist, would say that law is constructed through human knowledge, human experience, human interaction and human communication. Law is the distillation of our experiences as a community and as a society. And in a modern legal system, these experiences would be distilled through a law-making process.

My concern – and this is already happening in a limited sense – is that AI provides us with an intermediated view of the world around us. It generates a filter between us and what we are experiencing. All of us – lawmakers, lawyers, judges, but also NGOs and researchers – work with an intermediated understanding of what the current societal risks, challenges, opportunities or threats are. This informs our understanding of the law and of what the law should be based on.

Let me give you an example of where there is a threat to what we might call the ‘rule of law’. Let’s look at the huge populist movement we currently see in the US and in some European countries. This populist movement is heavily influenced by data that is being fed to people through social media services. These social media services use crude AI to promote stories into people’s news feeds. What we find is that these algorithms create positive reinforcement bubbles, in which stories likely to be positively received by the reader are promoted into their news feed.

Trump supporters, or people who might find themselves on the populist end of things, are tipped over into things like QAnon conspiracy theories by material seeded into their social media feeds. So, if I were a powerful right-wing populist and I had a little bit of money, I would just need to seed a little bit of news into Facebook and watch it propagate as people start sharing it. Then the algorithm picks it up and says: ‘Hey, this is popular, people like this’ and it starts amplifying it. This can lead to populist campaigns succeeding at the ballot box, or even to electors believing objectively false claims, such as that an election was unfair or fraudulent. What we find here is that an error caused by the intermediation of the algorithm affects the underlying foundations of our law-making bodies.
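
[As an illustration of the feedback loop Professor Murray describes, the following is a minimal, purely hypothetical sketch of an engagement-weighted feed ranker – invented numbers, not any platform’s actual algorithm. Items that attract shares are ranked higher, are therefore seen more, and therefore attract still more shares, so a small seeded story can quickly come to dominate the feed.]

```python
# Minimal illustrative sketch (hypothetical numbers, not any platform's
# real ranking system) of engagement-driven amplification.
import random

random.seed(1)

# A "seeded" story starts with a handful of artificial shares.
share_counts = {"seeded_story": 5, "ordinary_story_a": 1, "ordinary_story_b": 1}

def rank(counts):
    # Score purely by past engagement; most-shared items come first.
    return sorted(counts, key=counts.get, reverse=True)

for _round in range(10):
    feed = rank(share_counts)
    # Users mostly see, and reshare, whatever sits at the top of the feed.
    for position, item in enumerate(feed):
        p_reshare = 0.6 / (position + 1)  # the top slot gets the most attention
        share_counts[item] += sum(random.random() < p_reshare for _ in range(100))

print(share_counts)  # the seeded story ends up dominating the feed
```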

The more AI becomes part of our environment, and the more we rely on intermediation and assisted decision-making, the greater these risks are. I am not saying AI is bad, and I am not saying that AI will definitely undermine the rule of law. What I am saying is that we need to understand what it means when something else is pre-selecting the data presented to us. And we need to have a political discussion about this.

In the debate on lethal autonomous weapon systems – obviously a topic of great interest to international lawyers – one principle that we often encounter is the need to keep a ‘human in the loop’ to prevent fully autonomous decision-making by AI. How would you evaluate this principle?

That’s a really good question and I think it relates to the argument I was raising in relation to the rule of law. Currently the discourse (especially at the UN level, within the Group of Governmental Experts on Lethal Autonomous Weapons Systems) is focused on the potential of a ban on fully autonomous weapons systems, on the basis of this Terminator principle: we do not want to give a non-human actor the authority to kill humans. It should be noted, however, that that horse bolted the day we buried the first landmine and left it there. Landmines of course operate differently – they do not have so-called intelligence – but I think we are missing the point of the debate.

I agree that we should ban the research into and the development of such weapons. I am afraid, however, that this moves our gaze to the wrong place. I think we already have a real problem with current ‘human in the loop’ and ‘human on the loop’ systems, and in particular, with this idea that having a human looped in makes it fine.

Let us get back to what I said earlier about how algorithms pre-select the data that we are given. It is not so much about the action that is taken, it is about the selection of data on the basis of which it is taken. In my Lecture, I refer to the use of drones in Afghanistan, where the US DoD has admitted that they were using NSA mobile phone tracking technology to identify and track SIM-cards of high-value individuals that are viable targets for drone strikes. The problem here is that the human operative is being presented with this pre-selected information: that we have identified the SIM-card, which might be close to another SIM-card that we have also been looking for, which suggests with a high degree of probability that we have identified a person of interest.

This data is being given to a remote operator in the United States, who is being asked to authorise weapons release. Now, put yourself in the position of, let’s say, a 23-year-old flight lieutenant in the US Air Force. You have been told by the NSA, by the CIA, by the drone itself that these high-value targets have been identified and that weapons release is authorised under the chain of command. Are you going to second-guess that? Are you going to say ‘No, I don’t think this is right’?

In effect, I think the concept of the ‘human in the loop’ is there to make us feel better. It is to tell us that the human is doing this, rather than the machine. But the human is just the biological switch that the machine is pressing, and I think it is very rare that the human will depart from what they are being told. That is not to say it doesn’t happen. I am sure everyone knows about Stanislav Petrov, the famous case of the Russian officer who saved the world from nuclear war: he was informed that there was a massive incoming nuclear strike and that he was to press the button, but he did not, because he realised there was something wrong with the information he was presented with. And he turned out to be right, of course.

So, there are occasions where humans actually make a difference, but you need to have very strong autonomy and you need to have the right kind of data. The problem here is that I’m not sure the ‘human in the loop’ has the data that they need to countermand the weapon in live fire situations, where the decision has to be made very quickly. So, I think we should also be doing something to reduce the use of ‘human in the loop’ systems, apart from banning autonomous lethal weapons.

Gregor Noll, for example, has written that it is impossible to apply existing rules of international law to these human-machine assemblages, precisely because the moment of human agency, choice and intentionality is being eroded, as you describe. Is there any other way than abolitionist thinking in that context?

That’s a very good question… This is where we need to start having this debate more seriously. My argument would be that the current focus of the expert groups, which is much more focused on fully autonomous weapons systems, ought to be refocused on semi-autonomous weapons systems. We need new rules of engagement for semi-autonomous weapons.

What is needed, essentially, is two things. First, there is a need for greater transparency on the gathering and processing of the data points for the human operator. The human needs to be put back in command. They need to know not just what has been reported by the drone, they also need to know how this data is being gathered and what information is being processed. 

Secondly, we need to have a more robust human authorisation process. We need much more oversight of military technology by political masters. The concern is that these algorithms are being developed and deployed in a covert fashion, and there is insufficient oversight or political review. So, I think we need to ensure more oversight – and in terms of law this is more a matter of domestic law than international law. We need to know what military expenditure in these areas is doing, and how it is being spent. That way it allows, in theory, the reassertion of the domestic rule of law.

That brings us to more general questions of accountability. In your work, you explain how machine learning or deep learning AI builds and adapts its rules incrementally from exposure to the world. If such data-driven systems provide an indicator for decision-making on the basis of which a human acts, then where do you think accountability should be attributed? With the human who executes? The algorithm? The coder? This links with scholars like Virginia Eubanks, for example, who have written on the responsibility of software designers. Is that a path worth pursuing?

I think there are two answers to this: a narrow answer and a bigger answer. The narrow answer relates to what you might call the procedural rule of law. An important thing here is the actual process of being able to identify the right actors. Here we do have this problem of what you might call ‘shared responsibility’. I will talk in the Lecture about how much we are involved today in assisted decision-making or supplementary decision-making, where the AI algorithm is essentially sharing the decision with the human brain.

In that case, the obvious thing to do would be to say: ‘Well, the ultimate responsibility in terms of tort, for example, lies with the human actor. If an AI strongly advises me to do something, and I do it, and it causes harm, then I am still the actor. I still made that decision.’ So, in some sense, it is quite straightforward.

But, as you point out, it is actually not that straightforward, because maybe I did not have the information I needed to fully ascertain the risk of my decision. So, there is a discussion about how we assign responsibility here, and this is as much a political decision as it is a legal one. Politicians will very soon be asked to consider who should have responsibility, for example, in the event that an autonomous or at least a heavily semi-autonomous vehicle is involved in an accident. Who is responsible? Should it be the driver of the vehicle? The designer of the vehicle? The designer of the software that goes into the vehicle?

Now this is a political decision about where you want the risk and the responsibility to lie. What is really weird about AI is that it is the first product going out into the marketplace that we do not understand. There are more complicated things, of course – modern jumbo jets are arguably more complicated. But they are complicated in a knowable way, whereas machine learning is more unknowable. Do you then want to make software designers responsible for software that they do not fully understand? This is a political choice about who you want to bear the risk, and who you are incentivizing to act in a particular way. Such decisions will determine the future shape of the AI sector. If you put all the risk onto the companies that develop AI, then start-ups will not be able to afford it and the sector will end up like the modern pharmaceutical industry, where five or six companies really dominate. That is a political decision.

Secondly, related to the bigger answer, there are a lot of specific discussions on AI in torts or in employment law or intellectual property law etc. – this is what Frank Easterbrook called the Law of the Horse: looking at specific rules of specific applications in a specific sector. But there is a bigger societal question about how AI is changing our entire understanding of what it means to understand and apply law in general – an entire reconsideration of the rule of law.

There is a big discussion, driven by people such as Richard Susskind and others, on the idea of using AI to unblock the courts. The idea here is to employ AI decision-making systems in the judicial process to allow people access to justice. I am not against the idea per se, but we have to accept that this is something different from appearing before a human judge. It takes something away from the process by reducing the parties to data points. This is slightly dehumanizing. I think we need to have a debate about this process of dehumanization and the loss of context when information is turned into data. AI is going to do fantastic things for us – it will be more convenient, it will be quicker – but we need to start having discussions about what we want it to do, and how we want it to be regulated. Otherwise, these whiz kids in California will go: ‘Hey, this is cool, I can do this’ and nobody is telling them ‘no’ – there are no adults in the room.

Picking up on dehumanization and your notion of the adult in the room… You have written and talked about dehumanization not only in terms of the judge who gets traded for an algorithm, but also in terms of how algorithmic decision-making breaks people up into data patterns and thereby dehumanizes them. I want to bring this into dialogue with what your colleague Mireille Hildebrandt writes about this threat to human autonomy in what she calls the ‘onlife world’, and the important role she sees for lawyers in participating in the design of data architectures. Do you share this perspective on what the role of the lawyer is?

Yes, it might be. I think her work is fantastic and I think she is having exactly the right kind of conversations with software developers. She is trying to teach computer science to lawyers and she is trying to teach law to computer scientists. I think that is certainly a key part of our ability to regulate the development of software tools.

However, there is a limitation that we need to address. She is trying to get in-house lawyers into design meetings, into early discourse, into that analysis of what ought or ought not to be done in terms of both ethics and law. Yet smaller startup companies, or companies that are simply seeking an advantage, will quite happily be willfully ignorant of the law. People have talked at length about how Uber has disrupted taxis, for example. But Uber did not disrupt; Uber failed to comply with regulation, and it has been very successful at that. This is an example of how disruptive businesses, especially startups, often gain an advantage by finding gaps in the regulatory regime or by simply not following it.

So, the work that Mireille is doing is fantastic and it is the starting point to have lawyers as part of the development process. However, we need people who are not commercially connected to the products as well. We need external regulation. We need a regulator and a legal code.

At the moment, my issue with how AI is being regulated externally is that the focus is almost exclusively on ethics, and ethics only gets you so far. There is a large grey area around what is ethically permissible, and that varies significantly from place to place.

If we want to protect the rule of law – our autonomy and our freedom – we will need formal regulation on two levels. First, we need domestic development of proper regulations stating what you can or cannot do, with regulators enforcing this. I am thinking of initiatives in the style of current data protection laws, for example.

Secondly, we need international cooperation on the standards of regulation. We need a UN body – an International Telecommunication Union for AI, or a body similar to that. We need a global standard-setting body. Otherwise, what will happen is that, on the commercial battlefield, the US and China will get involved in a battle to become the world market leader in AI. If we do not have international standards, governments will develop the standard that is most beneficial to their industry sector and not the one most beneficial to us.

In terms of such regulation, we need to deal with the state and the private sector in the same way. Most of the harmful adoptions of AI for individuals will be state adoptions: systems that impact the right to immigrate and settle, systems involving access to social housing, housing support, disability and sickness benefits, even taxation. Yet, at the same time, we also cannot underestimate the emergence of surveillance capitalism and the ways in which companies like Facebook, Google, Microsoft and Apple have grown so powerful and dominant. Legal regulation will need to look both ways: both at the state and the private companies.

The geopolitical dimension you refer to here makes me think about the GDPR [the EU General Data Protection Regulation] and the CJEU’s case law in this context. You framed the problem of regulation in terms of competition between states, which creates a negative spiral of legal safeguards and standards. But are we not also seeing a different phenomenon, where the EU uses its consumer power to regulate in some ways also beyond its borders? Maybe we might not even need international standards in light of this?

This is true, and there are benefits and disbenefits to that. As you say, currently the only truly effective legal limitation on the employment and use of algorithmic decision-making systems is the GDPR – it is data protection. So we do see states, as you say not infrequently, using elements of the GDPR, particularly Article 22 [on automated individual decision-making, including profiling], to limit the application of automated algorithmic processing.

Yet I want to question this on a number of levels. First, the GDPR is not a substitute for AI regulation, as it only deals with one part of it, which is privacy. It does not deal with autonomy and freedom and the other concerns I expressed before. The GDPR is less helpful on how data is presented to me by the algorithm and how that affects me. We still need something more.

Second, to address the part of your question about realpolitik and the use of consumer power in this context, it is true that the bloc with the most potential to do this is the European Union. The EU is, for most companies, probably the most important consumer bloc (although that will now of course be lessened by the departure of 65 million UK citizens). So the GDPR is a good example of how countries and blocs can leverage power in this area.

Yet, I am actually also very concerned about that. My concern is that it’s perfectly fine for us as EU citizens (well I’m not technically any longer…) to say “Europe could take a lead on this” and leverage our power to get companies to pay attention. This is positive.

But I am uncomfortable about the global dimension of this, which is quite colonial: that somehow Europe knows best and that our European standards would be beneficial for everybody globally. I’m not saying Europe is not doing this in a good way, especially in the area of human rights, but I am concerned about the colonial aspect of it. I would be particularly worried about what this would mean in terms of signaling to African countries, for instance, that basically Europe knows best how to do this and that they have to follow what we say.

I would much prefer, if possible, if this was a truly international (UN-led, for example) exercise wherein countries like China are on board and not feeling coerced into adopting something. This might be impossible – the political differences between China and the U.S. in particular may be insurmountable – but, if I was asked, I would rather see a truly international effort. I understand that Europe wants to be a global leader on this, but I would rather do it in a more collaborative way than kind of saying “if you want access to Europe’s 360 million customers you have to do it our way”. I am also concerned that companies comply with the EU GDPR model for the wrong reasons: I would rather they did it because they saw good data privacy as a public good rather than as a way to gain access to markets.

I want to conclude with one of your key critiques and interventions in this field: the ethics vs. law divide in algorithmic regulation. Of course, I am fully on board with the notion that states and corporations constantly employ ethics as a smokescreen that prevents them from having to make more fundamental changes. But I would like, a bit more critically, to question the role of the law that is invoked here and the assumption that law would be a better contravening power or a useful entry point of contestation. We have the work by scholars such as Julie Cohen showing us how law is deeply entangled with informational capitalism. Law is at the start and end of it. Do such observations not complicate the ethics vs. law juxtaposition in your terms?

My overwhelming thought, first of all, is that I wish I had written Between Truth and Power. It’s rather annoying to read something so well written by someone else. That’s my first thought [laughs]. I accept what Julie puts forward and I think that it raises a bigger question for us as lawyers, which goes back to the rule of law question. I think what Julie is identifying, and not just Julie, is in essence the capture of law by particular actors, such as technology companies.

In this sense, in thinking about the rule of law, law has arguably moved from being the rules of the game to being the tools of the game. So we get things like strategic lawsuits against public participation (SLAPPs) and all these kinds of things. Law has become a tool used in corporate development. It has become a tool in inter-state negotiations. It has become instrumental.

Now, that might be fine if we want law to be about controlling the relations between actors. This is essentially a very private law kind of view of what law is: law is there to control relations between private actors, and among private actors I am including here states when they are negotiating with other states and not fulfilling what you might call a state’s regulatory function. Unfortunately, that is what law has become: it has become not just the rules of the game, it has become the tool of the game, and I think Julie is entirely right there.

So, what we need to do is think about whether we are happy with that, or whether we want to reframe what law is. Do we want to move law above the rules of the game, so to speak? Do we want law to be instead the reflection of values? Now, I would like it to be that. I think I’m probably in a minority, if I’m being honest. I think this sort of instrumentalisation of law is where it is going, and I think that the use of law as a regulatory framework will be designed around the functional market in AI.

Yet, I would like law to be something more. I would like it to be a reflection of our societal values. I would like to go back to a very classical view of what law is, back to Austin (without the authority element) and the view that law should reflect common values of society. I want to go back to the notion that the rule of law should be the ultimate distillation of these values.

What law has become, in most western democracies in particular, is something captured and employed by commercial interests, market interests, investor interests and a number of other things. So, a law for AI in this context is probably not going to be any better than any ethical framework. But I want to start a debate here.

My worry is informed by what we’ve seen with the internet. In the 1990s, there was a big debate about how, and if, we should regulate the internet. Should it be a controlled space like broadcast media or should it be an open and free space like print media? Essentially, the print media people won, and they won strongly on the principle that there was massive untapped value in this nascent technology. To be honest, that was probably the right decision. If we’d locked it down like broadcast, the modern internet would have been terrible. There would have been very little in terms of creativity in the 1990s and 2000s.

But we left that there for too long. We did not revisit that decision when we should have, maybe around 2005/2006. And so, today we have companies like Facebook, Google, Twitter, Microsoft, and others, relying on this old legislation and saying “we’re just mere conduits, we just pass things over our wires. We don’t really look at them”, while at the same time using complex algorithms to promote or hide things. As a result, these companies have become massively powerful and influential. There was a tipping point where we needed to intervene.

My worry is that with AI we are roughly where we were in 1995-2000 with the internet. Lots of ideas, lots of nascent things, lots of “isn’t it cool, what we can do?” – but still mostly quite small. Within the next ten years, we are going to see major AI companies develop. I mean, with Huawei and Google we are already there, but we will see more of these massive AI conglomerates develop. If we do not start acting now, then ten years from now we will be having the same discussions we are having today about how to regulate Facebook and Google. This is my worry.

We have seen it happen with the internet: “Let the technology develop. Let’s see where it goes”. I agree with all of that, but there must be a point where we are also willing to step in. We need to start building that institutional capacity now if it is going to be there in 5-10 years. Governments and international organisations need to be having these discussions in the next two years to build the institutional capacity that we will very much need in the decades to come.

With this call to action, let us thank you very much.
