AI Partnership for Defense is a Step in the Right Direction – But Will Face Challenges

[Lena Trabucco is a Research Assistant at the Centre for Military Studies at the University of Copenhagen. She is also a PhD candidate at Northwestern University and the University of Copenhagen.]

On September 16, 2020, the US Defense Department (DoD) announced the launch of the AI Partnership for Defense – a multi-national partnership which will “engage military and defense organizations from more than 10 nations with a focus on incorporating ethical principles into the AI delivery pipeline,” according to Secretary Esper. Secretary Esper noted in his announcement:

In February, we became the first military in the world to adopt ethical principles for the use of AI, based on core values of transparency, reliability, and governability. These principles make clear to the American people – and the world – that the United States will once again lead the way in the responsible development and application of emerging technologies, reinforcing our role as the global security partner of choice.

The US created the partnership in an effort to maintain a healthy lead and competitive advantage over China and Russia, or “near-peer rivals,” in AI military innovation and development. The DoD hopes the AI partnership will maintain that competitive edge by offering opportunities for players in the AI space to engage with security partners committed to ethical and responsible AI development and application. The idea is that the partnership could attract developers and innovators by offering a reliable alternative to Chinese and Russian working relationships.

On September 15 and 16, 2020, the Defense Department’s Joint Artificial Intelligence Center, or JAIC, held a symposium which hosted delegations from 12 nations: Australia, Canada, Denmark, Estonia, Finland, France, Israel, Japan, Norway, the Republic of Korea, Sweden, and the United Kingdom. According to the JAIC, the symposium gathered the nations furthest along in their AI development to discuss “shared lessons learned and best practices in harnessing AI for their respective and shared defense missions.” The goal of the partnership, according to DoD officials, is to promote standards for responsible AI development and establish avenues and tools for data sharing, cooperative development, and enhanced interoperability.

The DoD offered few details about the framework and functioning of the partnership beyond a “forum [that] seeks to provide values-based global leadership in defense for policies and approaches in adopting AI,” according to a DoD statement.

Nevertheless, some benefits and challenges are clear from the outset.

The AI partnership is a step in the right direction. It offers an avenue of AI innovation that will signal consensus and cooperation between the US and crucial partners. At first glance, this cooperation may seem little more than a symbolic gesture, but this partnership was a necessary step in fostering a more reasoned and global approach to military AI development. Additionally, the partnership grants accessibility to a wider network of military and defense organizations to engage in research and development and draw on global AI talent. Adding diverse voices to the complex discussion of AI as a defense technology will yield more insightful dialogue and solutions.

Despite the many benefits and opportunities, the partnership will face challenges as it defines the boundaries and practices of the collaboration. I believe three challenges ought to be raised at the outset.

DoD officials have discussed interoperability as an explicit goal of the partnership – interoperability being an umbrella term referring to group integration in order to operate cohesively and share information effectively. Interoperability is a necessary yet difficult achievement in any kind of partnership, but the AI partnership faces a unique challenge: legal interoperability. Legal interoperability is one subset of the broader partnership interoperability and refers to the pursuit of the partnership’s goals within the participating states’ diverse legal obligations and interpretations. The partnership must find a way to achieve its ultimate goal in a manner consistent with the domestic and international legal obligations of the participating states. Differing regulatory frameworks and data strategies among participants could lead to inadvertent challenges to the partnership’s legal interoperability.

This is particularly relevant for the European partners, which currently constitute over half of the AI partnership. The European Union has deliberately distinguished the European approach to AI from those of China and the US, instead situating itself as the global leader advancing responsible and trustworthy AI. As part of this strategy, the EU has submitted substantial proposals for data regulation and restrictions. Public and private organizations in these states are subject to European data regulation and restriction.

It is not yet clear how these differences may affect the AI partnership, largely because it is still in its infancy and the DoD has not offered many specifics about the partnership in practice. But the issue could become a major one as the partnership moves forward. If the US hopes to expand the partnership to include more European partners, then the different approaches to data sharing could become a legal hurdle that hinders legal interoperability – and thus partnership interoperability – potentially requiring special agreements.

The second challenge is about which states are – and which states are not – included in the partnership (so far). The larger trans-Atlantic implications of the partnership should not be neglected as the collaboration moves forward. Crucial European partners are missing, such as Germany and the Netherlands. Germany is the biggest economy in Europe and readily acknowledges the economic possibilities of AI development. The Netherlands has yet to reach the AI development of some of its European counterparts, but the Dutch government has been explicit about investing in research initiatives to bolster the Netherlands in the AI landscape.

Mark Beall, JAIC Chief of Strategy, expressed his hope that more states will join the AI partnership in the future, but the absence of some of Europe’s heavy hitters could signal a bigger issue for the partnership. Commentators have expressed concern about a trans-Atlantic divide, and an intra-European divide, on AI in the military domain. See, for example, here and here.

The US needs greater European cooperation and engagement if it wants to achieve its goal of becoming the alternative security partner for the global AI industry. This means addressing competing views of military AI within Europe. France and Germany represent the competing visions of European military AI. France has been a strong advocate of developing and integrating military AI into the French defense strategy. In 2019, the Minister of the French Armed Forces launched an AI strategy that increased AI defense spending and created a committee advocating for ethical and controlled military AI development. Conversely, Germany has been more reluctant to embrace military AI, instead emphasizing the economic and societal opportunities and applications of AI. These competing visions from two of Europe’s largest economies put the US, and the AI partnership, in a tricky position as they navigate these disparate pathways for European contributions to the military AI space.

Few solutions have been offered to the US for navigating Europe’s competing visions for military AI, or for effectively courting hesitant European nations. And the current absence of Germany from the AI partnership suggests the US still hasn’t figured it out. If the goal is to expand the partnership as a counter-weight to Chinese and Russian AI military innovation, then the US will have to address the foundational differences that some European allies have regarding the role of military AI in order to bring the partnership under a unifying strategy.

Which brings me to the third challenge. The AI partnership symposium did not offer a coherent strategy for the partnership beyond advancing core values the participating nations find important to the AI pipeline. Peter Singer, New America Foundation fellow and strategist, noted that the US has not yet offered a coherent strategy to contrast its “near peers.” In one article, Singer said, “China has a fairly clear and robust vision of this [AI and its applications] and it is actively exporting that vision. There is absolutely no way the US can compete without offering a different and compelling vision and one that involves our friends and allies.”

On the one hand, the absence of an overarching strategy gives the DoD and the AI partnership latitude to address the inevitable issues that will arise. Secretary Esper noted in his address that this partnership is the first of its kind and needs time to operate in the face of unforeseen challenges. But to accomplish its goal of becoming the preferred security partner in AI, the partnership will need to substantiate its agreement with a vision more concrete than ethical values alone. The AI partnership succeeds in getting some allies on board, but it does not clarify what vision is driving the newfound collaboration. At some point, this will need to change.

In essence, the AI partnership is a necessary and welcome development in the US AI strategy, but significant legal and policy challenges are on the horizon. The three outlined here – legal interoperability, trans-Atlantic cooperation, and the absence of a coherent strategy – are certainly not exhaustive. But they represent a span of legal and policy issues the partnership is sure to encounter as it moves forward.

Nevertheless, the AI partnership is a shift in the evolution of AI in the geo-political landscape and it will likely be a crucial initiative for the coming years.
