An argument for ambition



Back in the liberal internationalist interregnum, before powerful states - the “great irresponsibles” as Hedley Bull once termed them - had saddled the vision of collective global responsibility with the base politics of the UN Security Council, proposals circulated for the formation of an international air force. Between 1908 and 1945 public intellectuals like Leonard Woolf, David Davies, Philip Noel-Baker, Norman Angell and Bertrand Russell argued that air power was both too dangerous and too important to the future of humanity to be left in the hands of bellicose states. The easier it became for states to resort to force, the more tenuous the rules prohibiting the use of force would become. To head off this possibility they called for ceding monopoly control of the technology to an international consortium of public and private trustees of global order. Staffed by people inherently committed to global goods and firewalled from states’ subjective interests, this Authority would be freed to objectively enforce rules promoting international peace and prosperity.

E. H. Carr, among others, tagged the liberal internationalist vision as hopelessly grandiose and - because of this - infinitely dangerous. By mobilising around a fantasy world in which supranational institutions could exercise the hard power needed to govern, and ignoring the world “as it was” - a world where the capability of any global organisation would always be subject to the whims of spoiler states - leaders might be convinced to adopt policies that left them catastrophically exposed to the regressive moves of less romantic players on the board. Actually believing that there was an alternative to statist power politics was a foundational mistake.

But this caricature of liberal proposals as a series of utopian fever dreams conveniently ignored the motivating claim: that radical policy challenges required radical policy solutions. Proposals for an international air force were conceived as a pragmatic and necessarily ambitious response to a previously unimagined threat to social order. The argument was, in effect, that policymakers were faced with a choice. Either watch this radical new technology - air power - wreak predictable chaos on national and international society. Or get ahead of the curve and craft a newly powerful authority with the real capability to enforce international law and, in so doing, ensure that the transformation wrought by air power was in humanity’s best interests. In the face of transformative change, maintaining the status quo, as Carr and other realists suggested, was neither the ethical nor the strategically sensible policy.

Of course the critics were right to stress that any policy proposal needed to be alive to the political dynamics which, given the structural incentives shaping international politics, undermined the workability of world government. But with the power of hindsight we know that Angell, Davies and co. were equally right to flag the radical transformation to be wrought by air power, with all the attendant changes to international law, global order and national security strategies. Indeed, it is remarkable how closely the imagined futures of the liberal internationalists of the early 20th century - including the science fiction of people like H. G. Wells - anticipated the ethical conundrums that abound today.

Consider how successive developments of air power - including its more recent cognate, drone warfare - cumulatively reshaped international norms. From the fire-bombing of Dresden to humanitarian war in Kosovo to the use of drones in Waziristan, innovations in air power - in the nature of the tools put at the disposal of military and political leaders - forced through strategic and, hence, normative change. The ability to carpet-bomb Dresden moved the line on proportionality and the legitimacy of mass civilian casualties in war - a line the US would subsequently invoke to justify the bombing of Hiroshima and Nagasaki. The availability of air power and ‘smart bombs’ in Kosovo - not having to put boots on the ground - made norms of humanitarian intervention saleable to global policymakers mindful of their domestic constituencies, but also helped entrench a preference for waging “virtual war” from the air despite clear data showing that this raised the risk to civilians caught in the crossfire. More recently we’ve watched as drones expand the technical capability of states to accurately strike targets in remote territories of Waziristan, Yemen and Somalia. This has in turn led to the watering down of norms prohibiting assassination and extra-judicial killing, and to the reinvention of the rules on non-intervention to fit the strategic and tactical parameters required by a global war on terror.

The point to draw from this potted history is that at every step the availability of a technological solution to a policy challenge introduces new contours to old debates over ethical killing. Some of these moves have strengthened progressive, liberal norms and the rules-based international order. Many have not.

Norms on the use of force are likely to change again with autonomous micro-drones, the next frontier for air power. At some time in the not-too-distant future we will see states capable of deploying swarms of near-invisible micro-drones, programmed with facial recognition software and carrying a lethal payload, ready to tick off a list of high-value targets. As with previous developments of air power, policymakers will be hard-pressed to resist the lure of a technology that lets them minimise civilian casualties, avoids exposing their soldiers to risk, is far less costly than traditional war-fighting, and can remove national and international security threats in real time.

Technology changes the future threats that rules, policies and laws must guard against. But this isn’t just a brute-force attack on existing rules; what worried the liberal internationalists about air power, and what should worry us today about AI and other “futuretech”, is that the rules which exist to contain - to limit the possibility of a dystopian future - become increasingly hollowed out and unworkable. As Captain Philip S. Mumford put it in 1936, air power, ‘one of the greatest scientific achievements of man is being prostituted to international standards of the sixteenth century - standards totally inapplicable to twentieth century conditions.’ His worry, and ours today, centres on the type of future created when a radical social transformation is governed piecemeal rather than by grand design - by political expediency rather than policy ambition.

For a practical example, take the ongoing challenges to the prohibition on extra-judicial killing. This remains a core prohibition of international law, still publicly subscribed to as a governing norm. But two decades of drone strikes and counterterrorism practice call into question the real capability of the norm to govern, or inspire, future practice in any determinate way. The practice of targeted assassination has become “normal” as part of the ad hoc legal regime engineered to enable a global war on terrorism. Technological capability - the data sources and drones which allow intelligence services to “find, fix and finish” targets - is by no means the cause of this norm regression. But it has helped to deepen and quicken its pace.

Now imagine a different reality in which states routinely use the data and access provided by the consumer apps already embedded in our daily lives to influence - to silently modify - a foreign population’s behaviour, nudging them into believing and doing things that will undermine their nation’s security. These are not the sort of external interventions that Article 2(4) and the UN collective security system protect against, yet the impact can be far more debilitating to a state’s continued prosperity and constitutional functioning than a military strike. This was the sales pitch of Cambridge Analytica to Russia, but also to the UK and US intelligence services. As leaked information on the Pentagon’s “Outpost” program showed, behavioural intervention is an ongoing tactic in places like Iran, Egypt, Pakistan and Indonesia, couched in the language of counter-radicalisation. Some of the testimony to emerge from the “outing” of Cambridge Analytica strongly suggests that complementary security goals - such as destabilising unfriendly regimes - are also being pursued by a wide range of actors, at scale.

What happens next as this sort of social engineering becomes a mainstream way of war? Will we see wars of self-defence triggered by algorithmic interventions, by what Cathy O’Neil calls “weapons of math destruction”? Or will ‘behavioural attacks’ become so normalised as to be part of the background noise of global society? What impact will this have on constitutional values, like democracy? Does democracy, by virtue of giving the people a voice, become a security vulnerability in an era governed by behavioural conflict? Illiberal responses to global terrorism have already shown how vulnerable our constitutional values are to narratives of crisis and exceptionalism; if autocracies have a strategic advantage in this new world, what actions will political leaders take to secure their state against futuretech?

The challenge of “futuretech”



Arguments for an air police force provide a useful touchstone as we think through how to shape the social impact of AI, blockchain and other “futuretech” on our world. The radicalism of the liberal internationalist proposals was grounded not in the threat of the technology but in the scope of normative change it would usher in. New innovations always require some regulatory tinkering; but the claim regarding air power was that tinkering - nibbling at the edges of existing rules - wouldn’t work. The call for a radical solution was premised on a claim about the transformative character of the technology.

First, we’re talking about a technology that is or will become ubiquitous. Its applications will permeate society, changing possibilities across the military and civilian spectrum. There is no possibility of putting this innovation in a secured box. By the same token, there is no clear line between “vicious” and “virtuous” elements of the technology. Here’s Churchill, calling in inimitable style for the UN to be armed with a powerful international air force: ‘the Stone Age may return on the gleaming wings of science, and what might now shower immeasurable material blessings on mankind, may even bring about its total destruction’.

That sense of humanity walking a tightrope between immeasurable blessings and total destruction is echoed in current debates on tech governance. From a purely technological perspective there is no separating the artificial intelligence used in things like self-driving cars, cancer screening, or improving global access to education from that powering killer robots, behavioural modification and mass surveillance. Technology is morally agnostic. This seeds ethical and regulatory complexity, as illustrated by the major worry in counter-terrorism circles about the vulnerabilities that autonomous vehicles introduce: the ability to hack a fleet of self-driving cars and trucks is the ability to create an army of “slaughterbots”.

Second, the technology is constitutional, in that it shapes future agency in fundamental ways. Simply because a technology is widely used doesn’t mean it will change the underlying practices or institutions by which we order society. Commerce, diplomacy, war and law are all fundamental institutions that help define society - and all were reshaped in fundamental ways by the advent of air power. The constitutional nature of current technology is one of the reasons for thinking that we’re living through a “fourth industrial revolution”. When we see AI deep learning techniques reshaping conceptions of autonomy, blockchain companies promising to re-engineer the need for social trust, or biotech companies talking about ending death, it’s hard to think that the impacts of these products, should they ever come to market, wouldn’t put pressure on the institutions and values which keep social order ticking along in a mostly unified way.

It may be fiction, but ask yourself how the existence of the predictive, “pre-crime” policing imagined in the Philip K. Dick story and Spielberg film Minority Report changes the institutional architecture of that society. What does criminal liability look like in the shadow of a technology that prevents or pre-empts criminal activity? Presumably policing becomes far less dangerous; instead of enforcing the law in dangerous situations, officers are able to take their targets in bed, or at the breakfast table. Hospitals must be much quieter places without a parade of gunshot victims. Is society itself less violent? What are the people of this imagined future doing with their time, freed from worry about random acts of violence? Richard and Daniel Susskind raise a similar point in asking us to consider ‘the future of the professions’. As technology transforms the work of lawyers, doctors, accountants, teachers, bankers, engineers, police and politicians, the crucial role played by these experts in structuring society and social norms also changes.

Third, there’s a claim that this technology is global: it necessarily spills over national boundaries and jurisdictions, and policymakers can’t escape the transnational dimension when thinking through their regulatory regimes. The point is that the progressive development of a truly disruptive technology in one part of the world will eventually be replicated in other parts of the world. No matter how stringent the local regulations preventing the development or release of a particularly dangerous application, if the incentives are large enough, someone will develop it in another, less stringently regulated jurisdiction. (This is linked to the “openness” of the field, both in the sense that the tools and data needed for development are available to a global public and in the sense that intellectual property strategy generally fosters, rather than freezes, innovation.) To put this another way, effective regulation of a global technology requires globally effective regulation.

How, then, do you regulate and contain the impact of tech that has these characteristics? Can existing legal systems be extended and amended to cope with the challenge? 

Building on failure


Legal systems fail for the same reasons any system fails. Either there is one catastrophic failure that causes the system to crash, or there is a series of failures which cascade over time, resulting in a system crash. When this occurs we call it war or revolution or some other form of constitutional reinvention. Legal systems succeed on much the same terms: their architects embed mechanisms for correction able to resolve discrete failures before they can have a catastrophic system-wide effect.

So: by this measure, how resilient is international law? Is it a system that we should trust to secure global society against some of the more debilitating and harmful impacts of technology?

The traditional challenge to international law is that its weakness stems from a lack of enforcement. This claim is not entirely true: international law has enforcement mechanisms. These include formal institutions like criminal and civil courts and tribunals, jurisdictional rules and processes that allow states and other actors to claim a right of enforcement (e.g. universal jurisdiction; complementarity), treaty-based sanctioning regimes, suspensive clauses in trade agreements, and tipping points where the use of force becomes legitimate (e.g. self-defence). We can also view the social pressure international law enables as a discrete form of enforcement, which Oona Hathaway and Scott Shapiro term “outcasting”. These mechanisms of enforcement have been effective in backstopping the normative authority of certain parts of international law, including the bans on chemical, biological and nuclear weapons, landmines, cluster bombs, blinding lasers and others which can’t be used without causing disproportionate or indiscriminate suffering. This localised success has fed hope that more sweeping enforcement regimes could emerge.

That said, this enforcement architecture often - some would say invariably - fails to bite when faced with “hard cases”, where action needs to be compelled contrary to powerful political and strategic interests. The Ottawa Convention banning landmines may have 133 signatories, but these don’t include the United States, Russia, China, Egypt, India, Israel and Iran. The International Criminal Court continues its work, but its perceived inefficiencies, illegitimacies and failures have seen it increasingly sidelined as a diplomatic force. This sense that available enforcement protocols and institutions don’t and can’t extend to the hard cases means that even as advocates point to the possibility of enforcement, detractors are able to point to the messy reality of failure. When failures occur, violations of international law don’t provoke urgent questions about how to improve enforcement, because the enforcement regimes which do exist are largely ad hoc, heavily politicised, tainted by hypocrisy and often ineffective if not downright counter-productive to the purpose the law is trying to achieve. The dominant mindset is that in a world of powerful, political states, a rump form of enforcement is all that can ever exist.

To push the point, consider the “failure rate” of the anti-torture regime. As absolute and inviolable as this prohibition is on paper, in 2016 some form of state-sanctioned torture occurred in 43 states, with many more states complicit in enabling it through inadequate protection of the principle that individuals at risk of torture shouldn’t be returned to their country of origin (non-refoulement). What enforcement actions did these violations trigger? How many people or states were charged, prosecuted or sanctioned in some way for torturing another individual? The simple answer is none, because there are no “hard” enforcement mechanisms attached to the rule. If you were engineering, say, an airplane, would you accept a failure rate of 22%? Or would you look at your designs and think that something radical needed to change? That, perhaps, users might not be happy with the product if every fifth flight was crashing on takeoff. It says something that most international law scholars would count the anti-torture regime among the major successes of the past 60 years.
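For what it’s worth, the implied arithmetic behind that 22% figure seems to run as follows - a sketch, assuming the 193 UN member states as the denominator (the 43 is from the paragraph above):

```latex
\[
  \underbrace{\frac{43}{193}}_{\text{torturing states / UN members}} \approx 0.223 \approx 22\%
  \qquad \text{vs.} \qquad
  \text{``every fifth flight''} = \frac{1}{5} = 20\%
\]
```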

This is the system that those calling for an international ban on killer robots are relying on to secure us against the future threat from lethal autonomous weapons systems. The core principles of humanitarian law are robust: don’t use weapons or act in ways that cause indiscriminate or excessive harm; ensure any use of force is proportional to the military ends to be achieved; prevent and punish any violations of the rules. A treaty banning lethal autonomous weapons systems would do two things. First, it would reiterate these principles as part of a claim that killer robots inevitably violate and undermine the protections and principles of IHL. Second, it would create enforcement protocols - including explicitly extending penalties for use into international criminal law - aimed at preventing killer robots from being developed and deployed.

All of which sounds good, until you look at the failure rates attached to international law enforcement. Would sanctioning a country like Russia or China for developing or using autonomous weapons change behaviour? Judging from past practice, would improving the rules of international law in this area have a deterrent effect?

Here, then, is the challenge to be addressed in regulating AI and other tech innovations for the good of human society. If policymakers and proposals for tech regulation fail to address the political limits of the current system, particularly around enforcement, the resulting regulation - no matter how cleverly conceived - will inherit a high failure rate. The technologies being developed now and, even more so, over the next 20 years will expose existing gaps in the global system - to debilitating if not catastrophic effect.

We’ve been able to muddle along with the system we have for a variety of reasons. First, most of the things international law regulates tend to have a localised effect; when human rights law fails, the impact of failure is national, bilateral, perhaps regional, but rarely global, rarely catastrophic for humanity at large. Second, international law’s champions have managed to cobble together a pseudo-enforcement regime, alerting the world to violations and triggering a range of social and diplomatic pressures which, in some cases, have led to some form of redress. This has been instrumental in establishing the possibility of enforceable, effective international law. But there are good reasons to think this approach won’t continue to work. Too many failures, too many “fudges” are stacking up. And futuretech is only just beginning to layer on the complexity.

The shape of ambition



Where does this leave things? What kind of conversations should ambitious, forward-thinking policymakers be engaging in? How and where should ambition be focused? For Bertrand Russell, writing in the shadow of WW2, the inescapable conclusion of living in an era to be defined by the destructive potential of air warfare was the strategic sense of a policy of pacifism. What’s more, once you bought into the strategic necessity of pacifism, this inevitably led ‘to a complete programme, involving an international government and international ownership of raw materials.’ That is to say, once you appreciate the need to ‘do something’ to control the threat of technological change, it becomes difficult to rationalise settling for policy proposals which tinker at the edges.

The first step is understanding where today’s - and tomorrow’s - political power lies. Technology is facilitating a shift away from a global order based on states’ territorial or juridical sovereignty and towards a global order built around functional sovereignty - around the practical benefits that a state can secure for its citizens. Where technological change has made public law ineffective, privatised, corporate regimes have been emerging to fill the void. Reflecting this, Frank Pasquale points to dispute resolution schemes adjudicated and enforced by powerful companies like Amazon. The noteworthy thing is that an administrative, company-initiated regime can have an immediate global effect because it is used to govern a global customer base. The capacity to develop and implement global rules independent of states is one that companies have historically been reluctant to leverage, or that has been sunk by questions around the legitimacy and independence of self-regulation. Nevertheless, it is a unique, largely untapped resource when it comes to anchoring radical, global policy change.

There is clearly a difference between industry-led enforcement centred on the mediation of small-scale administrative and commercial disputes and the public law challenge of imposing global controls on lethal autonomous weapons systems or hate speech. But if crafting effective, global-scale enforcement regimes becomes the norm in commercial AI, it is plausible to think we’ll see a spillover into government and security, the traditional “hard cases” for public international law enforcement. The apex companies and investors involved are largely the same, shaped by similar transnational commercial pressures and incentives. From an asset management perspective, continued global growth requires preventing the use of tech for illiberal, destructive ends. The same dual-use character that makes the social, economic and political impact of AI and other futuretech so difficult to contain may well force private and public interests to converge around supporting more effective international law enforcement. In this regard, calls for better global IP protection are on the same spectrum as calls to ban killer robots; both show a functional demand for enforceable global rules.

Second, we need to be looking at how strong local and regional enforcement regimes can have a global effect, both in terms of signalling the status of a norm and in terms of hard enforcement. Particularly significant among the current regulatory initiatives are the enforcement clauses contained in the EU’s General Data Protection Regulation (GDPR) and in Germany’s NetzDG (the “Network Enforcement Act”). Under the GDPR, companies can face fines of up to €20 million or 4% of annual global turnover - whichever is higher. Under the NetzDG rules, companies that fail to remove hate speech within 24 hours (for simple cases) or within a maximum of seven days (for more complex cases) face fines of up to €50m, with an additional penalty of up to €5m levied personally against the person within the company put in charge of compliance.
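To make the GDPR cap concrete, here is a minimal sketch of the “whichever is higher” rule as described above (the function name and the example turnover figure are mine, purely for illustration):

```python
def gdpr_fine_cap(annual_global_turnover_eur: float) -> float:
    """Maximum GDPR fine: the higher of EUR 20m or 4% of annual
    global turnover, per the rule described above."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# A company with EUR 2bn turnover: the 4% limb binds, giving an
# EUR 80m ceiling rather than the flat EUR 20m.
print(f"{gdpr_fine_cap(2_000_000_000):,.0f}")  # -> 80,000,000
```

The design point is that the cap scales with size: for any company turning over more than €500m a year, it is the percentage limb, not the flat figure, that sets the ceiling.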

These are not magic bullets. The NetzDG rules have been heavily criticised for their potential to chill free speech, and for putting what amounts to the judicial power to determine whether a statement constitutes hate speech in the hands of private companies. And the GDPR’s administrators still need to figure out how to avoid fragmentation and a lack of enforcement “vigour”.

There is, however, a regulatory ambition here that begins to match the scale of technological change. One crucially important feature is that the GDPR has been engineered with a form of extra-territorial effect; the rules apply to any company that holds the data of European citizens, not just to European companies. Because of this, many companies have found themselves re-engineering their entire data management systems to be GDPR-compliant, rather than treating the data rights of their European users as a distinct case. As such, these seemingly localised rules are spurring global innovation in how companies handle rights to data and privacy. Again, the litmus test will be how effective the administrators are at enforcing the rules in practice, but having strict, functionally universal enforcement mechanisms in place is a useful start.

Third, new technologies have opened the door to new, more effective methods of oversight, accountability and global law enforcement. We might, for example, conceive of a transparency regime which mandated that all autonomous systems operating “in the wild” - especially those explicitly sold or amenable for use in security contexts - have the data showing their decision-making processes uploaded to a secured, anonymised ledger or database. Where the governing algorithm flags a question over the integrity of a particular decision or event (for example: “this device was used to commit a crime”), a judicial authority - human or machine - might be empowered to investigate further and, if a true breach of the rules is identified, sanction the responsible parties. These are the sorts of regulatory futures that those thinking and working on the convergence of artificial intelligence, smart contracts and distributed ledger technologies are beginning to talk about. The point is that ambitiously minded policymakers can begin to think through, collaborate on and fund engineered solutions to the functional challenge of implementing impartial, global-scale law enforcement; a toy sketch of the mechanics follows.
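Purely as an illustration of those mechanics - a hash-chained, append-only log of machine decisions that a reviewing authority could audit - here is a minimal sketch. Every name in it (DecisionLedger, the rule_breach flag, the device IDs) is a hypothetical stand-in, not a real protocol or API:

```python
import hashlib
import json
import time

class DecisionLedger:
    """Append-only log of machine decisions. Each entry commits to
    the hash of its predecessor, so retrospective tampering with any
    recorded decision breaks the chain and is detectable."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, device_id: str, decision: dict) -> dict:
        entry = {
            "device_id": device_id,
            "decision": decision,
            "timestamp": time.time(),
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def flagged(self) -> list[dict]:
        # Stand-in for the "governing algorithm": here we simply
        # surface any decision self-reported as a breach of the rules.
        return [e for e in self.entries if e["decision"].get("rule_breach")]

ledger = DecisionLedger()
ledger.record("device-042", {"action": "observe", "rule_breach": False})
ledger.record("device-042", {"action": "strike", "rule_breach": True})
for entry in ledger.flagged():
    print("refer to judicial authority:", entry["device_id"], entry["hash"][:12])
```

The hash chain is what does the regulatory work: an auditor who trusts only the latest hash can verify the integrity of the entire decision history without trusting the operator of the device.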

Fourth, ambition and foresight need to be aligned with a realistic understanding that this will be a long-term process of embedding a culture of enforcement in global society. There will be spoilers resistant to change. Of course powerful states - China, Russia and the US are top of most lists - will resist any regulatory measures which seem to limit their sovereign capabilities, especially in the context of national security. The Italian philosopher Antonio Gramsci has a line about hegemons seeking to “absorb threats like a pillow”, and those who have gained most from the political control afforded by the ad hoc nature of the current enforcement system will, even while paying lip service to the idea of radical change, support measures that keep the status quo intact.

Increasingly, however, corporate, private actors can and will take the lead, especially where the rules aren't working to prevent dangerous or damaging social action. See, for a prime example, the decisions by Twitter, Facebook and Amazon to silence Trump and his supporters in the wake of the US Capitol Hill attack.

The thread in all of this is that business innovation and transnational corporate power represent a viable path to global law enforcement, perhaps for the first time since colonial powers exercised imperial control through trading entities like the East India Company (the “company state”, as Philip Stern puts it in his excellent book on the subject). Being involved in creating and enforcing “rules for the world” is a legitimate ambition for companies that have, by virtue of their technological supremacy, become de facto “rulers of the world”.


Conclusion


In developing the technology that will govern the world, the companies involved need to begin to think of themselves as “global governors”. They have the authority, the power and hence the responsibility to create a more effective enforcement regime for international law. Perhaps this is saying something that tech leaders already know; Microsoft has called for tech collaboration on a new digital Geneva Convention, based on the reality that tech leaders are the people in place to provide humanitarian triage for the digital conflicts of the future - the modern equivalent of the Red Cross. Commitments from the Partnership on AI and myriad other initiatives - including Adjective Ventures - to think through and build futuretech for the benefit of humanity further suggest a converging, systemic ambition.

If this argument for ambition is shooting at an open goal, amazing. But my suspicion is that there will be many more battles to fight before effective enforcement - and the radical steps required to build this capability - is firmly entrenched as a necessary, prudential goal of global policymakers.  
