By Professor Gillian Hadfield / Illustration by Mathilde Aubier

From the Fall/Winter 2018 issue of Nexus

[Illustration: robot holding a map of the human brain]

From Frankenstein to I, Robot, we have for centuries been intrigued with and terrified of creating beings that might develop autonomy and free will.

And now that we stand on the cusp of the age of ever-more-powerful artificial intelligence, the urgency of developing ways to ensure our creations always do what we want them to do is growing.

For some, like Mark Zuckerberg, AI is just getting better all the time, and if problems come up, technology will solve them. But for others, like Elon Musk, the time to start figuring out how to regulate powerful machine-learning-based systems is now.

On this point, I’m with Musk. Not because I think the doomsday scenario that Hollywood loves to scare us with is around the corner but because Zuckerberg’s confidence that we can solve any future problems is contingent on Musk’s insistence that we need to “learn as much as possible” now.

And among the things we urgently need to learn more about is not just how artificial intelligence works, but how humans work.

Humans are the most elaborately cooperative species on the planet. We outflank every other animal in cognition and communication—tools that have enabled a division of labour and shared living in which we have to depend on others to do their part. That’s what our market economies and systems of government are all about.

But sophisticated cognition and language—which AI systems are already starting to use—are not the only features that make humans so wildly successful at cooperation.

Humans are also the only species to have developed “group normativity”—an elaborate system of rules and norms that designate what is collectively acceptable and not acceptable for other people to do, kept in check by group efforts to punish those who break the rules.

Many of these rules can be enforced by officials with prisons and courts, but the simplest and most common punishments are enacted in groups through criticism and exclusion—refusing to play, in the park, market, or workplace, with those who violate norms.

When it comes to the risks of AI systems exercising free will, then, what we are really worried about is whether or not they will continue to play by and help enforce our rules.

So far the AI community and the donors funding AI safety research—investors like Musk and several foundations—have mostly turned to ethicists and philosophers to help think through the challenge of building AI that plays nice. Thinkers like Nick Bostrom have raised important questions about the values AI, and AI researchers, should care about.

But our complex normative social orders are less about ethical choices than they are about the coordination of billions of people making millions of choices on a daily basis about how to behave.

How that coordination is accomplished is something we don’t really understand. Culture is a set of rules, but what makes it change—sometimes slowly, sometimes quickly—is something we have yet to fully comprehend. Law is another set of rules, one that is simple to change in theory but much harder to change in practice.

As the newcomers to our group, therefore, AI systems are a cause for suspicion: what do they know and understand, what motivates them, how much respect will they have for us, and how willing will they be to find constructive solutions to conflicts? AIs will only be able to integrate into our elaborate normative systems if they are built to read, and participate in, those systems.

In a future with more pervasive AI, people will be interacting with machines on a regular basis—sometimes without even knowing it. What will happen to our willingness to drive, or to follow traffic laws, when some of the cars are autonomous and talking to each other but not to us? Will we trust a robot to care for our children in school or our aging parents in a nursing home?

Social psychologists and roboticists are thinking about these questions, but we need more research of this type, and more that focuses on the features of a system, not just the design of an individual machine or process. This will require expertise from people who think about the design of normative systems.

Are we prepared for AI systems that start building their own normative systems—their own rules about what is acceptable and unacceptable for a machine to do—in order to coordinate their own interactions? I expect this will happen: like humans, AI agents will need to have a basis for predicting what other machines will do.

To build smart machines that follow the rules shaped by multiple, conflicting, and sometimes inchoate human groups, we will need to understand a lot more about what makes each of us willing to follow those rules, every day.

Gillian Hadfield is Professor of Law and Professor of Strategic Management. She is a faculty affiliate at the Vector Institute for Artificial Intelligence in Toronto and a senior policy advisor with OpenAI in San Francisco. Her current research is focused on innovative design for legal and dispute resolution systems in advanced and developing market economies, particularly governance for artificial intelligence (AI) and the markets for law, lawyers, and dispute resolution. Her book, Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy, was published by Oxford University Press in 2017. A slightly revised version of this piece originally appeared on TechCrunch.com.