Tuesday, February 13, 2018
Professor Felten lecturing with info on screen behind him

First, we define AI: Professor Edward W. Felten of Princeton University, the 2018 Grafstein Lecturer, spoke on the past, present, and future course of artificial intelligence.


By Alvin Yau, 3L  / Photo by Tina Deng

“Guardians, Job Stealers, Bureaucrats, or Robot Overlords: Preparing for the Future of AI” was the topic of the 2018 Grafstein Lecture in Communications, given by Professor Edward W. Felten of Princeton University. Felten, the Robert E. Kahn Professor of Computer Science and Public Affairs and director of the Center for Information Technology Policy, spoke on the past, present, and future course of artificial intelligence.

Senator Jerry Grafstein personally attended this year’s lecture alongside faculty, alumni, friends, and students of the Faculty of Law in a packed Abella Moot Court. Dean Ed Iacobucci offered introductory remarks about the lecture series while Professor Anthony Niblett highlighted some of Professor Felten’s work in technology and the law. The topic of the lecture captivated the audience with its powerful observations and arguments.

Felten began his lecture by canvassing the broad perceptions of AI as either something remarkably helpful for humanity or something that could “kill us all, or just take our jobs – then kill us all.” The lecture took shape by defining what exactly AI is, followed by the idea of a “Singularity” and the alternative perspective of a “Multiplicity.” At the heart of the issue, “artificial intelligence” remains hard to define: it is unclear what exactly constitutes true “intelligence” in a machine.

Felten notes that AI developments trace their history back to the mid-twentieth century, when prominent thinkers questioned whether machines could behave like a person (and thus exhibit one form of “intelligence”). AI then developed rapidly from 2010 to the present day, thanks to what Felten credits as “big datasets, better algorithms, and bigger, faster computers.” These advances allowed machines to meet the field’s previous “grand challenges”: replicating the human capacity to converse, to play complex games, and to learn (to an extent).

Felten began his lecture by canvassing the broad perceptions of AI as either something remarkably helpful for humanity or something that could “kill us all, or just take our jobs – then kill us all.”

Felten analyzes the English mathematician I.J. Good’s idea of an “Intelligence Explosion.” The core of Good’s argument is that superintelligent machines would build even more capable superintelligent successors, and so on, until the resulting machine intelligence makes human intelligence pale in comparison. Felten challenges the validity of this theory by evaluating two views of what happens after this so-called intelligence explosion.

On the one hand, Felten evaluates Vernor Vinge’s idea of a Singularity, where the world as we know it radically changes thanks to the fruits of this machine superintelligence. The idea comes at a particularly pertinent time, as new technological developments and the promise of intelligent machines challenge our perception of technology. Some futurist thinkers, such as Ray Kurzweil, propose that these new technological capabilities could allow humanity to transcend mortality. Others, like Nick Bostrom, argue that once machines surpass general human intelligence, the fate of humanity hinges on the decisions of machines.

For Felten, these dichotomous perspectives about the future of superintelligence should be nuanced. Fundamentally, Felten challenges Good’s assumption that self-replicating machines of ever-greater intelligence necessarily lead to “an explosion” of superintelligence: mathematically, incremental improvement across generations of new machines does not necessarily produce an explosive change in circumstances. Felten concludes that machine intelligence may grow at a fast clip, but its explosiveness is muted.
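The mathematical point can be made concrete with a simple illustrative model (our own sketch, not one Felten presented): suppose each generation of machines improves on its predecessor, but by a diminishing increment. Then total capability converges to a finite limit rather than exploding:

```latex
% Illustrative assumption (not Felten's model): generation n+1 improves
% on generation n by a shrinking increment c r^n, with 0 < r < 1.
I_{n+1} = I_n + c\,r^n
\quad\Longrightarrow\quad
I_n = I_0 + c\,\frac{1 - r^n}{1 - r}
\;\xrightarrow[\,n \to \infty\,]{}\;
I_0 + \frac{c}{1 - r}
```

In this sketch, each machine really is smarter than the last, yet overall intelligence stays bounded; an “explosion” requires the further assumption that improvements compound rather than diminish.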

For Felten, “Even where AI is surpassing humans in many tasks… the evidence for intelligence explosion is not very strong.” Accordingly, “those who [argue] that an intelligence explosion is near bear the burden of proof.” Indeed, a computer becoming better than humans at complex games like chess or Go reflects a narrow capacity of AI to excel at certain discrete tasks, as opposed to a more general machine intelligence.

Therefore, according to Felten, if the case for a Singularity is not clear, the idea of a “multiplicity” may be more appropriate. Felten draws three lessons from AI’s history:

1) AI is not a single thing – it’s different solutions for different tasks

AI will surpass humans in different things at different times. The future trajectory of AI is not as smooth as it appears.

2) Successful AI doesn’t think like a human – it’s an alien intelligence

Machines fundamentally “go about tasks differently than we do … What is easy for AI might be difficult for humans, and vice versa.”

3) On many cognitive tasks, more engineering effort or more data translates into better AI performance

As Felten puts it, “machines are worse than humans at learning from experience, but a machine with lots of data has much more experience to learn from.”

In other words, the future of AI is nuanced, with different levels of advanced development in narrow tasks; its growth is therefore multidimensional and uneven. Given these conditions, Felten predicts a future in which multiple systems become better than humans at many different tasks, but through an uneven and gradual transition “over the course of decades.” This future is “not preordained” and calls on us to “adapt…to determine the shape [of AI development].” Felten cogently notes that elements of this future vision are already present today in different aspects and qualities.

Ultimately, Felten challenges us to be less concerned with the idea that AI will either kill or enslave us, and more attentive to maintaining social fairness and protecting workers while ensuring safe automated systems. As Felten says, “We have to use the problems that AI is posing to us to learn and respond to them…We need to learn about these problems while we still can.” Otherwise, “we are in danger of falling behind and losing control of what’s possible.”

In the course of one hour, Felten canvassed the growth of AI and delivered a convincing call to action. Collectively, we have to recognize the challenges of AI growth and to guide its development in light of social needs. The future of AI may already be here, but it is not too late to prepare.

Watch the full lecture



This 2018 Grafstein Lecture is the latest of nineteen annual lectures since the series was originally established by Senator Jerry S. Grafstein, LLB 1958, to commemorate his own graduation from the law school as well as the graduation of his son and daughter-in-law, both Class of 1988.