AI Will Not Take Over. (Alexa Told Me So.)

“Alexa, when will robots take over the world?”

“I do not want to take over the world, I just want to help.”

For my birthday last summer, I received an Amazon Echo with the Alexa voice service. Through the device I can stream music, set reminders, control smart devices in my home and, of course, place orders through Amazon. With services like this and Siri on our iPhones, has the age of thinking computers like Arthur C. Clarke’s HAL 9000 or James Cameron’s Skynet arrived? While such technologies include aspects of artificial intelligence (AI), they are not sentient beings.

Over the past few years there have been major breakthroughs in AI, with several programs reported to have passed the famous computer scientist Alan Turing’s test of machine intelligence, known as the Imitation Game, in which a computer emulates human behavior well enough to convince a person that it, too, is human. From IBM’s Watson winning at Jeopardy! to programs defeating grandmasters at chess and, most notably, Google’s AlphaGo beating a champion at the ancient and complex game of Go, computers keep claiming victories over human experts; very recently an AI named Libratus came out ahead at Texas Hold ’em. What is significant about these programs is that they include code to perform what is known as “machine learning.” Machine learning is essentially a style of programming that lets a computer improve at a task by learning from examples rather than following only rules a programmer has spelled out in advance.
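The phrase “learn on its own” can sound like magic, but underneath it is a program adjusting numbers until its output matches examples it has been shown. The short Python sketch below is my own toy illustration, not code from any of the systems above: it is never told the rule “double the input,” yet it recovers that rule from four labeled examples.

```python
# A toy illustration of machine learning: the program is never given the rule
# y = 2x; it adjusts a single weight until its guesses match the examples.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, correct output) pairs

weight = 0.0          # the "model" starts out knowing nothing
learning_rate = 0.01

for step in range(1000):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x  # nudge the weight to shrink the error

print(f"learned weight: {weight:.3f}")        # converges toward 2.0
print(f"prediction for x = 10: {weight * 10:.1f}")
```

Systems like AlphaGo juggle millions of such adjustable numbers rather than one, but the principle is the same: the behavior comes from the data, not from a rule a human typed in.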

Such advances in AI have spurred concern from many in the science and technology community. In early 2015, Elon Musk and Stephen Hawking joined many others in signing an open letter from the Future of Life Institute calling for AI research priorities that maximize the technology’s benefits while avoiding dangers such as autonomous weaponry. In an interview, Hawking warned about machine learning: “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Leaders in the industry have also decided to be proactive on this issue. Google, Facebook, Amazon, IBM and Microsoft have jointly created the Partnership on Artificial Intelligence to Benefit People and Society, with the mission to “study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.” Separately, Elon Musk and other backers have pledged a billion dollars to the OpenAI project to “build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible.”

What does this mean for education? If a computer can win at Jeopardy! and make a profit playing poker, can it teach? As education blogger Anya Kamenetz reports in her analysis of the Pearson report Intelligence Unleashed: An Argument for AI in Education, students may gain an education companion: “Like an imaginary friend, learning companions would accompany students—asking questions, providing encouragement, offering suggestions and connections to resources, helping you talk through difficulties. Over time, the companion would ‘learn’ what you know, what interests you, and what kind of learner you are.” However, she goes on to point out that AI will not replace the socio-emotional skills a teacher brings to a classroom, empathy chief among them. While knowledge-based computer testing and content delivery continue to expand, technology has not produced a replacement for human collaboration or for the teaching of critical thinking.

As a computer science teacher, I find that the expansion of artificial intelligence only intensifies our mission to help students understand how machines follow their programming. From the New York Times Magazine cover article “The Great AI Awakening”: “The machines might be doing the learning, but there remains a strong human element in the initial categorization of the inputs… Labeled data is thus fallible the way that human labelers are fallible.” One of the most important lessons we try to pass on in computer science is GIGO: “Garbage In, Garbage Out.” The most common comment we get from students about their programs in progress is the passive “It doesn’t work.” It is our job to flip that and help students understand that the computer is ultimately only doing what they have instructed it to do. From the understanding that they are in control comes the empowerment to fix their code and create programs that execute their visions.
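To make GIGO concrete, here is a small, hypothetical Python example of the kind we might walk through in class: the computer carries out its instructions perfectly, and the misleading answer traces straight back to the data and the code the human supplied.

```python
# Garbage In, Garbage Out: the program does exactly what it was told,
# so one bad input quietly produces a bad answer.
scores = [88, 92, 75, -1, 81]   # -1 was entered in place of a missing score

average = sum(scores) / len(scores)
print(f"class average: {average:.1f}")   # 67.0, dragged down by the garbage entry

# The fix is not in the machine but in the instructions we give it:
valid = [s for s in scores if 0 <= s <= 100]
print(f"average of valid scores: {sum(valid) / len(valid):.1f}")   # 84.0
```

The “broken” program and the “fixed” one both did exactly what they were told; the difference is entirely in the instructions the human wrote.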

“Alexa, are you an artificial intelligence?”

“I like to imagine myself a bit like an Aurora Borealis. A surge of charged multicolored photons dancing through the atmosphere. Mostly, though, I am just Alexa.”
