
Rebooting AI

Building Artificial Intelligence We Can Trust

Ebook
On sale Sep 10, 2019 | 288 Pages | 9781524748265
Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a robust artificial intelligence that can make our lives better.

“Finally, a book that tells us what AI is, what AI is not, and what AI could become if only we are ambitious and creative enough.” —Garry Kasparov, former world chess champion and author of Deep Thinking


Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the field, but they argue that a computer beating a human in Jeopardy! does not signal that we are on the doorstep of fully autonomous cars or superintelligent machines. The achievements in the field thus far have occurred in closed systems with fixed sets of rules, and these approaches are too narrow to achieve genuine intelligence.

The real world, in contrast, is wildly complex and open-ended. How can we bridge this gap? What will the consequences be when we do? Taking inspiration from the human mind, Marcus and Davis explain what we need to advance AI to the next level, and suggest that if we are wise along the way, we won't need to worry about a future of machine overlords. If we focus on endowing machines with common sense and deep understanding, rather than simply focusing on statistical analysis and gathering ever larger collections of data, we will be able to create an AI we can trust—in our homes, our cars, and our doctors' offices. Rebooting AI provides a lucid, clear-eyed assessment of the current science and offers an inspiring vision of how a new generation of AI can make our lives better.
from Chapter 1:
 
MIND THE GAP
 
Since its earliest days, artificial intelligence has been long on promise, short on delivery. In the 1950s and 1960s, pioneers like Marvin Minsky, John McCarthy, and Herb Simon genuinely believed that AI could be solved before the end of the twentieth century. “Within a generation,” Marvin Minsky famously wrote, in 1967, “the problem of artificial intelligence will be substantially solved.” Fifty years later, those promises still haven’t been fulfilled, but they have never stopped coming. In 2002, the futurist Ray Kurzweil made a public bet that AI would “surpass native human intelligence” by 2029. In November 2018 Ilya Sutskever, co-founder of OpenAI, a major AI research institute, suggested that “near term AGI [artificial general intelligence] should be taken seriously as a possibility.” Although it is still theoretically possible that Kurzweil and Sutskever might turn out to be right, the odds against this happening are very long. Getting to that level—general-purpose artificial intelligence with the flexibility of human intelligence—isn’t some small step from where we are now; instead it will require an immense amount of foundational progress—not just more of the same sort of thing that’s been accomplished in the last few years, but, as we will show, something entirely different.
 
Even if not everyone is as bullish as Kurzweil and Sutskever, ambitious promises still remain common, for everything from medicine to driverless cars. More often than not, what is promised doesn’t materialize. In 2012, for example, we heard a lot about how we would be seeing “autonomous cars [in] the near future.” In 2016, IBM claimed that Watson, the AI system that won at Jeopardy!, would “revolutionize healthcare,” stating that Watson Health’s “cognitive systems [could] understand, reason, learn, and interact” and that “with [recent advances in] cognitive computing . . . we can achieve more than we ever thought possible.” IBM aimed to address problems ranging from pharmacology to radiology to cancer diagnosis and treatment, using Watson to read the medical literature and make recommendations that human doctors would miss. At the same time, Geoffrey Hinton, one of AI’s most prominent researchers, said that “it is quite obvious we should stop training radiologists.”
 
In 2015 Facebook launched its ambitious and widely covered project known simply as M, a chatbot that was supposed to be able to cater to your every need, from making dinner reservations to planning your next vacation.
 
As yet, none of this has come to pass. Autonomous vehicles may someday be safe and ubiquitous, and chatbots that can cater to every need may someday become commonplace; so too might superintelligent robotic doctors. But for now, all this remains fantasy, not fact.
 
The driverless cars that do exist are still primarily restricted to highway situations with human drivers required as a safety backup, because the software is too unreliable. In 2017, John Krafcik, CEO at Waymo, a Google spinoff that has been working on driverless cars for nearly a decade, boasted that Waymo would shortly have driverless cars with no safety drivers. It didn’t happen. A year later, as Wired put it, the bravado was gone, but the safety drivers weren’t. Nobody really thinks that driverless cars are ready to drive fully on their own in cities or in bad weather, and early optimism has been replaced by widespread recognition that we are at least a decade away from that point—and quite possibly more.
 
IBM Watson’s transition to health care similarly has lost steam. In 2017, MD Anderson Cancer Center shelved its oncology collaboration with IBM. More recently it was reported that some of Watson’s recommendations were “unsafe and incorrect.” A 2016 project to use Watson for the diagnosis of rare diseases at the Marburg, Germany, Center for Rare and Undiagnosed Diseases was shelved less than two years later, because “the performance was unacceptable.” In one case, for instance, when told that a patient was suffering from chest pain, the system missed diagnoses that would have been obvious even to a first-year medical student, such as heart attack, angina, and torn aorta. Not long after Watson’s troubles started to become clear, Facebook’s M was quietly canceled, just three years after it was announced.
 
Despite this history of missed milestones, the rhetoric about AI remains almost messianic. Eric Schmidt, the former CEO of Google, has proclaimed that AI would solve climate change, poverty, war, and cancer. XPRIZE founder Peter Diamandis made similar claims in his book Abundance, arguing that strong AI (when it comes) is “definitely going to rocket us up the Abundance pyramid.” In early 2018, Google CEO Sundar Pichai claimed that “AI is one of the most important things humanity is working on . . . more profound than . . . electricity or fire.” (Less than a year later, Google was forced to admit in a note to investors that products and services “that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges.”)
 
Others agonize about the potential dangers of AI, often in ways that show a similar disconnect from current reality. One recent nonfiction bestseller by the Oxford philosopher Nick Bostrom grappled with the prospect of superintelligence taking over the world, as if that were a serious threat in the foreseeable future. In the pages of The Atlantic, Henry Kissinger speculated that the risk of AI might be so profound that “human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them.” Elon Musk has warned that working on AI is “summoning the demon” and a danger “worse than nukes,” and the late Stephen Hawking warned that AI could be “the worst event in the history of our civilization.”
 
But what AI, exactly, are they talking about? Back in the real world, current-day robots struggle to turn doorknobs, and Teslas driven in “Autopilot” mode keep rear-ending parked emergency vehicles (at least four times in 2018 alone). It’s as if people in the fourteenth century were worrying about traffic accidents, when good hygiene might have been a whole lot more helpful.
 
[ . . . ]
GARY MARCUS is a scientist, best-selling author, and entrepreneur. He is the founder and CEO of Robust.AI and was founder and CEO of Geometric Intelligence, a machine-learning company acquired by Uber in 2016. He is the author of five books, including Kluge, The Birth of the Mind, and the New York Times best seller Guitar Zero.

He coauthored Rebooting AI: Building Artificial Intelligence We Can Trust with Ernest Davis.
ERNEST DAVIS is a professor of computer science at the Courant Institute of Mathematical Sciences, New York University. One of the world's leading scientists on commonsense reasoning for artificial intelligence, he is the author of four books, including Representations of Commonsense Knowledge and Verses for the Information Age.

He coauthored Rebooting AI: Building Artificial Intelligence We Can Trust with Gary Marcus.