from Chapter 1: MIND THE GAP

Since its earliest days, artificial intelligence has been long on promise, short on delivery. In the 1950s and 1960s, pioneers like Marvin Minsky, John McCarthy, and Herb Simon genuinely believed that AI could be solved before the end of the twentieth century. “Within a generation,” Minsky famously wrote in 1967, “the problem of artificial intelligence will be substantially solved.” Fifty years later, those promises still haven’t been fulfilled, but they have never stopped coming. In 2002, the futurist Ray Kurzweil made a public bet that AI would “surpass native human intelligence” by 2029. In November 2018, Ilya Sutskever, co-founder of OpenAI, a major AI research institute, suggested that “near term AGI [artificial general intelligence] should be taken seriously as a possibility.” Although it is still theoretically possible that Kurzweil and Sutskever might turn out to be right, the odds against this happening are very long. Getting to that level—general-purpose artificial intelligence with the flexibility of human intelligence—isn’t some small step from where we are now; instead it will require an immense amount of foundational progress—not just more of the same sort of thing that’s been accomplished in the last few years, but, as we will show, something entirely different.
Even if not everyone is as bullish as Kurzweil and Sutskever, ambitious promises still remain common, for everything from medicine to driverless cars. More often than not, what is promised doesn’t materialize. In 2012, for example, we heard a lot about how we would be seeing “autonomous cars [in] the near future.” In 2016, IBM claimed that Watson, the AI system that won at Jeopardy!, would “revolutionize healthcare,” stating that Watson Health’s “cognitive systems [could] understand, reason, learn, and interact” and that “with [recent advances in] cognitive computing . . . we can achieve more than we ever thought possible.” IBM aimed to address problems ranging from pharmacology to radiology to cancer diagnosis and treatment, using Watson to read the medical literature and make recommendations that human doctors would miss. At the same time, Geoffrey Hinton, one of AI’s most prominent researchers, said that “it is quite obvious we should stop training radiologists.”
In 2015, Facebook launched its ambitious and widely covered project known simply as M, a chatbot that was supposed to be able to cater to your every need, from making dinner reservations to planning your next vacation.
As yet, none of this has come to pass. Autonomous vehicles may someday be safe and ubiquitous, and chatbots that can cater to every need may someday become commonplace; so too might superintelligent robotic doctors. But for now, all this remains fantasy, not fact.
The driverless cars that do exist are still primarily restricted to highway situations with human drivers required as a safety backup, because the software is too unreliable. In 2017, John Krafcik, CEO at Waymo, a Google spinoff that has been working on driverless cars for nearly a decade, boasted that Waymo would shortly have driverless cars with no safety drivers. It didn’t happen. A year later, as Wired put it, the bravado was gone, but the safety drivers weren’t. Nobody really thinks that driverless cars are ready to drive fully on their own in cities or in bad weather, and early optimism has been replaced by widespread recognition that we are at least a decade away from that point—and quite possibly more.
IBM Watson’s transition to health care has similarly lost steam. In 2017, MD Anderson Cancer Center shelved its oncology collaboration with IBM. More recently, it was reported that some of Watson’s recommendations were “unsafe and incorrect.” A 2016 project to use Watson to diagnose rare diseases at the Center for Rare and Undiagnosed Diseases in Marburg, Germany, was shelved less than two years later because “the performance was unacceptable.” In one case, when told that a patient was suffering from chest pain, the system missed diagnoses that would have been obvious even to a first-year medical student, such as heart attack, angina, and a torn aorta. Not long after Watson’s troubles started to become clear, Facebook’s M was quietly canceled, just three years after it was announced.
Despite this history of missed milestones, the rhetoric about AI remains almost messianic. Eric Schmidt, the former CEO of Google, has proclaimed that AI would solve climate change, poverty, war, and cancer. XPRIZE founder Peter Diamandis made similar claims in his book Abundance, arguing that strong AI (when it comes) is “definitely going to rocket us up the Abundance pyramid.” In early 2018, Google CEO Sundar Pichai claimed that “AI is one of the most important things humanity is working on . . . more profound than . . . electricity or fire.” (Less than a year later, Google was forced to admit in a note to investors that products and services “that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges.”)
Others agonize about the potential dangers of AI, often in ways that show a similar disconnect from current reality. One recent nonfiction bestseller by the Oxford philosopher Nick Bostrom grappled with the prospect of superintelligence taking over the world, as if that were a serious threat in the foreseeable future. In the pages of The Atlantic, Henry Kissinger speculated that the risk of AI might be so profound that “human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them.” Elon Musk has warned that working on AI is “summoning the demon” and a danger “worse than nukes,” and the late Stephen Hawking warned that AI could be “the worst event in the history of our civilization.”
But what AI, exactly, are they talking about? Back in the real world, current-day robots struggle to turn doorknobs, and Teslas driven in “Autopilot” mode keep rear-ending parked emergency vehicles (at least four times in 2018 alone). It’s as if people in the fourteenth century were worrying about traffic accidents, when good hygiene might have been a whole lot more helpful.
[ . . . ]
Copyright © 2019 by Gary Marcus. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.