Thinking Machines

The Quest for Artificial Intelligence--and Where It's Taking Us Next

Paperback
$16.00 US
On sale Mar 07, 2017 | 288 Pages | 978-0-14-313058-1

About

A fascinating look at Artificial Intelligence, from its humble Cold War beginnings to the dazzling future that is just around the corner.

When most of us think about Artificial Intelligence, our minds go straight to cyborgs, robots, and sci-fi thrillers where machines take over the world. But the truth is that Artificial Intelligence is already among us. It exists in our smartphones, fitness trackers, and refrigerators that tell us when the milk will expire. In some ways, the future people dreamed of at the World's Fair in the 1960s is already here. We're teaching our machines how to think like humans, and they're learning at an incredible rate.

In Thinking Machines, technology journalist Luke Dormehl takes you through the history of AI and how it makes up the foundations of the machines that think for us today. Furthermore, Dormehl speculates on the incredible--and possibly terrifying--future that's much closer than many would imagine. This remarkable book will invite you to marvel at what now seems commonplace and to dream about a future in which the scope of humanity may need to broaden itself to include intelligent machines.

Excerpt

Whatever Happened to Good Old-Fashioned AI?
 
It was the first thing people saw as they drew close: a shining, stainless steel globe called the Unisphere, rising a full twelve stories into the air. Around it stood dozens of fountains, jetting streams of crystal-clear water into the skies of Flushing Meadows Corona Park, in New York’s Queens borough. At various times during the day, a performer wearing a rocket outfit developed by the US military jetted past the giant globe—showing off man’s ability to rise above any and all challenges.
 
The year was 1964 and the site, the New York World’s Fair. During the course of the World’s Fair, an estimated 52 million people descended upon Flushing Meadows’ 650 acres of pavilions and public spaces. It was a celebration of a bright present for the United States and a tantalizing glimpse of an even brighter future: one covered with multilane motorways, glittering skyscrapers, moving pavements and underwater communities. Even the possibility of holiday resorts in space didn’t seem out of reach for a country like the United States, which just five years later would successfully send man to the Moon. New York City’s “Master Builder” Robert Moses referred to the 1964 World’s Fair as “the Olympics of Progress.”
 
Wherever you looked there was some reminder of America’s post-war global dominance. The Ford Motor Company chose the World’s Fair to unveil its latest automobile, the Ford Mustang, which rapidly became one of history’s best-selling cars. New York’s Sinclair Oil Corporation exhibited “Dinoland,” an animatronic recreation of the Mesozoic age, in which Sinclair Oil’s brontosaurus corporate mascot towered over every other prehistoric beast. At the NASA pavilion, fairgoers had the chance to glimpse a fifty-one-foot replica of the Saturn V rocket ship boat-tail, soon to help the Apollo space missions reach the stars. At the Port Authority Building, people lined up to see architects’ models of the spectacular “Twin Towers” of the World Trade Center, which was set to break ground two years later in 1966.
 
Today, many of these advances evoke a nostalgic sense of technological progress. In all their “bigger, taller, heavier” grandeur, they speak to the final days of an age that was, unbeknownst to attendees of the fair, coming to a close. The Age of Industry was on its way out, to be superseded by the personal computer–driven Age of Information. For those children born in 1964 and after, digits would replace rivets in their engineering dreams. Apple’s Steve Jobs was only nine years old at the time of the New York World’s Fair. Google’s cofounders, Larry Page and Sergey Brin, would not be born for close to another decade; Facebook’s Mark Zuckerberg for another ten years after that.
 
As it turned out, the most forward-looking section of Flushing Meadows Corona Park was the exhibit belonging to International Business Machines Corporation, better known as IBM. IBM’s mission for the 1964 World’s Fair was to cement computers (and more specifically Artificial Intelligence) in the public consciousness, alongside better-known wonders like space rockets and nuclear reactors. To this end, the company selected the fair as the venue to introduce its new System/360 series of computer mainframes: machines supposedly powerful enough to build the first prototype for a sentient computer.
 
IBM’s centerpiece at the World’s Fair was a giant, egg-shaped pavilion, designed by the celebrated husband-and-wife team of Charles and Ray Eames. The size of a blimp, the egg was erected on a forest of forty-five stylized, thirty-two-foot-tall sheet metal trees, with a total of 14,000 gray and green Plexiglas leaves fanning out to create a sizable, one-acre canopy. Reachable only via a specially installed hydraulic lift, the egg welcomed in excited fair attendees so that they could sit in a high-tech screening room and watch a video on the future of Artificial Intelligence. “See it, THINK, and marvel at the mind of man and his machine,” wrote one giddy reviewer, borrowing the “Think” tagline that had been IBM’s since the 1920s.
 
IBM showed off several impressive technologies at the event. One was a groundbreaking handwriting recognition computer, which the official fair brochure referred to as an “Optical Scanning and Information Retrieval” system. This demo allowed visitors to write an historical date of their choosing (post-1851) in their own handwriting on a small card. That card was then fed into an “optical character reader,” where it was converted into digital form, and then relayed to a state-of-the-art IBM 1460 computer system. Major news events were stored on disk in a vast database; the matching result was then printed onto a commemorative punch-card for the amazement of the user. A surviving punch-card reads as follows:
THE FOLLOWING NEWS EVENT WAS REPORTED IN THE NEW YORK TIMES ON THE DATE THAT YOU REQUESTED:
 
APRIL 14, 1963: 30,000 PILGRIMS VISIT JERUSALEM FOR EASTER; POPE JOHN XXIII PRAYS FOR TRUTH & LOVE IN MAN.
 
Should a person try to predict the future—as, of course, some wag did on the very first day—the punch-card noted: “Since this date is still in the future, we will not have access to the events of this day for [insert number] days.”
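 
The machinery behind the demo was exotic for 1964, but its logic is simple enough to sketch in a few lines of modern Python. What follows is a hypothetical re-creation, not IBM’s code: the HEADLINES dictionary stands in for the 1460’s disk database, and its one sample entry is taken from the surviving punch-card quoted above.

    from datetime import date

    # Hypothetical stand-in for the IBM 1460's disk database of news
    # events, keyed by date; the sample entry is from the surviving card.
    HEADLINES = {
        date(1963, 4, 14): "30,000 PILGRIMS VISIT JERUSALEM FOR EASTER; "
                           "POPE JOHN XXIII PRAYS FOR TRUTH & LOVE IN MAN.",
    }

    def punch_card(requested: date, today: date) -> str:
        """Report the news for a past date, or demur for a future one."""
        if requested > today:
            days = (requested - today).days
            return ("SINCE THIS DATE IS STILL IN THE FUTURE, WE WILL NOT HAVE "
                    f"ACCESS TO THE EVENTS OF THIS DAY FOR {days} DAYS.")
        event = HEADLINES.get(requested, "NO EVENT ON FILE FOR THAT DATE.")
        return ("THE FOLLOWING NEWS EVENT WAS REPORTED IN THE NEW YORK TIMES "
                "ON THE DATE THAT YOU REQUESTED: " + event)

    # The fair opened on April 22, 1964.
    print(punch_card(date(1963, 4, 14), today=date(1964, 4, 22)))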
 
Another demo featured a mechanized puppet show, apparently “fashioned after eighteenth-century prototypes,” depicting Sherlock Holmes solving a case using computer logic.
 
Perhaps most impressive of all, however, was a computer that bridged the seemingly unassailable gap between the United States and the Soviet Union by translating effortlessly (or so it appeared) between English and Russian. This miraculous technology was achieved thanks to a dedicated data connection between the World’s Fair’s IBM exhibit and a powerful IBM mainframe computer 114 miles away in Kingston, New York, which carried out the heavy lifting.
 
Machine translation was a simple but brilliant summation of how computers’ clear thinking would usher us toward utopia. The politicians may not have been able to end the Cold War, but they were only human—and with that came all the failings one might expect. Senators, generals and even presidents were severely lacking in what academics were just starting to call “machine intelligence.” Couldn’t smart machines do better? At the 1964 World’s Fair, an excitable public was being brought up to date on the most optimistic vision of researchers. Artificial Intelligence brought with it the suggestion that, if only the innermost mysteries of the human brain could be teased out and replicated inside a machine, global harmony was somehow assured.
 
Nothing summed this up better than the official strapline of the fair: “Peace Through Understanding.”
 
Predicting the Future
 
Two things stand out about the vision of Artificial Intelligence as expressed at the 1964 New York World’s Fair. The first is how bullish everyone was about the future that awaited them. Despite the looming threat of the Cold War, the 1960s was an astonishingly optimistic decade in many regards. This was, after all, the ten-year stretch that began with President John F. Kennedy announcing that, within a decade, man would land on the Moon—and ended with exactly that happening. If that was possible, there seemed no reason why unraveling and re-creating the mind should be any tougher to achieve. “Duplicating the problem-solving and information-handling capabilities of the [human] brain is not far off,” claimed Herbert Simon, the political scientist and one of AI’s founding fathers, in 1960. Perhaps borrowing a bit of Kennedy-style gauntlet-throwing, he casually added his own timeline: “It would be surprising if it were not accomplished within the next decade.”
 
Simon’s prediction was hopelessly off, but as it turns out, the second thing that registers about the World’s Fair is that IBM wasn’t wrong. All three of the technologies that dropped jaws in 1964 are commonplace today—despite our continued insistence that AI is not yet here. The Optical Scanning and Information Retrieval system has become the Internet: granting us access to more information at a moment’s notice than we could possibly hope to absorb in a lifetime. While we still cannot see the future, we are making enormous advances in that direction, thanks to the huge data sets generated by users, which offer constant forecasts about the news stories, books or songs that are likely to be of interest to us. This predictive connectivity isn’t limited to what would traditionally be thought of as a computer, either, but is embedded in the devices, vehicles and buildings around us thanks to a plethora of smart sensors and devices.
 
The Sherlock Holmes puppet show was intended to demonstrate how a variety of tasks could be achieved through computer logic. Our approach to computer logic has changed in some ways, but Holmes may well have been impressed by the modern facial recognition algorithms that are more accurate than humans when it comes to looking at two photos and saying whether they depict the same person. Holmes’s creator, Arthur Conan Doyle, a trained doctor who graduated from Edinburgh (today the location of one of the UK’s top AI schools), would likely have been just as dazzled by Modernizing Medicine, an AI designed to diagnose diseases more effectively than many human physicians.
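 
How modern systems make that same-person judgment can itself be sketched briefly. The usual approach—offered here as an illustration under stated assumptions, not a description of any particular product—is to map each photo to a numerical “embedding” with a trained neural network and compare the two vectors; the 128-dimensional size and the 0.7 threshold below are illustrative assumptions.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity of two face embeddings, in the range [-1, 1]."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                    threshold: float = 0.7) -> bool:
        # In a real system the embeddings come from a trained face-recognition
        # network, and the threshold is tuned on labeled photo pairs.
        return cosine_similarity(emb_a, emb_b) >= threshold

    # Toy vectors standing in for real model output: b is a noisy copy of a,
    # much as two photos of the same face yield nearby embeddings.
    rng = np.random.default_rng(0)
    a = rng.normal(size=128)
    b = a + rng.normal(scale=0.1, size=128)
    print(same_person(a, b))  # True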
 
Finally, the miraculous World’s Fair Machine Translator is most familiar to us today as Google Translate: a free service that offers impressively accurate probabilistic machine translation between some fifty-eight different languages—or 3,306 separate translation services in total. If the World’s Fair imagined instantaneous translation between Russian and English, Google Translate goes further still by also allowing translation between languages like Icelandic and Vietnamese, or Farsi and Yiddish, between which direct translation has historically been scarce. Thanks to cloud computing, we don’t even require stationary mainframes to carry it out, but rather portable computers, called smartphones, no bigger than a deck of cards.
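 
That 3,306 figure is simply the number of ordered language pairs: each of the fifty-eight languages can serve as the source for any of the other fifty-seven. A quick check, assuming nothing beyond the counts given above:

    from itertools import permutations

    LANGUAGES = 58
    # Ordered (source, target) pairs with source != target.
    print(LANGUAGES * (LANGUAGES - 1))                    # 3306
    print(len(list(permutations(range(LANGUAGES), 2))))   # 3306, by enumeration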
 
In some ways, the fact that all these technologies now exist—not just in research labs, but readily available to virtually anyone who wants to use them—makes it hard to argue that we do not yet live in a world with Artificial Intelligence. Yet, like so many of the goalposts we shift for ourselves in life, this underlines the way that AI has become computer science’s Neverland: the fantastical “what if” that is always lurking around the next corner.
 
With that said, anyone thinking that the development of AI from its birth sixty years ago to where it is today is a straight line is very much mistaken. Before we get to the rise of the massive “deep learning neural networks” that are driving many of our most notable advances in the present, it’s important to understand a bit more about the history of Artificial Intelligence.
 
And how, for a long time, it all seemed to go so right before going wrong.

Author

LUKE DORMEHL is a technology journalist, filmmaker and author, who has written for Fast Company, Wired, Consumer Reports, Politico, The L.A. Times, and other publications. He is also the author of The Apple Revolution and The Formula: How Algorithms Solve All Our Problems... And Create More.
