The Future of the Mind

The Scientific Quest to Understand, Enhance, and Empower the Mind

Read by Feodor Chin

About

NOW A #1 NEW YORK TIMES BESTSELLER

“Compelling…. Kaku thinks with great breadth, and the vistas he presents us are worth the trip”
—The New York Times Book Review


The New York Times best-selling author of PHYSICS OF THE IMPOSSIBLE, PHYSICS OF THE FUTURE, and HYPERSPACE tackles the most fascinating and complex object in the known universe: the human brain.
        
For the first time in history, the secrets of the living brain are being revealed by a battery of high-tech brain scans devised by physicists. Now what was once solely the province of science fiction has become a startling reality. Recording memories, telepathy, videotaping our dreams, mind control, avatars, and telekinesis are not only possible; they already exist.
 
THE FUTURE OF THE MIND gives us an authoritative and compelling look at the astonishing research being done in top laboratories around the world—all based on the latest advancements in neuroscience and physics.  One day we might have a "smart pill" that can enhance our cognition; be able to upload our brain to a computer, neuron for neuron; send thoughts and emotions around the world on a "brain-net"; control computers and robots with our mind; push the very limits of immortality; and perhaps even send our consciousness across the universe. 
          
Dr. Kaku takes us on a grand tour of what the future might hold, giving us not only a solid sense of how the brain functions but also how these technologies will change our daily lives. He even presents a radically new way to think about "consciousness" and applies it to provide fresh insight into mental illness, artificial intelligence, and alien consciousness.

With Dr. Kaku's deep understanding of modern science and keen eye for future developments, THE FUTURE OF THE MIND is a scientific tour de force--an extraordinary, mind-boggling exploration of the frontiers of neuroscience.

Excerpt

Houdini believed that telepathy was impossible. But science is proving
Houdini wrong.

   Telepathy is now the subject of intense research at universities around
the world, where scientists have already been able to use advanced sensors to
read individual words, images, and thoughts in a person’s brain. This could
alter the way we communicate with stroke and accident victims who are
“locked in” their bodies, unable to articulate their thoughts except through
blinks. But that’s just the start. Telepathy might also radically change the way
we interact with computers and the outside world.
   
   Indeed, in a recent “Next 5 in 5 Forecast,” which predicts five revolutionary
developments in the next five years, IBM scientists claimed that we will
be able to mentally communicate with computers, perhaps replacing the
mouse and voice commands. This means using the power of the mind to call
people on the phone, pay credit card bills, drive cars, make appointments,
create beautiful symphonies and works of art, etc. The possibilities are endless,
and it seems that everyone— from computer giants, educators, video
game companies, and music studios to the Pentagon— is converging on this
technology.

   True telepathy, found in science-fiction and fantasy novels, is not possible
without outside assistance. As we know, the brain is electrical. In general,
anytime an electron is accelerated, it gives off electromagnetic radiation. The
same holds true for electrons oscillating inside the brain, which broadcasts
radio waves. But these signals are too faint to be detected by others, and
even if we could perceive these radio waves, it would be difficult to make
sense of them. Evolution has not given us the ability to decipher this collection
of random radio signals, but computers can. Scientists have been able
to get crude approximations of a person’s thoughts using EEG scans. Subjects
would put on a helmet with EEG sensors and concentrate on certain
pictures— say, the image of a car. The EEG signals were then recorded for
each image and eventually a rudimentary dictionary of thought was created,
with a one-to-one correspondence between a person’s thoughts and the EEG
image. Then, when a person was shown a picture of another car, the computer
would recognize the EEG pattern as being from a car.
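
   To make the idea of a "dictionary of thought" concrete, here is a minimal Python sketch, not any lab's actual pipeline: average the EEG feature vectors recorded while a subject views each category of picture, then label a new recording by the stored template it matches best. The category names, feature counts, and synthetic signals below are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the "dictionary of thought" idea. Prototypes, trial
# counts, and feature sizes are illustrative assumptions, not real EEG data.
rng = np.random.default_rng(0)
n_trials, n_features = 20, 64            # e.g., band power on 64 electrodes
prototypes = {"car": rng.normal(size=n_features),
              "house": rng.normal(size=n_features)}

def record(label):                       # pretend EEG: prototype plus noise
    return prototypes[label] + rng.normal(scale=1.0, size=n_features)

# Build the dictionary: one averaged template per picture category.
dictionary = {label: np.mean([record(label) for _ in range(n_trials)], axis=0)
              for label in prototypes}

def classify(eeg_features):
    """Return the category whose template correlates best with the recording."""
    return max(dictionary, key=lambda label:
               np.corrcoef(eeg_features, dictionary[label])[0, 1])

print(classify(record("car")))           # expected: "car"
```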

   The advantage of EEG sensors is that they are noninvasive and quick.
You simply put a helmet containing many electrodes onto the scalp, and the
EEG can rapidly identify signals that change every millisecond.
But the problem with EEG sensors, as we have seen, is that electromagnetic
waves deteriorate as they pass through the skull, and it is difficult to locate
their precise source. This method can tell if you are thinking of a car or a
house, but it cannot re-create an image of the car. That is where Dr. Jack Gallant’s
work comes in.
 
VIDEOS OF THE MIND

The epicenter for much of this research is the University of California at
Berkeley, where I received my own Ph.D. in theoretical physics years ago. I
had the pleasure of touring the laboratory of Dr. Gallant, whose group has
accomplished a feat once considered to be impossible: videotaping people’s
thoughts. “This is a major leap forward in reconstructing internal imagery. We
are opening a window into the movies in our mind,” says Gallant.
   
   When I visited his laboratory, the first thing I noticed was the team of
young, eager postdoctoral and graduate students huddled in front of their
computer screens, looking intently at video images that were reconstructed
from someone’s brain scan. Talking to Gallant’s team, you feel as though you
are witnessing scientific history in the making.

   Gallant explained to me that first the subject lies flat on a stretcher, which
is slowly inserted headfirst into a huge, state-of-the-art MRI machine, costing
upward of $3 million. The subject is then shown several movie clips (such
as movie trailers readily available on YouTube). To accumulate enough data,
the subject has to sit motionless for hours watching these clips, a truly arduous
task. I asked one of the postdocs, Dr. Shinji Nishimoto, how they found
volunteers who were willing to lie still for hours on end with only fragments
of video footage to occupy the time. He said the people in the room, the grad
students and postdocs, volunteered to be guinea pigs for their own research.

   As the subject watches the movies, the MRI machine creates a 3-D image
of the blood flow within the brain. The MRI image looks like a vast collection
of thirty thousand dots, or voxels. Each voxel represents a pinpoint of neural energy, and the color of the dot corresponds to the intensity of the signal and blood flow. Red dots represent points of large neural activity, while blue dots represent points of less activity. (The final image looks very much like thousands of Christmas lights in the shape of the brain. Immediately you can see that the brain is concentrating most of its mental energy in the visual cortex, which is located at the back of the brain, while watching these videos.)
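
   The color coding described here amounts to a simple thresholding step. The toy sketch below, with an arbitrary volume size and a one-standard-deviation cutoff standing in for real scanner output, marks voxels well above baseline as red and well below it as blue.

```python
import numpy as np

# Illustrative only: turn a 3-D activity volume (one number per voxel) into the
# red/blue picture described above. Sizes and thresholds are assumptions.
rng = np.random.default_rng(1)
volume = rng.normal(size=(30, 32, 32))               # roughly thirty thousand voxels

mean, std = volume.mean(), volume.std()
colors = np.full(volume.shape, "dim", dtype="<U4")   # near-baseline voxels stay unlit
colors[volume > mean + std] = "red"                  # strong activity
colors[volume < mean - std] = "blue"                 # weak activity

print({c: int((colors == c).sum()) for c in ("red", "blue", "dim")})
```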

   Gallant’s MRI machine is so powerful it can identify two to three hundred distinct regions of the brain and, on average, can take snapshots that have one hundred dots per region of the brain. (One goal for future generations of MRI technology is to provide an even sharper resolution by increasing the number of dots per region of the brain.)

   At first, this 3-D collection of colored dots looks like gibberish. But after
years of research, Dr. Gallant and his colleagues have developed a mathematical
formula that begins to find relationships between certain features of a picture (edges, textures, intensity, etc.) and the MRI voxels. For example, if you look at a boundary, you’ll notice it’s a region separating lighter and darker areas, and hence the edge generates a certain pattern of voxels. By having subject after subject view such a large library of movie clips, this mathematical formula is refined, allowing the computer to analyze how all sorts of images are converted into MRI voxels. Eventually the scientists were able to ascertain a direct correlation between certain MRI patterns of voxels
and features within each picture.
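
   In spirit, such a formula is an encoding model: a learned map from image features to the response of each voxel. The sketch below uses ordinary ridge regression on synthetic data as a stand-in; the feature and voxel counts are assumptions, not the Berkeley group's actual numbers.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Sketch of an "encoding model": a linear map from simple image features
# (edge energy, texture, brightness, ...) to the response of every voxel.
rng = np.random.default_rng(0)
n_clips, n_features, n_voxels = 500, 200, 3000       # a slice of the ~30,000 voxels

X = rng.normal(size=(n_clips, n_features))            # one feature vector per clip
true_map = rng.normal(size=(n_features, n_voxels))
Y = X @ true_map + 0.1 * rng.normal(size=(n_clips, n_voxels))  # "measured" voxels

encoder = Ridge(alpha=1.0).fit(X, Y)                   # one linear model per voxel

# Predict the voxel pattern a brand-new image should evoke.
new_image_features = rng.normal(size=(1, n_features))
print(encoder.predict(new_image_features).shape)       # (1, 3000)
```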

   At this point, the subject is then shown another movie trailer. The computer
analyzes the voxels generated during this viewing and re-creates a rough approximation of the original image. (The computer selects images from one hundred movie clips that most closely resemble the one that the subject just saw and then merges images to create a close approximation.) In this way, the computer is able to create a fuzzy video of the visual imagery going through your mind. Dr. Gallant’s mathematical formula is so versatile that it can take a collection of MRI voxels and convert it into a picture, or it can do the reverse, taking a picture and then converting it to MRI voxels.
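
   The decoding step can be sketched the same way: predict the voxel pattern each clip in a library should evoke, rank the clips by how well that prediction matches the measured pattern, and blend the best matches into a blurry frame. Everything below is synthetic and illustrative; the random matrix W merely stands in for a fitted encoding model.

```python
import numpy as np

# Self-contained decoding sketch: score library clips by how well their
# predicted voxel patterns match the measured one, then average the winners.
rng = np.random.default_rng(2)
n_clips, n_features, n_voxels = 1000, 200, 3000
W = rng.normal(size=(n_features, n_voxels))            # stand-in encoding model
library_features = rng.normal(size=(n_clips, n_features))
library_frames = rng.random(size=(n_clips, 64, 64))    # toy grayscale frames
predicted_voxels = library_features @ W                # expected pattern per clip

# Pretend the subject watched clip 42: measured = its pattern plus noise.
measured = predicted_voxels[42] + 0.5 * rng.normal(size=n_voxels)

scores = np.array([np.corrcoef(p, measured)[0, 1] for p in predicted_voxels])
top = np.argsort(scores)[-100:]                        # 100 closest clips, as above
reconstruction = library_frames[top].mean(axis=0)      # fuzzy approximation
print(int(42 in top), reconstruction.shape)            # clip 42 should rank among them
```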

   I had a chance to view the video created by Dr. Gallant’s group, and it was
very impressive. Watching it was like viewing a movie with faces, animals,
street scenes, and buildings through dark glasses. Although you could not
see the details within each face or animal, you could clearly identify the kind
of object you were seeing.

   Not only can this program decode what you are looking at, it can also
decode imaginary images circulating in your head. Let’s say you are asked to
think of the Mona Lisa. We know from MRI scans that even though you’re
not viewing the painting with your eyes, the visual cortex of your brain will
light up. Dr. Gallant’s program then scans your brain while you are thinking
of the Mona Lisa and flips through its data files of pictures, trying to find the
closest match. In one experiment I saw, the computer selected a picture of
the actress Salma Hayek as the closest approximation to the Mona Lisa. Of
course, the average person can easily recognize hundreds of faces, but the
fact that the computer analyzed an image within a person’s brain and then
picked out this picture from millions of random pictures at its disposal is
still impressive.

   The goal of this whole process is to create an accurate dictionary that
allows you to rapidly match an object in the real world with the MRI pattern
in your brain. In general, a detailed match is very difficult and will take years,
but some categories are actually easy to read just by flipping through some
photographs. Dr. Stanislas Dehaene of the Collège de France in Paris was
examining MRI scans of the parietal lobe, where numbers are recognized,
when one of his postdocs casually mentioned that just by quickly scanning
the MRI pattern, he could tell what number the subject was looking at. In
fact, certain numbers created distinctive patterns on the MRI scan. He notes,
“If you take 200 voxels in this area, and look at which of them are active
and which are inactive, you can construct a machine-learning device that
decodes which number is being held in memory.”
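
   A hypothetical version of such a decoder is easy to write down: a standard classifier trained on roughly 200 voxel activations per trial, scored by cross-validation. The synthetic data below, one characteristic pattern per digit plus noise, is an assumption standing in for real parietal scans.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy number decoder over ~200 voxels; the data generation is an assumption.
rng = np.random.default_rng(3)
n_trials_per_digit, n_voxels, digits = 40, 200, list(range(1, 10))

prototypes = {d: rng.normal(size=n_voxels) for d in digits}    # one pattern per digit
X = np.vstack([prototypes[d] + rng.normal(scale=2.0, size=(n_trials_per_digit, n_voxels))
               for d in digits])
y = np.repeat(digits, n_trials_per_digit)

decoder = LogisticRegression(max_iter=1000)
print(cross_val_score(decoder, X, y, cv=5).mean())   # well above the 1/9 chance level
```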

   This leaves open the question of when we might be able to have picture-quality
videos of our thoughts. Unfortunately, information is lost when a
person is visualizing an image. Brain scans corroborate this. When you compare
the MRI scan of the brain as it is looking at a flower to an MRI scan
as the brain is thinking about a flower, you immediately see that the second
image has far fewer dots than the first. So although this technology will
vastly improve in the coming years, it will never be perfect. (I once read a
short story in which a man meets a genie who offers to create anything that
the person can imagine. The man immediately asks for a luxury car, a jet
plane, and a million dollars. At first, the man is ecstatic. But when he looks at
these items in detail, he sees that the car and the plane have no engines, and
the image on the cash is all blurred. Everything is useless. This is because our
memories are only approximations of the real thing.)

   But given the rapidity with which scientists are beginning to decode the
MRI patterns in the brain, will we soon be able to actually read words and
thoughts circulating in the mind?

READING THE MIND


In fact, in a building next to Gallant’s laboratory, Dr. Brian Pasley and his
colleagues are literally reading thoughts— at least in principle. One of the
postdocs there, Dr. Sara Szczepanski, explained to me how they are able to
identify words inside the mind.

   The scientists used what is called ECOG (electrocorticogram) technology,
which is a vast improvement over the jumble of signals that EEG scans
produce. ECOG scans are unprecedented in accuracy and resolution, since
signals are directly recorded from the brain and do not pass through the
skull. The flipside is that one has to remove a portion of the skull to place a
mesh, containing sixty-four electrodes in an eight-by-eight grid, directly on
top of the exposed brain.

   Luckily they were able to get permission to conduct experiments with
ECOG scans on epileptic patients, who were suffering from debilitating seizures.
The ECOG mesh was placed on the patients’ brains while open-brain
surgery was being performed by doctors at the nearby University of California
at San Francisco.

   As the patients hear various words, signals from their brains pass through
the electrodes and are then recorded. Eventually a dictionary is formed,
matching the word with the signals emanating from the electrodes in the
brain. Later, when a word is uttered, one can see the same electrical pattern. This correspondence also means that if one is thinking of a certain word, the
computer can pick up the characteristic signals and identify it.

   With this technology, it might be possible to have a conversation that
takes place entirely telepathically. Also, stroke victims who are totally paralyzed
may be able to “talk” through a voice synthesizer that recognizes the
brain patterns of individual words.
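
   Conceptually this is the same dictionary trick as before, applied to a sixty-four-electrode grid. A rough sketch, with random arrays standing in for real recordings and a made-up word list, might look like this:

```python
import numpy as np

# Word "dictionary" sketch, assuming each word evokes a characteristic pattern
# across an 8-by-8 ECOG grid over time. All signals here are stand-ins.
rng = np.random.default_rng(4)
words = ["yes", "no", "water", "help"]
n_trials, n_electrodes, n_samples = 30, 64, 200

patterns = {w: rng.normal(size=(n_electrodes, n_samples)) for w in words}

def record(word):                     # pretend measurement: pattern plus noise
    return patterns[word] + rng.normal(scale=1.5, size=(n_electrodes, n_samples))

# The dictionary: average several trials per word into a single template.
dictionary = {w: np.mean([record(w) for _ in range(n_trials)], axis=0) for w in words}

def identify(trial):
    """Return the dictionary word whose template best matches this recording."""
    return max(words, key=lambda w:
               np.corrcoef(dictionary[w].ravel(), trial.ravel())[0, 1])

print(identify(record("water")))      # expected: "water"
```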

   Not surprisingly, BMI (brain-machine interface) has become a hot field,
with groups around the country making significant breakthroughs. Similar
results were obtained by scientists at the University of Utah in 2011. They
placed grids, each containing sixteen electrodes, over the facial motor cortex
(which controls movements of the mouth, lips, tongue, and face) and
Wernicke’s area, which processes information about language. The person was then asked to say ten common words, such as “yes” and “no,” “hot” and “cold,” “hungry” and “thirsty,” “hello” and “good-bye,” and “more” and “less.” Using a computer to record the brain signals when these words were uttered, the scientists were able to create a rough one-to-one correspondence between spoken words and computer signals from the brain.

   Later, when the patient voiced certain words, they were able to correctly
identify each one with an accuracy ranging from 76 percent to 90 percent.
The next step is to use grids with 121 electrodes to get better resolution.
In the future, this procedure may prove useful for individuals suffering
from strokes or paralyzing illnesses such as Lou Gehrig’s disease, who would
be able to speak using the brain-to-computer technique.
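
   Accuracy figures like these come from testing on trials the decoder has never seen. The sketch below, using synthetic features and an off-the-shelf linear classifier rather than the Utah team's actual method, shows the shape of such an evaluation: train on part of the data, then report per-word accuracy on the held-out rest.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Held-out evaluation sketch; the synthetic features stand in for real
# electrode signals and the classifier is an illustrative substitute.
rng = np.random.default_rng(5)
words = ["yes", "no", "hot", "cold", "hungry", "thirsty",
         "hello", "good-bye", "more", "less"]
X = np.vstack([rng.normal(loc=i, scale=3.0, size=(50, 32)) for i in range(len(words))])
y = np.repeat(words, 50)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = LinearSVC(max_iter=5000).fit(X_train, y_train)
for word in words:                      # per-word accuracy on unseen trials
    held_out = y_test == word
    print(word, round((clf.predict(X_test[held_out]) == word).mean(), 2))
```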

TYPING WITH THE MIND


At the Mayo Clinic in Minnesota, Dr. Jerry Shih has hooked up epileptic
patients via ECOG sensors so they can learn how to type with the mind.
The calibration of this device is simple. The patient is first shown a series
of letters and is told to focus mentally on each symbol. A computer records
the signals emanating from the brain as it scans each letter. As with the other
experiments, once this one-to-one dictionary is created, it is then a simple
matter for the person to merely think of the letter and for the letter to be
typed on a screen, using only the power of the mind.
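
   The calibrate-then-type loop can be sketched in a few lines: store one brain "signature" per letter during calibration, then map each new recording to the letter whose signature it matches best. The signals and the similarity measure below are toy assumptions, not Dr. Shih's system.

```python
import numpy as np

# Toy mind-typing loop: per-letter signatures from calibration, then
# cosine-similarity matching of each new "thought" recording. All random.
rng = np.random.default_rng(6)
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
signatures = {ch: rng.normal(size=128) for ch in alphabet}   # calibration step

def think(ch):                        # pretend the patient focuses on a letter
    return signatures[ch] + rng.normal(scale=0.8, size=128)

def decode_letter(signal):
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(alphabet, key=lambda ch: cosine(signal, signatures[ch]))

typed = "".join(decode_letter(think(ch)) for ch in "HELLO WORLD")
print(typed)                          # should spell out HELLO WORLD
```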

   Dr. Shih, the leader of this project, says that the accuracy of his machine
is nearly 100 percent. Dr. Shih believes that he can next create a machine to
record images, not just words, that patients conceive in their minds. This
could have applications for artists and architects, but the big drawback of
ECOG technology, as we have mentioned, is that it requires opening up
patients’ brains.

   Meanwhile, EEG typewriters, because they are noninvasive, are entering
the marketplace. They are not as accurate or precise as ECOG typewriters,
but they have the advantage that they can be sold over the counter. Guger
Technologies, based in Austria, recently demonstrated an EEG typewriter at
a trade show. According to their officials, it takes only ten minutes or so for
people to learn how to use this machine, and they can then type at the rate
of five to ten words per minute.

Author


MICHIO KAKU is a professor of physics at the City University of New York, cofounder of string field theory, and the author of several widely acclaimed science books, including Hyperspace, Beyond Einstein, Physics of the Impossible, and Physics of the Future. He is the science correspondent for CBS's This Morning and host of the radio programs Science Fantastic and Explorations in Science.
