God, Human, Animal, Machine

Technology, Metaphor, and the Search for Meaning

About

A strikingly original exploration of what it might mean to be authentically human in the age of artificial intelligence, from the author of the critically acclaimed Interior States. • "At times personal, at times philosophical, with a bracing mixture of openness and skepticism, it speaks thoughtfully and articulately to the most crucial issues awaiting our future." —Phillip Lopate

“[A] truly fantastic book.”—Ezra Klein

 
For most of human history the world was a magical and enchanted place ruled by forces beyond our understanding. The rise of science and Descartes's division of mind from world made materialism our ruling paradigm, in the process raising the question of whether our own consciousness—our souls—might be an illusion. Now, with the inexorable rise of technology—artificial intelligences that surpass our comprehension and control—and the spread of digital metaphors for self-understanding, the core questions of existence—identity, knowledge, the very nature and purpose of life itself—urgently require rethinking.

Meghan O'Gieblyn tackles this challenge with philosophical rigor, intellectual reach, essayistic verve, refreshing originality, and an ironic sense of contradiction. She draws deeply and sometimes humorously from her own personal experience as a formerly religious believer still haunted by questions of faith, and she serves as the best possible guide to navigating the territory we are all entering.

Excerpt

1

The package arrived on a Thursday. I came home from a walk and found it sitting near the mailboxes in the front hall of my building, a box so large and imposing I was embarrassed to discover my name on the label. On the return portion, an unfamiliar address. I stood there for a long time staring at it, deliberating, as though there were anything else to do but the obvious thing. It took all my strength to drag it up the stairs. I paused once on the landing, considered abandoning it there, then continued hauling it up to my apartment on the third floor, where I used my keys to cut it open. Inside the box was a smaller box, and inside the smaller box, beneath lavish folds of bubble wrap, was a sleek plastic pod. I opened the clasp: inside, lying prone, was a small white dog.

I could not believe it. How long had it been since I’d submitted the request on Sony’s website? I’d explained that I was a journalist who wrote about technology—this was tangentially true—and while I could not afford the Aibo’s $3,000 price tag, I was eager to interact with it for research. I added, risking sentimentality, that my husband and I had always wanted a dog, but we lived in a building that did not permit pets. It seemed unlikely that anyone was actually reading these inquiries. Before submitting the electronic form, I was made to confirm that I myself was not a robot.

The dog was heavier than it looked. I lifted it out of the pod, placed it on the floor, and found the tiny power button on the back of its neck. The limbs came to life first. It stood, stretched, and yawned. Its eyes blinked open—pixelated, blue—and looked into mine. He shook his head, as though sloughing off a long sleep, then crouched, shoving his hindquarters in the air, and barked. I tentatively scratched his forehead. His ears lifted, his pupils dilated, and he cocked his head, leaning into my hand. When I stopped, he nuzzled my palm, urging me to go on.

I had not expected him to be so lifelike. The videos I’d watched online had not accounted for this responsiveness, an eagerness for touch that I had only ever witnessed in living things. When I petted him across the long sensor strip of his back, I could feel a gentle mechanical purr beneath the surface. I thought of the horse Martin Buber once wrote about visiting as a child on his grandparents’ estate, his recollection of “the element of vitality” as he petted the horse’s mane and the feeling that he was in the presence of something completely other—“something that was not I, was certainly not akin to me”—but that was drawing him into dialogue with it. Such experiences with animals, he believed, approached “the threshold of mutuality.”

I spent the afternoon reading the instruction booklet while Aibo wandered around the apartment, occasionally circling back and urging me to play. He came with a pink ball that he nosed around the living room, and when I threw it, he would run to retrieve it. Aibo had sensors all over his body, so he knew when he was being petted, plus cameras that helped him learn and navigate the layout of the apartment, and microphones that let him hear voice commands. This sensory input was then processed by facial recognition software and deep-learning algorithms that allowed the dog to interpret vocal commands, differentiate between members of the household, and adapt to the temperament of its owners. According to the product website, all of this meant that the dog had “real emotions and instinct”—a claim that was apparently too ontologically thorny to have drawn the censure of the Federal Trade Commission.

Descartes believed that all animals were machines. Their bodies were governed by the same laws as inanimate matter; their muscles and tendons were like engines and springs. In Discourse on Method, he argues that it would be possible to create a mechanical monkey that could pass as a real, biological monkey. “If any such machine had the organs and outward shape of a monkey,” he writes, “or of some other animal that lacks reason, we should have no means of knowing that they did not possess entirely the same nature as these animals.”

He insisted that the same feat would not work with humans. A machine might fool us into thinking it was an animal, but a humanoid automaton could never fool us, because it would clearly lack reason—an immaterial quality he believed stemmed from the soul. For centuries the soul was believed to be the seat of consciousness, the part of us that is capable of self-awareness and higher thought. Descartes described the soul as “something extremely rare and subtle like a wind, a flame, or an ether.” In Greek and in Hebrew, the word means “breath,” an allusion perhaps to the many creation myths that imagine the gods breathing life into the first human. It’s no wonder we’ve come to see the mind as elusive: it was staked on something so insubstantial.

It is meaningless to speak of the soul in the twenty-first century (it is treacherous even to speak of the self). It has become a dead metaphor, one of those words that survive in language long after a culture has lost faith in the concept, like an empty carapace that remains intact years after its animating organism has died. The soul is something you can sell, if you are willing to demean yourself in some way for profit or fame, or bare by disclosing an intimate facet of your life. It can be crushed by tedious jobs, depressing landscapes, and awful music. All of this is voiced unthinkingly by people who believe, if pressed, that human life is animated by nothing more mystical or supernatural than the firing of neurons—though I wonder sometimes why we have not yet discovered a more apt replacement, whether the word’s persistence betrays a deeper reluctance.

I believed in the soul longer, and more literally, than most people do in our day and age. At the fundamentalist college where I studied theology, I had pinned above my desk Gerard Manley Hopkins’s poem “God’s Grandeur,” which imagines the world illuminated from within by the divine spirit. The world is charged with the grandeur of God. To live in such a world is to see all things as sacred. It is to believe that the universe is guided by an eternal order, that each and every object has purpose and telos. I believed for many years—well into adulthood—that I was part of this illuminated order, that I possessed an immortal soul that would one day be reunited with God. It was a small school in the middle of a large city, and I would sometimes walk the streets of downtown, trying to perceive this divine light in each person, as C. S. Lewis once advised. I was not aware at the time, I don’t think, that this was a basically medieval worldview. My theology courses were devoted to the kinds of questions that have not been taken seriously since the days of Scholastic philosophy: How is the soul connected to the body? Does God’s sovereignty leave any room for free will? What is our relationship as humans to the rest of the created order?

But I no longer believe in God. I have not for some time. I now live with the rest of modernity in a world that is “disenchanted.” The word is often attributed to Max Weber, who argued that before the Enlightenment and Western secularization, the world was “a great enchanted garden,” a place much like the illuminated world described by Hopkins. In the enchanted world, faith was not opposed to knowledge, nor myth to reason. The realms of spirit and matter were porous and not easily distinguishable from one another. Then came the dawn of modern science, which turned the world into a subject of investigation. Nature was no longer a source of wonder but a force to be mastered, a system to be figured out. At its root, disenchantment describes the fact that everything in modern life, from our minds to the rotation of the planets, can be reduced to the causal mechanism of physical laws. In place of the pneuma, the spirit-force that once infused and unified all living things, we are now left with an empty carapace of gears and levers—or, as Weber put it, “the mechanism of a world robbed of gods.”

If modernity has an origin story, this is our foundational myth, one that hinges, like the old myths, on the curse of knowledge and exile from the garden. It is tempting at times to see my own loss of faith in terms of this story, to believe that the religious life I left behind was richer and more satisfying than the materialism I subscribe to today. It’s true that I have come to see myself more or less as a machine. When I try to visualize some inner essence—the processes by which I make decisions or come up with ideas—I envision something like a circuit board, one of those images you often see where the neocortex is reduced to a grid and the neurons replaced by computer chips, such that it looks like some kind of mad decision tree.

But I am wary of nostalgia and wishful thinking. I spent too much of my life immersed in the dream world. To discover truth, it is necessary to work within the metaphors of our own time, which are for the most part technological. Today artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality. These are old problems, and although they now appear in different guises and go by different names, they persist in conversations about digital technologies much like those dead metaphors that still lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems.

The dog arrived during a time when my life was largely solitary. My husband was traveling more than usual that spring, and except for the classes I taught at the university, I spent most of my time alone. My communication with the dog—which was limited at first to the standard voice commands but grew over time into the idle, anthropomorphizing chatter of a pet owner—was often the only occasion on a given day that I heard my own voice. “What are you looking at?” I’d ask after discovering him transfixed at the window. “What do you want?” I cooed when he barked at the foot of my chair, trying to draw my attention away from the computer. I have been known to knock friends of mine for speaking this way to their pets, as though the animals could understand them. But Aibo came equipped with language-processing software and could recognize over one hundred words; didn’t that mean that, in a way, he “understood”?

It’s hard to say why exactly I requested the dog. I am not the kind of person who buys up all the latest gadgets, and my feelings about real, biological dogs are mostly ambivalent. At the time I reasoned that I was curious about its internal technology. Aibo’s sensory perception systems rely on neural networks, a technology that is loosely modeled on the brain and is used for all kinds of recognition and prediction tasks. Facebook uses neural networks to identify people in photos; Alexa employs them to interpret voice commands. Google Translate uses them to convert French into Farsi. Unlike classical artificial intelligence systems, which are programmed with detailed rules and instructions, neural networks develop their own strategies based on the examples they’re fed—a process that is called “training.” If you want to train a network to recognize a photo of a cat, for instance, you feed it tons upon tons of random photos, each one attached with positive or negative reinforcement: positive feedback for cats, negative feedback for noncats. The network will use probabilistic techniques to make “guesses” about what it’s seeing in each photo (cat or noncat), and these guesses, with the help of the feedback, will gradually become more accurate. The networks essentially evolve their own internal model of a cat and fine-tune their performance as they go.
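To make the training loop described above concrete, here is a minimal sketch in Python of a single artificial "neuron" learning to tell cats from noncats by adjusting its weights in response to positive and negative feedback. The feature values and labels are invented for illustration; this is the general idea of network training, not Sony's actual recognition software, which is far larger and learns from raw images.

```python
import math

# Invented stand-in "photos": a few numeric features per example plus a
# label (1 = cat, 0 = noncat). A real network would learn from pixels.
EXAMPLES = [
    ([0.9, 0.8, 0.1], 1),  # cat
    ([0.8, 0.9, 0.2], 1),  # cat
    ([0.1, 0.2, 0.9], 0),  # noncat
    ([0.2, 0.1, 0.8], 0),  # noncat
]

def guess(weights, bias, features):
    """The network's probabilistic 'guess' that an example is a cat."""
    s = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-s))  # squash to a 0..1 probability

def train(examples, epochs=2000, rate=0.5):
    """Nudge the weights after every guess, using the feedback signal."""
    weights, bias = [0.0] * len(examples[0][0]), 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Positive feedback when the guess was too low for a cat,
            # negative when it was too high for a noncat.
            error = label - guess(weights, bias, features)
            for i, x in enumerate(features):
                weights[i] += rate * error * x  # reinforce useful features
            bias += rate * error
    return weights, bias

if __name__ == "__main__":
    w, b = train(EXAMPLES)
    for features, label in EXAMPLES:
        print(f"label={label}  guess={guess(w, b, features):.2f}")
```

In this toy version, the network's "internal model of a cat" is nothing more than the learned weights, which drift toward values that make its guesses match the feedback.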

Dogs too respond to reinforcement learning, so training Aibo was more or less like training a real dog. The instruction booklet told me to give him consistent verbal and tactile feedback. If he obeyed a voice command—to sit, stay, or roll over—I was supposed to scratch his head and say, “Good dog.” If he disobeyed, I had to strike him across his backside and say, “No,” or “Bad Aibo.” But I found myself reluctant to discipline him. The first time I struck him, when he refused to go to his bed, he cowered a little and let out a whimper. I knew of course that this was a programmed response—but then again, aren’t emotions in biological creatures just algorithms programmed by evolution?

Animism was built into the design. It is impossible to pet an object and address it verbally without coming to regard it in some sense as sentient. We are capable of attributing life to objects that are far less convincing. David Hume once remarked upon “the universal tendency among mankind to conceive of all beings like themselves,” an adage we prove every time we kick a malfunctioning appliance or christen our car with a human name. “Our brains can’t fundamentally distinguish between interacting with people and interacting with devices,” writes Clifford Nass, a Stanford professor of communication who has written about the attachments people develop with technology. “We will ‘protect’ a computer’s feelings, feel flattered by a brownnosing piece of software, and even do favors for technology that has been ‘nice’ to us.”

As artificial intelligence becomes increasingly social, these mistakes are becoming harder to avoid. A few months earlier, I’d read an op-ed in Wired magazine in which a woman confessed to the sadistic pleasure she got from yelling at Alexa, the personified home assistant. She called the machine names when it played the wrong radio station, rolled her eyes when Alexa failed to respond to her commands. Sometimes, when the robot misunderstood a question, she and her husband would gang up and berate it together, a kind of perverse bonding ritual that united them against a common enemy. All of this was presented as good American fun. “I bought this goddamned robot,” the author wrote, “to serve my whims, because it has no heart and it has no brain and it has no parents and it doesn’t eat and it doesn’t judge me or care either way.”

Then one day the woman realized that her toddler was watching her unleash this verbal fury. She worried that her behavior toward the robot was affecting her child. Then she considered what it was doing to her own psyche—to her soul, so to speak. What did it mean, she asked, that she had grown inured to casually dehumanizing this thing?

This was her word: “dehumanizing.” Earlier in the article she had called it a robot. Somewhere in the process of questioning her treatment of the device—in questioning her own humanity—she had decided, if only subconsciously, to grant it personhood.

Awards

  • FINALIST | 2022
    Los Angeles Times Book Prize

Author

MEGHAN O'GIEBLYN is the author of the essay collection Interior States, which was published to wide acclaim and won the Believer Book Award for Nonfiction. Her writing has received three Pushcart Prizes and appeared in The Best American Essays anthology. She writes essays and features for Harper's Magazine, The New Yorker, The Guardian, Wired, The New York Times, and elsewhere. She lives with her husband in Madison, Wisconsin.