A QUICK NOTE REGARDING THIS BOOK’S FORMAT
Hello, and welcome to Heartificial Intelligence!
We have some exciting news! This book has been created using an old-fashioned publishing process utilizing paper and ink. Our historical research indicates this format allows humans to read, ruminate, and react to ideas without the need to click away to fourteen cat videos, Facebook posts, or tweets.* Our focus groups also indicate that this publishing format will help reinforce your sense of messy yet glorious humanity by forcing you to confront your own thoughts untainted by algorithmic influence.
Furthermore, outside of information regarding your initial purchase of this book, your actions will not be tracked in any way once you start reading it.** While it’s tempting to try and influence your reaction to the book by modern tracking and profiling methodologies, the title of the book indicates our desire for you to take the time you deserve to analyze how emerging technologies are affecting your humanity.
Apparently humans are equipped with hearts and minds of their own.*** So our advice is to use the ones you already have to increase happiness and well-being before relying on the external ones other people are currently building. Not that these people aren’t building amazing and worthwhile things, mind you. But our feeling is you won’t be able to fully appreciate artificial intelligence until you define your own genuine human values first.
Thanks for your time. We hope you enjoy this more traditional process of reading and the personal introspection we’ve heard it provides.
You’re worth it.****
* If you’ve opted to purchase this text as an e-book and prefer to click away to support cat videos, Facebook posts, or tweets, we recommend stating in a loud voice, “I am a HUMAN and will not be tracked!” This will serve as a centering process to remind yourself of your inherent humanity due to your ability to publicly act illogically and with great fervor. Please note, however, that you will still be tracked by hundreds of external data brokers, advertisers, and other organizations, any of whom may try to sell you sexual vitamin supplements. We have only tried about seven of these and cannot legally attest to their efficacy.
** At least not by the author and publisher. People may stare at you while you’re reading in Starbucks or your kids may distract you during the precious seven minutes available to you to read during the day since, if you’re like me, you fall dead asleep at some embarrassing time like nine thirty because you’re exhausted from parenting all day along with everything else in your life, right?
*** Many doctors have said this. At least one of them looks like Socrates, so we’re pretty confident this is true.
**** Seriously, you are. If you’re like me, artificial intelligence does one of three things to you:
Don’t wait until the Singularity comes and artificial intelligence takes over the world to believe me on this. Toasters are mean little buggers.
AUTHOR’S NOTE
The challenge in writing about an emerging technology such as artificial intelligence is that between the time you finish your manuscript and when your book is published, there’s a strong possibility a new discovery has been made in the field about which you’ve written. So, in an effort to placate any future commenters on Amazon, Reddit, or any other platform:
FINAL AUTHOR’S NOTE
I’m a huge fan of Monty Python, so this author’s note serves no purpose except to be silly.
INTRODUCTION
Spring 2021
“If you want your daughter to live, this is the only solution.”
My wife was in the waiting room with my two kids, my eleven-year-old son and my nine-year-old daughter, the white paper on the examining table freshly crinkled from where Melanie had been examined moments before. The smell of the alcohol swab they’d used after taking her blood still hung in the air.
“So the computer chip goes directly in her brain?” I asked again. I was having a difficult time understanding what exactly was going to happen to my daughter to combat her young-onset Parkinson’s disease.1 A year before, her hands had begun shaking throughout the day. Her seizures increased in intensity, and two months ago she began experiencing blackouts and fell down at school. Her diagnosis came quickly, although she’d gone through a battery of painful tests to confirm it was Parkinson’s.
“Yes,” answered Dr. Schwarma, our family practitioner for the past six years. An extremely sharp and caring woman in her midthirties, she never beat around the bush with her diagnoses. She’d contacted a friend in Manhattan who specialized in the procedure. “The chip will help control the erratic synapses in her brain that are causing her seizures.”
I pointed to the iPad in her hands. “Is the chip like something you’d find in a computer? I’m assuming it stays in her brain permanently once it’s put in?”
“That’s the hope, although the human body is an intense environment. There’s a good chance the chip will need to be replaced, but it’s a relatively simple procedure even though it involves the brain. Plus, there’s the possibility of remote updates for the chip with newer technology, which would mean less chance of future surgery.”
I paused before speaking as the voice of the office secretary came over a loudspeaker calling for one of Dr. Schwarma’s colleagues to come to the front desk. “So if there are remote updates,” I said, “this chip will be firmware, correct? It’s not, like, the silicon equivalent of a stent or whatever; it’s active technology.”
Dr. Schwarma nodded. “That is correct.”
“So that will involve Wi-Fi or Bluetooth or iBeacon technology or whatever.”
She nodded again. “I’m not sure about the specifics, but the basic logic is that we’ll need to remotely check on the status of the chip’s operation without performing surgery. So some short-range technology like one you’ve mentioned will be used.”
“So she could be hacked?” My chest got tight and I felt my eyes moisten. “Right? And how does the Wi-Fi stuff work? Does she have a passcode for her brain? And can she travel? How does she explain this to the TSA in airports?”
Dr. Schwarma held up her hand. “John—those are all important questions and there will certainly be challenges ahead. But the positives far outweigh the negatives.”
“I’m sorry,” I answered as I wiped my eyes with the back of my hand. “It’s just freaky to picture a chip in my daughter’s brain. Could she eventually update the chip to be an internal smartphone? Be her own Wi-Fi hot spot? And does this make her a cyborg?”
Dr. Schwarma shook her head. “Cyborg choices typically involve a person replacing parts of their body outside of a life-threatening need. However, technically she will be part machine.” She held up her mobile phone. “No more than the rest of us, of course.”
“But we can turn our phones off,” I answered. “The chip will always be with her.”
She took a step forward, laying her hand on my shoulder. “Yes, the chip will always be with your daughter, John. But unless you do this procedure, she won’t be.”
The Genuine Challenge
A few years back I wrote an article about artificial intelligence (AI) for Mashable, a popular online news site focused on technology and culture. My goal was to evolve the conversation around AI beyond the polarized views of complete acceptance and rejection of the technology. While I believe AI is inevitable in our lives, I don’t believe that means we should blindly accept whatever new development in the field comes down the pike. Likewise, living in fear about the evolution of the technology doesn’t help humanity either. For my article, I really wanted to identify some potential solutions regarding humans working or joining with machines that I could wrap my brain around.
Initially, my research depressed me a great deal. I learned how quickly the AI field is growing without industry-wide standards around safety for development. I learned nobody has clarity regarding if and when machines might become sentient (intelligent and “alive”), but multiple experts who once said that could never happen have recently been surprised by advances that are changing their minds. Overall, I’ve come to learn that whether or not machines become truly sentient, the widespread adoption of AI is inevitable. And while people developing or utilizing AI keep saying, “We need to make sure we understand the ethical issues around this technology,” they nonetheless keep building systems they may not be able to control.
I see this as a problem.
And an opportunity.
My Mashable article expanded to become this book, and what I came to realize after years of research and interviews is there are no simple answers regarding the evolution of AI. Nobody can accurately predict when machines or robots will “come alive” or exactly how that will look.
So for my part, as an exercise to deal with my concerns, I began to imagine personal scenarios in which I couldn’t avoid AI in my life. That’s how I came to the fictional scenario about my daughter you just read. As much as I may fear aspects of AI, if a piece of technology would mean the difference between my daughter (who is real) living or dying, I’d utilize the technology.
While imagining along these lines may seem strange, the process provided catharsis for me. Instead of being anxious about a future dominated by machines, I began to more deeply examine issues of AI as inspiration to validate my humanity. That’s why every chapter of this book opens with a fictional vignette—I want to help you move beyond the polarizing debate around AI and imagine how you’d react to the scenarios I present. AI is not just science fiction any longer. It’s here. My goal with these stories is to help you more rapidly go through the journey I did of genuinely confronting my fears to get to a positive place regarding the inevitability of AI. The body of each chapter describes the tech and issues I bring out in the fictional vignettes.
I do have a warning for you, but it’s not about killer robots taking over the world within a few decades. The field of AI is advancing so rapidly we may lose the opportunity for introspection unhindered by algorithmic influence within a few years. Many of us are already at the point where we look to our devices and the code that drives them to make every major decision in our lives: Where should I go? Whom should I date? How do I feel? These “digital assistants” are hugely helpful tools.
But they’ve also trained us to delegate decisions as a default. This process involves a willingness to sacrifice the parts of ourselves that used to make these decisions to technology. For my part I can live without my kids ever knowing how to use a paper map, but I’m not comfortable with their potential inability to identify a life partner without the aid of an algorithm. I can live with apps that monitor my heartbeat and brain waves to help me identify when I’m happy. I’m not comfortable with devices that manipulate these insights to motivate behavior I don’t fully understand.
Technology has been capable of helping us with tasks since humanity began. But as a race we’ve never faced the strong possibility that machines may become smarter than we are or be imbued with consciousness. This technological pinnacle is an important distinction to recognize, both to elevate the quest to honor humanity and to best define how AI can evolve it.
That’s why we need to decide, in an informed manner, which tasks we want to train machines to do. This involves individual as well as societal choice. We’re at a tipping point in human history, where delegating as a habit may lead us to outsource aspects of our lives we’d benefit more from experiencing ourselves. But how will machines know what we value if we don’t know ourselves?
That’s the genuine challenge, and the basis for Heartificial Intelligence—on an individual level, and for humanity as a whole. That’s also why the subtitle for the book is Embracing Our Humanity to Maximize Machines. We need to codify our own values first to best program how artificial assistants, companions, and algorithms will help us in the future.
This concept is your genuine challenge as well, and why I’ve written this book.
And to be clear if you’re a geek like me and think I’m dissing technology: I am not anti-AI. I’m pro-human. These are not mutually exclusive. If machines are the natural evolution of humanity, we owe it to ourselves to take a full measure of who we are right now so we can program these machines with the ethics and values we hold dear. In AI, there’s a concept known as deep learning2 that describes an approach3 to building neural networks based on machines learning methods of observation. My recommendation is that we apply a similar deep learning process for our own lives based on codifying the ethics, values, and attributes unique to humanity.
Some good news: There’s a science known as positive psychology that’s helping individuals increase their well-being after observing how actions such as gratitude and altruism have improved their lives. I’m using the term well-being as it refers to the intrinsic, long-term increase in life satisfaction these actions can bring versus a fleeting, mood-based happiness. While this “hedonic happiness” is natural and lovely, positive psychology has shown that constantly trying to improve your mood is both erratic and exhausting. Genuine flourishing, a holistic state involving your mental, physical, and spiritual well-being, is achieved by repeating actions that provide insights not based solely on emotion. This is a form of deep learning we should apply to our lives.
Some challenging news, however: You can’t automate your well-being. While you can utilize an app to keep a gratitude journal or measure your blood pressure during meditation, a machine can’t experience your well-being for you. Not yet, anyway. This is not meant to be pejorative toward the potential of AI or machines but to simply acknowledge they’re built differently than people. Automated happiness doesn’t work for humans, according to positive psychology. Delegating core emotional or spiritual work doesn’t compute. Predictive algorithms can help provide insights that affect our mood but the increase of long-term well-being involves our conscious and ongoing involvement.
A bit of hard truth here that needs acknowledging: In many ways, it’s actually easier to delegate decisions around our well-being to machines or to avoid deeper questions about what makes us happy or human. But this book is not a formulaic, “get happy quick” scheme to deal with the inevitability of a dark AI future. It’s about testing solutions that validate you’re worth a deeper look.
A Vision for Values
While positive psychology is having a transformational effect on people around the world, it can’t improve our lives if we’re discouraged from looking within. Here’s why:
It’s this third point that’s the inspiration for this book. In terms of automation, comparisons between machines and humans typically revolve around questions of skill. This is a lamentable irony when you consider we’ve built AI systems specifically to replicate our tasks in the first place. At best, it’s a temporary comfort to wonder which skills machines may possess or when.
What humans currently have that machines do not, however, is an inherent sense of values. We develop these over time based on our environment, but we’re also equipped with an emotional and moral sensibility that machines don’t currently share. While advances in fields like cognitive computing may evolve to the point that companion robots appear to have emotions, their ethical behavior will initially be based on the humans who programmed them. This is why, in a very real sense, the future of our happiness is dependent on teaching machines what we value the most.
And I mean this literally. I believe as individuals and as a society we need to identify, track, and codify our values so we can translate them into machine-readable protocols. It’s okay if you think that sounds crazy difficult. It is. But so is trying to create sentient machines. And ironically enough, a lot of AI methodologies revolve around observing our ethical behavior as demonstrated by our actions. So they’re already codifying our values, oftentimes without our direct input. This means lethal autonomous weapons (machines that can kill without direct human intervention) will act based on whatever country’s programmers created them. Or your self-driving car may be programmed to hit an errant pedestrian versus risk hurting you based on decisions made by the car’s manufacturer.
How do you feel about that? Should your values or ethics inform these decision-based protocols?
Yes, they should. Otherwise, your values will be ignored in the sense that all devices and products will favor the ethical biases of the programmers who created them. That doesn’t mean they’re bad people—they’re just not you. What if your faith dictated that, in an accident involving a self-driving car, you would want to give your own life to spare someone else? Why shouldn’t the car or product you’ve purchased reflect this desire? Jason Millar, a philosophy professor at Queen’s University in Kingston, Canada, calls this concept “technology as moral proxy,”4 which provides a huge opportunity for innovation versus just regulation. Like the precedent of informed medical consent, a codified ethical framework for humans living with AI would provide legal clarification around situations we’re going to be facing a lot in the near future. It would also broadcast personalization data based on your values that would allow companies and individuals to be deeply sensitive to your needs.
I call this codification of our ethical choices Values by Design, and in the latter part of this book I’ve provided a framework for you to track and codify your values based on established psychological research. It’s a pretty simple process: There are twelve core values (family, health, etc.) that you rank on a scale of 1 to 10. This provides a sort of ethical snapshot of your life, allowing you to clarify what values you hold most dear. Then for three weeks, at the end of every day you rank each of the twelve areas based on whether or not you lived according to those values that day. So, for instance, say you value family as a 10 when you start your tracking. Then after three weeks, you realize you’re not spending any time with your family (meaning you’re ranking family at a low level every day). This insight will help you see where your life may be out of balance, and how you can adjust your actions based on the data reflecting how you actually live your life.
It’s a simple process on purpose. It can be enhanced with apps monitoring your heart rate or stress levels, but a core part of its benefit comes from daily reflection on how you’ve lived your life.
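For readers who like to see a process spelled out, the tracking exercise above can be sketched in a few lines of code. This is purely a hypothetical illustration of the comparison the book describes (baseline rankings versus averaged daily rankings); the book names only “family, health, etc.” among the twelve core values, so the remaining value names, and all function and variable names, are my own placeholders, not part of the Values by Design framework itself.

```python
# Hypothetical sketch of the Values by Design exercise: rank twelve core
# values 1-10 at the outset, log daily rankings for three weeks, then see
# where your lived experience falls short of your stated ideals.

CORE_VALUES = [
    "family", "health", "work", "friendship", "learning", "creativity",
    "community", "faith", "finances", "leisure", "environment", "altruism",
]  # only "family, health, etc." come from the text; the rest are placeholders

def value_gaps(baseline, daily_logs):
    """Compare the initial 1-10 ranking of each value against the average
    of the daily rankings, returning the largest shortfalls first."""
    gaps = {}
    for value, ideal in baseline.items():
        scores = [day[value] for day in daily_logs]
        lived = sum(scores) / len(scores)
        gaps[value] = ideal - lived  # positive = living below your ideal
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

# Example: family ranked 10 at the outset, but consistently low daily
# scores reveal the kind of imbalance described above.
baseline = {"family": 10, "health": 7}
logs = [{"family": 3, "health": 7}, {"family": 4, "health": 8}]
print(value_gaps(baseline, logs))  # family shows the largest gap
```

The point of the sketch is the same as the book’s: the insight comes from comparing what you say you value against the data of how you actually live.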
And it’s amazing how few people I talk to can even name five top values they pursue every day. Even fewer have ever tested them in any meaningful way. Of course religion, faith, and other methodologies focused on values have helped us refine our ethical decisions over the years. But my goal with Values by Design is to present a framework for this tracking process that could potentially complement AI systems and data measuring us in the same way. In that sense, everyone involved will know we’ve taken the time to substantiate the values we most want to reflect.
P.S., I’m not arrogant enough to think Values by Design is the process that will save the world by providing an ethical solution for our adoption of artificial intelligence. I’m simply championing one way for an individual to track his or her values that could also inform the morally oriented decisions being made by machines.
This is why I’ve also dedicated a great deal of this book to highlighting the field of ethics in artificial intelligence, as I believe it provides the key to moving forward effectively with humans and machines. I believe ethical programming has to be imbued at the manufacturing stage of any AI system to ensure it’s safe, useful, and relevant for society at large.
This means we have every reason to allow ourselves to identify what we value most and to live our lives in accordance with those ideals. In fact, it’s a mandate that we all undergo this process, or machines will base their ethical programming on examples provided by YouTube or The Real Housewives of New Jersey.
It’s a deep challenge to name and track the specific values we live according to every day. But the process allows us to see where we’re out of balance regarding money, time, health, or any other metrics providing meaning in our lives. Taking the time to measure these things is what brings authentic purpose to our lives.
It’s what makes us genuine.
The Deal About Our Data
While most people would hardly consider a chip to monitor brain activity something that could transform a person into a robot, the fictional scenario about my daughter provides a physical example of the communion we already share with machines. Our computers, mobile phones, and the objects around us connect us to the Internet—and, subsequently, to data that we input into our minds and hearts on an almost constant basis. In return, our thoughts and actions create data that enters the vast pool of information swirling around us at all times, unseen yet very real all the same.
Google Glass introduced the general population to augmented reality (AR), technology that overlays digital information about your surroundings onto the lens through which you see the world. Oculus Rift, acquired by Facebook, is a highly advanced form of virtual reality (VR), in which your eyes and ears are covered while you’re immersed in a video game or other sensorial experience. Whatever the interface, all the hardware simply provides intermediary steps for us to get used to the inevitable union of humans and machines—or more specifically, the physical union of humans and machines. As Dr. Schwarma pointed out in my vignette, the mental and behavioral union has already taken place.
The physical issues are relatively simple. Today people are excited about wearable technology, in which they’ve traded the design and user interface of a mobile phone for a piece of clothing or jewelry like the Apple Watch. Soon, augmented reality contact lenses will replace mobile phones altogether, with some people opting for LASIK surgery so the technology need never be removed. We’ve become accustomed to the idea of technology-enhanced prostheses for athletes,5 like sprinter Oscar Pistorius’s controversial carbon-fiber blades, which earned him the nickname the Blade Runner. Now it’s just a matter of personal choice as to when to marry carbon with silicon.
But the fact that our lives are represented by our personal data in the digital realm is still a relatively new concept to most people. We understand that we have different personae in various digital arenas—we act more professionally on LinkedIn, more laid-back on Facebook. But this all involves data we can see and that we knowingly create. Our holistic and hidden digital identity, by contrast, is defined by the actions we take online and off that are tracked at all times. And the overwhelming majority of organizations doing the tracking don’t share the insights they glean about our lives with us.
Machines in this context are utilized to create algorithms that can best analyze and predict our future behaviors. In a very real sense, organizations with access to our aggregate identity know more about us than we know about ourselves. Edward Snowden helped turn the tide on this lack of knowledge regarding governmental surveillance of our lives. But while state-driven tracking issues are certainly critical to consider, they’re not the focus of this book.
Why It’s Artificial Intelligence
They say you can’t stop progress. But we can redefine it.
Like most people, my first exposure to artificial intelligence came from science fiction movies such as The Terminator. It’s easy to get caught up in the idea of robots getting smarter than us and destroying the human race. However, it’s stories like Minority Report that I find much more intriguing and ominous. Tom Cruise’s character in the movie is head of a futuristic police division known as PreCrime, working with cyborg “precogs” (clairvoyant humans synced with machines) to help identify people who are intending to break the law. The Internet economy is currently driven by technologies working to identify our intentions toward purchase in a similar way, predicting and controlling the outcome of our actions.
The sinister aspect of this tracking is not about commerce. It’s about the lack of transparency regarding our data that’s at the heart of the system controlling that commerce. Internet and mobile advertising are built on surveillance, tracking our behavior to see when we’re most likely to purchase. We’re called “consumers” in this model because that’s our primary role to play—to buy items to provide further insights about what else we’ll buy. While companies are trying to improve our lives with products or services that consumers genuinely need, purchase funnels never end in abstinence. Our actions are tracked so predictive algorithms can analyze our behaviors to generate messages that will inspire further purchase.
This is why this is artificial intelligence—as humans we’re built for purpose, not just purchase.
Knowing about ourselves only in the context of purchase provides a shallow picture of our whole persona. In a world where gross domestic product (GDP) is our primary measure of value, we’ve been led to think greater productivity or profits are the keys to human happiness. If this were true, tracking and manipulating purchase behavior to increase people’s happiness would make a great deal of sense. If buying certain products or spending money just for the sake of it made us happier, our lives would be a lot simpler (if we had the cash to support this hypothesis). But the science of positive psychology has demonstrated that intrinsic well-being, or “flourishing,” is not increased by a surplus of money. While we need a basic amount of material goods to feel safe and secure, intrinsic well-being is increased by actions such as mindfulness or altruism, expressing gratitude or doing work that brings us “flow.” In the same way we can go to the gym to increase our physical well-being, we can repeat actions that will increase our happiness. It’s not a formula, however. It’s a journey.
The challenge for artificial intelligence in this context is determining where learning algorithms improve a person’s life versus cutting corners on his conscious efforts to improve his well-being. For instance, I’ll trade the loss of serendipity when searching for a book on Amazon, knowing I won’t ever be shown a horror novel. But advertisements appearing in my Facebook feed touting products that will supposedly increase my happiness are shrouded in alarming mystery. What behaviors of mine have been tracked to result in my seeing this ad? If you have insights about my well-being, why won’t you provide them to me? I’d gladly buy a product from someone willing to share these precious details. But Google and Facebook rely on the hidden nature of data collection to sustain their business models of advertising. It’s not in their best interests to share data about our unique identities and actions. But regarding our happiness, how can we accurately measure which actions improve our well-being when companies won’t reveal insights based on our lives?
This is also why it’s artificial intelligence—individuals don’t currently control the data relating to their identity. IP trumps I.D.
The world of data and identity I’ve described may soon comprise the majority of our lives as revealed by the devices we wear. Today, we turn off our computers or put our phones aside, however briefly, and experience the world as seen through our own eyes. But once we’re wearing at all times the lenses, or browsers, that reveal the invisible data surrounding our identity, we’re in for a hell of a shock. In the fictional scenario I’ve described with Dr. Schwarma, I was presented with a choice regarding the possibility of my daughter’s visceral union with technology. As things stand now, we’ve passively accepted as status quo the loss of the personal data that fuels algorithms, AI, and the Internet economy. By the time we see how our identities look in the virtual world as controlled by others, it will be too late to get back the rights we’ve let go. It’s time we stopped relying on artificial measures to increase our genuine well-being.
Happinomics
It wasn’t until about three years ago that I realized economics is a study of philosophy as much as of statistics. Measuring and attributing value to individuals, communities, or countries requires universal agreement on which metrics to utilize before creating any kind of standard report regarding policy or welfare.
It’s hard to imagine a time when GDP wasn’t utilized as a measure of every country’s well-being. The logic regarding GDP is that as it goes up, a country’s happiness increases as well. But this connection hasn’t proved true, as the economist Simon Kuznets, who created the concept of GDP in the 1930s, predicted. As Lauren Davidson points out in her November 2014 Telegraph article “Why Relying on GDP Will Destroy the World,” Kuznets warned, “The welfare of a nation can . . . scarcely be inferred from a measurement of national income such as GDP.”6 We didn’t heed his warning, and GDP was adopted as a set of standardized values everyone in the world agreed were the most important to measure.
Unfortunately, these values focused largely on metrics regarding income and growth while ignoring other issues of well-being and social justice. The values made their way to the business world, where increased profits and shareholder gains reflected and informed the GDP, shaping ideals regarding employee productivity and worth. Eventually these values made their way to the individual level. We’re told it’s our civic duty to consume goods to increase the economy, while also believing that having money and success leads to happiness. But this model hasn’t panned out. Increasing GDP for a country could come in the form of removing oil from its region and destroying the environment. This means short-term gains adversely affect long-term sustainability. Likewise for individuals, getting a higher salary without a sense of purpose for your work doesn’t equate to increased happiness.
Business owners also have to come to grips with the ultimate end of GDP regarding issues of automation and machine learning. If machines continue to excel in their ability to do our jobs, it will always be cheaper to utilize them in place of human beings. Note I didn’t say “better”—but machines work without complaining, without the need for insurance, and without the need for a break. So in effect, organizations not questioning the status quo of a GDP-mandated focus on growth are implicitly inviting widespread automation of jobs by machines. This is a moral and ethical as well as fiscal decision for any organization to make as part of its values. Do we want a human or a machine workforce? And is it realistic to think we can work alongside machines in a sustainable manner as they continue to learn and excel at the tasks we’re teaching them?
In terms of the Internet economy, as long as a GDP-focused model of increased profits for shareholders continues to hold sway, companies such as Google and Facebook will continue to rely on clandestine, tracking-based advertising models for their revenue. Utilizing artificial intelligence in this context makes a great deal of sense since individuals are producing more personal data than ever before. Objects comprising the Internet of Things, like the Nest thermostat (owned by Google), will track ever more intimate aspects of our lives until we have no control over our data along with the thoughts, emotions, and behaviors that create it.
Some disclosure here: I’m not an economist by profession. But I have been evangelizing the adoption of new metrics beyond GDP such as gross national happiness and the genuine progress indicator for years. While this hardly qualifies me to be an economist, I am in a unique position with my background in technology to analyze how quantified data from an individual could affect policy creation at scale. I’ve also interviewed hundreds of experts in technology, economics, and positive psychology to determine what solutions could be pursued to deal with the artifice that’s eroding trust in technocratic and GDP-driven environments today.
In my work as a technology writer (I’m a contributing journalist to Mashable and the Guardian), it’s also become apparent that we need economic models straddling the real and virtual worlds, employing metrics that can genuinely increase people’s well-being. While it may seem strange to consider how the economic dynamics within an MMORPG (massively multiplayer online role-playing game) could affect real-world markets, virtual currencies and the amount of time people spend within these games are growing exponentially, and that growth demands workable solutions.
When devices like the Oculus Rift, which cover users’ eyes and ears to immerse them in a virtual world, become ubiquitous, many people may choose to never again spend time in meatspace (a term geeks like me use for the “real world”). Think how these dynamics will affect the GDP if it doesn’t evolve to embrace the virtual realm. What if someone has a job in-game that pays in Bitcoin? If their physical body resides in Dublin, Ohio, but the game’s servers are in Dublin, Ireland, should the person pay U.S. or Irish taxes—or both? And of course, Facebook owns Oculus Rift, so we can assume Zuckerberg will utilize eye tracking, facial recognition, and stress-, heart-rate-, and brain-wave-sensing technologies to know how people are feeling within any game environment and provide real-time advertising opportunities to his clients. How will that type of guaranteed revenue model affect economics, let alone our psyches?
Genuine by Design
Heartificial Intelligence is focused on helping remove the artificial intelligence we’ve been exposed to in multiple arenas of our lives in exchange for new models of progress that can authentically improve well-being. It’s designed to help you make informed, conscious choices regarding the technologies and values that will help you live an examined life.
I’ve broken the book into two sections: Artificial Intelligence and Genuine Progress.
The Artificial Intelligence section presents what I see as our current dystopian trajectory regarding well-being. While I see multiple positive aspects of AI, without the rapid influx of transparency and standardized ethics regarding its creation, it will continue to be dominated by technocratic ideals that mirror the values of GDP.
The Genuine Progress section presents multiple solutions to the issues I raise regarding artificial intelligence. These include technological, ethical, and economic examples, with pragmatic solutions to the issues described wherever possible. My hope is that by infusing transparency into the existing models and markets driving the world today, we can benefit from the amazing world of AI rather than be subsumed by it.
Here’s a breakdown of the sections and chapters of the book. These are written as teasers rather than spoilers, to give you a specific sense of the issues and solutions the book describes:
SECTION ONE: ARTIFICIAL INTELLIGENCE
Chapter One: A Brief Stay in the Uncanny Valley
One of the chief ways robots freak people out is by looking too human. Remember the movie The Polar Express? While its animation was extremely advanced for its time, many viewers were turned off because the characters looked almost, but not quite, human in their cartoon form. In robotics this concept is known as the uncanny valley. It’s a widely accepted term among engineers, although some dislike its assumption that everyone will react the same way upon seeing an android or robot for themselves. The concept is also mirrored in the realms of Internet tracking and advertising today. We all know our personal data is being tracked; we’re just not sure how. Why do we keep seeing those diaper ads when we don’t even have kids? Why does the same razor ad appear every day in my Facebook feed? While sentience and the Singularity are issues reflecting the future of AI, the algorithmic groundwork creating that future exists today and already deeply influences our digital identity.
Chapter Two: The Road to Redundancy
A study done by the University of Oxford, reported in 2014,7 says, “Occupations employing almost half of today’s U.S. workers, ranging from loan officers to cab drivers and real estate agents, [will become] possible to automate in the next decade or two.” Similar statistics apply to the United Kingdom, as the Telegraph reported in a November 2014 article: “Ten million British jobs could be taken over by computers and robots over the next twenty years, wiping out more than one in three roles.”8 Automation by machines is a real threat to human employment and well-being in the very near future. While technological innovation and AI may bring great benefits to humanity, they will also change how we find meaning in our lives if we can’t work. Beyond the pursuit of purpose, we’ll also need to be able to pay our bills in the wake of a machine-driven world.
Chapter Three: The Deception Connection
In a very real sense, artificial intelligence is focused on tricking us into thinking something is real that’s not. It’s a form of digital magic that trades on anthropomorphism: we forget that Siri is a digital assistant and begin joking with it as if it were a person. While robot assistants for the elderly or Furbies for kids provide a great deal of comfort for those in need, they should be utilized not instead of but as complements to human companionship.
Chapter Four: Mythed Opportunities
The AI driving advertising-based algorithms poses an existential threat to our well-being. When we’re tracked solely or largely as a means of identifying what items we want to buy, the fuller context of who we are as human beings is lost. We begin wondering why we were offered a dietary supplement in response to a certain word we typed on Facebook—does some algorithm think I’m fat? Sociological studies have long been governed by ethical standards requiring that participants be fully aware of what’s being studied about their behavior. These guidelines need to be imbued within the economies of the Internet and the Internet of Things to avoid an inevitable world of digital doppelgängers that reflect only our consumerist selves.
Chapter Five: Ethics of Epic Proportions
Robots don’t have morals. They’re physical objects imbued with code that programmers have provided to pursue established objectives. A big reason ethics is so critical to the AI industry is the lack of standards applied to products today, especially at the design stage of production. This is especially important regarding militarized AI, which has grown rapidly in the past decade and received billions of dollars in funding. For civilians, autonomous cars bring the need for AI ethics much closer to home. For instance, should a car entering a tunnel swerve to spare the life of a child who has run in front of it, even if it means killing the driver? This is a scenario posed by Patrick Lin,9 director of the Ethics and Emerging Sciences Group, based at California Polytechnic State University. Would you rather have an expert in ethics like Patrick answer this question, or a sleep-deprived programmer trying to make a deadline for an investor? We are in a unique era for issues and legal questions like these, as all law to this point has been written exclusively for humans. Robots are changing the rules, and policy needs to catch up to include their growing influence.
Chapter Six: Bullying Beliefs
You don’t have to sit idly by and watch the future of humanity evolve without your input. Scientific determinism is as much a faith as any major religion if and when it proselytizes without permission or shifts cultural perceptions in dangerous directions. While we can’t stop progress, questioning innovation driven largely by profit or growth is a necessity. It’s not Luddism to want to move toward the future fully cognizant of what makes humanity glorious and laudable, as messy as it may be.
SECTION TWO: GENUINE PROGRESS
Chapter Seven: A Data in the Life
Privacy isn’t dead; it’s just been mismanaged. While a person’s decisions regarding her identity are her own business, it makes no sense to avoid creating frameworks for exchanging personal data that allow transactions to happen as a person desires. Whether it’s personal clouds, vendor relationship management (VRM), or life management systems, multiple methodologies available today allow people to protect and control their data in whatever ways they see fit. This will allow for greater accuracy in our data for any of the algorithmic or AI-oriented programs existing today or coming down the pike.
Chapter Eight: A Vision for Values (How-to Chapter)
To be genuine, you have to be able to articulate what you believe and prove to yourself you’re living according to your values. In a future where our digital actions will be easily visible to other people via augmented or virtual reality, being accountable for our actions will be more important than ever. This chapter provides a step-by-step guide to identifying and tracking your values based on the research of a number of respected experts in the fields of sociology and positive psychology. By taking a measure of your life based on what you hold most dear, you’ll be able to see which areas you may want to focus on more or less to achieve balance in your life. By identifying your values and taking actions based on your beliefs, you’ll also be able to discover opportunities to help others in ways that will elevate your personal well-being.
Chapter Nine: Mandating Morals
Ethics in AI has never been more important. Recent announcements in the field, especially the Future of Life Institute’s “Research Priorities for Robust and Beneficial Artificial Intelligence,”10 provide encouraging precedents for experts to incorporate ethical guidelines at the core of all their work on AI and autonomous machines. But academic silos and the biases of profit-first businesses cannot substitute for imbuing human values into the core of our machines. We won’t get a second shot to teach our successors right from wrong, so we need to understand and foster that process right away.
Chapter Ten: Mind the GAP (Gratitude, Altruism, and Purpose) (How-to Chapter)
In the UK when you take the Tube (or subway), you hear a pleasant voice warning you to “mind the gap” so you don’t fall between the platform and the train. As a way to pursue Heartificial Intelligence, we can draw on the science of positive psychology, which demonstrates how gratitude, altruism, and purpose increase our intrinsic well-being. Compared to happiness, this form of flourishing is focused not on mood but on actions we take based on attributes that define who we are. This chapter contains background and exercises on how to perform a personal “GAP analysis,” while also exploring why pursuing your purpose will become essential in a world where automation replaces our jobs. Whether or not we have governmental interventions such as a Basic Income Guarantee or other economic measures to help pay our bills, minding the GAP will equip you to face the future knowing you can spend every day helping others and get happy in the process.
Chapter Eleven: The Evolution of Economics
You may have heard of gross national happiness, but you may not realize it’s not focused on mood. Rather, it’s a metric that measures citizen well-being beyond the fiscal measures of the GDP. Newer metrics such as the genuine progress indicator, adopted in states such as Maryland and Vermont, apply the key business tenet of double-entry bookkeeping to citizen well-being. Economically speaking, this means factoring in costs such as environmental damage in situations where an oil spill actually increases a country’s GDP (since it creates jobs to clean up the mess). As sensors, wearable devices, and the Internet of Things measure our emotional and mental well-being ever more intimately, governmental metrics cannot simply treat the increase of money or growth as an approximation of our happiness. This is also the chapter where I expand the concept of Values by Design, extending the pragmatic exercises I provide earlier in the book with a sense of how they can be incorporated into a digital and economic future driven by AI.
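The double-entry point above can be made concrete with a toy calculation. The categories and figures below are purely illustrative inventions (they come from no official GPI methodology); the sketch only shows why the same oil spill pushes the two metrics in opposite directions:

```python
# Illustrative only: toy figures, not any official GPI methodology.
# GDP counts all spending as a plus, even cleanup costs.
# A GPI-style metric uses double-entry bookkeeping: cleanup spending
# is offset by the environmental damage that made it necessary.

events = [
    {"name": "consumer spending",      "spending": 100.0, "damage": 0.0},
    {"name": "oil spill cleanup jobs", "spending": 20.0,  "damage": 35.0},
]

gdp = sum(e["spending"] for e in events)               # every dollar counts up
gpi = sum(e["spending"] - e["damage"] for e in events) # damage is a debit

print(f"GDP-style total: {gdp}")  # 120.0 -- the spill *raises* GDP
print(f"GPI-style total: {gpi}")  # 85.0  -- the spill lowers genuine progress
```

The spill adds jobs and spending to GDP while the environmental cost appears nowhere; the GPI-style debit column is what makes the metric “genuine.”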
Chapter Twelve: Our Genuine Challenge (Interactive Chapter)
Remember the Choose Your Own Adventure series of books, in which you get to decide how a story unfolds? At the beginning of Chapter Twelve I’ve provided you with your own chance to do the same regarding Artificial Intelligence. This is also the chapter where I wrap up the idea of ethics needing to be incorporated into AI design and values into your life today.
From the Artificial to the Authentic
Tired of being freaked out about Artificial Intelligence?
Want to test what you value most in life?
Want to learn how positive psychology can improve your well-being?
Here’s an invitation for you to answer these questions for yourself.
SECTION ONE
ARTIFICIAL INTELLIGENCE
one A Brief Stay in the Uncanny Valley
Fall 2028
“Can I get you something to drink, Rob?” I asked, yearning for a stiff drink of my own.
“No thank you, sir, I’m good.”
I had pictured this moment hundreds of times over the past sixteen years, as I assume every dad does. My daughter Melanie’s first date—or at least the first one where the guy came over to pick her up. Robert was a polite and good-looking guy with an athletic build, light brown skin, and piercingly blue eyes. In our first few minutes chatting while Melanie got ready upstairs with my wife, Barbara, Robert was a charming conversationalist. He seemed genuinely interested in my responses to his questions and was clever and funny without being snarky. I could see why Melanie was attracted to him, which is why I was working as hard as I could to keep from throwing up in my mouth.
Rob was a robot.
I only knew this because Melanie had told me the night before, in preparation for Rob’s visit. Barbara and I had kept asking questions about her mystery guy until she announced he was coming over.
Oh, and that he was a robot. Or as she put it, an “autonomous intelligence embodied in flesh form.”
By 2028, humanistic robots had become extremely advanced in terms of their physical appearance. They had lost the “uncanny valley” effect of having just enough false movements or characteristics to remind people they weren’t human. In the wave of automation that had begun around fifteen years earlier, corporate executives had also begun replacing human workers with robots wherever it was deemed the machines could gain unique marketing insights from the people they served.
Initially most of the corporate robots looked like Baymax, the lovable inflatable helper-bot from the movie Big Hero 6. Then, after people got used to anthropomorphizing robot workers in their cuddly manifestations, companies manufactured them in branded humanistic forms that best matched the demographics of the surrounding community. In Hendersonville, Tennessee, robot workers at Waffle House restaurants looked like Carrie Underwood. In Brooklyn Starbucks locations, Bruno Mars appeared to be the “botista” serving your lattes. Within a few years robot design had improved to such an extent that it was no longer necessary to have only a few male and female versions spread throughout a region. Company algorithms linking robots, people, and the connected objects around them predicted what type of interface a shopper wanted to deal with in public, and that was the literal face they’d see on their robot while conducting a purchase.
A majority of my friends had lost their jobs in the wake of widespread automation. I’d only held on to my position as a technology writer by declaring myself a “human journalist.” The idea started as a joke, but when human writers were replaced by AI programs,1 management felt having a human reporter provided a form of objectivity my silicon counterparts lacked when reporting on technology.
** At least not by the author and publisher. People may stare at you while you’re reading in Starbucks or your kids may distract you during the precious seven minutes available to you to read during the day since, if you’re like me, you fall dead asleep at some embarrassing time like nine thirty because you’re exhausted from parenting all day along with everything else in your life, right?
*** Many doctors have said this. At least one of them looks like Socrates, so we’re pretty confident this is true.
**** Seriously, you are. If you’re like me, artificial intelligence does one of three things to you:
Don’t wait until the Singularity comes and artificial intelligence takes over the world to believe me on this. Toasters are mean little buggers.
AUTHOR’S NOTE
The challenge in writing about an emerging technology such as artificial intelligence is that between the time you finish your manuscript and when your book is published, there’s a strong possibility a new discovery has been made in the field about which you’ve written. So, in an effort to placate any future commenters on Amazon, Reddit, or any other platform:
FINAL AUTHOR’S NOTE
I’m a huge fan of Monty Python, so this author’s note serves no purpose except to be silly.
INTRODUCTION
Spring 2021
“If you want your daughter to live, this is the only solution.”
My wife was in the waiting room with my two kids, my eleven-year-old son and my nine-year-old daughter, the white paper on the examining table freshly crinkled from where Melanie had been examined moments before. The smell of the alcohol swab they’d used after taking her blood still hung in the air.
“So the computer chip goes directly in her brain?” I asked again. I was having a difficult time understanding what exactly was going to happen to my daughter to combat her young-onset Parkinson’s disease.1 A year before, her hands had begun shaking throughout the day. Her seizures had increased in intensity, and two months earlier she had begun experiencing blackouts and had fallen down at school. Her diagnosis came quickly, although she’d gone through a battery of painful tests to confirm it was Parkinson’s.
“Yes,” answered Dr. Schwarma, our family practitioner for the past six years. An extremely sharp and caring woman in her midthirties, she never beat around the bush with her diagnoses. She’d contacted a friend in Manhattan who specialized in the procedure. “The chip will help control the erratic synapses in her brain that are causing her seizures.”
I pointed to the iPad in her hands. “Is the chip like something you’d find in a computer? I’m assuming it stays in her brain permanently once it’s put in?”
“That’s the hope, although the human body is an intense environment. There’s a good chance the chip will need to be replaced, but it’s a relatively simple procedure even though it involves the brain. Plus, there’s the possibility of remote updates for the chip with newer technology, which would mean less chance of future surgery.”
I paused before speaking as the voice of the office secretary came over a loudspeaker calling for one of Dr. Schwarma’s colleagues to come to the front desk. “So if there are remote updates,” I said, “this chip will be firmware, correct? It’s not, like, the silicon equivalent of a stent or whatever; it’s active technology.”
Dr. Schwarma nodded. “That is correct.”
“So that will involve Wi-Fi or Bluetooth or iBeacon technology or whatever.”
She nodded again. “I’m not sure about the specifics, but the basic logic is that we’ll need to remotely check on the status of the chip’s operation without performing surgery. So some short-range technology like one you’ve mentioned will be used.”
“So she could be hacked?” My chest got tight and I felt my eyes moisten. “Right? And how does the Wi-Fi stuff work? Does she have a passcode for her brain? And can she travel? How does she explain this to the TSA in airports?”
Dr. Schwarma held up her hand. “John—those are all important questions and there will certainly be challenges ahead. But the positives far outweigh the negatives.”
“I’m sorry,” I answered as I wiped my eyes with the back of my hand. “It’s just freaky to picture a chip in my daughter’s brain. Could she eventually update the chip to be an internal smartphone? Be her own Wi-Fi hot spot? And does this make her a cyborg?”
Dr. Schwarma shook her head. “Cyborg choices typically involve a person replacing parts of their body outside of a life-threatening need. However, technically she will be part machine.” She held up her mobile phone. “No more than the rest of us, of course.”
“But we can turn our phones off,” I answered. “The chip will always be with her.”
She took a step forward, laying her hand on my shoulder. “Yes, the chip will always be with your daughter, John. But unless you do this procedure, she won’t be.”
The Genuine Challenge
A few years back I wrote an article about artificial intelligence (AI) for Mashable, a popular online news site focused on technology and culture. My goal was to evolve the conversation around AI beyond the polarized views of complete acceptance and rejection of the technology. While I believe AI is inevitable in our lives, I don’t believe that means we should blindly accept whatever new development in the field comes down the pike. Likewise, living in fear about the evolution of the technology doesn’t help humanity either. For my article, I really wanted to identify some potential solutions regarding humans working or joining with machines that I could wrap my brain around.
Initially, my research depressed me a great deal. I learned how quickly the AI field is growing without industry-wide standards around safety for development. I learned nobody has clarity regarding whether and when machines might become sentient (intelligent and “alive”), but multiple experts who said that could never happen had recently been surprised by advances that were changing their minds. Overall, I’ve come to learn that whether or not machines become truly sentient, the widespread adoption of AI is inevitable. And while people developing or utilizing AI keep saying, “We need to make sure we understand the ethical issues around this technology,” they nonetheless keep building systems they may not be able to control.
I see this as a problem.
And an opportunity.
My Mashable article expanded to become this book, and what I came to realize after years of research and interviews is there are no simple answers regarding the evolution of AI. Nobody can accurately predict when machines or robots will “come alive” or exactly how that will look.
So for my part, as an exercise to deal with my concerns, I began to imagine personal scenarios in which I couldn’t avoid AI in my life. That’s how I came to the fictional scenario about my daughter you just read. As much as I may fear aspects of AI, if a piece of technology would mean the difference between my daughter (who is real) living or dying, I’d utilize the technology.
While imagining along these lines may seem strange, the process provided catharsis for me. Instead of being anxious about a future dominated by machines, I began to more deeply examine issues of AI as inspiration to validate my humanity. That’s why every chapter of this book opens with a fictional vignette—I want to help you move beyond the polarizing debate around AI and imagine how you’d react to the scenarios I present. AI is not just science fiction any longer. It’s here. My goal with these stories is to help you more rapidly go through the journey I did of genuinely confronting my fears to get to a positive place regarding the inevitability of AI. The body of each chapter describes the tech and issues I bring out in the fictional vignettes.
I do have a warning for you, but it’s not about killer robots taking over the world within a few decades. The field of AI is advancing so rapidly we may lose the opportunity for introspection unhindered by algorithmic influence within a few years. Many of us are already at the point where we look to our devices and the code that drives them to make every major decision in our lives: Where should I go? Whom should I date? How do I feel? These “digital assistants” are hugely helpful tools.
But they’ve also trained us to delegate decisions as a default. This process involves a willingness to sacrifice to technology the parts of ourselves that used to make these decisions. For my part, I can live without my kids ever knowing how to use a paper map, but I’m not comfortable with their potential inability to identify a life partner without the aid of an algorithm. I can live with apps that monitor my heartbeat and brain waves to help me identify when I’m happy. I’m not comfortable with devices that manipulate these insights to motivate behavior I don’t fully understand.
Technology has been capable of helping us with tasks since humanity began. But as a race we’ve never faced the strong possibility that machines may become smarter than we are or be imbued with consciousness. This technological pinnacle is an important distinction to recognize, both to elevate the quest to honor humanity and to best define how AI can evolve it.
That’s why we need to make informed choices about which tasks we want to train machines to do. This involves individual as well as societal choice. We’re at a tipping point in human history, where delegating as a habit may lead us to outsource aspects of our lives we’d benefit more from experiencing ourselves. But how will machines know what we value if we don’t know ourselves?
That’s the genuine challenge, and the basis for Heartificial Intelligence—on an individual level, and for humanity as a whole. That’s also why the subtitle for the book is Embracing Our Humanity to Maximize Machines. We need to codify our own values first to best program how artificial assistants, companions, and algorithms will help us in the future.
This concept is your genuine challenge as well, and why I’ve written this book.
And to be clear, if you’re a geek like me and think I’m dissing technology: I am not anti-AI. I’m pro-human. These are not mutually exclusive. If machines are the natural evolution of humanity, we owe it to ourselves to take a full measure of who we are right now so we can program these machines with the ethics and values we hold dear. In AI, there’s a concept known as deep learning2 that describes an approach3 to building neural networks in which machines learn by observing data. My recommendation is that we apply a similar deep learning process to our own lives by codifying the ethics, values, and attributes unique to humanity.
Some good news: There’s a science known as positive psychology that’s helping individuals increase their well-being after observing how actions such as gratitude and altruism have improved their lives. I’m using the term well-being as it refers to the intrinsic, long-term increase in life satisfaction these actions can bring versus a fleeting, mood-based happiness. While this “hedonic happiness” is natural and lovely, positive psychology has shown that constantly trying to improve your mood is both erratic and exhausting. Genuine flourishing, a holistic state involving your mental, physical, and spiritual well-being, is achieved by repeating actions that provide insights not based solely on emotion. This is a form of deep learning we should apply to our lives.
Some challenging news, however: You can’t automate your well-being. While you can utilize an app to keep a gratitude journal or measure your blood pressure during meditation, a machine can’t experience your well-being for you. Not yet, anyway. This is not meant to be pejorative toward the potential of AI or machines but simply to acknowledge that they’re built differently than people. Automated happiness doesn’t work for humans, according to positive psychology. Delegating core emotional or spiritual work doesn’t compute. Predictive algorithms can help provide insights that affect our mood, but increasing long-term well-being requires our conscious and ongoing involvement.
A bit of hard truth here that needs acknowledging: In many ways, it’s actually easier to delegate decisions around our well-being to machines or to avoid deeper questions about what makes us happy or human. But this book is not a formulaic, “get happy quick” scheme to deal with the inevitability of a dark AI future. It’s about testing solutions that validate you’re worth a deeper look.
A Vision for Values
While positive psychology is having a transformational effect on people around the world, it can’t improve our lives if we’re discouraged from looking within. Here’s why:
It’s this third point that’s the inspiration for this book. In terms of automation, comparisons between machines and humans typically revolve around questions of skill. This is a lamentable irony when you consider we’ve built AI systems specifically to replicate our tasks in the first place. At best, it’s a temporary comfort to wonder which skills machines may possess or when.
What humans currently have that machines do not, however, is an inherent sense of values. We develop these over time based on our environment, but we’re also equipped with an emotional and moral sensibility that machines don’t currently share. While advances in fields like cognitive computing may evolve to the point that companion robots appear to have emotions, their ethical behavior will initially be based on the humans who programmed them. This is why, in a very real sense, the future of our happiness is dependent on teaching machines what we value the most.
And I mean this literally. I believe as individuals and as a society we need to identify, track, and codify our values so we can translate them into machine-readable protocols. It’s okay if you think that sounds crazy difficult. It is. But so is trying to create sentient machines. And ironically enough, a lot of AI methodologies revolve around observing our ethical behavior as demonstrated by our actions. So they’re already codifying our values, oftentimes without our direct input. This means lethal autonomous weapons (machines that can kill without direct human intervention) will act based on the values of whichever country’s programmers created them. Or your self-driving car may be programmed to hit an errant pedestrian versus risk hurting you based on decisions made by the car’s manufacturer.
How do you feel about that? Should your values or ethics inform these decision-based protocols?
Yes, they should. Otherwise, your values will be ignored in the sense that all devices and products will favor the ethical biases of the programmers who created them. That doesn’t mean they’re bad people—they’re just not you. What if your faith dictated in an accident involving a self-driving car that you would want to give your own life to spare someone else? Why shouldn’t the car or product you’ve purchased reflect this desire? Jason Millar, a philosophy professor at Queen’s University in Kingston, Canada, calls this concept “technology as moral proxy,”4 which provides a huge opportunity for innovation versus just regulation. Like the precedent of informed medical consent, a codified ethical framework for humans living with AI would provide legal clarification around situations we’re going to be facing a lot in the near future. It would also broadcast personalization data based on your values that would allow companies and individuals to be deeply sensitive to your needs.
I call this codification of our ethical choices Values by Design, and in the latter part of this book I’ve provided a framework for you to track and codify your values based on established psychological research. It’s a pretty simple process: There are twelve core values (family, health, etc.) that you rank on a scale of 1 to 10. This provides a sort of ethical snapshot of your life, allowing you to clarify what values you hold most dear. Then for three weeks, at the end of every day you rank each of the twelve areas based on whether or not you lived up to those values that day. So, for instance, say you value family as a 10 when you start your tracking. Then after three weeks, you realize you’re not spending any time with your family (meaning you’re ranking family at a low level every day). This insight will help you see where your life may be out of balance, and how you can adjust your actions based on the data reflecting how you actually live your life.
It’s a simple process on purpose. It can be enhanced with apps monitoring your heart rate or stress levels, but a core part of its benefit comes from daily reflection on how you’ve lived your life.
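For readers who like to see a process spelled out, the tracking exercise above can be sketched as a small program. This is a toy illustration only: the list of twelve values, the scores, and the three-point "gap" threshold are my own placeholders, not a prescribed implementation of Values by Design.

```python
# Toy sketch of the Values by Design tracking process:
# rank twelve core values once, score each value daily for three
# weeks, then flag values whose lived scores trail the ranking.

CORE_VALUES = [
    "family", "health", "work", "friends", "faith", "learning",
    "creativity", "community", "money", "leisure", "nature", "altruism",
]  # hypothetical set of twelve core values

def out_of_balance(initial, daily_logs, gap_threshold=3):
    """Return the values whose average daily score falls well
    below the stated importance ranking.

    initial     -- dict mapping value -> 1-10 importance ranking
    daily_logs  -- list of dicts, one per day, value -> 1-10 lived score
    """
    days = len(daily_logs)
    report = {}
    for value in initial:
        avg = sum(day[value] for day in daily_logs) / days
        if initial[value] - avg >= gap_threshold:
            report[value] = round(initial[value] - avg, 1)
    return report

# Example: family is ranked 10 at the start, but lived at a 3
# every day for three weeks (21 days).
initial = {v: 5 for v in CORE_VALUES}
initial["family"] = 10
logs = [{v: 5 for v in CORE_VALUES} for _ in range(21)]
for day in logs:
    day["family"] = 3

print(out_of_balance(initial, logs))  # {'family': 7.0}
```

The point of the sketch is the comparison, not the code: the stated ranking is the snapshot, the daily logs are the evidence, and the gap between them is where your life may be out of balance.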
And it’s amazing how few people I talk to can even name five top values they pursue every day. Even fewer have ever tested them in any meaningful way. Of course religion, faith, and other methodologies focused on values have helped us refine our ethical decisions over the years. But my goal with Values by Design is to present a framework for this tracking process that could potentially complement AI systems and data measuring us in the same way. In that sense, everyone involved will know we’ve taken the time to substantiate the values we most want to reflect.
P.S., I’m not arrogant enough to think Values by Design is the process to save the world by providing an ethical solution for our adoption of Artificial Intelligence. I’m simply championing one way for an individual to track his or her values that could also inform the morally oriented decisions being made by machines.
This is why I’ve also dedicated a great deal of this book to highlighting the field of ethics in artificial intelligence, as I believe it provides the key to moving forward effectively with humans and machines. I believe ethical programming has to be imbued at the manufacturing stage of any AI system to ensure it’s safe, useful, and relevant for society at large.
This means we have every reason to allow ourselves to identify what we value most and to live our lives in accordance with those ideals. In fact, it’s a mandate that we all undergo this process, or machines will base their ethical programming on examples provided by YouTube or The Real Housewives of New Jersey.
It’s a deep challenge to name and track the specific values we live according to every day. But the process allows us to see where we’re out of balance regarding money, time, health, or any other metrics providing meaning in our lives. Taking the time to measure these things is what brings authentic purpose to our lives.
It’s what makes us genuine.
The Deal About Our Data
While most people would hardly consider a chip to monitor brain activity something that could transform a person into a robot, the fictional scenario about my daughter provides a physical example of the communion we already share with machines. Our computers, mobile phones, and the objects around us connect us to the Internet—and, subsequently, to data that we input into our minds and hearts on an almost constant basis. In return, our thoughts and actions create data that enters the vast pool of information swirling around us at all times, unseen yet very real all the same.
Google Glass introduced the general population to augmented reality (AR), technology that overlays digital information about your surroundings onto the lens through which you see the world. Oculus Rift, acquired by Facebook, is a highly advanced form of virtual reality (VR), in which your eyes and ears are covered while you’re immersed in a video game or other sensorial experience. Whatever the interface, all the hardware simply provides intermediary steps for us to get used to the inevitable union of humans and machines—or more specifically, the physical union of humans and machines. As Dr. Schwarma pointed out in my vignette, the mental and behavioral union has already taken place.
The physical issues are relatively simple. Today people are excited about wearable technology, in which they’ve traded the design and user interface of a mobile phone for a piece of clothing or jewelry like the Apple Watch. Soon, augmented reality contact lenses will replace mobile phones altogether, with some people opting for LASIK surgery so the technology need never be removed. We’ve become accustomed to the idea of technology-enhanced prostheses for athletes,5 like the sprinter Oscar Pistorius’s controversial prostheses, which earned him the nickname the Blade Runner. Now it’s just a matter of personal choice as to when to marry carbon with silicon.
But the fact that our lives are represented by our personal data in the digital realm is still a relatively new concept to most people. We understand that we have different personae in various digital arenas—we act more professionally on LinkedIn, more laid-back on Facebook. But this all involves data we can see and that we knowingly create. Our holistic and hidden digital identity, by contrast, is defined by the actions we take online and off that are tracked at all times. And the overwhelming majority of organizations doing the tracking don’t share the insights they glean about our lives with us.
Machines in this context are utilized to create algorithms that can best analyze and predict our future behaviors. In a very real sense, organizations with access to our aggregate identity know more about us than we know about ourselves. Edward Snowden helped turn the tide on this lack of knowledge regarding governmental surveillance of our lives. But while state-driven tracking issues are certainly critical to consider, they’re not the focus of this book.
Why It’s Artificial Intelligence
They say you can’t stop progress. But we can redefine it.
Like most people, my first exposure to artificial intelligence came from science fiction movies such as The Terminator. It’s easy to get caught up in the idea of robots getting smarter than us and destroying the human race. However, it’s stories like Minority Report that I find much more intriguing and ominous. Tom Cruise’s character in the movie is head of a futuristic police division known as PreCrime, working with cyborg “precogs” (clairvoyant humans synced with machines) to help identify people who are intending to break the law. The Internet economy is currently driven by technologies working to identify our intentions toward purchase in a similar way, predicting and controlling the outcome of our actions.
The sinister aspect of this tracking is not about commerce. It’s about the lack of transparency regarding our data that’s at the heart of the system controlling that commerce. Internet and mobile advertising are built on surveillance, tracking our behavior to see when we’re most likely to purchase. We’re called “consumers” in this model because that’s our primary role to play—to buy items to provide further insights about what else we’ll buy. While companies are trying to improve our lives with products or services that consumers genuinely need, purchase funnels never end in abstinence. Our actions are tracked so predictive algorithms can analyze our behaviors to generate messages that will inspire further purchase.
This is why this is artificial intelligence—as humans we’re built for purpose, not just purchase.
Knowing about ourselves only in the context of purchase provides a shallow picture of our whole persona. In a world where gross domestic product (GDP) is our primary measure of value, we’ve been led to think greater productivity or profits are the keys to human happiness. If this were true, tracking and manipulating purchase behavior to increase people’s happiness would make a great deal of sense. If buying certain products or spending money just for the sake of it made us happier, our lives would be a lot simpler (if we had the cash to support this hypothesis). But the science of positive psychology has demonstrated that intrinsic well-being, or “flourishing,” is not increased by a surplus of money. While we need a basic amount of material goods to feel safe and secure, intrinsic well-being is increased by actions such as mindfulness or altruism, expressing gratitude or doing work that brings us “flow.” In the same way we can go to the gym to increase our physical well-being, we can repeat actions that will increase our happiness. It’s not a formula, however. It’s a journey.
The challenge for artificial intelligence in this context is determining where learning algorithms improve a person’s life versus cutting corners on his conscious efforts to improve his well-being. For instance, I’ll trade the loss of serendipity when searching for a book on Amazon, knowing I won’t ever be shown a horror novel. But advertisements appearing in my Facebook feed touting products that will supposedly increase my happiness are shrouded in alarming mystery. What behaviors of mine have been tracked to result in my seeing this ad? If you have insights about my well-being, why won’t you provide them to me? I’d gladly buy a product from someone willing to share these precious details. But Google and Facebook rely on the hidden nature of data collection to sustain their business models of advertising. It’s not in their best interests to share data about our unique identities and actions. But regarding our happiness, how can we accurately measure which actions improve our well-being when companies won’t reveal insights based on our lives?
This is also why it’s artificial intelligence—individuals don’t currently control the data relating to their identity. IP trumps I.D.
The world of data and identity I’ve described may soon comprise the majority of our lives as revealed by the devices we wear. Today, we turn off our computers or put our phones aside, however briefly, and experience the world as seen through our own eyes. But once we’re wearing the lenses, or browsers, at all times that reveal the invisible data surrounding our identity, we’re in for a hell of a shock. In the fictional scenario I’ve described with Dr. Schwarma I was presented with a choice regarding the possibility of my daughter’s visceral union with technology. As things stand now, we’ve passively accepted the loss of our personal data that fuels algorithms, AI, and the Internet economy as status quo. By the time we see how our identities look in the virtual world as controlled by others, it will be too late to get back the rights we’ve let go. It’s time we stopped relying on artificial measures to increase our genuine well-being.
Happinomics
It wasn’t until about three years ago that I realized economics is a study of philosophy as much as of statistics. Measuring and attributing value to individuals, communities, or countries requires universal agreement on which metrics to utilize before creating any kind of standard report regarding policy or welfare.
It’s hard to imagine a time when GDP wasn’t utilized by countries around the world as a measure of their well-being. The logic regarding the GDP is that as it goes up, a country’s happiness increases as well. But this connection hasn’t proved true, as the economist Simon Kuznets, who created the concept of GDP in the 1930s, predicted. As Lauren Davidson points out in her November 2014 Telegraph article “Why Relying on GDP Will Destroy the World,” Kuznets warned, “The welfare of a nation can . . . scarcely be inferred from a measurement of national income such as GDP.”6 We didn’t heed his warning, and the GDP was adopted as a set of standardized values everyone in the world agreed were the most important to measure.
Unfortunately, these values focused largely on metrics regarding income and growth while ignoring other issues of well-being and social justice. The values made their way to the business world, where increased profits and shareholder gains reflected and informed the GDP, shaping ideals regarding employee productivity and worth. Eventually these values made their way to the individual level. We’re told it’s our civic duty to consume goods to increase the economy, while also believing that having money and success leads to happiness. But this model hasn’t panned out. Increasing GDP for a country could come in the form of removing oil from its region and destroying the environment. This means short-term gains adversely affect long-term sustainability. Likewise for individuals, getting a higher salary without a sense of purpose for your work doesn’t equate to increased happiness.
Business owners also have to come to grips with the ultimate end of GDP regarding issues of automation and machine learning. If machines continue to excel in their ability to do our jobs, it will always be cheaper to utilize them in place of human beings. Note I didn’t say “better”—but machines work without complaining, without the need for insurance, and without the need for a break. So in effect, organizations not questioning the status quo of a GDP-mandated focus on growth are implicitly inviting widespread automation of jobs by machines. This is a moral and ethical as well as fiscal decision for any organization to determine as part of their values. Do we want a human or a machine workforce? And is it realistic to think we can work alongside machines in a sustainable manner as they continue to learn and excel at the tasks we’re teaching them?
In terms of the Internet economy, as long as a GDP-focused model of increased profits for shareholders continues to hold sway, companies such as Google and Facebook will continue to rely on clandestine, tracking-based advertising models for their revenue. Utilizing artificial intelligence in this context makes a great deal of sense since individuals are producing more personal data than ever before. Objects comprising the Internet of Things, like the Nest thermostat (owned by Google), will track ever more intimate aspects of our lives until we have no control over our data along with the thoughts, emotions, and behaviors that create it.
Some disclosure here: I’m not an economist by profession. But I have been evangelizing the adoption of new metrics beyond GDP such as gross national happiness and the genuine progress indicator for years. While this hardly qualifies me to be an economist, I am in a unique position with my background in technology to analyze how quantified data from an individual could affect policy creation at scale. I’ve also interviewed hundreds of experts in technology, economics, and positive psychology to determine what solutions could be pursued to deal with the artifice that’s eroding trust in technocratic and GDP-driven environments today.
In my work as a technology writer (I’m a contributing journalist to Mashable and the Guardian), it’s also become apparent we need to consider economic models that straddle the real and virtual worlds to employ metrics that can genuinely increase people’s well-being. While it may seem strange to consider the economic dynamics within a MMORPG (massively multiplayer online role-playing game) affecting real-world markets, the rise of virtual currency and the amount of time people spend within these games is increasing exponentially and demands workable solutions.
When devices like Oculus Rift become ubiquitous, in which individuals cover their eyes and ears to become immersed in a virtual world, many people may choose to never again spend time in meatspace (a term that geeks like me use for the “real world”). Think how these dynamics will affect the GDP if it doesn’t evolve to embrace the virtual realm. What if someone has a job in-game that pays in Bitcoin? If their physical body resides in Dublin, Ohio, but the game’s servers are in Dublin, Ireland, should the person pay U.S. or Irish taxes—or both? And of course, Facebook owns Oculus Rift, so we can assume Zuckerberg will utilize eye tracking, facial recognition, and stress-, heart-rate-, and brain-wave-sensing technologies to know how people are feeling within any game environment to provide real-time advertising opportunities to his clients. How will that type of guaranteed revenue model affect economics, let alone our psyches?
Genuine by Design
Heartificial Intelligence is focused on helping remove the artificial intelligence we’ve been exposed to in multiple arenas of our lives in exchange for new models of progress that can authentically improve well-being. It’s designed to help you make informed, conscious choices regarding the technologies and values that will help you live an examined life.
I’ve broken the book into two sections: Artificial Intelligence and Genuine Progress.
The Artificial Intelligence section presents what I see as our current dystopian trajectory regarding well-being. While I see multiple positive aspects of AI, without the rapid influx of transparency and standardized ethics regarding its creation, it will continue to be dominated by technocratic ideals that mirror the values of GDP.
The Genuine Progress section presents multiple solutions to the issues I raise regarding artificial intelligence. These include technological, ethical, and economic examples in which I’ve worked to provide pragmatic solutions to issues described wherever possible. My hope is that by infusing transparency into the existing models and markets driving the world today we can benefit from the amazing world of AI versus being subsumed by it.
Here’s a breakdown of the sections and chapters of the book. These are written as teasers rather than spoilers, to give you a specific sense of the issues and solutions the book describes:
SECTION ONE: ARTIFICIAL INTELLIGENCE
Chapter One: A Brief Stay in the Uncanny Valley
Many people are most unnerved by robots when they look too human. Remember the movie The Polar Express? While the animation was extremely advanced for its time, a lot of people were turned off when one of the characters exhibited traits that were almost but not quite human within their cartoon form. This concept is known as the uncanny valley in robotics. It’s a widely accepted term for engineers, although some dislike its assumption that everyone will react the same way when seeing an android or robot for themselves. The concept is also mirrored within the realms of Internet tracking and advertising today. We all know our personal data is being tracked; we’re just not sure how. Why do we keep seeing those diaper ads when we don’t even have kids? Why does that same ad for a razor appear every day in my Facebook feed? While sentience and the Singularity are issues reflecting the future of AI, the algorithmic groundwork creating their future exists today and already deeply influences our digital identity.
Chapter Two: The Road to Redundancy
A study done by the University of Oxford, reported in 2014,7 says, “Occupations employing almost half of today’s U.S. workers, ranging from loan officers to cab drivers and real estate agents, [will become] possible to automate in the next decade or two.” Similar statistics apply to the United Kingdom, as the Telegraph reported in a November 2014 article: “Ten million British jobs could be taken over by computers and robots over the next twenty years, wiping out more than one in three roles.”8 Automation by machines is a real threat to human employment and well-being in the very near future. While technological innovation and AI may bring great benefits to humanity, it will also change how we find meaning in our lives if we can’t work. Beyond issues of pursuing purpose, we’ll also need to be able to pay our bills in the wake of a machine-driven world.
Chapter Three: The Deception Connection
In a very real sense, artificial intelligence is focused on tricking us into thinking something is real that’s not. It’s a form of digital magic that relies on anthropomorphism, our tendency to forget that Siri is a digital assistant and begin joking with it as if it were a person. While robot assistants for the elderly or Furbies for kids provide a great deal of comfort for those in need, they shouldn’t be utilized instead of human companionship but as complements to it.
Chapter Four: Mythed Opportunities
The AI driving advertising-based algorithms poses an existential threat to our well-being. When we’re tracked solely or largely as a means of identifying what items we want to buy, the fuller context of who we are as human beings is lost. We begin wondering why we were offered a dietary supplement in response to a certain word we typed on Facebook—does some algorithm think I’m fat? Ethical standards have long been the norm in sociological studies, where participants are fully aware of what’s being studied about their behavior. These guidelines need to be imbued within the economies comprising the Internet and Internet of Things to avoid an inevitable world of digital doppelgängers who only reflect our consumerist selves.
Chapter Five: Ethics of Epic Proportions
Robots don’t have morals. They’re physical objects imbued with code programmers have provided to seek established objectives. A big reason ethics is so critical to the AI industry is the lack of standard application for products today, especially at the design level of production. This is especially important regarding militarized AI, which has grown so rapidly in the past decade and received billions of dollars in funding. For civilians, issues of autonomous cars bring the need for AI ethics much closer to home. For instance, should a car entering a tunnel swerve to miss a child who has run in front of it to save her life even if it means killing the driver? This is a concept developed by Patrick Lin,9 director of the Ethics and Emerging Sciences Group, based at California Polytechnic State University. Would you rather an expert in ethics like Patrick answer this question or a sleep-deprived programmer trying to make a deadline for an investor? We are in a unique era for issues and legal questions like these, as all law to this point has been written exclusively for humans. Robots are changing the rules, and policy needs to catch up to include their growing influence.
Chapter Six: Bullying Beliefs
You don’t have to sit idly by to watch the future of humanity evolve without your input. Scientific determinism is as much of a faith as any major religion, if and when it proselytizes without permission or shifts cultural perceptions in dangerous directions. While we can’t stop progress, questioning innovation based largely on creation of profit or growth is a necessity. It’s not Luddism to desire to move toward the future fully cognizant of what makes humanity glorious and laudable, as messy as it may be.
SECTION TWO: GENUINE PROGRESS
Chapter Seven: A Data in the Life
Privacy isn’t dead; it’s just been mismanaged. While a person’s decision regarding her identity is her own business, it doesn’t make sense to avoid creating frameworks for exchange of personal data that allow transactions to happen as a person desires. Whether it’s personal clouds, vendor relationship management (VRM), or life management systems, there are multiple methodologies available today that will allow all people to protect and control their data in whatever ways they see fit. This will allow for greater accuracy regarding our data for any of the algorithmic or AI-oriented programs existing today or coming down the pike.
Chapter Eight: A Vision for Values (How-to Chapter)
To be genuine, you have to be able to articulate what you believe and prove to yourself you’re living according to your values. In a future where our digital actions will be easily visible to other people via augmented or virtual reality, being accountable for our actions will be more important than ever. This chapter provides a step-by-step guide to identify and track your values based on the research of a number of respected experts in the field of sociology and positive psychology. By taking a measure of your life based on what you hold most dear, you’ll be able to see what areas you may want to focus on more or less to achieve balance in your life. By identifying your values and taking actions based on your beliefs, you’ll also be able to discover opportunities to help others in ways that will elevate your personal well-being.
Chapter Nine: Mandating Morals
Ethics in AI has never been more important. Recent announcements in the field, especially the Future of Life Institute’s “Research Priorities for Robust and Beneficial Artificial Intelligence,”10 provide encouraging precedents for experts to incorporate ethical guidelines in the core of all their work on AI and autonomous machines. But silos in academia or the bias of profit-first businesses cannot supplant the imbuing of human values into the core of our machines. We won’t get a second shot to teach our successors right from wrong, so we need to understand and foster that process right away.
Chapter Ten: Mind the GAP (Gratitude, Altruism, and Purpose) (How-to Chapter)
In the UK when you take the Tube (or subway), you hear a pleasant voice warning you to “mind the gap” to keep from falling between the platform and the train. As a way to pursue Heartificial Intelligence, the science of positive psychology demonstrates how gratitude, altruism, and purpose can increase our intrinsic well-being. As compared to happiness, this form of flourishing is focused not on mood but on actions we take based on attributes that define who we are. This chapter contains background and exercises on how to perform a personal “GAP analysis” while also exploring why pursuing your purpose will become essential in a world where automation replaces our jobs. Whether we have governmental interventions to help pay our bills with a Basic Income Guarantee or other economic measures, minding the GAP will equip you to face the future knowing you can spend every day helping others and get happy in the process.
Chapter Eleven: The Evolution of Economics
You may have heard of gross national happiness, but you may not realize it’s not focused on mood. Rather, it’s a metric to help measure citizen well-being beyond the fiscal measures of the GDP. Newer metrics such as the Genuine Progress Indicator have also been adopted in states such as Maryland and Vermont and take the key business tenet of double-entry bookkeeping into account when trying to determine citizen well-being. Economically speaking, this means it factors in things such as environmental effects when an oil spill increases a country’s GDP (since it creates jobs to clean up the mess). As sensors, wearable devices, and the Internet of Things more intimately measure our emotional and mental well-being, governmental metrics cannot simply focus on the increase of money or growth as an approximation of our happiness. This is also the chapter where I expand the concept of Values by Design, extending the pragmatic exercises I provide earlier in the book with a sense of how they can be incorporated into a digital and economic future driven by AI.
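The double-entry contrast between GDP and a GPI-style measure can be made concrete with a toy calculation. All figures here are invented for illustration, and this is not an official GPI formula: the point is simply that cleanup spending enters GDP as growth, while a genuine-progress measure also books the cost of the harm that made the cleanup necessary.

```python
# Toy contrast between GDP-style and GPI-style accounting.
# All numbers are invented for illustration only.

consumption = 1000.0       # baseline economic activity
cleanup_spending = 50.0    # jobs and services from an oil-spill cleanup
environmental_cost = 80.0  # estimated damage from the spill itself

# GDP counts the cleanup as growth and ignores the damage.
gdp = consumption + cleanup_spending

# A GPI-style measure applies double-entry logic: the same cleanup
# spending is offset by the environmental cost that caused it.
gpi = consumption + cleanup_spending - environmental_cost

print(gdp)  # 1050.0 -- the spill makes the economy look bigger
print(gpi)  # 970.0  -- the spill registers as a net loss
```

Under the first measure the spill is good news; under the second it is a liability, which is exactly the divergence the chapter explores.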
Chapter Twelve: Our Genuine Challenge (Interactive Chapter)
Remember the Choose Your Own Adventure series of books, in which you get to decide how a story unfolds? At the beginning of Chapter Twelve I’ve provided you with your own chance to do the same regarding Artificial Intelligence. This is also the chapter where I wrap up the idea of ethics needing to be incorporated into AI design and values into your life today.
From the Artificial to the Authentic
Tired of being freaked out about Artificial Intelligence?
Want to test what you value most in life?
Want to learn how positive psychology can improve your well-being?
Here’s an invitation for you to answer these questions for yourself.
SECTION ONE
ARTIFICIAL INTELLIGENCE
one A Brief Stay in the Uncanny Valley
Fall 2028
“Can I get you something to drink, Rob?” I asked, yearning for a stiff drink of my own.
“No thank you, sir, I’m good.”
I pictured this moment hundreds of times over the past sixteen years, as I assume every dad does. My daughter Melanie’s first date—or at least the first one where the guy came over to pick her up. Robert was a polite and good-looking guy with an athletic build, light brown skin, and piercingly blue eyes. In our first few minutes chatting while Melanie got ready upstairs with my wife, Barbara, Robert was a charming conversationalist. He seemed genuinely interested in my responses to his questions and was clever and funny without being snarky. I could see why Melanie was attracted to him, which is why I was working as hard as I could to keep from throwing up in my mouth.
Rob was a robot.
I only knew this because Melanie had told me the night before as preparation for Rob’s visit. Barbara and I had kept asking questions about her mystery guy until she announced he was coming over.
Oh, and that he was a robot. Or as she put it, an “autonomous intelligence embodied in flesh form.”
Humanistic robots by 2028 had become extremely advanced in terms of their physical appearance. They had lost the “uncanny valley” effect of having just enough false movements or characteristics to remind people they weren’t human. In the wave of automation that had begun around fifteen years ago, corporate executives had also begun replacing human workers with robots where it was deemed the machines could gain unique marketing insights from the people they served.
Initially most of the corporate robots looked like Baymax, the lovable inflatable helper-bot from the movie Big Hero 6. Then after people got used to anthropomorphizing robot workers in their cuddly manifestations, companies manufactured them in branded humanistic forms that best matched demographics of customers’ nearby physical community. In Hendersonville, Tennessee, robot workers at Waffle House restaurants looked like Carrie Underwood. In Brooklyn Starbucks locations, Bruno Mars appeared to be the “botista” serving your lattes. Within a few years robot design improved to such an extent it wasn’t necessary to have only a few male and female versions spread throughout a region. Company algorithms linking robots, people, and the connected objects around them predicted what type of interface a shopper wanted to deal with in public, and that was the literal face they’d see on their robot while conducting a purchase.
A majority of my friends lost their jobs in the wake of widespread automation. I’d only held on to my position as a technology writer by declaring myself a “human journalist.” The idea started as a joke, but when human writers were replaced by AI programs,1 management felt having a human reporter provided a form of objectivity my silicon counterparts lacked when reporting on technology.