
The AI Generation: Shaping Our Global Future with Thinking Machines
Ebook · 519 pages · 11 hours


About this ebook

An updated edition of Solomon’s Code—now The AI Generation—the thought-provoking examination of artificial intelligence and how it reshapes human values, trust, and power around the world.

Whether in medicine, money, or love, technologies powered by forms of artificial intelligence are playing an increasingly prominent role in our lives. As we cede more decisions to thinking machines, we face new questions about staying safe, keeping a job, and having a say over the direction of our lives. The answers to those questions might depend on your race, gender, age, behavior, or nationality.

New AI technologies can drive cars, treat damaged brains and nudge workers to be more productive, but they also can threaten, manipulate, and alienate us from others. They can pit nation against nation, but they also can help the global community tackle some of its greatest challenges—from food crises to global climate change.

In clear and accessible prose, global trends and strategy adviser Olaf Groth, AI scientist and social entrepreneur Mark Nitzberg, and seasoned economics reporter Dan Zehr provide a unique, human-focused, global view of humanity in a world of thinking machines.
Language: English
Publisher: Pegasus Books
Release date: Nov 6, 2018
ISBN: 9781681779355
Author

Olaf Groth

Olaf Groth, PhD is CEO and co-founder of advisory-think tank Cambrian Futures and solution development studio Cambrian Labs; a Professor of Practice at UC Berkeley Haas School of Business; and co-author of The Great Remobilization: Strategies & Designs For A Smarter Global Future (MIT Press, 2023) and Solomon’s Code: Humanity in a World of Thinking Machines (Pegasus, 2018).  With over 30 years of experience as an executive and adviser, he has helped organizations to shape their strategies, capabilities, programs, and ventures across 35+ countries in North America, Europe, the Middle East, Africa, and Asia. Combining expertise in emerging tech, geopolitics, and corporate strategy, Olaf serves on the Global Expert Network for the 4th Industrial Revolution and the Global Alliance for AI Governance at the World Economic Forum and contributes frequently to major media outlets, including ABC, CBS, Fox News, NPR, Bloomberg, and various international publications.


Reviews for The AI Generation

Rating: 4 out of 5 stars (4/5) · 2 ratings · 1 review


  • Rating: 4 out of 5 stars (4/5)

    Dec 15, 2018

    The clearest way to describe the differences between artificial intelligence (AI) and regular computer development is that computers help us understand the world, and AI helps us change the world. That’s my definition (though thousands probably came up with the same idea), and Solomon’s Code reinforces it in every domain it examines.

    AI digests unfathomable amounts of data to come up with methods and solutions that would take humans years, even centuries, to replicate. So driverless cars, arrangements of warehouse goods, and defeating humans at chess and Go are all examples of AI’s power to process data. If you want a computer to run a robotic arm, endlessly bolting car parts together, that’s old hat, garden-variety computer smarts at work. Saving your life by comparing your x-ray to a hundred million others around the world and coming up with an outside-the-box diagnosis in four minutes – that’s AI.

    Olaf Groth and Mark Nitzberg have combed the planet to determine the state of the art in this nearly comprehensive look at developments, and in particular the thinking behind them. The considerations are huge and many: language, religion, history, human rights, and culture all need to be taken into account, and so AI is taking different approaches around the world. Government philosophies towards AI startups also twist the landscape accordingly. The authors interview experts everywhere to tease out the differences in every nation’s approach, and the resulting developments.

    They are careful not to be (too) swept up in the excitement, citing the very real risks AI poses for civilization. For every great solution, they pose controversies. There are issues like jobs lost, never having to think or remember again, privacy, security and deepening the already considerable gulf with the have-nots (“digital divide”).

    For example, there is one inevitable dystopian AI scenario that is already well entrenched. In China, the authors say, the government has installed 170 million surveillance cameras. In addition to recording crime, they use AI facial and gait recognition to nail people by the millions. Things like crossing the street outside the crosswalk affect a person’s “social credit score” in China. Failing to pay for parking, failing to stop at stop signs, being drunk, paying taxes late - pretty much anything the government wants can affect the score, at the government’s whim. And the government demonstrates its whims all the time. A bad score keeps people from flying at all, or traveling first class in trains, or whatever the government wants to deny them. The worse the score, the less people can do in China.

    This has gone beyond the famous episode of Black Mirror, in which a young woman, ever conscious of her social media and reputation score and always eager to please, trips up. This leads to a cascade of bad reviews, influenced and exacerbated by her deteriorating score, until her life is a total ruin. No one wants to be caught associating with a low score person. Party invitations cease. Store clerks shun her. Marriage to someone decent is out of the question. So is a career job or even an interview. Forget loans, mortgages or university acceptance. Low scorers are poison. That this is already happening in the world is horrifying.

    Not participating is no answer either, as the lack of a score means zero access to anything. Ask any immigrant.

    And then, do you really want to have to converse with your phone so it can send updates on your mental state and mood – to someone else? Because that will be part of the mandatory health monitoring system already required as a condition of employment in many large American companies. Fitbits are just the beginning.

    As bad as that might be, worse is the fact that many AI systems are black boxes. We actually don’t know how they work. They learn by themselves, “improve” by themselves and decide by themselves, with no human oversight, input or control. No one can explain how they come to their decisions. (This is one rather large aspect of AI the book does not cover.)

    The bottom line on AI being smarter than people is captured in the fact that computers can’t think. People think. They take numerous inputs into consideration, and weigh them unfairly, with biases and prejudices and gaps in knowledge. Worse, humans do not even know what consciousness is. They have no viable theory of what makes a person the person s/he is, and not someone else, or a toaster. Until and unless humans can define what consciousness is and how the brain creates, manages and tolerates it, there can be no threat of AI also having consciousness.

    Solomon’s Code finishes weakly. The Conclusion is a hope that some sort of global oversight body will emerge to regulate AI developments worldwide, somehow taking everyone’s values and fears into account. It then ends with a bizarre Afterword that is really a Preface, explaining the value of what you are about to read. But for insight into the state of the world of AI, the book is very useful.

    I liked Solomon’s Code because it is fair and balanced, and all but forces the reader to think, a property AI seeks to dispense with.

    David Wineberg

Book preview

The AI Generation - Olaf Groth

Introduction

The once-grandiose tales of artificial intelligence have become quotidian stories. Before the robots started to look and sound human, they automated real jobs and transformed industries. Before AI put self-driving cars and trucks on the highways, it helped find alternate routes around traffic jams. Before AI gave us brain-enhancing implants, it gave us personal assistants that converse with us and respond to the sound of our voices. While previous plotlines for AI promised sudden and sweeping changes in our lives, today’s AI bloom has delivered a transformation one step at a time, not through an apocalyptic blowout.

Artificial intelligence now pervades our lives, and it’s not going away. Sure, the machines we call intelligent today might strike us as rote tomorrow, but the tremendous gains in computing power, availability of massive data sets, and a handful of engineering and scientific breakthroughs have lifted AI from its early Wright Brothers days to NASA heights. And as researchers fortify those underlying elements, more and more companies will integrate thinking machines into their products and services—and by extension, deeper into our daily lives.

These and future developments will continue to reshape our existence in both radical and mundane ways, and their ongoing emergence will raise more and new questions not only about intelligence, but also about the very nature of our humanity. Those of us not involved in the field can sit passively by and let this unfolding plot carry us wherever it leads, or we can help write a story about beneficial human-machine coexistence. We can wait until it is time to march in the streets, asking governments to step in and protect us, or we can get in front of these developments and figure out how we want to relate to them. That is what this book intends to do: To help you, the reader, confront some of the societal, ethical, economic, and cultural quandaries that an increasingly powerful set of AI technologies will generate. The following chapters illustrate how AI will force us to consider what it means to be intelligent, human, and autonomous—and how our humanity makes us question how AI might become capable of ethical, compassionate decision-making and something more than just brutally efficient.

These issues will challenge our local and global conception of values, trust, and power, and we touch on those three themes throughout Solomon’s Code. The title itself refers to the biblical King Solomon, an archetype of wealth and ethics-based wisdom but also a flawed leader. As we create the computer code that will power AI systems of the future, we would do well to heed the cautionary tale of Solomon. In the end, the magnificent kingdom he built and ruled imploded—largely due to his own sins—and the subsequent breakup of his realm ushered in an era of violent unrest and social decline. The gift of wisdom was squandered, and society paid the price. Our book takes the position that humanity can prosper if we act with wisdom and foresight, and if we learn from our shortcomings as we design the next generation of AI. After all, we are already dealing with new tests that these advanced technologies have presented for our values, trust, and power. Governments, citizens, and companies around the world are debating personal-data protections and struggling to forge agreements on values of privacy and security. Stories about Google’s Duplex placing humanlike automated calls to make reservations with restaurants or Amazon’s Alexa accidentally listening in to conversations have people wondering just how much trust they can put in AI systems. The United States, China, the European Union, and others are already seeking to spread their influence and power through the use of these advanced technologies, accelerating into a global AI race that might help address climate change or, just as easily, lead to even more meddling in other countries’ domestic affairs.

Meanwhile, as these systems gain more and more cognitive power, they might begin to reflect a certain level of what we would call consciousness, or the ability to metareflect on their actions and their context. We all win if we can first instill a proper conscience in AI developers and the systems they create, so we ensure these technologies influence our lives in beneficial ways. And we can only accomplish this by joining forces, engaging in public discourse, creating relevant new policy, educating ourselves and our children, and developing and following a globally sourced open code of ethics. Whatever pathway the future of AI might take, we must create an inclusive and open loop that enables individuals and companies to increase productivity, heighten professional and personal satisfaction, and drive our progressive evolution.

Humanity’s innate and undaunted desire to explore, develop, and advance will continue to spawn transformative new applications of artificial intelligence. That genie is out of the bottle, despite the unknown risks and rewards that might come of it. If we endeavor to build a machine that facilitates our higher development—rather than the other way around—we must maintain a focus on the subtle ways AI will transform values, trust, and power. And to do that, we must understand what AI can tell us about humanity itself, with all its rich global diversity, its critical challenges, and its remarkable potential.

1

Where Human Meets Machine

People move through life in blissful ignorance. In many ways, our bodies and lives work like a black box, and we consider it a surprise misfortune when disease or disaster strikes. For better or worse, we stumble through our existence and figure out our strengths and weaknesses through trial and error. But what happens as we start to fill in more and more of the unknowns with insights generated by smart algorithms? We might get remarkable new ways to enhance our strengths, mitigate our weaknesses, and defuse threats to our well-being. But we might also start to limit our choices, blocking off some enriching pathways in life because of a small chance they could lead to disastrous outcomes. If I want to make a risky choice, will I be the only one who has agency over that decision? Can my employer discriminate against me because I decided not to take the safest path? Have I sacrificed an improbable joy for fear of a possible misfortune?

And what happens to us fifteen years from now, when AI-powered technologies permeate so many more facets of our everyday lives?


The chimes from Ava’s home artificial intelligence system grew louder as she rolled over and covered her head with the pillow. Despite her better judgment, not to mention the constant reminders from her PAL, she’d ordered another vodka tonic at last call. She already hated this day—the anniversary of her mother’s diagnosis thirty years earlier—but the hangover throbbing in her temples was making this morning downright painful. The blinds rising and the bedroom lights growing steadily brighter didn’t help. Yeah, yeah. I’m up, she growled as she steadied herself with a foot on the floor. Slowly, she rose and walked toward the bathroom, her assistant quietly reminding her of what she couldn’t put out of her mind this morning no matter how hard she might try: precancer screening today.

So far, the doctors and their machines hadn’t seen any need for action. But given her mother’s medical history, Ava knew she carried an elevated risk of breast cancer. I’m only twenty-nine years old, she thought, I shouldn’t have to worry about this yet. Her mother was pregnant with Ava when she got her diagnosis, so it came as a complete shock. Her parents agonized over what to do—about the cancer and the baby—until they found a doctor who made all the difference. In the two decades since, progress in artificial intelligence and biomedical breakthroughs had eliminated many of the worst health-care surprises, and it seemed like medical science conquered a couple new ones in the few years since. Ava was old enough to remember when AI could identify and predict ailments half the time. Now, it hit 90 percent thresholds, and most people trusted the machine to make critical decisions (even if they still wanted a doctor in the room).

Ava snapped back into focus: Where are my goddamned keys?

A patient, disembodied voice reminded her: You left your keys and sunglasses on the kitchen counter. I’ll order a car for you.

She winced. I gotta change your speech setting, she said. You still sound too much like Connor. No time now. She headed out the door for the doctor’s office. If she could, she would skip the precautionary checkups, but then she’d lose her health insurance. So today, she just had to go through the motions and the machines.

Just don’t say that word. A couple hours later, seated in the consultation room, the pounding in Ava’s head finally faded. The anxiety didn’t. Sorry about the delay, her doctor said as she breezed in and sat down. Everything looks fine for now, but we’re starting to see some patterns in your biomarkers. The WellScreen check says about 78 percent of patients with your combination of markers and genetic disposition develop cancer within a decade. It’s about time we look at some preventative measures.

The rational part of Ava’s mind expected the news; she always knew how likely it was given her family history. Still, that word—she stared blankly for a moment while the jolt of cancer started to ease. The doctor leaned forward and put a hand on Ava’s forearm, and Ava remembered again why she kept coming back to her. There’s plenty of time between now and any potential tumors that might come of this, the doctor said. We have a lot of options.

Ava started to breathe a little easier and tried to convince herself this was a good thing: All the tests and machines caught this long before she ran short on options. The doctor nodded to the screen on the wall, and up popped Ava’s customized patient portal. They waited a few seconds for the health monitor on Ava’s wrist to connect and upload her real-time biodata. There was her ex-boyfriend again, Connor’s voice oddly reassuring this time: Do you want me to grant the patient portal access to your data?

The recommendations filled the screen, and the doctor started explaining. Do you plan to have children? Research shows that pregnancy-related hormones can be a strong defense against the development of some breast cancers. If you have a child in the next ten years, the best models suggest your chances of developing cancers drop to about 13 percent. But given the types of cancers that run in your family, we also have a tailored set of hormone therapies you could choose from. Better: because you’re starting now, it won’t be nearly as harsh as the old hormone drugs your mom dealt with. There are some side effects, but they’re pretty mild in most patients. On their own, they’ll drop your chances of developing cancer to less than 20 percent. If you do either of those—the child or the hormone therapy—and replace your biological breasts with Lactadyne prosthetics in the next eight years, your chances of breast cancer are essentially nil.

The doctor noticed Ava’s furrowed brow. Look, you don’t need to decide right now. Take a little time to think about it. You can go through all these options anytime on your portal. Meanwhile, take a couple days to look at this waiver, too. If you’re up for it, we can start collecting data about your home and habits through your health manager and its connection to your refrigerator, toilet, and ActiSensor mattress—the usual stuff like environmental quality, diet, and exercise. It weirds some people out, but the monitoring can help suggest simple lifestyle changes to improve your odds. Once we get that data, we can adjust some parameters in your daily life and think about how we might change your diet and exercise. You’re on Blue Star insurance, right? Their Survive ‘n’ Thrive incentive plan offers some great rewards for people who do environmental and health monitoring. You give up some control—but, hey, it is your health we’re talking about, after all.

The doctor chuckled. Ava shuddered. She didn’t much care for any of the options. The whole ride home she wondered how much the diagnosis would sidetrack her dreams. Will my predisposition toward cancer disqualify me from the Antarctica trip? I’m comfortable, but I don’t have piles of money—will the insurance company raise my premiums if I wait a few years before starting preventative therapies? What if my bosses find out? Will they ask me to leave or take a different role? And what about Emily? Should I tell my love, my partner that I might develop cancer unless we adjust our lifestyle? Will she still want to buy the condo on the hill?

Will Emily leave? Ava harbored no illusions about the program the doctor described. Sure, it would keep her clock ticking, but it also meant giving the machine a lot of control over her life, and she would pay a financial and personal price if she didn’t comply. As her PAL started describing the program during the ride home, it began to dawn on Ava just how comprehensive this program would be. The insurance company and the doctor would put together a total treatment plan that would consider everything from her emotional well-being to the friends she hung out with. Her girls’ nights out would never fly, at least not to the heights they reached last night. Would she have to rethink her entire social life, her friends, and her schedule to make healthier choices? She might have to change her home environment to maximize the hormonal therapy. She might have to reduce the stress of her job, maybe even change jobs altogether.

Her mind was racing now: Will I have to give up Ayurveda because it’s not scientifically proven to minimize my risk? I could move to Germany, where regulators accept Ayurveda and allow personal AIs to integrate data from Indian and Chinese medicine. Emily loves traveling, but we never thought about living overseas…

Too much, she whispered. It’s too much. She took a deep breath and massaged her temples. I can’t go home right now, and I sure as hell can’t concentrate on work. Her fingers trembled as she rifled through her purse to find her PAL. She chuckled about the device in her hand—whenever she needed a human touch, she relied on a machine to deliver it.

Ava! What’s going on?

Mom? she said, her voice cracking.

Ava’s PAL had made the call automatically, a sensor in the earpiece picking up on her anxiety through the minute electromagnetic impulses in her brain and skin. The PAL instantly correlated the best person to call in her current state—always Mom or Dad, at least when Dad wasn’t gallivanting through some far-off place—which, of course, her PAL also knew to be the case during this time of the year. Ava couldn’t even recall whether she’d acknowledged a prompt to connect the call. Sometimes the PAL would just call automatically, as she had set it to do at especially stressful times.

Normally, the chipper greeting and the background noise that came over her mom’s eight-year-old iPhone drove her nuts, like an old vinyl record. Today, it couldn’t have been more comforting. Papa sends his best, her mother said. He’s teaching today in Shanghai. He said he got you the gift of the century. I told him I don’t even want to know.

He got the beacon, too?

Of course, honey. You haven’t delisted either of us yet. And you better not, either. You need your Swarm.

Yeah, I suppose, Ava said, trailing off. Every week, Ava’s PAL asked if she wanted to switch her alerts from the Swarm of friends and family to only Emily, who still couldn’t understand why Ava wouldn’t make the change. Ava tried to explain her relationship with her mom, the peace she got from the idea of multiple loved ones responding whenever she needed a comfort call or a reassuring holo-message. But Ava had substituted Connor for the Swarm back then, and the fact that she wouldn’t make the change now really burned at Emily.

Mom’s voice snapped Ava back again: So, Zut! in Berkeley, then? At least that’s what my phone says.

Connor’s disembodied voice piped in: Fifteen minutes until we arrive at Zut!

Their favorite lunch spot. The restaurant they’d gone to for years. An easy drive from the city. The fact that Zut! just popped up as her new destination didn’t even register with Ava, though she hadn’t been there in months. Still, her PAL assigned it a unique rating, based on voice diagnostics and states of mind. Ava marveled at how often Connor’s voice suggested the perfect place, just like he used to.

When they met at lunch, Ava couldn’t stop hugging her mom. There was nothing like the real thing, communing with another body and all its warmth, tenderness, and vulnerability. It didn’t matter that her mom’s advice was pretty much exactly what Ava’s doctor and AI recommended. The emotional connection and the depth of familial love imbued it with so much more credibility. The AI knew, Ava thought, but mom knows.

I survived, her mom said, and so will you. Your chances are so much better now, and at least you can take a little time to map out a more predictable path. God, I’ll never forget how shocked I felt when the first doctor told us to terminate the pregnancy.

Tears started to well in Ava’s eyes, but her mom pushed on: Honey, there was no way I was going to let that happen. No way. When we talked to the second doctor, he realized that was nonnegotiable and looked for alternatives. It helped that he worked at a Catholic hospital, but I think he just understood the emotional side of it, the fact that fighting for something I so desperately wanted, motherhood, probably helped my chances. Her mom shook her head, sighed, and wiped away a tear. Her eyes bored into Ava’s. There’s nothing worse than someone or something telling you that you have no options—especially when they might be wrong. You need to take care of yourself, but you need to live your own life, too.

Ava looked up at the hills and smiled as she rode toward her film studio. Mom and Dad won’t be around forever, at least not physically, she mused, but something about the song selection during the drive reminded her of how intimately her PAL picked up on the little recordings, notes, conversations, and subtle guidance her parents always provided. Melody Gardot’s Who Will Comfort Me piped in, followed by Con Funk Shun’s Shake and Dance With Me. Dad’s favorites were ancient, but her PAL correlated her mood with data on her interactions with him and found the exact combination of empathy and pick-me-up she needed. Go get the day, the message on her PAL said. She didn’t even bother to check if her dad actually sent it, or if her AI just knew to post his favorite exhortation. She smiled again, soaking in the energy of the sunny day.

At the studio, the walk to her desk always prompted a sense of gratitude. She had initially accepted a Wall Street job, opting for the money and the excitement without ever consulting the Career Navigator. Had she never bothered to take her mom’s advice and consult her old AI assistant before moving to New York City, who knows how many miserable years she would’ve spent at that investment bank?

Fortunately, the Navigator homed in on her passion and predisposition for all things living and environmental, despite her best efforts to convince even herself otherwise. Career advisers, with their engrained biases and imperfect data, had told her she was a science ace, so the recommendation seemed to fit. It was definitely better than investment banking, anyway. She embarked on a mission to help mitigate climate change, enrolled in social justice programs, and spent a year as a park ranger in Tanzania. It was a fantastic time, but she never felt fully satisfied by the work. She spent a year debating herself until her AI finally projected a life picture that truly excited her. That beautiful, lifelike hologram of her work—not so different from the studio she stood in today—eventually took her to NYU’s writing and directing program. The first night out with her classmates, the night she noticed Connor sitting by himself at the end of the bar, she actually kissed her new PAL.

Her job changed dramatically in the years since. AI generated increasingly precise insights about audience consumption patterns, societal mood swings, and political trends. Now the studio’s AI capabilities distilled narratives that guided plot development and created meaning for people in their daily lives. Ava would guide those narratives and enrich them with emotional content, imaginative imagery, and storyboards that spoke to the human mind and spirit—however undefinable that still was in 2034.

But not everyone integrated well when the studio, like so many other companies, installed deeper AI systems. Ava had a number of friends who started and aborted careers in different fields—accounting, civil engineering, and pharmacy majors who suddenly discovered their education had not prepared them for the days when machines would conduct analyses, calculations, and highly routine or repetitive tasks. Ava recalled all too well the many long nights of whiskey-induced commiseration with struggling friends. Yet, it had been the same sort of AI insights that set her on the right path.

Ava started flipping through the storyboards for the studio’s next animated feature, occasionally stopping to dictate a few ideas. Each time she felt especially inspired by a change, she’d reload the entire package and start reviewing the fully revised plotlines from the beginning. Today, though, she just couldn’t connect with the stories. Leaning back in her chair, she flipped her PAL to attention mode. Peso, her financial advice AI, immediately beeped in: Hi, Ava. Looks like the markets will rebound tomorrow. We’re picking up on improving geopolitical, productivity, and climate forecasts for next quarter. I give it a 75 percent probability, and we still have some time to move. Shall I put $2,500 of your savings into the market? Your medical and communication data suggest you’ll be cutting back on consumables and travel over the next few months, so maybe put that money to good use in equities?

Fine, Ava replied in a resigned tone. It was the right advice, rational and purposeful, no matter how much it rekindled the anxieties from earlier in the day.

You sound worried, her PAL said. Do you want to speak with Zoe?

Ah, Zoe, the fin-psych. Fin-psychs hadn’t even existed until six or seven years ago. Before financial AIs hit the mainstream, no one needed people to help them process the difficult choices recommended by the machine. There were no more investment advisers, at least not as Ava remembered her parents’ meetings with them. AI could handle all the synthesis of quantifiable data. What people needed was the emotional intelligence to anchor those decisions and make them palatable. These frail, complex, and emotional animals still needed that support.

I need the support of a glass of wine, Ava muttered to herself, gathering her things and heading out of the studio. She walked up the hill toward home. Despite all the support around her, both machine and human, she felt as fragile as ever. This must’ve been what Leo felt like, she thought as she walked past his old apartment. A few years ago, Leo, her old college friend, had locked himself inside, drank a bottle of top-shelf vodka, and overdosed on a fistful of pills. Soon after he got married a decade earlier, he railed against the AI Gaydar app that could identify the sexual preference of a person in a photo with disturbingly high precision.

Following the divorce, though, his PAL’s relationship advice started to convince him that maybe he wasn’t the Latin Lothario he’d always been conditioned to believe he was. If he ever admitted his sexual ambiguity to himself as his depression set in, he never accepted it.

Neither did Connor. He left Ava the day after Leo’s wake, unable or unwilling to deal with the loss of a friend and the same sort of ambiguity her PAL expressed about her choice of partners. It had said her sexuality wasn’t as clear cut and simple as either of them thought. She told herself and Connor that she didn’t fit a typical mold, whatever that was. And as she was coming to grips with her fuller identity, they fought in ways they never had before. She stopped knowing how to act around him, whether to argue with him or suppress her feelings about their relationship. After Leo’s suicide, it didn’t matter. Connor left for Canada, hurt and heartbroken. A year later Emily entered Ava’s life.

Ava smiled at the thought of her.

I need to change my Swarm settings, she told herself as she walked into the condo she shared with Emily. And I need to change this goddamned voice.

Her PAL asked about both, but she turned it off and poured herself a glass of wine instead. She dropped onto the couch, the lights automatically dimming and the speakers quietly sounding hints of waves lapping at the beach.

Ava had already dozed off when the lock clicked open. Emily was home.

AI TODAY AND TOMORROW

For all the incredible capabilities AI will afford in the coming decades—and they will be incredible—the development of robust machine intelligence poses fundamental questions about our humanity. Machines can augment human capability with potentially stunning results, but their predictive elements might also limit what we, and those around us, believe we can accomplish. Your self-identity and approach to life could change, because your rational choices might eliminate several of the paths available to you. Can you really choose to remain blissfully ignorant anymore, intentionally choosing to stumble through a life enriched by trial and error?

These aren’t apocalyptic questions. The machine hasn’t taken over the world. Ava and her world—with all the benefits and complications AI adds to her health, love, and career decisions—still remain years in the future. But AI applications already control many facets of our lives, and each of the incremental advancements that lead us toward an existence like Ava’s might make perfect sense in the moment. They might benefit humanity by keeping our world safer (e.g., predicting crime), keeping it healthier (e.g., identifying cancer risks), or enhancing our lives (e.g., better matching workers with jobs or handling complex financial transactions). Each positive step forward might preclude a grievous error. But in so doing, it might also diminish serendipity and the chance to learn and emotionally grow from our mistakes. To the extent life is an exploration and meaning derives from experience, AI will change the very anthropological nature of individual self-discovery. At what cost? How do we govern the line between human and machine? Without a concerted societal effort now, will we even be able to govern that relationship in the future?

We stand at a critical moment in the proliferation of intelligent systems. Pervasive computing technology, increasingly sophisticated data analytics, and a proliferation of actors with conflicting interests have ushered us into a vibrant yet muddled Cambrian Period of human-digital coexistence, during which new AI applications will bloom as biological life did more than half a billion years ago. While these technologies produce immeasurable economic and social benefits, they also create equally potent barriers to transparency and balanced judgment. Sophisticated algorithms obscure the mechanics of choice, influence, and power. For evidence, we need only recall the events of the 2016 US presidential election, fraught with fake news reports and the interference of Russian hackers.

Amid the turbulence, a new wave of research and investment in artificial intelligence gathered strength, the field reawakening from a long dormancy thanks to advances in neural networks, which are modeled loosely on the human brain. These technological architectures allow systems to structure and find patterns in massive unstructured data sets; improve performance as more data becomes available; identify objects quickly and accurately; and, increasingly, accomplish all that without humans clarifying the streams of data fed into these computers.

In this world where AI-powered networks create more value and produce more of the products and services we use each day—and produce this with less and less human control over designs and decisions—our jobs and livelihoods will change significantly. For centuries, technology has destroyed inefficient forms of manual labor and replaced them with more productive work. But more so than any other time in history, economists worry about our ability to create jobs fast enough to replace the ones lost to the automation of artificial intelligence. Our own creations are running circles around us, faster than we can count the laps.

The disruptive impact of AI and automation spans all areas of life. Machines make decisions for us without our conscious and proactive involvement, or even our consent. Algorithms comb through our aggregated data and recognize our past patterns, and the patterns of allegedly similar people across the world. We receive news that shapes our opinions, outlooks, and actions based on the subconscious inclinations we expressed in past actions, or the actions of others in our bubble. While driving our cars, we share our behavioral patterns with automakers and insurance companies so we can take advantage of navigation and increasingly autonomous vehicle technologies, which in return provide us with new conveniences and safer transportation. We enjoy richer, customized entertainment and video games, the makers of which know our socioeconomic profiles, our movement patterns, and our cognitive and visual preferences. Those developers use that information to tailor prices to our personal level of perceived satisfaction, our need to pass the time, or our level of addiction. One person might buy a game at $2, but the next person, who exhibits a vastly different profile, might have to pony up $10.

None of this means the machines will enslave us and deprive us of free will. We already opt in to many deals from Internet companies, often blindly agreeing to the details buried in the fine print because of the benefits we reap in return. Yet, as we continue to opt into more services going forward, we might be doing so for entire sections of our lives, allowing AI to manage complex collections of big and small decisions that will help automate our days, make services more convenient, and tailor offerings to our desires. No longer will we revisit each decision deliberately; we’ll choose instead to trust a machine to get us right. That’s part of the appeal. And to be sure, the machine will get to know us in better and, perhaps, more honest ways than we know ourselves, at least from a strictly rational perspective.

Even when we willingly participate, however, the machine might not account for cognitive disconnects between what we purport to be and what we actually are. Reliant on real data from our real actions, the machine could constrain us to what we have been, rather than what we wish to be. Even with the best data, AI developers might fashion algorithms based on their own experiences, unwittingly creating a system that guides us toward actions we might not choose. So, does the machine then eliminate or reduce our personal choice? Does it do away with life’s serendipity? Does it plan and plot our lives so we meet people like us, and thus deprive us of encounters with people who spark the types of creative friction that make us think, reconsider our views, and evolve into different, perhaps better, human beings?

The trade-offs are endless. A machine might judge us on our expressed values—especially our commercial interests—and provide greater convenience, yet overlook other deeply held values we’ve suppressed. It might not account for newly formed beliefs or changes in what we value. It might even make decisions about our safety that compromise the well-being of others, and do so in ways we find objectionable. Perhaps more troubling, a machine might discriminate against less-healthy or less-affluent people because its algorithms focus instead on statistical averages or pattern recognition that favors the survival of the fittest. After all, we’re complex beings who regularly make value trade-offs within the context of the situation at hand, and sometimes those situations have little or no precedent for an AI to process.

Nor can we assume an AI will work with objective efficiency all the time, free of biases and assumptions. While the machine lacks complex emotions or the quirkiness of a human personality with all its ego-shaping psychology, a programmer’s personal history, predisposition, and unseen biases—or the motivations and incentives of his or her employer—might still get baked into algorithms and selections of data sets. We’ve seen examples already where even the most earnest efforts have had unintended consequences. In 2016, Uber faced criticism about longer wait times in zip codes with large minority populations, where its demand-based algorithms triggered fewer
