
Your Em Goes to Bermuda

One of the fundamental goals of science is to predict future outcomes. Social science has lagged behind physics and chemistry in this regard, largely because human behavior and human interactions unfold in complex ways, making prediction difficult. Add to that the fact that we experience random neural firings and misfirings (what was the name of that person I just met?) and you’ve got a prediction problem.


A sculpture by Duane Hanson installed in front of photographs by Massimo Vitali at the 2005 Valencia Biennial. PHOTO: AFP/GETTY IMAGES

Still, social scientists have made great progress in recent years. We know much about the science behind motivation and persuasion; we have learned (largely from the work of Daniel Kahneman, Amos Tversky and Dan Ariely) that consumers are not rational agents, but that their behavior follows patterns that yield predictable outcomes. Two new books by social scientists make starkly different predictions about the changes we can expect to see in the economics of the workplace.

“Is a computer coming after your job?” In “Only Humans Need Apply,” Thomas Davenport and Julia Kirby say yes, if “it involves straightforward content analysis” (such as the work certain lawyers and researchers do) or if your tasks “can be simulated or performed virtually. . . . Just ask the few flying instructors who are left.”


If you came away from that last sentence thinking that automation is replacing flight instructors, so did I. But flight schools are reporting shortages of instructors. Here we are at the nexus of two of the authors’ claims—that fact-checking can be automated and so can pilot training. It took me a while to fact-check the latter claim—fact-checking is certainly not automated, as any reporter or college student knows. The U.S. Government Accountability Office reports that the number of flight instructors has increased 13% over the past 15 years. Simulators help students achieve mastery but do not take the place of in-the-cockpit training.

The main thesis of “Only Humans Need Apply” is that the future will find us pushing computers beyond mere rote assistance toward the job of augmenting our brains—not emulating them but assisting them, much as computers already do. Computers shouldn’t become substitutes for human labor—substitution is when we lose jobs. “Since humans don’t want to be made redundant, the option they are left with is complementarity, an arrangement in which humans continue to do what they do best, while computers contribute what they do best.” The idea is to make humans more capable of what they are good at and machines ever better at what they do—what the authors dub “augmentation.”

Mr. Davenport and Ms. Kirby—a professor of management and information technology at Babson College and a contributing editor at Harvard Business Review, respectively—are against the sort of “aggressive automation” in which computers program themselves or work without human intervention, because in such cases we lose the ability to improve them. This claim might seem uncontroversial, except that advances in self-learning machines will change this calculus sooner rather than later. My Porsche changes how it shifts based on what it learns about my driving habits, and it can tell the difference between me and my wife behind the wheel and adjust shift patterns accordingly. (She’s the racier driver, if you were wondering.)

And that Porsche technology is 10 years old. Even Tesla’s much newer tech seems old hat by now. True, it is humans who prepare updates and send them in the dead of night, but it won’t be long before such products no longer need us to decide what will improve their performance. And it’s not clear that the sort of augmentation that Mr. Davenport and Ms. Kirby promote will take hold—grammar checkers are already built into word-processing software, but many of us turn them off.

The world the authors describe may be unsettling, but it is a world that we would all recognize and will likely live to see. A very different—indeed startling—vision of the future comes from George Mason University professor and economist Robin Hanson in “The Age of Em.” He believes that an age of brain emulations—“ems”—will soon make mere augmentation obsolete.

To make an em, he writes, “scan a human brain at a fine enough spatial and chemical resolution . . . combine that scan with good enough models of how individual brain cells achieve their signal processing functions, to create a cell-by-cell . . . model of the full brain in artificial hardware, a model whose . . . behavior is usefully close to that of the original brain.”

These emulations, Mr. Hanson writes, will at first be unleashed to perform tasks too tedious, odious or dangerous for us to do. But like any of us (since they are us), the emulations will go on to do some things that interest them and that we just can’t squeeze into our lives, such as taking flying lessons, lounging on a beach in Bermuda or having sex with super-attractive partners. I wouldn’t mind having a cloned brain balance my checkbook for me every month. But I imagine that in this Hansonian future balancing checkbooks will seem as foreign as shoeing a horse does to us now—a quaint, vague memory of something other people used to do.

Em software will largely “live” on hard drives in server farms, in cities designed to house them and to dissipate the enormous amount of heat their computations will produce. Most will look like the server racks we see in photos of Google outposts in the desert. Some will look like robots or even like humans. Their consciousness, if you can call it that, will stay inside their own programs. They will be able to share their experiences with other ems. It’s not clear whether they will be able to upload those memories and experiences back to the original host. (Best of course would be if we could pick and choose which experiences we want to claim as our own, and which we’d just as soon not know about.)

In most scenarios, brain emulations will be indistinguishable from the brains they came from. That is, if any of us are lucky or unlucky enough to become ems. Mr. Hanson believes that fewer than 1,500 highly productive individuals with specific skill sets will be necessary to create all the ems we need. The future as Mr. Hanson sees it is one in which the wealthy or tech-savvy create brain clones—perhaps thousands of them—replete with all the originator’s memories, motivations, desires and knowledge. The brain clones will communicate with one another more than with us. They will have their own economy, their own customs, culture and habits.

What is remarkable about Mr. Hanson’s book is not just the detail with which he imagines this future but the way he situates it within a perceptive analysis of our human past and present. He reminds us of two enormous shifts driven by economics that radically changed human culture. The first shift was from forager to farmer societies. Foragers were cooperative, helping and supporting one another. Farming led to thoughts about ownership, property rights and hoarding food that could later be sold. A similar shift occurred with industrialization around 150 years ago—large portions of the population moved to manufacturing, commerce and industry. This changed where people lived (urbanization) and required bigger government to regulate, tax and manage all that industry. “Just as foragers and subsistence farmers are marginalized by our industrial world, humans are not the main inhabitants of the em era,” Mr. Hanson writes. “Humans instead live far from the em cities, mostly enjoying a comfortable retirement on their em-economy investments.”

Most of us don’t think about disruptive changes to the way we live. But Mr. Hanson does. His is a dyspeptic-topia. It looks grim. The surprise is that Mr. Hanson sees this transformation happening in the next 100 years. Maybe faster. He believes the world economy, which now doubles every 15 years or so, will double every month once we enter the age of ems.
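
To put those doubling times in perspective (the arithmetic here is mine, not Mr. Hanson’s): an economy that doubles every 15 years grows about 4.7% a year, since 2^(1/15) ≈ 1.047, while one that doubles every month grows by a factor of 2^12 = 4,096 each year. The em economy would not merely grow faster; it would grow on a different scale entirely.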

Mr. Hanson’s book is comprehensive and not put-downable. The author has thought of everything. He’s anticipated every one of my objections, including the manifestly unscientific one of how creepy this all sounds. He admirably explains the assumptions he’s making and their limitations. “The chance that the exact particular scenario I describe in this book will actually happen just as I describe is much less than one in a thousand,” he writes. But predictions that are close enough “can still be a relevant guide to action and inference.” If only every writer admitted this.

Mr. Hanson deftly sidesteps the most contentious debate among philosophers such as Daniel Dennett and John Searle: whether an em really will understand anything or whether it could, for example, taste cherry pie. It will act as if it does, as Mr. Searle would put it. That’s good enough for Mr. Hanson, and it’s good enough for me.

The only weak point I find in the argument is this: if we were as close to emulating human brains as Mr. Hanson’s predictions require, you’d think we’d already have emulated ant brains, or Venus flytraps, or even tree bark. All of these are adaptive to their environmental conditions, assimilating inputs and modifying outputs. Even if we had managed tree bark, it would seem an unlikely leap to expect human-brain emulations within a century. But perhaps I’m only quibbling about the time scale, not the substance of the prediction. Fellow future-looking economist Robert Gordon might also have a quibble—he’s on record saying that humankind’s best years of discovery are behind us and that there are no major upheavals on the horizon.

Why should any of this matter? The history of technology is the history of simultaneously improving safety for humans and affording them more time to pursue interesting or productive tasks. The wheel, the plow, the assembly line, the computer—all freed us from danger or drudgery and allowed us to spend more time thinking deep thoughts that might lead to the next innovation. Cancer researchers work really hard. What if each of them had an extra 10 hours a week to devote to their work because robots did things like cleaning their homes and labs, cooking, doing the laundry, paying their bills, scheduling their appointments? We’re partly there with some of these tasks—before electricity, doing laundry took two days a week on average: carrying pails of water, scrubbing one item at a time against a washboard, hanging clothes out to dry, carrying them back in. Robots (which are just computers with sensors and mobility) already perform a number of dangerous tasks for us. Remember the Chilean mining disaster of 2010? Robots are now working deep in the mines there. Robots haven’t replaced miners but have afforded them greater levels of safety than ever before.

For my own part, I hope that the ems come soon. Imagine being able to experience an emulation of Tom Brady’s arm as he throws a game-winning pass. How does it feel to think about equations the way Stephen Hawking does or to write like Toni Morrison? The possibilities for students and educators are endless.

Even if you aren’t interested in the future, “The Age of Em” provides a wonderful overview of the current social psychology of productivity—the correlations between productivity and personality types, sleep-wake schedules, health and the age of peak efficiency. For readers of this newspaper, a particularly interesting section discusses how free-market forces will change economic behaviors, negotiations, price-setting and fee structures. Mr. Hanson is an amiable narrator and guide to all these topics and more. We could use a few more of him.

DANIEL J. LEVITIN