Apocalypse Now?, Part 1: The Rise of the Machines

This August, the summer-cinema experience of cataclysm and crash has escaped the theaters and invaded our everyday lives. The panic is real: about politics and economics, terrorism and temperature.

So we’re taking a cue from Hollywood for a summer blockbuster of our own. What if we looked beyond those superhero-movie scenarios—New York decimated by robots, clones, aliens, or terrorists—into the world-changing, and life-threatening, real developments of 2016? In 200 years, will humans (if they still exist!) speak with regret about Trump, the rising tide, or about trends and inventions we’ve barely even heard of yet?

With scientists, writers, humanists, and technologists, we've got our eyes peeled for the big risks, and we're asking the life-or-death question for our entire civilization: Apocalypse Now?

Our series begins on St Ebbe's Street in Oxford, England, in the curious offices of the Future of Humanity Institute. Inside, founder Nick Bostrom, researcher Anders Sandberg, and a number of other highly intelligent young philosophers, engineers, and scientists have set about imagining a way to keep what Bostrom calls "the human story" going safely along.

From Bostrom’s perspective, wicked problems like climate change or income inequality seem like a planetary heart condition, or back pain: serious, but not fatal. He and the staff of the F.H.I. want us to develop a vigilance against existential threats—the truly disastrous, world-ending outcomes that might arise, probably from our own fumbling.

Bostrom has been able to persuade very smart, tech-savvy people like Bill Gates, Elon Musk, and Stephen Hawking that one such risk might come from the world of machine intelligence, advancing every day in labs around the world.

Before you protest that Siri can't even understand what you're saying yet, you have to remember that the apocalyptically minded, like Astronomer Royal Martin Rees, think on the longest of timelines.

Here’s how they see the story so far: Earth has been turning for around 4.5 billion years. Homo sapiens has only witnessed a couple of hundred thousand of those. And only since 1945 have we human beings had the ability to wipe ourselves out.

On the astronomical timeline, 70 years of nuclear peace seems a lot less impressive. And the fact that advanced computers—equipped with new methods for autonomous learning—are mastering the devilishly complicated game of Go and analyzing radiology readouts well ahead of schedule is cause for concern as well as celebration.

And our apocalypse watchers want us to be perfectly clear: they’re not talking about Terminator. Bostrom more often describes AI “superintelligence” as a sort of species unto itself, one that won’t necessarily recognize the importance we humans have typically ascribed to our own survival:

The principal concern would be that the machines would be indifferent to human values, would run roughshod over human values… Much as when we want to build a parking lot outside a supermarket and there happens to be an ant colony living there, but we just pave it over. And it's not because we hate the ants—it's just because they don't factor into our utility function. So it's similar. If you have an AI whose utility function just doesn't value human goals, you might have violence as a kind of side effect.

The Columbia roboticist Hod Lipson tells us how his "creative machines" learn: not by being given new rules, but by being set free to observe new behaviors and draw their own conclusions. It's a bit like raising a child.

It’s easy to think of these machines as stuck in a permanent infancy when you watch the strangely poignant robot videos posted by our local robot lab, Boston Dynamics. They can’t open doors; they stumble through the woods. But the point is that we have plunged into the deep water of man-machine interdependency, almost without noticing it, and the current is already carrying us away in unknown directions.

With a panel of our favorite tech-concerned writers—Nicholson Baker, Maria Bustillos, and the critic Mark O’Connell—we’ll discuss the prospect of our first apocalyptic scenario: the rise of the machines.

