Apocalypse Now?, Part 1: The Rise of the Machines


This August, that summer-cinema experience of cataclysm and crash has escaped the theaters and invaded our everyday lives. The panic is real: about politics and economics, terrorism and temperature.

So we’re taking a cue from Hollywood for a summer blockbuster of our own. What if we looked beyond those superhero-movie scenarios—New York decimated by robots, clones, aliens, or terrorists—into the world-changing, and life-threatening, real developments of 2016? In 200 years, will humans (if they still exist!) speak with regret about Trump, the rising tide, or trends and inventions we’ve barely even heard of yet?

With scientists, writers, humanists, and technologists, we’re keeping our eyes on the big risks and asking the life-or-death question for our entire civilization: Apocalypse Now?


Our series begins on St. Ebbes Street in Oxford, England, in the curious office of the Future of Humanity Institute. Inside, founder Nick Bostrom, researcher Anders Sandberg, and a number of other highly intelligent young philosophers, engineers, and scientists have set about imagining a way to keep what Bostrom calls “the human story” going safely along.

From Bostrom’s perspective, wicked problems like climate change or income inequality seem like a planetary heart condition, or back pain: serious, but not fatal. He and the staff of the F.H.I. want us to develop a vigilance against existential threats—the truly disastrous, world-ending outcomes that might arise, probably from our own fumbling.

Bostrom has been able to persuade very smart, tech-savvy people like Bill Gates, Elon Musk, and Stephen Hawking that one such risk might come from the world of machine intelligence, advancing every day in labs around the world.

Before you protest that Siri can’t even understand what you’re saying yet, remember that the apocalyptically minded, like Astronomer Royal Martin Rees, think on the longest of timelines.

Here’s how they see the story so far: Earth has been turning for around 4.5 billion years. Homo sapiens has only witnessed a couple of hundred thousand of those. And only since 1945 have we human beings had the ability to wipe ourselves out.

On the astronomical timeline, 70 years of nuclear peace seems a lot less impressive. And the fact that advanced computers—equipped with new methods for autonomous learning—are mastering the devilishly complicated game of Go and analyzing radiology readouts well ahead of schedule is cause for concern as well as celebration.


And our apocalypse watchers want to be perfectly clear: they’re not talking about Terminator. Bostrom more often describes AI “superintelligence” as a sort of species unto itself, one that won’t necessarily recognize the importance we humans have typically ascribed to our own survival:

The principal concern would be that the machines would be indifferent to human values, would run roughshod over human values… Much as when we want to build a parking lot outside a supermarket and there happens to be an ant colony living there, but we just pave it over. And it’s not because we hate the ants—it’s just because they don’t factor into our utility function. So it’s similar. If you have an AI whose utility function just doesn’t value human goals, you might have violence as a kind of side effect.

The Columbia roboticist Hod Lipson tells us how his “creative machines” learn. It isn’t by being given new rules, but by being set free to observe new behaviors and draw their own conclusions. It’s a bit like raising a child.

It’s easy to think of these machines as stuck in a permanent infancy when you watch the strangely poignant robot videos posted by our local robot lab, Boston Dynamics. They can’t open doors; they stumble through the woods. But the point is that we have plunged into the deep water of man-machine interdependency, almost without noticing it, and the current is already carrying us away in unknown directions.

With a panel of our favorite tech-concerned writers—Nicholson Baker, Maria Bustillos, and the critic Mark O’Connell—we’ll discuss the prospect of our first apocalyptic scenario: the rise of the machines.

Comments

  • dirk in omaha

    in addition to the nukes one shouldn’t forget the many machines pouring our pollution into the environs, to deny that we are in end days for much of the biosphere (that one day will continue to be more or less like the past) is to deny some pretty basic aspects of reality.


    • mulp

      Newshour reported on the Florida waterway polluted with nitrogen, phosphorus, drug resistant bacteria coming from 600,000 septic tanks in the watershed. I think it’s people filling those septic tanks, not robots or machines.

      The problem is people do not want to pay workers to work, in this case building a community sewer and water treatment plant, which costs about ten thousand in labor cost per house plus probably $200 to $500 in labor costs per year. The argument against this is that with so many people unemployed or working at low wages, paying middle class wages to tens of thousands of Florida workers would kill jobs and make Florida workers worse off.

      Which is the magical thinking taught by conservatives since Reagan which makes people fear robots replacing them so the capitalists get rich. Notice how the capitalists are blaming Obama and government for the unemployed and working poor not buying more stuff like upper middle class workers because taxes are too high, the tax being the EITC that is the government giving “taxes” to poor people to pay to capitalists who sell to the workers they pay low wages to.

      Pollution exists because people do not want to pay workers to not pollute – pollution is the sign jobs have been killed, and that you are not paying enough to live.

  • Elaine Scarry: act of genius

    “When you see it from the making end it sabotages our ability to see what these things are… the makers call out for respect because there were acts of genius.”

    I wish I could speak that eloquently. I would just say something like: Edward Teller was a moron.

  • A in Sharon

    The end will come. Only the timing and manner are unknown. I do not fear the machine as much as the human hand that wields them. Every ego that was ever born has died. The immediate end, the one occurring before solar energy burns the planet, may come from ego combined with novel power. Who knows, maybe that is G-d. Maybe we are here as a cute reminder, a souvenir for an infinite ego trying to recall mortality.

  • mulp

    Any time anyone says anything about robots replacing humans in production, robots replacing workers in factories, robots replacing office workers, robots replacing teachers and child care workers, simply ask, “what will robots buy from robots?”

    Workers are people who do things for other people, providing goods and services in exchange for good and services. Money is just a proxy for work used to pay for goods and services to pay for the work.

    Replace all workers with robots, and you have replaced all people buying stuff with work, aka money.

    Replace all workers with robots and then individuals will only produce what they want. People will plant gardens, harvest food to eat, cook the food, clean up after eating, activity not counted as work by economists.

    People will not buy what robots produce when robots replace all workers. Thus nothing robots do will have any value.

    Scientists call this zero sum, conservation – you never get something for nothing. Equal and opposite reaction.

    • fun bobby

      when robots can fully replace human labor then the elites can just let the 99% die off from disease war or famine.

  • fun bobby

    wow, that “petman” is terrifying, when robots can replace streetwalkers and soldiers and maids then the elites will no longer require most humans.

  • Potter

    I am trying to get my head around this one. I see AI and robotics creeping in with mixed results including leaving people out of jobs.. including with having to deal with an artificial voice telling me at AT&T that I can be sure that “he” understands me and complete sentences- which “he” did not. We have some way to go.

    I cannot see myself or any human abdicating willfully to a robot. But the point was it would not be willful. But then I thought, what if I were incapacitated ( which surely happens with age)? I would need help, and intelligent help at that. A surgeon could use another pair of hands, for instance as well. And so on….

    “Frankenstein” was not mentioned, i.e. such fears that we have long had that pop up in fiction. Nor was the movie “On the Beach” mentioned. That really put the scare in me. I am remembering Kubrick’s “2001, a Space Odyssey” and now the trailer to the upcoming “A Star Wars Story”. But then you only had an hour. We humans seem to always need something to either fear or fight, to tame or undo.

    Elaine Scarry says that getting rid of nuclear weapons is easy. I wish she had elaborated as to how she thinks we could given the need for a cooperation and trust which seems unattainable given human nature and the nature of governments and the reasons for failure up until this moment. One reason is that nuclear weapons are the ultimate deterrent. We have so much nuclear, beyond what is actually “needed” to make this point even symbolically(?). Ms. Scarry, it IS scary, and seems hopeless.