Intelligence By Design

The “intelligence explosion” foretold 50 years ago could be here any minute. Artificial intelligence has now survived the “AI winter” — and is back in public conversation. It’s not just a Silicon Valley buzzword or a subject for speculative fiction, but a real possibility on the tech horizon, with real money backing it.

As the machines move beyond just beating their masters at games like chess and Go and start homing in on deep learning, neural networks, and “Big Data” sorting, we’re asking the Big Question: where is this whole thing going?

In the long-distance frame, there seem to be three general ways of thinking about our AI future:

One view, advanced by tech evangelists and sympathetic investors, is that this could be a blessed utopia come to save us — rational, sensitive, error-proof. Another perspective — championed not just by alarmist Luddites but also by many iPhone-dependent citizens — is that we’re headed toward a dystopian decline: a loss of the better parts of our “irrational” humanity.

And then comes the third view: that this is mostly just entertainment either way — pure fantasy until we have a computer that not only reads a human face as well as any baby can, but also has the capacity for true creativity, empathy, and love.

In our conversation on the subject, we’re trying to do away with conventional wisdom and find real intelligence in the AI conversation.

Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence, is a Swedish-born physics professor at MIT with real enthusiasm for the next wave of AGI (“artificial general intelligence”) devices. But he’s not blind to the possible troubles on the horizon, as he writes in his book: “we have no idea what will happen if humanity succeeds in building such a thing.”

Erik Brynjolfsson—director of MIT’s Initiative on the Digital Economy—also has a serious investment in the world of AI, but he still keeps a wary eye on the digital workplace, and on the march of technology, balancing intelligent skepticism with mindful optimism.

Cathy O’Neil, a mathematician and data scientist, is the author of Weapons of Math Destruction, a cutting critique of the Big Data revolution and the various biases and limitations that the AI branding tends to conceal. She warns that it isn’t a future crisis we need to think about, but a crisis that is already here — one we can see in the algorithms controlling our Facebook feeds as well as our credit scores.

Yarden Katz, who works and writes at the intersection of AI and biology at Harvard Medical School, is less worried about sci-fi dystopia and more concerned about the ideologies behind the AI hype. The new tech gold rush may be another symptom of neoliberalism (as Yarden has discussed with us before). It may also help revive older ideas and ideologies such as behaviorism — refuted by Chomsky as an inadequate model for human life, but now at the core of how we think about machine life.

Guest List
Max Tegmark
professor of physics at MIT and author of Life 3.0: Being Human in the Age of Artificial Intelligence
Erik Brynjolfsson
professor at MIT Sloan School and author of Machine, Platform, Crowd
Yarden Katz
fellow in Systems Biology at Harvard Medical School and fellow at Berkman Klein Center for Internet & Society

Related Content

  • Bernard Biales

    With respect to the AI discussion, I think it foundered in some substantial measure on a variant of the Grantham-Lydon-Ashbrook conundrum (which is the failure of many discussions of long term trends because they do not take into account all the other things that will be changing rapidly). Here the total focus was on AI — which merely (well, “merely” is inadequate for such a huge development) is taking over human functions by huge computational power and complex algorithms, and doing complex things beyond human capability. This does not necessarily imply sentient machines, machines with a sense of self, artificial consciousness. The Turing test is one indication that this is something it is hard even to talk about. But should it come, however measured, it calls into question the validity of human existence. One problem is that the availability of high band pass to such a device suggests the development of a unitary intelligence. Boredom or psychosis may eventually lead to suicide and the end of high consciousness on the planet (this is one answer to the Fermi question), leaving aside the possibility of remaining human or cetacean, etc., populations. Before going, such a thing might pave the planet with solar cells to feed it energy. (Note — “I Have No Mouth and I Must Scream” and, for an earlier, still human, evolution, the movie “Silent Running.”)
    At a wild guess, the AI takeover will take place in this century or a bit beyond. The AC (not air conditioning — artificial consciousness) won’t take much longer. Those already born will see the end time emerge.

    The fictional dystopias may well be a function of intelligent intuition and forecasting. (Besides, everybody reads Dante’s Inferno, not his Paradiso.) This may also be a factor in American demoralization, as evidenced in the huge abuse of narcotics.
    The comment about seat belts is kind of funny, as we still kill huge numbers on the highways and talk about switching from hand-held to hands-free cell phones, which the scientists tell us are about as dangerous and may kill by the thousands.
    I think some of Musk’s dreams are ill thought out, but his comments on AI are great. Zuckerberg is a doink.

  • sidewalker

    At the end of the programme, Chris votes to stop the music (not Monk’s, I
    hope) and think about who we are without numbers. Erik then stresses
    how our values will determine our use of these technologies. Yet, life
    3.0 is global capitalist relations, where the commons has been voraciously
    consumed by the market, where there is a price (a number) for everything
    and everyone, and those with buying power set the value/values to
    extract a surplus derived mainly from unpaid and mounting damages.
    Doesn’t this algorithm first have to be fundamentally changed if AI is
    to be a social instrument for human betterment and not just another tool
    to siphon wealth and power?

  • MB

    I’ve heard it estimated that there are a hundred billion trillion Earth-like planets out there… Surely a few got off the ground, got to the point of self-directed evolution? Which is to say, there must be beings out there that make us look like bacteria, if not elementary particles. As to ‘where’ they are — maybe more behind our eyes than in front of them, hah.

  • Wow – so many possible analogs.

    Draw concentric circles on a piece of paper. Put the Spanish monarchy in 1492 in the center and label the other circles whatever comes to mind: Christianity, the scientific method, the British empire, the American empire, etc. It all makes the exploration of the unknown look really great.
    But the diagram is flat. What you don’t see is the millions of wasted human souls; ecological damage; lost cultures.

    Will AI carry the dead?
    Will AI remember how things were?
    Will it worship lost cultures?

  • Dr Sook

    A.I. /NOT I

    floating in the lake, early morning when the day is quiet
    treading water, I look at the shore
    where a curtain of trees rustles diversely in waves with the occasional breeze
    Each species dances with different green leaves aflutter
    Some birds ricochet their songs across the water.

    I find myself musing…this cannot be a dream
    I could never conjure nor imagine this moment.
    Then I imagine the claims of A.I. that machine learning
    can make a mind

    And, paddling with my reverie, it seems clear
    that reflecting on not dreaming and amazement at the trees
    while lolling in an empty lake one summer morning, filled with sweet solitude
    could not be known, felt or pondered by even the most vast
    computational data base.

    • jonitin