
"Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview

  • 0:02 - 0:04
    Whether you think
    artificial intelligence
  • 0:04 - 0:07
    will save the world or end it, you
  • 0:07 - 0:10
    have Geoffrey Hinton to thank.
  • 0:10 - 0:13
    Hinton has been called
    "the godfather of AI,"
  • 0:13 - 0:18
    a British computer scientist
    whose controversial ideas
  • 0:18 - 0:22
    helped make advanced
    artificial intelligence possible,
  • 0:22 - 0:25
    and so change the world.
  • 0:25 - 0:28
    Hinton believes that AI
    will do enormous good.
  • 0:28 - 0:30
    But tonight he has a warning.
  • 0:30 - 0:34
    He says that A.I. systems
    may be more intelligent
  • 0:34 - 0:39
    than we know, and there's
    a chance the machines could take over,
  • 0:39 - 0:43
    which made us ask the question.
  • 0:43 - 0:48
    [ticking] (Unknown Speaker) The story
    will continue in a moment.
  • 0:48 - 0:51
    (Host) Does humanity
    know what it's doing?
  • 0:51 - 0:55
    -No. Um.
  • 0:55 - 1:00
    I think we're moving into a period when,
  • 1:00 - 1:02
    for the first time ever,
    we may have things
  • 1:02 - 1:04
    more intelligent than us.
  • 1:04 - 1:07
    -You believe they can understand?
    -Yes.
  • 1:07 - 1:10
    -You believe they are intelligent?
    -Yes.
  • 1:10 - 1:15
    -You believe these systems
    have experiences of their own,
  • 1:15 - 1:18
    and can make decisions
    based on those experiences.
  • 1:18 - 1:20
    -In the same sense as people do, yes.
  • 1:20 - 1:22
    -Are they conscious?
  • 1:22 - 1:25
    -I think they probably don't
    have much self-awareness at present.
  • 1:25 - 1:27
    So in that sense, I don't
    think they're conscious.
  • 1:27 - 1:31
    Will they have self-awareness,
    consciousness?
  • 1:31 - 1:32
    -Oh yes, I think they will in time.
  • 1:32 - 1:36
    -And so human beings
    will be the second-most
  • 1:36 - 1:40
    intelligent beings on the planet.
    -Yeah.
  • 1:40 - 1:44
    -Geoffrey Hinton told us,
    the artificial intelligence he
  • 1:44 - 1:49
    set in motion was an accident,
    born of a failure.
  • 1:49 - 1:53
    In the 1970s, at the University
    of Edinburgh, he dreamed
  • 1:53 - 1:58
    of simulating a neural network
    on a computer, simply as a tool
  • 1:58 - 2:03
    for what he was really studying:
    the human brain.
  • 2:03 - 2:05
    But back then, almost no one
  • 2:05 - 2:07
    thought software could mimic the brain.
  • 2:07 - 2:10
    His PhD advisor told him to drop it,
  • 2:10 - 2:13
    before it ruined his career.
  • 2:13 - 2:16
    Hinton says he failed
    to figure out the human mind,
  • 2:16 - 2:21
    but the long pursuit
    led to an artificial version.
  • 2:21 - 2:23
    -It took much, much longer
    than I expected.
  • 2:23 - 2:26
    It took 50 years before it worked well,
  • 2:26 - 2:27
    but in the end it did work well.
  • 2:27 - 2:32
    -At what point did you realize that you
  • 2:32 - 2:35
    were right about neural networks,
  • 2:35 - 2:37
    and most everyone else was wrong?
  • 2:37 - 2:40
    -I always thought I was right.
  • 2:40 - 2:44
    -In 2019, Hinton
    and collaborators Yann LeCun
  • 2:44 - 2:49
    on the left, and Yoshua Bengio,
    won the Turing Award.
  • 2:49 - 2:52
    The Nobel Prize of computing.
  • 2:52 - 2:56
    To understand how their work
    on artificial neural networks
  • 2:56 - 2:59
    helped machines learn to learn,
  • 2:59 - 3:02
    let us take you to a game.
  • 3:02 - 3:06
    Look at that. Oh my goodness.
  • 3:06 - 3:09
    This is Google's AI lab in London,
  • 3:09 - 3:12
    which we first showed you
    this past April.
  • 3:12 - 3:16
    Geoffrey Hinton wasn't involved
    in this soccer project,
  • 3:16 - 3:20
    but these robots are a great example
    of machine learning.
  • 3:20 - 3:23
    The thing to understand
    is that the robots
  • 3:23 - 3:26
    were not programmed to play soccer.
  • 3:26 - 3:29
    They were told to score.
  • 3:29 - 3:31
    They had to learn how on their own.
  • 3:31 - 3:34
    (Unknown speaker) Wup, goal!
  • 3:34 - 3:37
    -In general, here's how A.I. does it.
  • 3:37 - 3:41
    Hinton and his collaborators
    created software in layers,
  • 3:41 - 3:44
    with each layer handling
    part of the problem.
  • 3:44 - 3:46
    That's the so-called neural network.
  • 3:46 - 3:48
    But, this is the key, when,
  • 3:48 - 3:51
    for example, the robot scores,
  • 3:51 - 3:54
    a message is sent back down
    through all of the layers
  • 3:54 - 3:58
    that says that pathway was right.
  • 3:58 - 4:01
    Likewise, when an answer
    is wrong, that message
  • 4:01 - 4:04
    goes down through the network.
  • 4:04 - 4:06
    So correct connections get stronger,
  • 4:06 - 4:08
    wrong connections get weaker,
  • 4:08 - 4:13
    and by trial and error,
    the machine teaches itself.
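The trial-and-error scheme described above can be sketched in a few lines of code. This is an editorial illustration, not Hinton's actual algorithm: a single stochastic unit whose connection strengths are nudged after each attempt by a reward signal, so that connections used on correct attempts get stronger and connections used on wrong attempts get weaker. All names and numbers here are invented for the sketch.

```python
import math
import random

# Editorial sketch of learning by trial and error (not Hinton's exact
# algorithm): after each attempt, a reward signal is sent back, and the
# connections that were used are strengthened if the outcome was right
# and weakened if it was wrong.

random.seed(0)
weights = [0.0, 0.0]                     # connection strengths

# Two input patterns and the desired outcome for each.
examples = [([1, 0], 1), ([0, 1], 0)]

def fire_probability(inputs):
    """Probability the unit 'fires' given the current connection strengths."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

for _ in range(200):                     # trial and error
    inputs, target = random.choice(examples)
    p = fire_probability(inputs)
    out = 1 if random.random() < p else 0
    reward = 1 if out == target else -1  # "that pathway was right/wrong"
    for i, x in enumerate(inputs):       # only connections that were used
        weights[i] += 0.5 * reward * (out - p) * x

# After many trials the unit reliably produces the desired outcomes.
predictions = [1 if fire_probability(x) > 0.5 else 0 for x, _ in examples]
print(predictions)                       # -> [1, 0]
```

Nothing here was programmed to solve the task directly; only the reward rule was specified, and the correct connection strengths emerged from repeated attempts, which is the point the narration makes.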
  • 4:13 - 4:16
    You think these AI systems are better
  • 4:16 - 4:18
    at learning than the human mind?
  • 4:18 - 4:21
    -I think they may be, yes.
  • 4:21 - 4:24
    And at present, they're
    quite a lot smaller.
  • 4:24 - 4:26
    So, even the biggest chatbots only
  • 4:26 - 4:29
    have about a trillion connections
    in them.
  • 4:29 - 4:31
    The human brain has about 100 trillion.
  • 4:31 - 4:35
    And yet, in the trillion connections
    in a chatbot,
  • 4:35 - 4:39
    it knows far more than you do
    in your 100 trillion connections,
  • 4:39 - 4:42
    which suggests it's
    got a much better way
  • 4:42 - 4:44
    of getting knowledge
    into those connections.
  • 4:44 - 4:47
    -A much better way of getting knowledge
  • 4:47 - 4:49
    that isn't fully understood.
  • 4:49 - 4:52
    -We have a very good idea of,
    sort of, roughly what it's doing,
  • 4:52 - 4:54
    but as soon as it
    gets really complicated,
  • 4:54 - 4:56
    we don't actually know what's going on,
  • 4:56 - 4:59
    any more than we know
    what's going on in your brain.
  • 4:59 - 5:02
    -What do you mean, we don't
    know exactly how it works?
  • 5:02 - 5:04
    It was designed by people.
  • 5:04 - 5:06
    -No it wasn't.
  • 5:06 - 5:09
    What we did was, we
    designed the learning algorithm.
  • 5:09 - 5:12
    That's a bit like
    designing the principle of evolution.
  • 5:12 - 5:16
    But when this learning algorithm
    then interacts with data,
  • 5:16 - 5:18
    it produces complicated neural networks
  • 5:18 - 5:21
    that are good at doing things,
    but we don't really
  • 5:21 - 5:23
    understand how they do those things.
  • 5:23 - 5:29
    -What are the implications
    of these systems
  • 5:29 - 5:30
    autonomously writing
    their own computer code,
  • 5:30 - 5:33
    and executing their own computer code?
  • 5:33 - 5:35
    -That's a serious worry, right?
  • 5:35 - 5:39
    So one of the ways in which
    these systems might escape control
  • 5:39 - 5:44
    is by writing their own computer code
    to modify themselves,
  • 5:44 - 5:47
    and that's something we
    need to seriously worry about.
  • 5:47 - 5:50
    -What do you say to someone who
    might argue,
  • 5:50 - 5:54
    if the systems become malevolent,
    just turn them off.
  • 5:54 - 5:57
    -They will be able to
    manipulate people, right?
  • 5:57 - 6:00
    And these will be very good
    at convincing people,
  • 6:00 - 6:02
    because they'll have learned
    from all the novels
  • 6:02 - 6:07
    that were ever written,
    all the books by Machiavelli,
  • 6:07 - 6:09
    all the political connivances.
  • 6:09 - 6:12
    They'll know that stuff,
    they'll know how to do it.
  • 6:12 - 6:18
    -Know-how, of the human kind,
    runs in Geoffrey Hinton's family.
  • 6:18 - 6:21
    His ancestors
    include mathematician George Boole,
  • 6:21 - 6:25
    who invented the basis of computing.
  • 6:25 - 6:28
    And George Everest, who surveyed India
  • 6:28 - 6:32
    and got that mountain named after him.
  • 6:32 - 6:36
    But as a boy, Hinton himself could never
  • 6:36 - 6:41
    climb the peak of expectations
    raised by a domineering father.
  • 6:41 - 6:43
    -Every morning when I went to school,
  • 6:43 - 6:46
    he'd actually say to me,
    as I walked down the driveway,
  • 6:46 - 6:48
    "Get in there pitching,
    and maybe when you're
  • 6:48 - 6:50
    twice as old as me,
    you'll be half as good."
  • 6:50 - 6:54
    -Dad was an authority on beetles.
  • 6:54 - 6:56
    -He knew a lot more about beetles
    than he knew about people.
  • 6:56 - 6:58
    -Did you feel that as a child?
  • 6:58 - 7:01
    -A bit. Yes.
  • 7:01 - 7:05
    When he died, we
    went to his study at the university,
  • 7:05 - 7:08
    and the walls
    were lined with boxes of papers
  • 7:08 - 7:10
    on different kinds of beetle.
  • 7:10 - 7:13
    And just near the door there
    was a slightly smaller box
  • 7:13 - 7:16
    that simply said, "Not insects."
  • 7:16 - 7:19
    And that's where he
    had all the things about the family.
  • 7:19 - 7:23
    -Today, at 75, Hinton recently retired
  • 7:23 - 7:27
    after what he calls,
    ten happy years at Google.
  • 7:27 - 7:31
    Now he's professor emeritus
    at the University of Toronto,
  • 7:31 - 7:33
    and he happened to mention, he
  • 7:33 - 7:37
    has more academic citations
    than his father.
  • 7:37 - 7:41
    Some of his research
    led to chatbots like Google's Bard,
  • 7:41 - 7:44
    which we met last spring.
  • 7:44 - 7:45
    Confounding. Absolutely confounding.
  • 7:45 - 7:50
    We asked Bard
    to write a story from six words.
  • 7:50 - 7:54
    For sale. Baby shoes. Never worn.
  • 7:54 - 7:56
    Holy cow!
  • 7:56 - 8:00
    The shoes were a gift from my wife,
    but we never had a baby.
  • 8:00 - 8:04
    Bard created a deeply human tale
    of a man
  • 8:04 - 8:07
    whose wife could not conceive,
    and a stranger
  • 8:07 - 8:12
    who accepted the shoes to heal the pain
    after her miscarriage.
  • 8:12 - 8:15
    I am rarely speechless.
  • 8:15 - 8:18
    I don't know what to make of this.
  • 8:18 - 8:21
    Chatbots are said to be language models
  • 8:21 - 8:25
    that just predict the next
    most likely word based on probability.
  • 8:25 - 8:28
    -You'll hear people saying things like,
  • 8:28 - 8:29
    they're just doing auto-complete,
    they're just trying to
  • 8:29 - 8:34
    predict the next word,
    and they're just using statistics.
  • 8:34 - 8:37
    Well, it's true, they're
    just trying to predict the next word.
  • 8:37 - 8:40
    But if you think about it,
    to predict the next word,
  • 8:40 - 8:45
    you have to understand the sentences.
  • 8:45 - 8:46
    So, the idea they're
    predicting the next word,
  • 8:46 - 8:48
    so they're not intelligent, is crazy.
  • 8:48 - 8:50
    You have to be really intelligent,
  • 8:50 - 8:52
    to predict the next word
    really accurately.
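The "just predicting the next word using statistics" idea can be made concrete with a deliberately tiny sketch: a bigram model that counts which word follows which in some training text and predicts the most frequent follower. Real chatbots use neural networks trained on vast corpora rather than raw counts, but the prediction task itself is the one described here; the toy corpus is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count word -> next-word pairs in a corpus,
# then predict the statistically most likely follower.

corpus = ("the robot scored a goal and the robot learned "
          "to score a goal on its own").split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1            # tally each observed pair

def predict_next(word):
    """Return the most frequently observed next word."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))               # -> 'robot'
print(predict_next("a"))                 # -> 'goal'
```

A counting model like this plateaus quickly: to predict the next word well across whole documents, the predictor has to capture meaning and context, which is Hinton's argument that accurate next-word prediction demands understanding.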
  • 8:52 - 8:56
    -To prove it, Hinton showed us a test he
  • 8:56 - 9:00
    devised for ChatGPT-4, the chatbot
  • 9:00 - 9:02
    from a company called OpenAI.
  • 9:02 - 9:07
    It was sort of reassuring
    to see a Turing Award winner
  • 9:07 - 9:09
    mistype and blame the computer.
  • 9:09 - 9:12
    -Oh, damn this thing,
    we're going to go back and start again.
  • 9:12 - 9:13
    -That's okay.
  • 9:13 - 9:17
    Hinton's test
    was a riddle about house painting.
  • 9:17 - 9:21
    An answer would demand
    reasoning and planning.
  • 9:21 - 9:25
    This is what he typed into ChatGPT-4.
Video Language: English
Duration: 13:12