
"Godfather of AI" Geoffrey Hinton (2) 9m 24s 27ms - end

  • 0:00 - 0:05
    The rooms in my house
    are painted white or blue or yellow,
  • 0:05 - 0:08
    and yellow paint fades
    to white within a year.
  • 0:08 - 0:09
    In two years, I'd like all the rooms
  • 0:09 - 0:11
    to be white. What should I do?
  • 0:11 - 0:17
    -The answer began in one second.
  • 0:17 - 0:21
    GPT-4 advised that the rooms
    painted in blue need to be repainted.
  • 0:21 - 0:25
    The rooms painted in yellow
    don't need to be repainted,
  • 0:25 - 0:28
    because they would fade to white
    before the deadline.
  • 0:28 - 0:30
    And...
  • 0:30 - 0:32
    -Oh, I didn't even think of that.
  • 0:32 - 0:36
    -It warned, if you
    paint the yellow rooms white,
  • 0:36 - 0:41
    there's a risk the color
    might be off when the yellow fades.
  • 0:41 - 0:45
    Besides, it advised, you'd
    be wasting resources,
  • 0:45 - 0:48
    painting rooms that
    were going to fade to white anyway.
  • 0:48 - 0:52
    You believe that ChatGPT-4 understands?
  • 0:52 - 0:55
    -I believe it *definitely*
    understands. Yes.
  • 0:55 - 0:57
    -And in five years' time?
  • 0:57 - 1:01
    -In five years, it may be able to
    reason better than us.
  • 1:01 - 1:03
    -Reasoning that, he says,
  • 1:03 - 1:08
    is leading to AI's great risks
    and great benefits.
  • 1:08 - 1:13
    -So, an obvious area where there's
    huge benefits is healthcare.
  • 1:13 - 1:17
    AI is already comparable
    with radiologists at understanding
  • 1:17 - 1:20
    what's going on in medical images.
  • 1:20 - 1:21
    It will be very good
    at designing drugs--
  • 1:21 - 1:23
    it already *is* designing drugs.
  • 1:23 - 1:28
    So, that's an area where it's
    almost entirely
  • 1:28 - 1:31
    going to do good--I like that area.
  • 1:31 - 1:33
    -The risks are what?
  • 1:33 - 1:37
    -Well, the risks
    are having a whole class of people
  • 1:37 - 1:40
    who are unemployed and not valued much
  • 1:40 - 1:44
    because what they used to do
    is now done by machines.
  • 1:44 - 1:46
    -Other immediate risks he
  • 1:46 - 1:50
    worries about include fake news,
  • 1:50 - 1:54
    unintended bias
    in employment and policing,
  • 1:54 - 1:58
    and autonomous battlefield robots.
  • 1:58 - 2:03
    What is a path forward
    that ensures safety?
  • 2:03 - 2:04
    -I don't know.
  • 2:04 - 2:08
    I can't see a path
    that guarantees safety.
  • 2:08 - 2:10
    We're entering a period
    of great uncertainty,
  • 2:10 - 2:14
    where we're dealing with things
    we've never dealt with before.
  • 2:14 - 2:17
    Normally, the first time you
    deal with something novel,
  • 2:17 - 2:19
    you get it wrong, and we can't
  • 2:19 - 2:20
    afford to get it wrong.
  • 2:20 - 2:22
    -Can't afford to get it wrong, why?
  • 2:22 - 2:25
    -Because they might take over.
  • 2:25 - 2:26
    -Take over from humanity?
  • 2:26 - 2:28
    -Yes, that's a possibility.
  • 2:28 - 2:30
    -Why would they?
    -I'm not saying it will happen.
  • 2:30 - 2:33
    If we could stop them ever wanting to,
    that would be great.
  • 2:33 - 2:37
    But it's not clear we
    can stop them ever wanting to.
  • 2:37 - 2:41
    -Geoffrey Hinton told us he
    has no regrets,
  • 2:41 - 2:44
    because of AI's potential for good.
  • 2:44 - 2:47
    But he says, now is the moment
  • 2:47 - 2:50
    to run experiments to understand AI,
  • 2:50 - 2:53
    for governments to impose regulations,
  • 2:53 - 2:55
    and for a world treaty to
  • 2:55 - 2:58
    ban the use of military robots.
  • 2:58 - 3:02
    He reminded us
    of Robert Oppenheimer, who,
  • 3:02 - 3:05
    after inventing the atomic bomb,
  • 3:05 - 3:08
    campaigned against the hydrogen bomb.
  • 3:08 - 3:10
    A man who changed the world
  • 3:10 - 3:14
    and found the world beyond his control.
  • 3:14 - 3:17
    -It may be we look back
    and see this as a kind of turning point,
  • 3:17 - 3:19
    when humanity had to make the decision
  • 3:19 - 3:22
    about whether to develop
    these things further,
  • 3:22 - 3:25
    and what to do to protect themselves
    if they did.
  • 3:25 - 3:27
    Um, I don't know.
  • 3:27 - 3:29
    I think my main message is,
  • 3:29 - 3:34
    there's enormous uncertainty
    about what's going to happen next.
  • 3:34 - 3:36
    These things do understand.
  • 3:36 - 3:39
    And because they understand,
    we need to think hard
  • 3:39 - 3:41
    about what's going to happen,
    and we just don't know.