Whether you think artificial intelligence will save the world or end it, you have Geoffrey Hinton to thank. Hinton has been called "the godfather of AI," a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible, and so changed the world. Hinton believes that AI will do enormous good. But tonight he has a warning. He says that AI systems may be more intelligent than we know, and there's a chance the machines could take over, which made us ask the question:
(Host) Does humanity know what it's doing?
-No. Um. I think we're moving into a period when, for the first time ever, we may have things more intelligent than us.
-You believe they can understand?
-Yes.
-You believe they are intelligent?
-Yes.
-You believe these systems have experiences of their own, and can make decisions based on those experiences.
-In the same sense as people do, yes.
-Are they conscious?
-I think they probably don't have much self-awareness at present. So in that sense, I don't think they're conscious.
-Will they have self-awareness, consciousness?
-Oh yes, I think they will in time.
-And so human beings will be the second-most intelligent beings on the planet.
-Yeah.
-Geoffrey Hinton told us the artificial intelligence he set in motion was an accident, born of a failure. In the 1970s, at the University of Edinburgh, he dreamed of simulating a neural network on a computer, simply as a tool for what he was really studying: the human brain. But back then, almost no one thought software could mimic the brain. His PhD advisor told him to drop it before it ruined his career. Hinton says he failed to figure out the human mind, but the long pursuit led to an artificial version.
-It took much, much longer than I expected. It took 50 years before it worked well, but in the end it did work well.
-At what point did you realize that you were right about neural networks, and most everyone else was wrong?
-I always thought I was right.
-In 2019, Hinton and collaborators Yann LeCun, on the left, and Yoshua Bengio won the Turing Award, the Nobel Prize of computing. To understand how their work on artificial neural networks helped machines learn to learn, let us take you to a game.
Look at that. Oh my goodness.
This is Google's AI lab in London, which we first showed you this past April. Geoffrey Hinton wasn't involved in this soccer project, but these robots are a great example of machine learning. The thing to understand is that the robots were not programmed to play soccer. They were told to score. They had to learn how on their own.
(Unknown speaker) Wup, goal!
-In general, here's how AI does it. Hinton and his collaborators created software in layers, with each layer handling part of the problem. That's the so-called neural network. But, and this is the key: when, for example, the robot scores, a message is sent back down through all of the layers that says that pathway was right. Likewise, when an answer is wrong, that message goes down through the network. So correct connections get stronger, wrong connections get weaker, and by trial and error, the machine teaches itself.
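What that narration describes, an error signal passed back down so that correct connections strengthen and wrong ones weaken, is known as backpropagation. Below is a minimal sketch in Python; the tiny XOR task, the network size, and every name in it are illustrative assumptions on our part, not code from the broadcast or from Hinton:

```python
import numpy as np

# A tiny two-layer network learning XOR by trial and error.
# Each weight matrix is one layer of "connections".
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))  # input -> hidden connections
W2 = rng.normal(size=(4, 1))  # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: each layer handles part of the problem.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # The "message" about how right or wrong the answer was...
    delta_out = (y - out) * out * (1 - out)
    # ...is sent back down through the layers.
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    # Connections on correct pathways strengthen; wrong ones weaken.
    W2 += 0.5 * h.T @ delta_out
    W1 += 0.5 * X.T @ delta_h

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

The soccer robots learn from a reward for scoring rather than from a correct answer to copy (reinforcement learning rather than the supervised setup sketched here), but the underlying idea, propagating credit and blame back through the connections, is the same.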
-You think these AI systems are better at learning than the human mind?
-I think they may be, yes. And at present, they're quite a lot smaller. So even the biggest chatbots only have about a trillion connections in them. The human brain has about 100 trillion. And yet, in the trillion connections in a chatbot, it knows far more than you do in your 100 trillion connections, which suggests it's got a much better way of getting knowledge into those connections.
-A much better way of getting knowledge that isn't fully understood.
-We have a very good idea of, sort of, roughly what it's doing. But as soon as it gets really complicated, we don't actually know what's going on, any more than we know what's going on in your brain.
-What do you mean, we don't know exactly how it works? It was designed by people.
-No, it wasn't. What we did was, we designed the learning algorithm. That's a bit like designing the principle of evolution. But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things, but we don't really understand how they do those things.
-What are the implications of these systems autonomously writing their own computer code, and executing their own computer code?
-That's a serious worry, right? So one of the ways in which these systems might escape control is by writing their own computer code to modify themselves, and that's something we need to seriously worry about.
-What do you say to someone who might argue, "If the systems become malevolent, just turn them off"?
-They will be able to manipulate people, right? And these will be very good at convincing people, because they'll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances. They'll know that stuff. They'll know how to do it.
-Know-how of the human kind runs in Geoffrey Hinton's family. His ancestors include mathematician George Boole, who invented the basis of computing, and George Everest, who surveyed India and got that mountain named after him. But as a boy, Hinton himself could never climb the peak of expectations raised by a domineering father.
-Every morning when I went to school, he'd actually say to me, as I walked down the driveway, "Get in there pitching, and maybe when you're twice as old as me, you'll be half as good."
-Dad was an authority on beetles.
-He knew a lot more about beetles than he knew about people.
-Did you feel that as a child?
-A bit. Yes. When he died, we went to his study at the university, and the walls were lined with boxes of papers on different kinds of beetle. And just near the door there was a slightly smaller box that simply said, "Not insects." And that's where he had all the things about the family.
Today, at 75, Hinton recently retired after what he calls ten happy years at Google. Now he's professor emeritus at the University of Toronto. And he happened to mention he has more academic citations than his father. Some of his research led to chatbots like Google's Bard, which we met last spring.
Confounding. Absolutely confounding.
We asked Bard to write a story from six words: "For sale. Baby shoes. Never worn."
Holy cow! "The shoes were a gift from my wife, but we never had a baby."
Bard created a deeply human tale of a man whose wife could not conceive, and a stranger who accepted the shoes to heal the pain after her miscarriage.
I am rarely speechless. I don't know what to make of this.
Chatbots are said to be language models that just predict the next most likely word based on probability.
-You'll hear people saying things like, "They're just doing auto-complete, they're just trying to predict the next word, and they're just using statistics." Well, it's true, they're just trying to predict the next word. But if you think about it, to predict the next word you have to understand the sentences. So the idea that because they're just predicting the next word they're not intelligent is crazy. You have to be really intelligent to predict the next word really accurately.
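At its simplest, the "just statistics" framing means literal counting: look at which word followed which in the training text and pick the most probable. Here is a toy sketch of that baseline in Python, with an invented one-line corpus; all of it is our illustration, not how any real chatbot is built:

```python
from collections import Counter, defaultdict

# Invented toy corpus; real language models train on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Raw statistics: count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    counts = follows[word]
    total = sum(counts.values())
    # Probability of each candidate next word.
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Hinton's point is that this kind of surface counting runs out almost immediately: to keep getting the next word right across arbitrary sentences, a model has to capture what the words mean, which is what the large networks learn to do.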
-To prove it, Hinton showed us a test he devised for ChatGPT-4, the chatbot from a company called OpenAI. It was sort of reassuring to see a Turing Award winner mistype and blame the computer.
-Oh, damn this thing, we're going to go back and start again.
-That's okay.
Hinton's test was a riddle about house painting. An answer would demand reasoning and planning. This is what he typed into ChatGPT-4.