-
The rooms in my house
are painted white or blue or yellow,
-
and yellow paint fades
to white within a year.
-
In two years, I'd like all the rooms
-
to be white: What should I do?
-
-The answer began in one second.
-
GPT-4 advised, the rooms
painted blue need to be repainted.
-
The rooms painted yellow
don't need to be repainted,
-
because they would fade to white
before the deadline.
-
And...
-
-Oh, I didn't even think of that.
-
-It warned, if you
paint the yellow rooms white,
-
there's a risk the color
might be off when the yellow fades.
-
Besides, it advised, you'd
be wasting resources,
-
painting rooms that
were going to fade to white anyway.
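[The puzzle's logic can be sketched in a few lines of Python; the room
names here are made up for illustration, and the two-year deadline and
one-year fade rule come from the puzzle as stated above.]

```python
# Sketch of the paint puzzle: rooms are white, blue, or yellow;
# yellow fades to white within a year; goal is all-white in two years.

rooms = {"kitchen": "white", "study": "blue", "bedroom": "yellow"}
deadline_years = 2  # time until every room must be white

def needs_repainting(color: str) -> bool:
    if color == "white":
        return False  # already the target color
    if color == "yellow":
        # Yellow fades to white within a year, so repaint only if
        # the deadline arrives before the fade finishes.
        return deadline_years < 1
    return True  # blue never fades, so it must be repainted

to_repaint = [name for name, color in rooms.items()
              if needs_repainting(color)]
print(to_repaint)  # only the blue room needs repainting
```

With a two-year deadline, only the blue room is listed, matching GPT-4's answer that repainting the yellow rooms would waste resources.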
-
You believe that ChatGPT-4 understands.
-
-I believe it *definitely*
understands. Yes.
-
-And in five years' time?
-
In five years, it may be able to
reason better than us.
-
-Reasoning that, he says,
-
is leading to AI's great risks
and great benefits.
-
-So, an obvious area where there's
huge benefits is healthcare.
-
AI is already comparable
with radiologists at understanding
-
what's going on in medical images.
-
It will be very good
at designing drugs--
-
it already *is* designing drugs.
-
So, that's an area where it's
almost entirely
-
going to do good--I like that area.
-
-The risks are what?
-
-Well, the risks
are having a whole class of people
-
who are unemployed and not valued much
-
because what they used to do
is now done by machines.
-
-Other immediate risks he
-
worries about include fake news,
-
unintended bias
in employment and policing,
-
and autonomous battlefield robots.
-
What is a path forward
that ensures safety?
-
-I don't know.
-
I can't see a path
that guarantees safety.
-
We're entering a period
of great uncertainty,
-
where we're dealing with things
we've never dealt with before.
-
Normally, the first time you
deal with something novel,
-
you get it wrong, and we can't
-
afford to get it wrong.
-
-Can't afford to get it wrong, why?
-
-Because they might take over.
-
-Take over from humanity?
-
-Yes, that's a possibility.
-
-Why would they?
-I'm not saying it will happen.
-
If we could stop them ever wanting to,
that would be great.
-
But it's not clear we
can stop them ever wanting to.
-
-Geoffrey Hinton told us he
has no regrets,
-
because of AI's potential for good.
-
But he says, now is the moment
-
to run experiments to understand AI,
-
for governments to impose regulations,
-
and for a world treaty to
-
ban the use of military robots.
-
He reminded us
of Robert Oppenheimer, who,
-
after inventing the atomic bomb,
-
campaigned against the hydrogen bomb.
-
A man who changed the world
-
and found the world beyond his control.
-
-It may be we look back
and see this as a kind of turning point,
-
when humanity had to make the decision
-
about whether to develop
these things further,
-
and what to do to protect themselves
if they did.
-
Um, I don't know.
-
I think my main message is,
-
there's enormous uncertainty
about what's going to happen next.
-
These things do understand.
-
And because they understand,
we need to think hard
-
about what's going to happen,
and we just don't know.