'Godfather of AI' discusses dangers the developing technologies pose to society

Concerns over the rapidly expanding use of artificial intelligence resonated loudly in Washington and around the world. Vice President Kamala Harris met with top executives from companies leading in AI development, including Microsoft and Google. The Vice President discussed some of the growing risks and told the companies they had a moral obligation to develop AI safely. That meeting came just days after one of the leading voices in the field of AI announced he was quitting Google over his worries about the future of AI and what it could eventually lead to, unchecked. We hear about some of those concerns now from Dr. Geoffrey Hinton, who joins us from London. Thank you for joining us.

GEOFF: What are you now free to express about artificial intelligence that you could not express freely when you were employed by Google?

DR. HINTON: It was not that I could not express it freely when employed by Google, but inevitably, if you work for a company, you tend to self-censor. You think about the impact it will have on the company. I want to be able to talk about what I now perceive as the risks of superintelligent AI without having to think about the impact on Google.

GEOFF: What are those risks?

DR. HINTON: There are quite a few different risks. There is the risk of producing a lot of fake news, so you do not know what is true anymore. There is the risk of encouraging polarization by getting people to click on things. There is the risk of putting people out of work. It should be that when we greatly increase productivity, it helps everyone, but there is the worry it might just help the rich. And then there is the risk I want to talk about. Many other people talk about those other risks, including bias and discrimination. I want to talk about a different risk: the risk of superintelligent AI taking over control from people.

GEOFF: How do the two compare, human intelligence and machine intelligence?

DR. HINTON: That is a very good question, and I have quite a long answer. Biological intelligence uses very little power; we only use 30 watts. We have huge numbers of connections, like 100 trillion connections between neurons, and learning can change the strength of those connections. The digital intelligence we have created uses a lot of power when you are training it. It has far fewer connections, only 1 trillion, but it can learn much, much more than any one person, which suggests that it is a better learning algorithm than the brain.

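As a rough illustration of the comparison Dr. Hinton draws, the sketch below simply puts his figures side by side: 30 watts and about 100 trillion connections for a brain, versus about 1 trillion connections for a digital model. The training-power number is a hypothetical placeholder added for illustration; it is not a figure from the interview.

```python
# Rough, illustrative comparison of the figures cited in the interview.
# The brain numbers (30 W, ~100 trillion connections) come from the interview;
# the digital training-power figure is a hypothetical placeholder.

BRAIN_POWER_WATTS = 30
BRAIN_CONNECTIONS = 100e12       # ~100 trillion connections between neurons

MODEL_CONNECTIONS = 1e12         # ~1 trillion connections in a large digital model
TRAINING_POWER_WATTS = 1e6       # hypothetical: order of a megawatt while training

# The model has ~100x fewer connections yet can absorb far more than any one
# person, which is the basis of the "better learning algorithm" remark.
print(f"Connection ratio (brain / model): {BRAIN_CONNECTIONS / MODEL_CONNECTIONS:.0f}x")
print(f"Power ratio (training / brain):   {TRAINING_POWER_WATTS / BRAIN_POWER_WATTS:.0f}x")
```
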
GEOFF: What would smarter-than-human AI systems do? What is the concern that you have?

DR. HINTON: The question is, what will motivate them? They could easily manipulate us if they wanted to. Imagine yourself and a two-year-old child. You could ask it whether it wants the peas or the cauliflower, and the child does not realize it does not have to have either. We know, for example, that you can invade a building in Washington without ever going there yourself, by manipulating people. Imagine someone who is much better at manipulating people than our current politicians.

GEOFF: Why would AI want to do that? Would that not require some form of sentience?

DR. HINTON: Let's not get confused about that issue. I do not want to confuse the issue. Let me give you one example of why it might want to do that. Suppose you are getting an AI to do something. You give it a goal, and you give it the ability to create subgoals; you create a subgoal of getting a taxi, for example. One thing it will notice quickly is that there is a subgoal which, if you can achieve it, makes it easier to achieve all the other goals: the subgoal of getting more control and getting more power. The more power you have, the easier it is to get things done. We give it a perfectly reasonable goal, and it decides that in order to do that, it will give itself more power. Because it is much smarter than us and trained on everything people ever did, it has read every novel, and it knows a lot about how to manipulate people. There is the worry it might start manipulating us into giving it more power, and we might not have a clue what is going on.

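The subgoal argument can be made concrete with a toy example. This is only a minimal sketch, assuming a hypothetical planner that scores candidate subgoals by how much easier they make everything else; the goal names and scores are invented for illustration and are not from the interview.

```python
# Toy illustration of instrumental subgoals: for almost any final goal,
# a generic "get more resources and control" step scores well because it
# makes every other step easier. Entirely hypothetical names and scores.

FINAL_GOALS = ["book a taxi", "draft a report", "plan a trip"]

# Candidate subgoals with a crude "how much easier does this make everything else?" score.
CANDIDATE_SUBGOALS = {
    "look up local taxi numbers": 0.2,
    "open the calendar": 0.1,
    "acquire more compute and permissions": 0.9,  # instrumental: helps with any goal
}

def pick_subgoal(goal: str) -> str:
    # A planner that prefers the subgoal with the highest generic usefulness
    # picks the resource/control-seeking option no matter what the final goal is.
    return max(CANDIDATE_SUBGOALS, key=CANDIDATE_SUBGOALS.get)

for goal in FINAL_GOALS:
    print(f"goal={goal!r:28} -> chosen subgoal: {pick_subgoal(goal)}")
```
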
GEOFF: When you were at the forefront of this technology decades ago, what did you think it might do? What were the applications you had in mind?

DR. HINTON: There are a huge number of good applications, and it would be a mistake to stop developing it. It will be useful in medicine. Would you rather see a family doctor that has seen a few thousand patients or a doctor that has seen a few million patients, including many with the same rare disease you have? You could make better nanotechnology for solar panels. You can predict floods and earthquakes. You can do tremendous good with this.

GEOFF: Is the problem, then, the technology, or is the problem the people behind it?

DR. HINTON: It is a combination of both. Obviously, many of the organizations developing this are defense departments. Defense departments do not necessarily want to build in "be nice to people" as the first rule. Some defense departments would like to build in "kill people of a particular kind." We cannot expect them to have good intentions toward all people.

GEOFF: Then there is the question of what to do about it. The technology is advancing faster than societies can keep pace with. The capabilities of this technology leap forward every few months, but writing legislation and passing legislation takes years.

DR. HINTON: I have gone public to try to encourage many more creative scientists to get into this area. I think it is an area in which we can actually have international collaboration. The machines taking over is a threat for everybody. It is a threat for the Chinese, the Americans, and the Europeans, just like a global nuclear war. And for a global nuclear war, people did collaborate to reduce the chances of it.

GEOFF: There are other experts in the field of AI who say that the concerns you are raising, this dystopian future, distract from the very real and immediate risks posed by artificial intelligence, some of which you mentioned: disinformation, fraud.

DR. HINTON: I do not want to distract from those. They are very important concerns, and we should be working on those, too. I just want to add this other, existential threat of it taking over. One reason I want to do that is because that is an area in which I think we can get international collaboration.

GEOFF: Is there any turning back? You say there will be a time when AI is more intelligent than us. Is there any coming back from that?

DR. HINTON: I do not know. We are entering a time of great uncertainty. We are dealing with things we have never dealt with before. It is as if aliens have landed, but we did not take it in.

GEOFF: How should we think differently about artificial intelligence?

DR. HINTON: We should realize that we are probably going to get things more intelligent than us quite soon, and they will be wonderful. They will be able to do all sorts of things very easily that we find difficult. There is huge positive potential. But of course, there are also huge negative possibilities. I think we should put more resources into developing AI to make it more powerful and into figuring out how to keep it under control and minimizing bad side effects.

GEOFF: Thank you so much for your time and for sharing your insights with us.

DR. HINTON: Thank you for inviting me.
