Translator: Lilian Chiu Reviewer: Melody Tang

After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on Earth but throughout much of this amazing cosmos.
I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which we, in nerdy geek speak, might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants.
So let's take a closer look at our relationship with technology, OK? As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey, propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.
My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination.
Let's start with the power. I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence. And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.

It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said. Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.

So all this amazing recent progress in AI really begs the question: How far will it go?
I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront --

(Laughter)

which will soon be automated and disrupted. But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks?

This is the definition of artificial general intelligence -- AGI -- which has been the holy grail of AI research since its inception. By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent.
Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion, where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.

Alright, reality check: Are we going to get AGI any time soon?
Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?
The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?"

(Laughter)

But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it.
This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible.
You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race -- the race between the growing power of our technology and the growing wisdom with which we manage it. But this is going to require a change of strategy, because our old strategy has been learning from mistakes. We invented fire, screwed up a bunch of times -- invented the fire extinguisher.

(Laughter)

We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag, but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think?

(Laughter)

It's much better to be proactive rather than reactive; plan ahead and get things right the first time, because that might be the only time we'll get.
But it is funny, because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI: think through what can go wrong to make sure it goes right.
So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California last year and produced this list of 23 principles, which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles.
One is that we should avoid an arms race and lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.

Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us.

(Applause)
Alright, now raise your hand if your computer has ever crashed.

(Laughter)

Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.
And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.

And whose goals should these be, anyway? Which goals should they be?
This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges.

Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there are some people who want it to be just machines.

(Laughter)

And there was total disagreement about what the role of humans should be, even at the most basic level, so let's take a closer look at possible futures that we might choose to steer toward, alright? So don't get me wrong here. I'm not talking about space travel, merely about humanity's metaphorical journey into the future.
So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power. Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that maybe the superintelligence could outsmart us, break out and take over.

But I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too?
Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to get taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved. So, how about having our cake and eating it ... with AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.
So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable a brutal global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.
Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in, because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.
So here's our choice. We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement.

We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology.

Thank you.

(Applause)
【TED】Max Tegmark: How to get empowered, not overpowered, by AI
Posted by 林宜悉 on 2021/01/14