Henry: “Will Artificial Intelligence ever replace humans?” is a hotly debated question these days.
Some people claim computers will eventually gain superintelligence, be able to outperform humans on any task, and destroy humanity.
Other people say “don't worry, AI will just be another tool we can use and control, like our current computers.”
So we've got physicist and AI researcher Max Tegmark back again to share with us the collective takeaways from the recent Asilomar conference on the future of AI that he helped organize – and he's going to help separate AI myths from AI facts.
Max: Hello!
Henry: First of all, Max, machines (including computers) have long been better than us at many tasks, like arithmetic or weaving, but those are often repetitive and mechanical operations. So why shouldn't I believe that there are some things that are simply impossible for machines to do as well as people? Say, making minutephysics videos, or consoling a friend?
Max: Well, we've traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. But from the perspective of modern physical science, intelligence is simply a particular kind of information processing and reacting performed by particular arrangements of elementary particles moving around, and there's no law of physics that says it's impossible to do that kind of information processing better than humans already do. It's not a stretch to say that earthworms process information better than rocks, and humans better than earthworms, and in many areas, machines are already better than humans. This suggests we've likely only seen the tip of the intelligence iceberg, and that we're on track to unlock the full intelligence that's latent in nature and use it to help humanity flourish – or flounder.
Henry: So how do we keep ourselves on the right side of the “flourish or flounder” balance? What, if anything, should we really be concerned about with superintelligent AI?
Max: Here's what has many top AI researchers concerned: not machines or computers turning evil, but something more subtle: superintelligence that simply doesn't share our goals. If a heat-seeking missile is homing in on you, you probably wouldn't think: “No need to worry, it's not evil, it's just following its programming.” No, what matters to you is what the heat-seeking missile does and how well it does it, not what it's feeling, or whether it has feelings at all. The real worry isn't malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, so the most important thing for us to do is to ensure that its goals are aligned with ours.
As an analogy, humans are more intelligent and competent than ants, and if we want to build a hydroelectric dam where there happens to be an anthill, there may be no malevolence involved, but, well... too bad for the ants. Cats and dogs, on the other hand, have done a great job of aligning their goals with the goals of humans – I mean, even though I'm a physicist, I can't help thinking kittens are the cutest particle arrangements in our universe... If we build superintelligence, we'd be better off in the position of cats and dogs than ants. Or better yet, we'll figure out how to ensure that AI adopts our goals rather than the other way around.
Henry: And when exactly is superintelligence going to arrive? When do we need to start panicking?
Max: First of all, Henry, superintelligence doesn't have to be something negative. In fact, if we get it right, AI might become the best thing ever to happen to humanity. Everything I love about civilization is the product of intelligence, so if AI amplifies our collective intelligence enough to solve today's and tomorrow's greatest problems, humanity might flourish like never before.
Second, most AI researchers think superintelligence is at least decades away... Buuuut the research needed to ensure that it remains beneficial to humanity (rather than harmful) might also take decades, so we need to start right away. For example, we'll need to figure out how to ensure machines learn the collective goals of humanity, adopt these goals for themselves, and retain the goals as they keep getting smarter.
And what about when our goals disagree? Should we vote on what the machine's goals should be? Just do whatever the president wants? Whatever the creator of the superintelligence wants? Let the AI decide?
In a very real way, the question of how to live with superintelligence is a question of what sort of future we want to create for humanity. Which obviously shouldn't just be left to AI researchers, as caring and socially skilled as we are. ;)
Henry: Thanks, Max! So, uh, how do I get involved to make sure we don't end up living in a superintelligence-powered dictatorship?
Max: At the Future of Life Institute (Henry interjects: which is sponsoring this video), we've built a site where you can go to answer questions, ask questions, and otherwise contribute your thoughts to help shape the future of AI policy and research. Link's in the video description.
Henry: Awesome.