Historian Yuval Noah Harari and entrepreneur Mustafa Suleyman are two of the most important voices in the increasingly contentious debate over AI.
Good to be here.
Thanks for joining us.
Thanks for having us.
The Economist got them together to discuss what this technology means for our future, from employment and geopolitics to the survival of liberal democracy.
If the economic system is fundamentally changed, will liberal democracy as we know it survive?
Yuval Noah Harari, welcome.
You are a bestselling author, historian, I think a global public intellectual, if not the global public intellectual.
Your books, from Sapiens to 21 Lessons for the 21st Century, have sold huge numbers of copies around the world.
Thank you for joining us.
It's good to be here.
Mustafa Suleyman, wonderful that you can join us too.
You're a friend of The Economist, a fellow director on The Economist board.
You are a man at the cutting edge of creating the AI revolution.
You are a co-founder of DeepMind.
You're now a co-founder and CEO of Inflection AI.
You are building this future, but you've also just published a book called The Coming Wave, which makes us a little concerned about this revolution that is being unleashed.
You're both coming from different backgrounds.
You are a historian, a commentator, a man who I believe doesn't use smartphones very much.
Not very much, no.
Mustafa, as I know from our board meetings, is right at the cutting edge of this, pushing everyone to go faster.
So, two very different perspectives, but I thought it would be really interesting to bring the two of you together to have a conversation about what is happening, what is going to happen, what are the opportunities, but also what is at stake and what are the risks.
So, let's start, Mustafa, with you.
And you are building this future.
So, paint us a picture of what the future is going to be like.
And I'm going to give you a timeframe to keep it specific.
So, let's say, I think you wrote in your book that within three to five years you thought it was plausible that AIs could have human-level capability across a whole range of things.
So, let's take five years, 2028.
What does the world look like?
How will I interact with AIs?
What will we all be doing and not doing?
Well, let's just look back over the last 10 years to get a sense of the trajectory that we're on and the incredible momentum that I think everybody can now see with the generative AI revolution.
Over the last 10 years, we've become very, very good at classifying information.
We can understand it, we sort it, label it, organize it, and that classification has been critical to enabling this next wave, because we can now read the content of images.
We can understand text pretty well.
We can classify audio and transcribe it into text.
The machines can now have a pretty good sense of the conceptual representations in those ideas.
The next phase of that is what we're seeing now with the generative AI revolution.
We can now produce new images, new videos, new audio, and, of course, new language.
And in the last year or so, with the rise of ChatGPT and other AI models, it's pretty incredible to see how plausible and accurate and finessed these new language models are.
In the next five years, the frontier model companies, those of us at the very cutting edge who are training the very largest AI models, are going to train models that are over 1,000 times larger than what you currently see today in GPT-4.
And with each new order of magnitude of compute, that is, 10x more compute used, we tend to see really new capabilities emerge.
And we predict that the new capabilities that will come this time, over the next five years, will be the ability to plan over multiple time horizons.
Instead of just generating new text in one shot, the model will be able to generate a sequence of actions over time.
And I think that that's really the character of AI that we'll see in the next five years.
Artificial capable AIs: AIs that don't just say things, they can also do things.
What does that actually mean in practice?
Just use your imagination.
Tell me what my life will be like in 2028.
How will I interact with them?
What will I do?
What will be different?
I've actually proposed a modern Turing test, which tries to evaluate for exactly this point.
The original Turing test simply evaluated what a machine could say, assuming that what it could say represented its intelligence.
Now we're approaching the moment where these AI models are pretty good; arguably they've passed the Turing test, or maybe they will in the next few years.
The real question is, how can we measure what they can do?
So I've proposed a test which involves them going off and taking a $100,000 investment, and over the course of three months, trying to set about creating a new product: researching the market, seeing what consumers might like, generating some new images, some blueprints of how to manufacture that product, contacting a manufacturer, getting it made, negotiating the price, drop-shipping it, and then ultimately collecting the revenue.
And I think that over a five-year period, it's quite likely that we will have an ACI, an artificial capable intelligence, that can do the majority of that task autonomously.
It won't be able to do the whole thing, there are many tricky steps along the way, but significant portions of it.
It will be able to make phone calls to other humans to negotiate.
It will be able to call other AIs in order to establish the right sequence in a supply chain, for example.
And of course, it will learn to use APIs, Application Programming Interfaces, so other websites or other knowledge bases or other information stores.
And so, you know, the world is your oyster.
You can imagine that being applied to many, many different parts of our economy.
So Yuval, a man who doesn't use a smartphone very much, you listen to this: does it fill you with horror, and do you agree with it?
Do you think that's the kind of thing that is likely to happen in the next five years?
I would take it very seriously.
I don't know.
I'm not coming from within the industry, so I cannot comment on how likely it is to happen.
But when I hear this as a historian, for me, what we just heard, this is the end of human history.
Not the end of history, the end of human-dominated history.
History will continue with somebody else in control.
Because what we just heard is basically Mustafa telling us that in five years, there'll be a technology that can make decisions independently and that can create new ideas independently.
This is the first time in history we have confronted something like this.
Every previous technology in history, from a stone knife to nuclear bombs, could not make decisions; the decision to drop the bomb on Hiroshima was not made by the atom bomb.
It was made by President Truman.
And similarly, every previous technology in history could only replicate our ideas.
Like radio or the printing press, it could make copies and disseminate the music or the poems or the novels that some human wrote.
Now we have a technology that can create completely new ideas.
And it can do it at a scale far beyond what humans are capable of.
So it can create new ideas, and in important areas, within five years, it will be able to enact them.
And that is a profound shift.
Before we go on to the many ways in which this could be the end of human history, as you put it, and the potential downsides and risks of this, can we just for a second, just indulge me?
I'm an optimist at heart.
Can we talk about the possibilities?
What are the potential upsides of this?
Because there are many, and they are really substantial.
I think you wrote that there is the potential that this technology can help us deal with incredibly difficult problems and create tremendously positive outcomes.
So can we just briefly start with that before we go down the road of the terrible things?
When I say the end of human history, again, I'm not talking necessarily about the destruction of humankind or anything like that.
There is a lot of positive potential.
It's just that control, power, is shifting away from human beings to an alien intelligence, to a non-human intelligence.
We'll also get to that, because there's a question of how much power, but let's stick with the potential upsides first.
The opportunities, Mustafa.
Everything that we have created in human history is a product of our intelligence.
Our ability to make predictions and then intervene on those predictions to change the course of the world is, in a very abstract way, the way we have produced our companies and our products and all the value that has changed our century.
If you think about it, just a century ago, a kilo of grain would have taken 50 times more labor to produce than it does today.
That efficiency, which is the trajectory you've seen in agriculture, is likely to be the same trajectory that we will see in intelligence.
Everything around us is a product of intelligence, and so everything that we touch with these new tools is likely to produce far more value than we've ever seen before.
I think it's important to say these are not autonomous tools by default.
These capabilities don't just naturally emerge from the models.
We attempt to engineer capabilities, and the challenge for us is to be very deliberate and precise and careful about those capabilities that we want to emerge from the model, that we want to build into the model, and the constraints that we build around it.
It's super important not to anthropomorphically project ideas and potential intentions or potential agency or potential autonomy into these models.
The governance challenge for us over the next couple of decades, to ensure that we contain this wave, is to ensure that we always get to impose our constraints on the trajectory of this development.
But the capabilities that will arise will mean, for example, potentially transformative improvements in human health, speeding up the process of innovation, dramatic changes in the way scientific discovery is done, tough problems like climate change.
A lot of the big challenges that we face could be much more easily addressed with this capability.
Everybody is going to have a personal intelligence in their pocket, a smart and capable aide, a chief of staff, a research assistant, constantly prioritizing information for you, putting together the right synthesized nugget of knowledge that you need to take action on at any given moment.
And that for sure is going to make us all much, much smarter and more capable.
Does that part of it sound appealing to you, Yuval?
Yes, absolutely.
I mean, again, if there was no positive potential, we wouldn't be sitting here.
Nobody would develop it.
Nobody would invest in it.
Again, it's so appealing.
The positive potential is so enormous in everything, from much better health care to higher living standards to solving things like climate change.
This is why it's so tempting.
This is why we are willing to take the enormous risks involved.
I'm just worried that in the end the deal will not be worth it.
And I would comment especially on, again, the notion of intelligence.
I think it's overrated.
I mean, Homo sapiens at present is the most intelligent entity on the planet.
It's simultaneously also the most destructive entity on the planet.
And in some ways, also the most stupid entity on the planet.
The only entity that puts the very survival of the ecosystem in danger.
So you think we are trading off more intelligence against more destructive risk?
Yes.
Again, it's not deterministic.
I don't think that we are doomed.
I mean, if I thought that, what's the point of talking about it if we can't prevent the worst case scenario?
Well, I was hoping you thought you'd have some agency in actually effecting...
We still have agency.
There are a few more years, I don't know how many, 5, 10, 30.
We still have agency.
We are still the ones in the driver's seat shaping the direction this is taking.
No technology is deterministic.
This is something, again, we learned from history.
You can use the same technology in different ways.
You can decide which way to develop it.
So we still have agency.
This is why you have to think very, very carefully about what we are developing.
Well, thinking very carefully about it is something that Mustafa has been doing in this book.
And I want to now go through some of the most commonly discussed risks.
I was trying to work out how I would go in sort of order of badness.
So I'm starting with one that is discussed a lot, but relative to human extinction is perhaps less bad, which is the question of jobs.
Will artificial intelligence essentially destroy all jobs because AIs will be better than humans at everything?
You know, I'm an economist by training.
History suggests to me that that has never happened before, that the lump of labor fallacy indeed is a fallacy.
But tell me what you think about that.
Do you think there is a risk to jobs?
It depends on the time frame.
So over a 10 to 20 year period, my intuition, and you're right that so far the evidence doesn't support it, is that there isn't really going to be a significant threat to jobs.
There's plenty of demand.
There will be plenty of work.
Right.
Over a 30 to 50 year time horizon, it's very difficult to speculate.
I mean, at the very least, we can say that two years ago, we thought that these models could never do empathy.
We said that humans were always going to preserve kindness and understanding and care for one another as a special skill that humans have.
Four years ago, we said, well, AIs will never be creative.
You know, humans will always be the creative ones, inventing new things, making these amazing leaps between new ideas.
It's self-evident now that both of those capabilities are things that these models do incredibly well.
And so I think for a period of time, AIs augment our skills.
They make us faster, more efficient, more accurate, more creative, more empathetic, and so on and so forth.
Over a multi-decade period, it's much harder to say what set of skills is the permanent preserve of the human species, given that these models are clearly very, very capable.
And that's where the containment challenge really comes in.
We have to make decisions.
We have to decide as a species what is and what isn't acceptable over a 30-year period.
And that means politics and governance.
With regard to jobs, I agree that the scenario in which there just won't be any jobs is an unlikely one in the next few decades.
But we have to look more carefully at time and space.
I mean, in terms of time, the transition period is the danger.
I mean, some jobs disappear, some jobs appear.
People have to transition.
Just remember that Hitler rose to power in Germany because of three years of 25 percent unemployment.
So we are not talking about, say, no jobs at all.
But if, because of the upheavals caused in the job market by AI, we have, I don't know, three years of 25 percent unemployment, this could cause huge social and political disruptions.
And then the even bigger issue is one of space: the jobs that disappear and the new jobs that are created will be in different parts of the world.
So we might see a situation when there is immense demand for more jobs in California or Texas or China, whereas entire countries lose their economic basis.
So you need a lot more computer engineers and yoga trainers and whatever in California.
But you don't need any textile workers at all in Guatemala or Pakistan, because this has all been automated.
So it's not just the total number of jobs on the planet, it's the distribution between different countries.
And let's also try to remember that work is not the goal.
Work is not our desired end state.
We did not create civilization so that we could have full employment.
We created civilization so that we could reduce suffering for everybody.
And the quest for abundance is a real one.
We have to produce more with less.
There is no way of getting around the fact that population growth is set to explode over the next century.
There are practical realities about the demographic and geographic and climate trajectories that we're on, which are going to drive forward our need to produce exactly these kinds of tools.
And I think that that should be an aspiration.
Many, many people do work that is drudgery, exhausting and tiring, and they don't find flow.
They don't find their identity, and it's pretty awful.
So I think that we have to focus on the prize here, which is a question of capturing the value that these models will produce and then thinking about redistribution.
And ultimately, the transition is exactly what's at stake.
We have to manage that transition with taxation, with redistribution. But I would say that the difficulty, again, the political, historical difficulty, is this: I think there will be immense new wealth created by these technologies.
I'm less sure that governments will be able to redistribute this wealth in a fair way on a global level.
I just don't see the U.S. government raising taxes on corporations in California and sending the money to help unemployed textile workers in Pakistan or Guatemala retrain for the new job market.
Well, that actually gets us to the second potential risk, which is the risk of AI to the political system as a whole.
And you made a very good point, Yuval, in one of your writings, where you reminded us that liberal democracy was really born of the Industrial Revolution and that today's political system is really a product of the economic system that we are in.
And so there is, I think, a very good, fair question: if the economic system is fundamentally changed, will liberal democracy as we know it survive?
Yeah.
And on top of that, it's not just the Industrial Revolution.
It's the new information technologies of the 19th and 20th centuries.
Before the 19th century, you don't have any example in history of a large-scale democracy.
I mean, you have examples on a very small scale, like in hunter-gatherer tribes or in city-states like ancient Athens, but you don't have any example that I know of of millions of people spread over a large territory, an entire country, who managed to build and maintain a democratic system.
Why?
Because democracy is a conversation.
And there was no information technology and communication technology that enabled a conversation between millions of people over an entire country.
Only when first newspapers, and then the telegraph, radio, and television came along did this become possible.
So modern democracy as we know it is built on top of specific information technology.
Once the information technology changes, it's an open question whether democracy can survive.
And the biggest danger now is the opposite of what we faced in the Middle Ages.
Then it was impossible to have a conversation between millions of people because they just couldn't communicate.
But in the 21st century, something else might make the conversation impossible.
If trust between people collapses; if, when you go online, which is now the main way we converse at the level of a country, the online space is flooded by non-human entities that may masquerade as human beings; you talk with someone, and you have no idea if it's even human.
You see something, you see a video, you hear an audio clip, and you have no idea if this is real. Is this true?
Is this fake?
Is this a human?
Is it not a human?
I mean, in this situation, unless we have some guardrails, again, conversation collapses.
Is that what you mean when you say AI risks hacking the operating system?
This is one of the things: if bots can impersonate people, it's basically like what happened in the financial system.
People invented money, and it was possible to counterfeit money, to create fake money.
The only way to save the financial system from collapse was to have very strict regulations against fake money, because the technology to create fake money was always there.
But there was very strict regulation against it, because everybody knew that if you allow fake money to spread, the financial system, the trust in money, collapses.
And now we are in an analogous situation with the political conversation: now it's possible to create fake people.
And if we don't ban that, then trust will collapse.
We'll get to the banning or not banning in a minute.
Democratizing access to the right to broadcast has been the story of the last 30 years.
Hundreds of millions of people can now create podcasts and blogs, and they're free to broadcast their thoughts on Twitter and the Internet.
Broadly speaking, I think that has been an incredibly positive development.
You no longer have to get access to a top newspaper or acquire the skills necessary to be part of that institution.
Many people at the time feared that this would destroy our credibility and trust in the big news outlets and institutions.
I think that we've adapted incredibly well.
Yes, there has been a lot of turmoil and instability.
But with every one of these new waves, I think we adjust our ability to discern truth, to dismiss nonsense.
And there are both technical and governance mechanisms which will emerge in the next wave, which we can talk about, to address things like bot impersonation.
I mean, I'm completely with you.
I mean, we should have a ban on impersonation of digital people.
It shouldn't be possible to create a digital Zanny and have that be platformed on Twitter talking all kinds of nonsense.
Zanny is very smart.
I mean, it's enough with the real one.
So I think that there are technical mechanisms that we can use to prevent those kinds of things.
And that's why we're talking about them.
There are mechanisms.
We just need to employ them.
I would say two things.
First of all, it's a very good thing that more people were given a voice.
It's different with bots.
Bots don't have freedom of speech.
So banning bots...
Well, they shouldn't have freedom of speech.
They shouldn't have freedom of speech.
That's very important.
Yes, there have been some wonderful developments in the last 30 years.
Still, I'm very concerned that when you look at countries like the United States, like the UK to some extent, like my home country of Israel, I'm struck by the fact that we have the most sophisticated information technology in history and we are no longer able to talk to each other.
That's my impression; maybe your impression of American politics or of politics in other democracies is different.
My impression is that trust is collapsing.
The conversation is collapsing; people can no longer agree on who won the last elections.
That's the most basic fact in a democracy.
Who won the last election?
We had huge disagreements before, but I feel that now it's different, that the conversation is really breaking down.
I'm not sure why, but it's really troubling that at the same time that we have the most powerful information technology in history, people can no longer talk with each other.
It's a very good point.
We actually had, you may have seen it, a big cover package looking at what the impact might be in the short term on elections and on the political system.
And we concluded that actually AI was likely to have a relatively small impact in the short term, because there was already so little trust.
So it was a sort of double-edged answer.
You know, it was not going to make a huge difference, but only because things were pretty bad as they were.
But you both said there needs to be regulation.
Before we get to precisely how, the unit that we have that would do that is the nation state and national governments.
Yet you, Mustafa, in your book, worry that actually one of the potential dangers is that the powers of the nation state are eroded.
Could you talk through that as the third in my escalating sense of risks?
The challenge is that at the very moment when we need the nation state to hold us accountable, the nation state is struggling under the burden of a lack of trust and huge polarization and a breakdown in our political process.
And combined with that, the latest models are being developed by private companies and by the open-source community.
It's important to recognize it isn't just the biggest AI developers.
There's a huge proliferation of these techniques, widely available as open-source code that people can download from the Web for free.
And they're probably about a year or a year and a half behind the absolute cutting edge of the big models.
And so we have this dual challenge.
Like, how do you hold centralized power accountable when the existing mechanism is basically a little bit broken?
And how do you address this mass proliferation issue when it's unclear how to stop anything proliferating on the Web?
That's a really big challenge.
What we've started to see is self-organizing initiatives on the part of the companies.
Right.
So getting together and agreeing to sign up proactively to self-oversight, both in terms of audits and in terms of capabilities that we won't explore, et cetera.
Now, I think that's only partially reassuring to people, clearly, maybe not even reassuring at all.
But the reality is, I think it's the right first step, given that we haven't actually demonstrated the large-scale harms to arise from AIs just yet.
I mean, this is one of the first occasions, I think, in general-purpose waves of technology that we're actually starting to adopt a precautionary principle.
I'm a big advocate of that.
I think that we should be approaching a do-no-harm principle.
And that may mean that we have to leave some of the benefits on the tree, and some fruit may just not be picked for a while.
And we might lose some gains over a couple of years, where we may look back in hindsight and think, oh, well, we could have actually gone a little bit faster there.
I think that's the right trade-off.
This is a moment for caution.
Things are accelerating extremely quickly, and we can't yet strike the balance between the harms and benefits perfectly well until we see how this wave unfolds a little bit.
So I like the fact that our company Inflection AI and the other big developers are trying to take a little bit more of a cautious approach.
I think that's a really interesting point because, you know, we are having this conversation.
You have both written extensively about the challenges posed by this technology.
There's now a parlor game amongst practitioners in this world about what the risk of extinction-level events is, and there's a huge amount of talk about this.
In fact, I should probably ask what percentage of your time is focused on the risks; right now it's probably close to 100 percent, since you're promoting your book.
But there's a lot of attention on this, which is good.
We are thinking about it early.
So that gets us, I think, to the most important part of our conversation, which is: what do we do?
And you, Mustafa, lay out a 10-point plan, the kind of action plan that someone who doesn't just comment, as you and I do, but actually does things would produce.
So tell us, what do we need to do as humanity, as governments, as societies to ensure that we capture the gains from this technology but minimise the risks?
There are some very practical things.
I mean, so, for example, red-teaming these models means adversarially testing them and trying to put them under as much pressure as possible to push them to generate advice, for example, on how to make a biological or chemical weapon or how to create a bomb, or even to push them to be very sexist, racist or biased in some way.
And that already is pretty significant.
We can see their weaknesses.
I mean, part of the release of these models in the last year has given everybody, I think, the opportunity to see not just how good they are, but also their weaknesses.
And that is reassuring.
We need to do this out in the open.
That's why I'm a huge fan of the open-source community as it is at the moment, because real developers get to play with the models and actually see how hard it is to produce the capabilities that we sometimes fear, that they're just going to be super manipulative and persuasive and, you know, destined to be awful.
So that's the first thing: doing it out in the open.
The second thing is that we have to share best practices.
And there's a competitive tension there, because safety is going to be an asset.
You know, I'm going to deliver a better product to my consumers if I have a safer model.
But of course, there has to be a requirement that if I discover a vulnerability, a weakness in the model, then I should share it, just as we have done for decades across many waves of technology, not just in software security, for example, but in aviation.
You know, the black-box recorder, for example: if there's a significant incident, not only does it record all the telemetry on board the aircraft, but also everything that the pilots say in the cockpit.
And if there's a significant safety incident, then that's shared all around the world with all of the competitors, which is great.
Aircraft are one of the safest ways to get around, despite the fact that, on the face of it, if you described it to an alien, being 40,000 feet up in the sky is a very strange thing to do.
So I think there's precedent there that we can follow.
I do also agree that it's probably time for us to explicitly declare that we should not allow these tools to be used for electioneering.
I mean, we cannot trust them yet.
We cannot trust them to be stable and reliable.
We cannot allow people to be using them for counterfeit digital people.
Clearly, we've talked about that already.
So there are some capabilities which we can start to take off the table.
Another one would be autonomy.
Right.
Right now, I think autonomy is a pretty dangerous set of methods.
It's exciting.
It represents a possibility that could be truly incredible.
But we haven't wrapped our heads around what the risks and limitations are.
Likewise, training an AI to update and improve its own code.
This notion of recursive self-improvement, right?
Closing the loop, so that the AI is in charge of defining its own goals, acquiring more resources, updating its own code with respect to some objective.
These are pretty dangerous capabilities.
Just as we have KYC, Know Your Customer, and just as we license developers of nuclear technologies and all the components involved in that supply chain, there'll be a moment where, if some of the big technology providers want to experiment with those capabilities, then they should expect robust audits.
They should expect to be licensed, and there should be independent oversight.
So how do you get that done?
And it seems to me there are several challenges in doing it.
One is the division between the relatively few leading-edge models, of which you have one, and then the longer tail of open-source models, where the ability to build the model is decentralized.
Lots of people have access to it.
My sense is that the capabilities of the latter are a little bit behind the capabilities of the former, but they are growing all the time.
And so if you have really considerable open-source capability, what is to stop the angry teenager in some small town from developing capabilities that could shut down the local hospital?
And how do you, in your regulatory framework, guard against that?
Part of the challenge is that these models are getting smaller and more efficient.
And we know that from the history of technologies.
Anything that is useful and valuable to us gets cheaper and easier to use, and it proliferates far and wide.
So the destiny of this technology over a two-, three-, four-decade period has to be proliferation.
And we have to confront that reality.
It isn't a contradiction to say that proliferation seems to be inevitable while containing centralized power is an equivalent challenge.
So there is no easy answer to that.
I mean, beyond surveilling the internet, it is pretty clear that in 30 years' time, like you say, garage tinkerers will be able to experiment.
If you look at the trajectory of synthetic biology, we now have desktop synthesizers.
That is, the ability to engineer new synthetic compounds.
They cost about $20,000, and they basically enable you to potentially create molecules that are more transmissible or more lethal than what we had with COVID.
You can basically experiment.
And the challenge there is that there's no oversight.
You buy it off the shelf.
You don't need a great deal of training, probably an undergraduate degree in biology today, and you'll be able to experiment.
Now, of course, they're going to get smaller, easier to use and spread far and wide.
And so in my book, I'm really trying to popularize the idea that this is the defining containment challenge of the next few decades.
So you use the word containment, which is interesting because, Yuval, I'm sure the word containment immediately inspires images of George Kennan and the post-war, Cold War dynamic.
And we're now in a geopolitical world that, whether or not you call it a new Cold War, is one of great tension between the US and China.
Can this kind of containment, as Mustafa calls it, be done when you have the sort of tensions you've got between the world's big players?
Is the right paradigm to think about the arms-control treaties of the Cold War?
How do we go about doing this at an international level?
I think this is the biggest problem, that if it was a question of, you know, humankind versus a common threat of these new intelligent alien agents here on Earth, then yes, I think there are ways we can contain them.
But if the humans are divided among themselves and are in an arms race, then it becomes almost impossible to contain this alien intelligence.
And I'm tending to think of it more in terms of, really, an alien invasion.
That's like somebody coming and telling us that, you know, there is a fleet, an alien fleet of spaceships, coming from planet Zircon or whatever, with highly intelligent beings.
They'll be here in five years and take over the planet.
Maybe they'll be nice.
Maybe they'll solve cancer and climate change, but we are not sure.
This is what we are facing, except that the aliens are not coming in spaceships from planet Zircon.
They are coming from the laboratory.
He's sitting right next to you, the creator of the aliens.
I honestly think this is an unhelpful characterization of the nature of the technology.
An alien has, by default, agency.
These are going to be tools that we can apply in narrow settings.
Yes, but let's say they potentially have agency.
We can try to prevent them from having agency, but we know that they are going to be highly intelligent and at least potentially have agency.
And this is a very, very frightening mix, something we never confronted before.
Again, atom bombs didn't have a potential for agency.
Printing presses did not have a potential for agency.
This thing, again, unless we contain it, and the problem of containment is very difficult, because potentially they'll be more intelligent than us.
How do you prevent something more intelligent than you from developing the agency it potentially has?
I'm not saying it's impossible.
I'm just saying it's very, very difficult.
I think our best bet is not to think in terms of some kind of rigid regulation.
You should do this.
You shouldn't do that.
It's in developing new institutions, living institutions that are capable of understanding the very fast developments and reacting on the fly.
At present, the problem is that the only institutions who really understand what is happening are the institutions who develop the technology.
The governments, most of them, seem quite clueless about what's happening.
Also, universities.
I mean, the amount of talent and the amount of economic resources in the private sector is far, far higher than in the universities.
And again, I appreciate that there are actors in the private sector, like Mustafa, who are thinking very seriously about regulation and containment.
But we must have an external entity in the game.
And for that, we need to develop new institutions that will have the human resources, that will have the economic and technological resources, and also will have the public trust, because without public trust, it won't work.
Are we capable of creating such new institutions?
I don't know.
I do think Yuval raises an important point, which is that when we started this conversation, you were painting the picture of five years' time, and you were saying that the AIs would be ubiquitous.
We'd all have our own ones, but they would have the capability to act, not just to process information.
They would have the creativity they have now and the ability to act.
But already from these generative AI models, the power that we've seen in the last two, three, four years has been that they have been able to act in ways that you and your fellow technologists didn't anticipate.
You didn't anticipate the speed with which they would win at Go, and so forth.
The striking thing about them is that they have developed unexpectedly fast.
So if you combine that with capability, you don't have to go as far as Yuval does and say that they're all more intelligent than humans, but there is an unpredictability there that I think does raise the concerns that Yuval raises, which is that you, their creators, can't quite predict what powers they will have.
They may not be fully autonomous, but they will be moving some way towards that.
And so how do you guard against that?
Or how do you, you know, red-team them, to use your phrase, which as I understand it means you keep checking what's happening and tweaking them when you've seen what's...
Well, you pressure-test them, you try to make them fail.
You can't pressure-test for everything in advance.
So there is, I think, a very real point that Yuval is making: as the capabilities increase, so do the risks of relying on you and other creator companies to manage them.
I mean, it's a very fair question.
And that's why I've long been calling for the precautionary principle.
We should both take some capabilities off the table and classify those as high risk.
I mean, frankly, the EU AI Act, which has been in draft for three and a half years, is very sensible as a risk-based framework that applies to each application domain, whether it's health care or self-driving or facial recognition.
And it basically takes certain capabilities off the table when that risk threshold is exceeded.
I listed a few earlier: autonomy, for example, is clearly a capability that has the potential to be high risk; recursive self-improvement, the same story.
So this is the moment when we have to adopt a precautionary principle, not through any fear-mongering, but just as a logical, sensible way to proceed.
Another model, which I think is very sensible, is to take an IPCC-style approach: an international consensus around an investigatory power to establish the scientific, factual basis for where we are with respect to capabilities.
And that has been an incredibly valuable process.
Set aside the negotiation and the policymaking; just the evidence, observing where we are.
You don't have to take it from me.
You should take it from an independent panel of experts, who I would personally grant access to everything in my company if they were a trusted, truly impartial actor.
Without question, we would grant complete access.
And I know that many of the other companies would do the same.
Again, people are drawn towards the kind of scenario where the AI creates a lethal virus, Ebola plus Covid, and kills everybody.
Let's go in a more economic direction: financial systems, like the new Turing test you proposed, the idea of an AI making money.
What's wrong with making money?
Wonderful thing.
So let's say that you have an AI which has a better understanding of the financial system than most humans, most politicians, maybe most bankers.
And let's think back to the 2007-2008 financial crisis.
It started with these, what were they called, CDOs, these credit default swaps.
Exactly.
Something that these genius mathematicians invented.
Nobody understood them except for a handful of genius mathematicians on Wall Street, which is why nobody regulated them.
And almost nobody saw the financial crash coming.
What happens?
Again, in this kind of apocalyptic scenario, which you don't see in Hollywood science-fiction movies, the AI invents a new class of financial devices that nobody understands.
It's beyond human capability to understand.
It's such complicated math, so much data.
Nobody understands it.
It makes billions of dollars, billions and billions of dollars, and then it brings down the world economy.
And no human being understands what the hell is happening.
Like the prime ministers, the presidents, the finance ministers: what is happening?
And again, this is not fantastical.
I mean, we saw it with human mathematicians in 2007-2008.
I think that's one, you know; you can easily paint pictures here that make you want to jump off the nearest cliff.
And, you know, that's one.
But actually, my other response to Mustafa's point that we just need to rule out certain capabilities is to go back to the geopolitics.
Is it sensible for a country to rule out certain capabilities if the other side is not going to rule them out?
So you have a kind of political-economy problem going down the road.
You know, this is a moment when we collectively in the West have to establish our values and stand behind them.
What we cannot have is the race to the bottom that says, just because they're doing it, we should take the same risk.
If we adopt that approach and cut corners left, right and center, we'll ultimately pay the price.
And that's not an answer to "well, they're going to go off and do it anyway."
We've certainly seen that with lethal autonomous weapons.
I mean, there's been a negotiation at the UN to regulate lethal autonomous weapons for over 20 years, and they have barely reached agreement on the definition of lethal autonomous weapons, let alone any consensus.
So that's not great, but we do have to accept that it's the inevitable trajectory.
And from our own perspective, we have to decide what we are prepared to tolerate in society with respect to free-acting AIs, facial surveillance, facial recognition and, you know, generally autonomous systems.
I mean, so far we've taken a pretty cautious approach, and we don't have drones flying around everywhere.
It's already totally possible, technically, to fly an autonomous drone around London.
We've ruled it out, right?
We don't yet have autonomous self-driving cars, even though, with some degree of harm, they actually function pretty well.
So the regulatory process is also a cultural process of what we think is socially and politically acceptable at any given moment.
And I think an appropriate level of caution is what we're seeing.
We didn't agree about much, but I completely agree on that, that we need in many fields a coalition of the willing.
And if some actors in the world don't want to join, it's still in our interest to adopt, again, something like banning bots impersonating people.
Some countries will not agree, but that doesn't matter.
To protect our societies, it's still a very good idea to have these kinds of regulations.
So that area of agreement is one to bring us to a close.
But I want to end by asking both of you.
And you first, Mustafa: you are, you know, raising alarms, but you are also heavily involved in creating this future.
Why do you carry on?
I personally believe that it is possible to get the upsides and minimize the downsides in the AI that we have created.
Pi, which stands for personal intelligence, is one of the safest AIs in the world today.
It doesn't produce the racist, toxic, biased screeds that models did two years ago.
It doesn't fall victim to any of the jailbreaks, the prompt hacks, the adversarial red teams.
None of those work.
And we've made safety an absolute number-one priority in the design of our product.
So my goal has been to do my very best to demonstrate a path forward in the best possible way.
This is an inevitable unfolding over multiple decades.
This really is happening.
The coming wave is coming.
And I think my contribution is to try to demonstrate, in the best way that I can, a manifestation of a personal intelligence which really does adhere to the best safety constraints that we could possibly think of.
So Yuval, you've heard Mustafa's explanation for why he continues.
You look back over human history.
Now, as you look forward, is this a technology and a pace of innovation that humanity will come to regret?
Or should Mustafa carry on?
It could be.
I can't predict the future.
I would say that we invest so much in developing artificial intelligence.
And we haven't seen anything yet.
It's still the very first baby steps of artificial intelligence, in terms of, you know, if you think about the evolution of organic life.
This is like the amoeba of artificial intelligence.
And it won't take millions of years to get to T-Rex.
Maybe it will take 20 years to get to T-Rex.
But one thing to remember is that our own minds also have a huge scope for development.
The same is true of humanity.
We haven't seen our full potential yet.
And if for every dollar and minute that we invest in artificial intelligence, we invest another dollar and minute in developing our own consciousness, our own mind, I think we'll be OK.
But I don't see it happening.
I don't see the kind of investment in human beings that we are seeing in the machines.
For me, this conversation with the two of you has been just that investment.
Thank you both very much indeed.
Thank you.
Thank you.
Thank you.