
  • Inside a nondescript building in the heart of San Francisco, one of the world's buzziest startups is making our AI-powered future feel more real than ever before.

    在舊金山市中心一棟不起眼的大樓裡,世界上最熱門的初創公司之一正在讓我們感覺人工智能驅動的未來比以往任何時候都更加真實。

  • They're behind two monster hits, ChatGPT and DALL·E, and somehow beat the biggest tech giants to market, kicking off a competitive race that's forced them all to show us what they've got.

    他們是 ChatGPT 和 DALL·E 這兩款熱門產品的幕後推手,並以某種方式擊敗了最大的科技巨頭,掀起了一場競爭激烈的競賽,迫使他們都向我們展示自己的實力。

  • But how did this under-the-radar startup pull it off?

    但是,這家名不見經傳的初創公司是如何做到這一點的呢?

  • We're inside OpenAI, and we're going to get some answers.

    我們在 OpenAI 內部,我們將得到一些答案。

  • Is it magic?

    是魔法嗎?

  • Is it just algorithms?

    僅僅是算法嗎?

  • Is it going to save us or destroy us?

    它將拯救我們還是毀滅我們?

  • Let's go find out.

    讓我們一探究竟。

  • I love the plants.

    我喜歡這些植物。

  • It feels so alive.

    它給人一種生機勃勃的感覺。

  • So amazing.

    太神奇了

  • I love it here.

    我喜歡這裡。

  • It's giving me very Westworld spa vibes.

    這給我《西部世界》水療中心的感覺。

  • It's almost like suspended in space and time a little bit.

    它幾乎就像懸浮在時空中一樣。

  • Yeah, it is a little bit of futuristic feel.

    是的,有點未來主義的感覺。

  • This is one of the most introspective minds at OpenAI.

    這是 OpenAI 最善於反思的人之一。

  • We all know Sam Altman, the CEO.

    我們都認識首席執行官山姆-奧特曼。

  • But Mira Murati is a chief architect behind OpenAI's strategy.

    但米拉·穆拉蒂(Mira Murati)是 OpenAI 戰略背後的首席架構師。

  • This looks like the OpenAI logo.

    這看起來像是 OpenAI 的徽標。

  • It is.

    就是這樣。

  • Ilya actually painted this.

    這其實是伊利亞畫的。

  • Ilya, the chief scientist.

    伊利亞,首席科學家。

  • Yes.

    是的。

  • What is the flower meant to symbolize?

    這朵花象徵著什麼?

  • My guess is that it's AI that loves humanity.

    我猜它象徵著熱愛人類的人工智能。

  • We're very focused on dealing with the challenges of hallucination, truthfulness, reliability, alignment of these models.

    我們非常專注於應對幻覺、真實性、可靠性、這些模型的一致性等挑戰。

  • Has anyone left because they're like, you know what, I disagree?

    有人因為 "你知道嗎,我不同意 "而離開嗎?

  • There have been, over time, people that left to start other organizations because of disagreements on the strategy around deployment.

    隨著時間的推移,有些人因為在部署戰略上出現分歧而離開,成立了其他組織。

  • And how do you find common ground when disagreements do arise?

    當出現分歧時,如何找到共同點?

  • You want to be able to have this constant dialogue and figure out how to systematize these concerns.

    你希望能夠不斷進行對話,並找出如何將這些問題系統化的方法。

  • What is the job of a CTO?

    首席技術官的工作是什麼?

  • It's a combination of guiding the teams on the ground, thinking about longer-term strategy, figuring out our gaps, and making sure that the teams are well-supported to succeed.

    這是指導實地團隊、思考長期戰略、找出我們的差距,以及確保團隊獲得良好支持以取得成功的綜合工作。

  • Yeah.

    是啊

  • Sounds like a big job.

    聽起來是個大工程。

  • Solving impossible problems.

    解決不可能解決的問題

  • Solving impossible problems, yeah.

    解決不可能解決的問題

  • When you were making the decision about releasing ChatGPT into the wild, I'm sure there was like a go or no-go moment.

    當你們決定向公眾發佈 ChatGPT 時,我相信一定有過一個 "放行" 還是 "不放行" 的抉擇時刻。

  • Take me back to that day.

    帶我回到那一天

  • We had ChatGPT for a while, and we sort of hit a point where we could really benefit from having more feedback from how people are using it, what are the risks, what are the limitations, and learn more about this technology that we have created and start bringing it into the public consciousness.

    我們使用 ChatGPT 已經有一段時間了,後來到了一個階段,我們發現,如果能從人們如何使用它、有哪些風險、有哪些限制等方面獲得更多反饋,我們就能真正受益,進一步瞭解我們所創造的這項技術,並開始讓它進入公眾視野。

  • It became the fastest-growing tech product in history.

    它成為歷史上增長最快的科技產品。

  • It did.

    的確如此。

  • Did that surprise you?

    你感到驚訝嗎?

  • I mean, what was your reaction to the world's reaction?

    我是說,你對世界的反應是什麼?

  • We were surprised by how much it captured the imaginations of the general public and how much people just loved spending time talking to this AI system and interacting with it.

    讓我們感到驚訝的是,它如此吸引大眾的想象力,人們如此喜歡花時間與這個人工智能系統交談和互動。

  • ChatGPT can now mimic a human.

    ChatGPT 現在可以模仿人類。

  • It can write.

    它可以寫

  • It can code.

    它可以編碼。

  • At the most basic level, how does this all happen?

    在最基本的層面上,這一切是如何發生的呢?

  • So, Chat GPT is a neural network that has been trained on a huge amount of data on a massive supercomputer, and the goal during this training process was to predict the next word in a sentence, and it turns out that as you train larger and larger models and more and more data, the capabilities of these models also increase.

    是以,Chat GPT 是一個在大型超級計算機上通過海量數據訓練出來的神經網絡,訓練過程中的目標是預測句子中的下一個單詞,事實證明,隨著訓練的模型越來越大,數據越來越多,這些模型的能力也會越來越強。
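The training objective Murati describes, predicting the next word, can be sketched with a deliberately tiny toy model. This is not OpenAI's training setup, just a bigram counter that shows what "learn to predict the next word from data" means at the simplest possible level:

```python
# Toy sketch of the "predict the next word" objective described above.
# A bigram model counts, for each word, which word tends to follow it;
# real systems like ChatGPT use huge neural networks instead of counts.
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Record which words follow which in the training text."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent continuation seen during training."""
    if word not in model:
        return "<unk>"
    return model[word].most_common(1)[0][0]

model = train_bigram("the model predicts the next word and the next word again")
print(predict_next(model, "next"))  # -> "word"
```

The interview's point about scale is the part this toy cannot show: the surprising result is that the same simple objective, applied with far larger models and far more data, yields increasingly general capabilities.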

  • They become more powerful, more helpful, and as you invest more on alignment and safety, they become more reliable and safe over time.

    隨著時間的推移,它們會變得更強大、更有用,而且隨著你在調整和安全方面投入更多,它們也會變得更可靠、更安全。

  • OpenAI has kind of turbocharged this competitive frenzy.

    OpenAI 在某種程度上為這場競爭狂潮按下了加速鍵。

  • Do you think you can beat Google at its own game?

    你認為你能在谷歌的遊戲中擊敗它嗎?

  • Do you think you can take significant market share in search?

    您認為您能在搜索領域佔據重要的市場份額嗎?

  • We didn't set out to dominate search.

    我們並不是要主宰搜索。

  • What Chat GPT offers is a different way to understand information, and you could be, you know, searching, but you're searching in a much more intuitive way versus keyword-based.

    Chat GPT 提供的是一種理解資訊的不同方式,你可以進行搜索,但搜索方式要比基於關鍵字的搜索直觀得多。

  • I think the whole world is sort of now moving in this direction.

    我認為現在整個世界都在朝著這個方向發展。

  • The air of confidence, obviously, that Chat GPT sometimes delivers an answer with.

    顯然,ChatGPT 有時在給出答案時帶著一種自信的口吻。

  • Why not just sometimes say, I don't know?

    為什麼有時不說 "我不知道 "呢?

  • The goal is not to predict the next word reliably or safely.

    我們的目標不是可靠或安全地預測下一個單詞。

  • When you have such general capabilities, it's very difficult to handle some of the limitations, such as what is correct.

    當你有了這樣的通用能力,就很難處理一些限制,比如什麼是正確的。
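The tension in this exchange, a next-word objective versus saying "I don't know", can be illustrated with a toy decoder. The function and the probability values below are purely hypothetical, but they show why such a model always has a most-likely answer, and how a confidence threshold is one conceivable way to make it abstain:

```python
# Hypothetical illustration: a next-word objective always yields a
# "most likely" token, even when the distribution is nearly flat.
# A confidence threshold can turn low certainty into an abstention.
def answer(probs: dict, threshold: float = 0.5) -> str:
    """Pick the most probable token, or abstain if confidence is low."""
    token, p = max(probs.items(), key=lambda kv: kv[1])
    return token if p >= threshold else "I don't know"

confident = {"Paris": 0.92, "Lyon": 0.05, "Nice": 0.03}   # model is sure
unsure = {"1912": 0.34, "1913": 0.33, "1914": 0.33}       # nearly a coin flip

print(answer(confident))  # -> "Paris"
print(answer(unsure))     # -> "I don't know"
```

This is only a sketch of the idea; as the interview notes, deciding what "correct" even means for a general-purpose model is far harder than thresholding a single probability.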

  • Some of these texts and some of the data is biased.

    其中一些文本和一些數據有失偏頗。

  • Some of it may be incorrect.

    其中有些可能是不正確的。

  • Isn't this going to accelerate the misinformation problem?

    這難道不會加劇錯誤信息的問題嗎?

  • I mean, we haven't been able to crack it on social media for like a couple of decades.

    我的意思是,我們在社交媒體上已經有幾十年無法破解了。

  • Misinformation is a really complex, hard problem.

    錯誤信息確實是一個複雜而棘手的問題。

  • Right now, one of the things that I'm most worried about is the ability of models like GPT-4 to make up things.

    現在,我最擔心的事情之一就是 GPT-4 等模型的編造能力。

  • We refer to this as hallucinations.

    我們稱之為幻覺。

  • So they will convincingly make up things and it requires being aware and just really knowing that you cannot fully, blindly rely on what the technology is providing as an output.

    是以,他們會令人信服地胡編亂造,這就要求我們必須意識到,不能完全、盲目地依賴技術提供的輸出結果。

  • I want to talk about this term hallucination because it's a very human term.

    我想談談 "幻覺 "這個詞,因為這是一個非常人性化的詞。

  • Why use such a human term for basically an AI that's just making mistakes?

    為什麼要用這麼人性化的詞語來形容一個只會犯錯的人工智能?

  • A lot of these general capabilities are actually quite human-like.

    這些一般能力中,有很多其實都很像人類。

  • Sometimes when we don't know the answer to something, we will just make up an answer.

    有時,當我們不知道某件事情的答案時,我們就會編造一個答案。

  • We will rarely say, I don't know.

    我們很少會說,我不知道。

  • And so, there is a lot of human hallucination in a conversation and sometimes we don't do it on purpose.

    是以,對話中有很多人類的幻覺,有時我們並不是故意的。

  • Should we be worried about AI though that feels more and more human?

    我們是否應該擔心感覺越來越像人類的人工智能?

  • Like, should AI have to identify itself as artificial when it's interacting with us?

    比如,人工智能在與我們互動時,是否應該表明自己是人工智能?

  • I think it's a different kind of intelligence.

    我認為這是一種與眾不同的智慧。

  • It is important to distinguish output that's been provided by a machine versus another human.

    重要的是,要能區分由機器提供的輸出和由另一個人類提供的輸出。

  • But we are moving towards a world where we're collaborating with these machines more and more and so output will be hybrid.

    但是,我們正在邁向這樣一個世界:我們越來越多地與這些機器合作,是以產出將是混合型的。

  • All of the data that you're training this AI on, it's coming from writers, it's coming from artists.

    你訓練人工智能的所有數據都來自作家和藝術家。

  • How do you think about giving value back to those people when these are also people who are worried about their jobs going away?

    當這些人也在擔心自己的工作不保時,你如何考慮讓他們重新獲得價值?

  • I don't know exactly how it would work in practice that you can sort of account for information created by everyone on the internet.

    我不太清楚在實踐中要如何對互聯網上每個人創造的資訊逐一核算回報。

  • I think there are definitely going to be jobs that will be lost and jobs that will be changed as AI continues to advance and integrate in the workforce.

    我認為,隨著人工智能的不斷進步和融入勞動力大軍,肯定會有工作崗位流失,也會有工作崗位發生變化。

  • Prompt engineering is a job today.

    提示詞工程是當今的一項工作。

  • That's not something that we could have predicted.

    這不是我們能預料到的。

  • Think of prompt engineers like AI whisperers.

    把提示工程師想象成人工智能耳語者。

  • They're highly skilled at selecting the right words to coax AI tools into generating the most accurate and illuminating responses.

    他們善於選擇合適的詞語,哄騙人工智能工具生成最準確、最有啟發性的回答。

  • It's a new job born from AI that's fetching hundreds of thousands of dollars a year.

    這是人工智能催生的新工作,年薪高達數十萬美元。

  • What are some tips to being an ace prompt engineer?

    成為王牌提示工程師有哪些祕訣?

  • You know, it's this ability to really develop an intuition for how to get the most out of the model.

    你知道,這種能力能夠真正形成一種直覺,知道如何最大限度地利用模型。

  • How to prompt it in the right ways, give it enough context for what you're looking for.

    如何以正確的方式進行提示,提供足夠的背景資訊,以滿足您的需求。
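Murati's tip about prompting "in the right ways" and giving the model enough context can be sketched as a small helper that assembles a structured prompt. The function and its fields are hypothetical, not any real API; actual prompt formats vary by model:

```python
# Minimal sketch of the "give it enough context" tip above.
# build_prompt and its Context/Example/Task layout are invented for
# illustration -- real prompt conventions differ across models and APIs.
def build_prompt(task: str, context: str = "", examples=None) -> str:
    """Assemble a prompt that states background, examples, then the task."""
    parts = []
    if context:
        parts.append(f"Context: {context}")
    for ex in examples or []:
        parts.append(f"Example: {ex}")
    parts.append(f"Task: {task}")
    return "\n".join(parts)

prompt = build_prompt(
    task="Summarize the interview in two sentences.",
    context="A TV interview with OpenAI's CTO about ChatGPT and AI safety.",
    examples=["Keep names (e.g. Mira Murati) exactly as spoken."],
)
print(prompt)
```

The design point mirrors the transcript: the skill is less about magic words than about supplying the background and constraints the model needs to infer what you are looking for.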

  • One of the things that we talked about earlier was hallucinations and these large language models not having the ability to always be highly accurate.

    我們之前談到的其中一件事就是幻覺,這些大型語言模型並不能始終保持高度準確性。

  • So I'm asking the model with a browsing plugin to fact check this information.

    是以,我要求帶有瀏覽插件的模型對這些資訊進行事實核查。

  • And it's now browsing the web.

    它現在正在瀏覽網頁。

  • So there's this report that these workers in Kenya were getting paid $2 an hour to do the work on the backend to make answers less toxic.

    是以有報道稱,肯亞的這些工人每小時能拿到 2 美元的報酬,他們在後端進行工作,以減少答案的毒性。

  • And my understanding is this work is, it can be difficult, right?

    我的理解是,這項工作可能很困難,對嗎?

  • Because you're reading texts that might be disturbing and trying to clean them up.

    因為你正在閱讀那些可能令人不安的文本,並試圖將它們清理乾淨。

  • So we need to use contractors sometimes to scale.

    是以,我們有時需要利用承包商來擴大規模。

  • We chose that particular contractor because of their known safety standards.

    我們之所以選擇該承包商,是因為他們的安全標準眾所周知。

  • And since then we've stopped working with them.

    從那以後,我們就不再與他們合作了。

  • But as you said, this is difficult work and we recognize that and we have mental health standards and wellness standards that we share with contractors.

    但正如你所說,這是一項艱鉅的工作,我們認識到這一點,我們有心理健康標準和健康標準,並與承包商共享。

  • I think a lot about my kids and them having relationships with AI someday.

    我經常想到我的孩子,想到他們有一天會與人工智能建立關係。

  • How do you think about what the limits should be and what the possibilities should be when you're thinking about a child?

    在考慮孩子的問題時,你如何看待孩子的極限和可能性?

  • I think we should be very careful in general with putting very powerful systems in front of more vulnerable populations.

    我認為,在將非常強大的系統置於更脆弱的人群面前時,我們總體上應該非常謹慎。

  • There are certainly checks and balances in place because it's still early and we still don't understand all the ways in which this could affect people.

    當然也有制衡措施,因為現在還為時尚早,我們還不瞭解這可能對人們產生影響的所有方式。

  • There's all this talk about, you know, relationships and AI.

    大家都在談論關係和人工智能。

  • Like, could you see yourself developing a relationship with an AI?

    比如,你能想象自己與人工智能發展關係嗎?

  • I'd say yes as a reliable tool that enhances my life, makes my life better.

    我會說 "是的",因為它是一個可靠的工具,能改善我的生活,讓我的生活更美好。

  • As we ponder the existential idea that we might all have relationships with AI someday, there's an AI gold rush happening in Silicon Valley.

    在我們思考 "有一天我們都可能與人工智能發生關係 "這一存在主義觀點時,硅谷正在掀起一場人工智能淘金熱。

  • Venture capitalists are pouring money into anything AI, hoping to find the next big thing.

    風險資本家們正在向一切與人工智能沾邊的初創企業投入資金,希望找到下一個風口。

  • Reid Hoffman, the co-founder of LinkedIn and an early investor in Facebook, knows a thing or two about striking gold.

    雷德-霍夫曼(Reid Hoffman)是 LinkedIn 的聯合創始人,也是 Facebook 的早期投資人,他深諳淘金之道。

  • He was an early OpenAI backer and is, in a way, trying to take society's hand and guide us all through the age of AI.

    他是 OpenAI 的早期支持者,在某種程度上,他正試圖牽起社會的手,引導我們共同走過人工智能時代。

  • I mean, gosh, 12 years we've been talking.

    我是說,天哪,我們已經談了 12 年了。

  • Maybe longer.

    也許更長。

  • That's awesome.

    太棒了

  • A long time.

    很長時間

  • You have been on the ground floor of some of the biggest tech platform shifts in history.

    您曾親歷了歷史上最大規模的科技平臺變革。

  • The beginnings of the internet, mobile.

    互聯網、行動電話的雛形。

  • Do you think AI is gonna be even bigger?

    你認為人工智能會變得更大嗎?

  • I think so.

    我想是的。

  • It builds on the internet, mobile, cloud, data.

    它以互聯網、移動、雲和數據為基礎。

  • All of these things come together to make AI work.

    所有這些共同作用,讓人工智能發揮作用。

  • And so that causes it to be the crescendo, the addition to all of this.

    是以,這也是所有這一切的高潮和補充。

  • I mean, one of the problems with the current discourse is that it's too much of the fear-based versus hope-based.

    我的意思是,當前討論中的一個問題是,恐懼與希望的對立太多了。

  • Imagine a tutor on every smartphone for every child in the world.

    想象一下,世界上每個孩子的智能手機上都有一個輔導老師。

  • That's possible.

    這是可能的。

  • That's line of sight from what we see with current AI models today.

    從當前的人工智能模型來看,這已經是可以預見的了。

  • You coined this term, blitzscaling.

    你創造了 "閃電縮放 "這個詞。

  • Blitzscaling, in its precise definition, is prioritizing speed over efficiency in an environment of uncertainty.

    閃電式擴張的確切定義是,在不確定的環境中,優先考慮速度而非效率。

  • How do you go as fast as possible in order to be the first to scale?

    如何以最快的速度率先擴大規模?

  • Does AI blitzscale?

    人工智能會閃電式擴張嗎?

  • Well, it certainly seems like it today, doesn't it?

    今天看來的確如此,不是嗎?

  • I think the speed at which we will integrate it into our lives will be faster than we integrated the iPhone into our lives.

    我認為,我們將其融入生活的速度將比我們將 iPhone 融入生活的速度更快。

  • There's gonna be a co-pilot for every profession.

    每個職業都會有一個副駕駛。

  • And if you think about that, that's huge.

    如果你仔細想想,這就很了不起了。

  • And not just professional activities, because it's gonna write my kids' papers, right?

    而且不僅僅是職業活動,因為它會寫我孩子的論文,對吧?

  • My kids' high school papers?

    我孩子的高中論文?

  • Yes, although the hope is that in the interaction with it, they'll learn to create much more interesting papers.

    是的,不過我們希望他們能在與它的互動中學會創作更有趣的論文。

  • You and Elon Musk go way back.

    你和埃隆-馬斯克是老相識了。

  • He co-founded OpenAI with Sam Altman, the CEO of OpenAI.

    他與 OpenAI 首席執行官 Sam Altman 共同創立了 OpenAI。

  • You and I have talked a lot over the years about how you have been sort of this node in the PayPal mafia, and you can talk to everyone and maybe you disagree, but you are all still friends.

    多年來,你和我經常談論你是如何成為貝寶黑手黨中的一個節點,你可以和每個人交談,也許你們意見相左,但你們仍然是朋友。

  • What did Elon say that got you interested so early?

    埃隆說了什麼讓你這麼早就感興趣?

  • Part of the reason I got back into AI, and I was part of sitting around the table in the crafting of OpenAI, was that Elon came to me and said, look, this AI thing is coming.

    我重新回到人工智能領域的部分原因是,埃隆找到我說:"你看,人工智能就要來了。"我也參與了 OpenAI 的籌建工作。

  • Once I started digging into it, I realized that this pattern, that we're gonna see the next generation of amazing capabilities coming from these computational devices.

    一旦我開始深入研究,我就意識到這種模式,我們將看到下一代驚人的能力來自這些計算設備。

  • And then, one of the things I had been arguing with Elon at the time about, was that Elon was constantly using the word robocalypse, which we, as human beings, tend to be more easily and quickly motivated by fear than by hope.

    然後,我當時一直在和埃隆爭論的一個問題是,埃隆一直在使用 "機器人啟示錄"(robocalypse)這個詞,而作為人類,我們往往更容易、更快地被恐懼所驅使,而不是被希望所驅使。

  • So you're using the term robocalypse, and everyone imagines the Terminator and all the rest.

    所以你用了 "機器人啟示錄 "這個詞 每個人都會想象出 "終結者 "之類的東西

  • Sounds pretty scary.

    聽起來挺嚇人的。

  • It sounds very scary.

    聽起來很嚇人。

  • Robocalypse doesn't sound like something we want.

    機器人啟示錄聽起來不像是我們想要的東西。

  • Yeah, stop saying that.

    是啊,別再說了。

  • Because actually, in fact, the chance that I could see anything like a robocalypse happening is so de minimis relative to everything else.

    因為事實上,相對於其他一切,我能看到類似機器人啟示錄發生的可能性微乎其微。

  • So you did come together on OpenAI.

    所以,你們確實在 OpenAI 上走到了一起。

  • How did that happen?

    怎麼會這樣?

  • I think it started with Elon and Sam having a bunch of conversations.

    我想,這一切都源於埃隆和薩姆的一番對話。

  • And then, since I know both of them quite well, I got called in.

    然後,因為我和他們都很熟,我就被叫去了。

  • And I was like, look, I think this could really make sense.

    我就想,聽著,我覺得這真的很有意義。

  • Something should be the counterweight to all of the natural work that's gonna happen within commercial realms.

    在商業領域中,應該有一些東西與所有的自然工作相抗衡。

  • How do we make sure that one company doesn't dominate the industry, but the tools are provided across the industry so innovation can benefit from startups and all the rest?

    我們如何確保一家公司不會主宰整個行業,而是為整個行業提供工具,使創新能夠從初創企業和其他所有企業中獲益?

  • I was like, great.

    我當時想,太好了。

  • And let's do this thing, OpenAI.

    讓我們開始吧,OpenAI。

  • I did ask ChatGPT what questions I should ask you.

    我問過 ChatGPT,我應該問你什麼問題。

  • I thought its questions were pretty boring.

    我覺得它的問題很無聊。

  • Yes.

    是的。

  • Your answers were pretty boring, too.

    你的回答也很無聊。

  • So we're not getting replaced anytime soon.

    所以,我們不會很快被取代。

  • But clearly, this has really struck a nerve.

    但很顯然,這確實觸動了大眾的神經。

  • There are people out there who are gonna fall for it.

    有人會上當受騙。

  • Yes.

    是的。

  • Shouldn't we be worried about that?

    難道我們不應該為此擔心嗎?

  • Okay, so everyone's encountered a crazy person who's drunk off their ass at a cocktail party who says really odd things, or at least every adult has.

    好吧,每個人都遇到過在雞尾酒會上喝得爛醉如泥的瘋子,他們會說一些非常奇怪的話,至少每個成年人都遇到過。

  • And, you know, that's not like the world didn't end.

    而且,你知道,世界並沒有毀滅。

  • Right?

    對不對?

  • We do have to pay attention to areas that are harmful.

    我們確實必須關注那些有害的領域。

  • Like, for example, someone's depressed, they're thinking about self-harm.

    比如,有人情緒低落,想自殘。

  • You want all channels by which they could get to self-harm to be limited.

    您希望限制他們可以進行自我傷害的所有管道。

  • That isn't just chatbots.

    這不僅僅是聊天機器人的問題。

  • That could be communities of human beings.

    這可能是人類社區。

  • That could be search engines.

    這可能是搜索引擎。

  • You have to pay attention to all the dimensions of it.

    你必須關注它的方方面面。

  • How are we overestimating AI?

    我們是如何高估人工智能的?

  • It still doesn't really do something that I would say is original to an expert.

    它仍然做不出任何在專家看來稱得上原創的東西。

  • So, for example, one of the questions I asked was how would Reid Hoffman make money by investing in artificial intelligence?

    例如,我提出的一個問題是,雷德·霍夫曼(Reid Hoffman)如何通過投資人工智能賺錢?

  • And the answer he gave me was a very smart, very well-written answer that would have been written by a professor at a business school who didn't understand venture capital.

    他給我的答案非常聰明,寫得非常好,應該是一個不瞭解風險投資的商學院教授寫的。

  • Right?

    對不對?

  • So it seems smart.

    所以這看起來很聰明。

  • Would study large markets.

    會研究大型市場。

  • Would realize what products would be substituted in the large markets.

    會意識到哪些產品會在大市場中被替代。

  • Would find teams to go do that and invest in them.

    會找團隊去做,並對他們進行投資。

  • And this was all written, very credible, and completely wrong.

    這些都是寫出來的,非常可信,而且完全錯誤。

  • The newest edge of the information is still beyond these systems.

    這些系統仍然無法獲得最新的邊緣資訊。

  • Billions of dollars are going into AI.

    數十億美元正投入到人工智能領域。

  • My inbox is filled with AI pitches.

    我的收件箱裡塞滿了人工智能推銷。

  • Last year it was crypto and Web3.

    去年是加密貨幣和 Web3。

  • How do we know this isn't just the next bubble?

    我們怎麼知道這不是下一個泡沫?

  • I do think that the generative AI is the thing that has the broadest touch of everything.

    我確實認為,生成式人工智能是最能廣泛觸及一切的東西。

  • Now, which places are the right places to invest?

    現在,哪些地方適合投資?

  • I think those are still things we're working on now, obviously, as venture capitalists.

    我認為,作為風險投資人,這些顯然仍是我們現在正在努力的方向。

  • Part of what we do is we try to figure that out in advance, you know, years before other people see it coming.

    我們所做的部分工作就是試圖提前弄清這一點,你知道的,比別人早幾年看到它的到來。

  • But I think that there will be massive new companies built.

    但我認為,將會有大量新公司成立。

  • It does seem, in some ways, like a lot of AI is being developed by an elite group of companies and people.

    從某種程度上來說,很多人工智能似乎都是由一群精英公司和精英人士開發的。

  • Is that something that you see happening?

    你看到這種情況發生了嗎?

  • In some ideal universe, you'd say, for a technology that would impact billions of people, somehow billions of people should directly be involved in creating it.

    在某個理想世界裡,你會說,對於一項會影響數十億人的技術來說,應該有數十億人直接參與創造。

  • But that's not how any technology anywhere in history gets built.

    但歷史上任何一項技術都不是這樣誕生的。

  • And there's reasons you have to build it at speed.

    這也是你必須加快建造速度的原因。

  • But the question is, how do you get the right conversations and the right issues on the table?

    但問題是,如何讓正確的對話和正確的問題擺上桌面?

  • So do you see an AI mafia forming?

    你認為人工智能黑手黨正在形成嗎?

  • I definitely think that there is, because you're referring to the PayPal mafia.

    我肯定認為有,因為你指的是貝寶黑手黨。

  • Of course.

    當然。

  • I definitely think that there's a network of folks who have been deeply involved over the last few years who will have a lot of influence on how the technology happens.

    我肯定認為,過去幾年裡深入參與其中的人們將對技術的發展產生很大的影響。

  • Do you think AI will shake up the big tech hierarchy significantly?

    您認為人工智能會大幅撼動大型科技公司的層級結構嗎?

  • What it certainly does is it creates a wave of disruption.

    當然,這也會帶來一股破壞浪潮。

  • For example, with these large language models in search, what do you want?

    例如,在搜索中使用這些大型語言模型時,你想要什麼?

  • Do you want 10 blue links?

    你想要 10 個藍色鏈接嗎?

  • Or do you want an answer?

    還是你想要一個答案?

  • In a lot of search cases, you want an answer.

    在很多搜索案例中,你都想要一個答案。

  • And a generated answer that's like a mini Wikipedia page is awesome.

    生成的答案就像一個小型維基百科頁面,非常棒。

  • That's a shift.

    這是一個轉變。

  • So I think we'll see a profusion of startups doing interesting things in this.

    是以,我認為我們會看到大量初創企業在這方面做出有趣的事情。

  • But can the next Google or Facebook really emerge if Google and Facebook (or Meta), Apple, Amazon, and Microsoft are all running the playbook?

    但是,如果谷歌和 Facebook(或者說 Meta)、蘋果、亞馬遜和微軟都在照著同一套打法行事,下一個谷歌或 Facebook 真的能出現嗎?

  • Do I think that we'll be another one to three companies that will be the size of the five big tech giants emerging, possibly from AI?

    我是否認為,我們會再出現一到三家規模與五大科技巨頭相當的公司,可能來自人工智能?

  • Absolutely, yes.

    當然,是的。

  • Now, does that mean that one of them is gonna collapse?

    現在,這是否意味著其中一個會崩潰?

  • No, not necessarily.

    不,不一定。

  • And it doesn't need to.

    它並不需要這樣。

  • The more that we have, the better.

    我們擁有的越多越好。

  • So what are the next big five?

    那麼,下一個五巨頭是什麼?

  • Well, that's what we're trying to invest in.

    這正是我們要投資的。

  • You're on the board of Microsoft.

    你是微軟董事會成員

  • Obviously, Microsoft is making a big AI push.

    顯然,微軟正在大力推進人工智能。

  • Did you bring Satya and Sam or have any role in bringing Satya and Sam closer together?

    是你讓薩特雅和薩姆走到一起的,還是你在拉近薩特雅和薩姆的距離方面發揮了什麼作用?

  • Because Microsoft obviously has $10 billion now in OpenAI.

    因為很明顯,微軟現在已向 OpenAI 投入了 100 億美元。

  • Well, I think I could, I probably have a, you know, both of them are close to me and know me and trust me well.

    嗯,我想我可以,我可能有一個,你知道,他們兩個都和我很親近,很瞭解我,也很信任我。

  • So I think I've helped facilitate understanding and communications.

    是以,我認為我幫助促進了理解和溝通。

  • Elon left OpenAI years ago and pointed out that it's not as open as it used to be.

    埃隆幾年前就離開了 OpenAI,並指出它已經不像以前那麼開放了。

  • He said he wanted it to be a nonprofit counterweight to Google.

    他說,他希望該公司成為與谷歌抗衡的非營利組織。

  • Now, it's a closed source maximum profit company effectively controlled by Microsoft.

    現在,它是一家由微軟實際控制的封閉源代碼最大利潤公司。

  • Does he have a point?

    他說的有道理嗎?

  • Well, he's wrong on a number of levels there.

    他在很多方面都錯了。

  • So one is it's run by a 501(c)(3).

    其一,它由一個 501(c)(3) 非營利組織管理。

  • It is a nonprofit.

    它是一家非營利組織。

  • But it does have a for-profit part.

    但它確實有營利的部分。

  • The commercial system, which is all carefully done, is to bring in capital to support the nonprofit mission.

    商業系統的所有精心設計都是為了引入資本,支持非營利組織的使命。

  • Now, get to the question of, for example, open.

    現在,我們來談談開放的問題。

  • So DALL·E was ready for four months before it was released.

    所以 DALL·E 在正式發佈前已經準備好了四個月。

  • Why did it delay for four months?

    為什麼拖延了四個月?

  • Because it was doing safety training.

    因為它正在進行安全培訓。

  • It said, well, we don't wanna have this being used to create child sexual material.

    它說,我們不想讓它被用來製作兒童性材料。

  • We don't wanna have this being used for assaulting individuals or doing deep fakes.

    我們不想讓它被用來襲擊他人或進行深度偽造。

  • So we're not gonna open source it.

    所以我們不會開源。

  • We're gonna release it through an API so we can be seeing what the results are and making sure it doesn't do any of these harms.

    我們將通過應用程序接口發佈它,這樣我們就能看到結果,確保它不會造成任何危害。

  • So it's open because it has open access to the APIs, but it's not open because it's open source.

    所以它是開放的,因為它開放了應用程序接口,但它不是開放的,因為它是開源的。

  • There are folks out there who are angry actually about OpenAI's branching out from nonprofit to for-profit.

    實際上,有些人對 OpenAI 從非營利擴展出營利業務感到憤怒。

  • Is there a bit of a bait and switch there?

    這是不是有點誘餌和調包?

  • The cleverness that Sam and everyone else figured out is they could say, look, we can do a market commercial deal where we say we'll give you commercial licenses to parts of our technology in various ways.

    山姆和其他人的聰明之處在於,他們可以說,聽著,我們可以進行市場商業交易,我們說,我們將以各種方式向你們提供我們技術的部分商業許可。

  • And then we can continue our mission of beneficial AI.

    然後,我們就可以繼續開展有益的人工智能任務。

  • The AI graveyard is filled with algorithms that got into trouble.

    人工智能墳場裡到處都是陷入困境的算法。

  • How can we trust OpenAI or Microsoft or Google or anyone to do the right thing?

    我們怎麼能相信 OpenAI、微軟、谷歌或任何人會做正確的事?

  • Well, we need to be more transparent.

    我們需要更加透明。

  • But on the other hand, of course, a problem exactly as you're alluding to is people say, well, the AI should say that or shouldn't say that.

    當然,另一方面,正如你所提到的問題,人們會說,人工智能應該這麼說,或者不應該這麼說。

  • We can't even really agree on that ourselves.

    我們自己都無法達成一致。

  • So we don't want that to be litigated by other people.

    是以,我們不希望這由其他人來裁定。

  • We want that to be a social decision.

    我們希望這是一個社會決定。

  • So how does this shake out globally?

    那麼,全球範圍內的情況又會如何呢?

  • We should be trying to build the industries of the future.

    我們應該努力建設未來的工業。

  • That's what's the most important thing.

    這才是最重要的。

  • And it's one of the reasons why I tend to very much speak against people like, oh, we should be slowing down.

    這也是我傾向於反對人們說 "我們應該放慢腳步 "的原因之一。

  • Do you have any intention of slowing down?

    你打算放慢腳步嗎?

  • We've been very vocal about these risks for many, many years.

    多年來,我們一直公開談論這些風險。

  • One of them is acceleration.

    其中之一就是加速度。

  • And I think that's a significant risk that we as a society need to grapple with.

    我認為這是我們社會需要應對的一個重大風險。

  • Building safe AI systems that are general is very complex.

    構建安全、通用的人工智能系統非常複雜。

  • It's incredibly hard.

    這是難以置信的困難。

  • So what does responsible innovation look like to you?

    那麼,對你來說,負責任的創新是什麼樣的?

  • Would you support, for example, a federal agency like the FDA that vets technology like it vets drugs?

    例如,您是否會支持像美國食品及藥物管理局這樣的聯邦機構,像審查藥品一樣審查技術?

  • I think some sort of trusted authority that can audit the systems based on some agreed upon principles would be very helpful.

    我認為,建立某種可信賴的權威機構,根據一些商定的原則對系統進行審計,會非常有幫助。

  • I've heard AI experts talk about the potential for the good future versus the bad future.

    我曾聽人工智能專家談論過好未來與壞未來的潛力。

  • In the bad future, there's talk about this leading to human extinction.

    在糟糕的未來,有人說這會導致人類滅絕。

  • Are those people wrong?

    這些人錯了嗎?

  • There is certainly a risk that when we have these AI systems that are able to set their own goals, they decide that their goals are not aligned with ours.

    當然,當我們擁有這些能夠設定自己目標的人工智能系統時,它們可能會決定自己的目標與我們的目標不一致。

  • And they do not benefit from having us around.

    我們的存在對他們沒有任何好處。

  • And could lead to human extinction.

    並可能導致人類滅絕。

  • That is a risk.

    這是一種風險。

  • I don't think this risk has gone up or down from the things that have been happening in the past few months.

    我不認為這種風險會因為過去幾個月發生的事情而上升或下降。

  • I think it's certainly been quite hyped.

    我認為它確實被誇大了。

  • And there is a lot of anxiety around it.

    人們為此焦慮不安。

  • If we're talking about the risk for human extinction, have you had a moment where you're just like, wow, this is big?

    如果我們談論的是人類滅絕的風險,你是否有過這樣的時刻:哇,這是個大問題?

  • I think a lot of us at OpenAI joined because we thought that this would be the most important technology that humanity would ever create.

    我認為,OpenAI 的許多成員之所以加入,是因為我們認為這將是人類創造的最重要的技術。

  • But of course, the risks on the other hand are also pretty significant.

    當然,另一方面的風險也相當大。

  • And this is why we're here.

    這就是我們在這裡的原因。

  • Do OpenAI employees still vote on AGI and when it will happen?

    OpenAI 的員工還在投票決定 AGI 及其實現時間嗎?

  • I actually don't know.

    其實我也不知道。

  • What is your prediction about AGI now?

    你現在對 AGI 有何預測?

  • And how far away it really is?

    它到底有多遠?

  • We're still quite far away from being at a point where these systems can make decisions autonomously and discover new knowledge.

    我們距離這些系統能夠自主決策和發現新知識的階段還很遙遠。

  • But I think I have more certainty around the advent of having powerful systems in our future.

    但我認為,在我們的未來,我對強大系統的出現有了更多的把握。

  • Should we even be driving towards AGI?

    我們是否應該向 AGI 邁進?

  • And do humans really want it?

    人類真的想要它嗎?

  • Advancements in society come from pushing human knowledge.

    社會的進步源於人類知識的進步。

  • Now that doesn't mean that we should do so in careless and reckless ways.

    但這並不意味著我們應該粗心大意、不計後果地這樣做。

  • I think there are ways to guide this development versus bring it to a screeching halt because of our potential fears.

    我認為有辦法引導這一發展,而不是因為我們的潛在恐懼而使其戛然而止。

  • So the train has left the station and we should stay on it.

    所以,火車已經離站,我們應該留在車上。

  • That's one way to put it.

    這是一種說法。



走進 ChatGPT 架構師 OpenAI,特邀 Mira Murati | The Circuit with Emily Chang (Inside OpenAI, the Architect of ChatGPT, featuring Mira Murati | The Circuit with Emily Chang)

  • 松崎洋介 posted on 2025/01/02