At Google, we are fully in our Gemini era.
Today, all of our two-billion-user products use Gemini.
Gemini 1.5 Pro is available today in Workspace Labs.
Let's see how this comes to life with Google Workspace.
People are always searching their emails in Gmail.
We are working to make it much more powerful with Gemini.
Now we can ask Gemini to summarize all recent emails from the school.
Maybe you were traveling this week and you couldn't make the PTA meeting.
The recording of the meeting is an hour long.
If it's from Google Meet, you can ask Gemini to give you the highlights.
People love using photos to search across their lives.
With Gemini, we're making that a whole lot easier.
And Ask Photos can also help you search your memories in a deeper way.
For example, you might be reminiscing about your daughter Lucia's early milestones.
You can ask Photos, "Show me how Lucia's swimming has progressed."
Here, Gemini goes beyond a simple search, recognizing different contexts across your photos and packaging it all up together in a summary.
Unlocking knowledge across formats is why we built Gemini to be multimodal from the ground up.
It's one model with all the modalities built in.
We've been rolling out Gemini 1.5 Pro with long context in preview over the last few months.
So today, we are expanding the context window to 2 million tokens.
So far, we've talked about two technical advances-- multimodality and long context.
Each is powerful on its own, but together they unlock deeper capabilities and more intelligence.
But what if it could go even further?
That's one of the opportunities we see with AI Agents.
Think about them as intelligent systems that show reasoning, planning, and memory, are able to think multiple steps ahead,
and work across software and systems, all to get something done on your behalf and, most importantly, under your supervision.
Today we have some exciting new progress to share about the future of AI assistants that we're calling Project Astra.
For a long time, we've wanted to build a universal AI agent that can be truly helpful in everyday life.
Here's a video of our prototype, which you'll see has two parts.
Each part was captured in a single take in real time.
What does that part of the code do?
This code defines encryption and decryption functions.
It seems to use AES-CBC encryption to encode and decode data based on a key and an initialization vector, IV.
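For readers who want a concrete picture of the kind of code the demo is describing, here is a minimal sketch, assuming Python and the third-party cryptography package, of AES-CBC encrypt and decrypt helpers driven by a key and an IV; it illustrates the technique, not the code shown on screen.

import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    # Pad to the AES block size, then encrypt in CBC mode.
    padder = padding.PKCS7(algorithms.AES.block_size).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return encryptor.update(padded) + encryptor.finalize()

def decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    # Decrypt in CBC mode, then strip the PKCS7 padding.
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

key = os.urandom(32)  # 256-bit key
iv = os.urandom(16)   # 128-bit initialization vector
assert decrypt(encrypt(b"hello", key, iv), key, iv) == b"hello"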
Do you remember where you saw my glasses?
Yes, I do.
Your glasses were on the desk near a red apple.
Give me a band name for this duo.
Golden Stripes.
Nice.
Thanks, Gemini.
Today, we're introducing Gemini 1.5 Flash.
Flash is a lighter-weight model compared to Pro.
It's designed to be fast and cost-efficient to serve at scale, while still featuring multimodal reasoning capabilities and breakthrough long context.
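As a rough sketch of what a multimodal request to Flash can look like through the Gemini API's Python SDK (google-generativeai): the model name follows the public API, while the API key and image file below are placeholders.

import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Gemini 1.5 Flash accepts mixed text and image input in a single request.
model = genai.GenerativeModel("gemini-1.5-flash")
image = PIL.Image.open("whiteboard.jpg")  # placeholder image

response = model.generate_content(
    ["Summarize the plan sketched on this whiteboard.", image]
)
print(response.text)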
There's one more area I'm really excited to share with you.
Our teams have made some incredible progress in generative video.
Today, I'm excited to announce our newest, most capable generative video model called Veo.
Veo creates high-quality 1080p videos from text, image, and video prompts.
It can capture the details of your instructions in different visual and cinematic styles.
For 25 years, we've invested in world-class technical infrastructure.
Today, we are excited to announce the sixth generation of TPUs, called Trillium.
Trillium delivers a 4.7x improvement in compute performance per chip over the previous generation.
Google Search is generative AI at the scale of human curiosity, and it's our most exciting chapter of Search yet.
All the advancements you'll see today are made possible by a new Gemini model customized for Google Search.
What really sets this apart is our three unique strengths.
This is Search in the Gemini era.
By the end of the year, AI Overviews will come to over a billion people.
We're making AI Overviews even more helpful for your most complex questions, the types that are really more like 10 questions in one.
You can ask your entire question with all its sub-questions and get an overview in seconds.
I'm really excited to share that soon you'll be able to ask questions with video.
Why will this not stay in place?
And in a near instant, Google gives me an AI Overview.
I get some reasons this might be happening, and steps I can take to troubleshoot.
Since last May, we've been hard at work making Gemini for Workspace even more helpful for businesses and consumers across the world.
Now, I can simply type out my question right here in the mobile card and say something like, "Compare my roof repair bids by price and availability."
This new Q&A feature makes it so easy to get quick answers on anything in my inbox.
Today, we'll show you how Gemini is delivering our most intelligent AI experience.
We're rolling out a new feature that lets you customize it for your own needs and create personal experts on any topic you want.
We're calling these Gems.
They're really simple to set up.
Just tap to create a Gem, write your instructions once, and come back whenever you need it.
Starting today, Gemini Advanced subscribers get access to Gemini 1.5 Pro with one million tokens.
That is the longest context window of any chatbot in the world.
You can upload a PDF up to 1,500 pages long or multiple files to get insights across a project.
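A minimal sketch of that long-context workflow through the Gemini API's Python SDK, assuming an API key: upload a large PDF once with the File API, then ask Gemini 1.5 Pro a question that spans the whole document. The file name and prompt are placeholders.

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Upload a long document once, then reference it directly in prompts.
report = genai.upload_file(path="project_report.pdf")  # placeholder file

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    [report, "Summarize the key decisions and open questions across this project."]
)
print(response.text)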
Now, we all know that chatbots can give you ideas for your next vacation.
But there's a lot more that goes into planning a great trip.
It requires reasoning that considers space-time logistics, and the intelligence to prioritize and make decisions.
That reasoning and intelligence all come together in the new trip planning experience in Gemini Advanced.
We've embarked on a multi-year journey to reimagine Android with AI at the core.
Now we're making Gemini context-aware so it can anticipate what you're trying to do and provide more helpful suggestions in the moment.
Let me show you how this works.
So my friend Pete is asking if I want to play pickleball this weekend.
But I'm new to this pickleball thing, and I can bring up Gemini to help with that.
Gemini knows I'm looking at a video, so it proactively shows me an "Ask this video" chip, so let me tap on that.
And now I can ask specific questions about the video.
So for example, what is the two-bounce rule?
So give it a moment-- and there.
I get a nice, distinct answer.
Starting with Pixel later this year, we'll be expanding what's possible with our latest model, Gemini Nano with multimodality.
This means your phone can understand the world the way you understand it.
So not just through text input, but also through sights, sounds, and spoken language.
Now let's shift gears and talk about Gemma, our family of open models, which are crucial for driving AI innovation and responsibility.
Today we're adding the newest member of the family, PaliGemma, our first vision-language open model, and it's available right now.
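As an illustration of how an open vision-language model like PaliGemma can be tried locally, here is a hedged sketch using Hugging Face Transformers; the model id and classes follow the public release, the image and question are placeholders, and the checkpoint requires accepting the Gemma license on Hugging Face.

from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

# Ask a question about a local image (placeholder path).
image = Image.open("garden.jpg")
inputs = processor(text="What is in this picture?", images=image, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(outputs[0], skip_special_tokens=True))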
I'm also excited to announce that we have Gemma 2 coming.
It's the next generation of Gemma, and it will be available in June.
So in a few weeks, we'll be adding a new 27-billion-parameter model to Gemma 2.
To us, building AI responsibly means both addressing the risks and maximizing the benefits for people and society.
We're improving our models with an industry-standard practice called red teaming, in which we test our own models and try to break them to identify weaknesses.
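The keynote describes red teaming only at a high level; purely as an illustration of the idea, a minimal automated pass might replay a set of adversarial prompts against a model and record any responses that trip a safety check. Everything below, including generate and violates_policy, is a hypothetical stand-in rather than Google's actual tooling.

from typing import Callable, Dict, List

def red_team(generate: Callable[[str], str],
             violates_policy: Callable[[str], bool],
             adversarial_prompts: List[str]) -> List[Dict[str, str]]:
    # Probe the model with each adversarial prompt and collect failures.
    findings = []
    for prompt in adversarial_prompts:
        response = generate(prompt)
        if violates_policy(response):
            # Each finding is a concrete weakness to address before release.
            findings.append({"prompt": prompt, "response": response})
    return findings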
I'm excited to introduce LearnLM, our new family of models based on Gemini and fine-tuned for learning.
Another example is a new feature in YouTube that uses LearnLM to make educational videos more interactive, allowing you to ask a clarifying question, get a helpful explanation, or take a quiz.
All of this shows the important progress we have made as we take a bold and responsible approach to making AI helpful for everyone.
To everyone here at Shoreline and the millions more watching around the world, here's to the possibilities ahead and to creating them together.
Thank you.