So, I started my first job as a computer programmer in my very first year of college -- basically, as a teenager.
Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, "Can he tell if I'm lying?" There was nobody else in the room.
"Can who tell if you're lying? And why are we whispering?"

The manager pointed at the computer in the room. "Can he tell if I'm lying?"

Well, that manager was having an affair with the receptionist.

(Laughter)

And I was still a teenager. So I whisper-shouted back to him, "Yes, the computer can tell if you're lying."

(Laughter)
Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.
I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers.

(Laughter)

Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They're developing cars that could decide who to run over. They're even building machines, weapons, that might kill human beings in war. It's ethics all the way down.
Machine intelligence is here. We're now using computation to make all sorts of decisions, but also new kinds of decisions. We're asking questions of computation that have no single right answer, that are subjective and open-ended and value-laden. We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to reoffend?" "Which news item or movie should be recommended to people?"
Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.
To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam, and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.
Much of this progress comes from a method called "machine learning." Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for."
Now, the upside is: this method is really powerful. The head of Google's AI systems called it "the unreasonable effectiveness of data." The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking.
So, consider a hiring algorithm -- a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good.
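A minimal sketch of what such a system might look like (hypothetical feature names and toy data; the talk does not describe any specific vendor's product): train a classifier on past employees labeled by their performance reviews, then score new applicants against that pattern.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features for past employees:
# [years_experience, skills_test_score, zip_code_id]
# zip_code_id stands in for the kind of incidental field that can act as
# a proxy for race or class even when no protected attribute is present.
past_employees = [
    [5, 88, 101], [2, 75, 102], [7, 91, 101],
    [3, 60, 103], [6, 85, 101], [1, 55, 103],
]
high_performer = [1, 0, 1, 0, 1, 0]  # labels from past performance reviews

model = RandomForestClassifier(random_state=0).fit(past_employees, high_performer)

# Score a new applicant: "find people like the existing high performers."
applicant = [[4, 80, 101]]
print(model.predict_proba(applicant)[0, 1])  # estimated P(high performer)
```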
I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers. And look -- human hiring is biased. I know.
I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" I'd be puzzled by the weird timing. It's 4pm. Lunch? I was broke, so free lunch. I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender. So hiring in a gender- and race-blind way certainly sounds good to me.
But with these systems, it is more complicated, and here's why: currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember -- for things you haven't even disclosed. This is inference.
I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms -- months before. No symptoms, there's prediction. She hopes it will be used for early intervention. Great! But now put this in the context of hiring.
So at this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, "Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women more likely to be pregnant in the next year or two but aren't pregnant now? What if it's hiring aggressive people because that's your workplace culture?"

You can't tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled "higher risk of depression," "higher risk of pregnancy," "aggressive guy scale." Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it.
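To see why there is nothing obvious to inspect, consider what the hypothetical hiring model sketched earlier actually contains: numeric weights over input features, none of them named for the trait you might worry about. (Real systems have thousands of derived features, which makes even this much inspection uninformative.)

```python
# Peek inside the toy hiring model from the earlier sketch. There is no
# variable labeled "higher risk of depression" or "higher risk of
# pregnancy", only anonymous numeric importances per input feature.
feature_names = ["years_experience", "skills_test_score", "zip_code_id"]
for name, weight in zip(feature_names, model.feature_importances_):
    print(f"{name}: {weight:.3f}")
# The numbers say which inputs the model leans on, not which human trait
# each input quietly proxies for: a black box with predictive power you
# don't understand.
```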
"What safeguards," I asked, "do you have
它就是個黑盒子,
to make sure that your black box isn't doing something shady?"
具有預測能力,但你不了解它。
She looked at me as if I had just stepped on 10 puppy tails.
我問:「你有什麼能確保
(Laughter)
你的黑盒子沒在暗地裡 做了什麼不可告人之事?
She stared at me and she said,
她看著我,彷彿我剛踩了 十隻小狗的尾巴。
"I don't want to hear another word about this."
(笑聲)
And she turned around and walked away.
她盯著我,說:
Mind you -- she wasn't rude.
「關於這事,我不想 再聽妳多說一個字。」
It was clearly: what I don't know isn't my problem, go away, death stare.
然後她就轉身走開了。
(Laughter)
Look, such a system may even be less biased than human managers in some ways. And it could make monetary sense. But it could also lead to a steady but stealthy shutting out of the job market of people with higher risk of depression. Is this the kind of society we want to build, without even knowing we've done this, because we turned decision-making over to machines we don't totally understand?
Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we're telling ourselves, "We're just doing objective, neutral computation."
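Here is one way the pick-up-and-amplify loop can arise, as a toy simulation (invented numbers, no real data): an ad system that allocates exposure to whichever group clicked more in the past keeps confirming its own skew, even when both groups respond identically.

```python
# Toy amplification loop (invented numbers, not any real ad system).
# Both groups click at the same 5% rate, but exposure follows history:
# the group with more recorded clicks gets all of the next impressions.
clicks = {"group_a": 60, "group_b": 40}  # slightly skewed historical data

for round_number in range(1, 6):
    favored = max(clicks, key=clicks.get)  # "optimize" toward past clicks
    clicks[favored] += int(1000 * 0.05)    # only the favored group sees the ad
    share_a = clicks["group_a"] / sum(clicks.values())
    print(f"round {round_number}: group_a share of clicks = {share_a:.0%}")
# A small initial skew hardens into near-total exclusion of group_b,
# and the data "objectively" confirms the bias it created.
```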
Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms, which researchers sometimes uncover but sometimes we don't even know about, can have life-altering consequences.
In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It's a commercial black box. The company refused to have its algorithm be challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and it was wrongly labeling black defendants as future criminals at twice the rate of white defendants.
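That headline finding is about error rates, and the metric is worth making concrete. A toy recomputation (invented numbers chosen only to mirror the "twice the rate" pattern; ProPublica published its actual data separately): the false positive rate is the share of defendants who did not reoffend but were nevertheless labeled high risk.

```python
# Toy numbers (not ProPublica's data) to illustrate the metric itself.
def false_positive_rate(labeled_high_risk: int, did_not_reoffend: int) -> float:
    """Of those who did NOT reoffend, the fraction labeled high risk anyway."""
    return labeled_high_risk / did_not_reoffend

fpr_black = false_positive_rate(labeled_high_risk=45, did_not_reoffend=100)
fpr_white = false_positive_rate(labeled_high_risk=23, did_not_reoffend=100)

print(f"black defendants wrongly labeled: {fpr_black:.0%}")
print(f"white defendants wrongly labeled: {fpr_white:.0%}")
print(f"ratio: {fpr_black / fpr_white:.1f}x")  # ~2x: the disparity in question
```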
So, consider this case: this woman was late picking up her godsister from a school in Broward County, Florida, running down the street with a friend of hers. They spotted an unlocked kid's bike and a scooter on a porch and foolishly jumped on it. As they were speeding off, a woman came out and said, "Hey! That's my kid's bike!" They dropped it, they walked away, but they were arrested. She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors. Meanwhile, that man had been arrested for shoplifting in Home Depot -- 85 dollars' worth of stuff, a similar petty crime. But he had two prior armed robbery convictions. But the algorithm scored her as high risk, and not him. Two years later, ProPublica found that she had not reoffended; it was just hard for her to get a job with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime. Clearly, we need to audit our black boxes and not have them have this kind of unchecked power.
(Applause)

Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture?

(Laughter)

A sullen note from an acquaintance? An important but difficult news item? There's no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.
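A cartoon of what "optimizes for engagement" means (my sketch, not Facebook's actual code or weights): rank items by predicted likes, shares, and comments, and the hard-to-like story sinks no matter how important it is.

```python
# A cartoon of engagement ranking, not Facebook's actual algorithm.
posts = [
    {"title": "ALS Ice Bucket Challenge video",
     "likes": 900, "shares": 400, "comments": 250},
    {"title": "Protests in Ferguson, Missouri",
     "likes": 12, "shares": 30, "comments": 8},
    {"title": "Friend's baby picture",
     "likes": 300, "shares": 5, "comments": 60},
]

def engagement_score(post: dict) -> float:
    # Hypothetical weights; the real objective function is not public.
    return post["likes"] + 2.0 * post["shares"] + 1.5 * post["comments"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post['title']}")
# Without likes and comments, a story is shown to fewer people,
# which earns it even fewer likes and comments: a feedback loop.
```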
In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem. The story of Ferguson wasn't algorithm-friendly. It's not "likable." Who's going to click on "like"? It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this. Instead, that week, Facebook's algorithm highlighted the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.
Now, finally, these systems can also be wrong
最後,這些系統
in ways that don't resemble human systems.
也可能以不像人類犯錯的方式出錯。
Do you guys remember Watson, IBM's machine-intelligence system
大家可還記得 IBM 的 機器智慧系統華生
that wiped the floor with human contestants on Jeopardy?
在 Jeopardy 智力問答比賽中 橫掃人類的對手?
It was a great player.
它是個厲害的選手。
But then, for Final Jeopardy, Watson was asked this question:
在 Final Jeopardy 節目中
"Its largest airport is named for a World War II hero,
華生被問到:
its second-largest for a World War II battle."
「它的最大機場以二戰英雄命名,
(Hums Final Jeopardy music)
第二大機場以二戰戰場為名。」
Chicago.
(哼 Jeopardy 的音樂)
The two humans got it right.
「芝加哥,」
Watson, on the other hand, answered "Toronto" --
兩個人類選手的答案正確;
for a US city category!
華生則回答「多倫多」。
The impressive system also made an error
這是個猜「美國」城市的問題啊!
that a human would never make, a second-grader wouldn't make.
這個厲害的系統也犯了
Our machine intelligence can fail
人類永遠不會犯,
in ways that don't fit error patterns of humans,
即使二年級學生也不會犯的錯誤。
in ways we won't expect and be prepared for.
我們的機器智慧可能敗在
It'd be lousy not to get a job one is qualified for,
與人類犯錯模式迥異之處,
but it would triple suck if it was because of stack overflow
在我們完全想不到、 沒準備的地方出錯。
in some subroutine.
得不到一份可勝任的 工作確實很糟糕,
(Laughter)
但若起因是機器的子程式漫溢, 會是三倍的糟糕。
In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's "sell" algorithm wiped a trillion dollars of value in 36 minutes.
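The mechanics of such a feedback loop are easy to sketch (a toy simulation with made-up constants, not a model of the actual 2010 event): algorithms that sell into a falling price push the price down further, which triggers still more selling.

```python
# Toy feedback loop: selling pressure lowers the price, and the lower
# price triggers more selling. Made-up constants, purely illustrative.
price = 100.0
for minute in range(1, 11):
    drop_so_far = (100.0 - price) / 100.0
    sell_volume = 1000 * (1 + 10 * drop_so_far)  # falling price => more selling
    price *= 1 - 0.00002 * sell_volume           # more selling => lower price
    print(f"minute {minute:2d}: price = {price:6.2f}, sold {sell_volume:,.0f}")
# Each round of selling deepens the drop that justifies the next round.
```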
I don't even want to think what "error" means in the context of lethal autonomous weapons.
So yes, humans have always had biases. Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines.

(Applause)

Artificial intelligence does not give us a "Get out of ethics free" card.
Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human.

Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics.

Thank you.

(Applause)