Zeynep Tufekci: We're building a dystopia just to make people click on ads (TED)

So when people voice fears of artificial intelligence, very often, they invoke images of humanoid robots run amok. You know? Terminator? You know, that might be something to consider, but that's a distant threat. Or, we fret about digital surveillance with metaphors from the past. "1984," George Orwell's "1984," it's hitting the bestseller lists again. It's a great book, but it's not the correct dystopia for the 21st century.

What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways.

Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent. Now, artificial intelligence has started bolstering their business as well.

And it may seem like artificial intelligence is just the next thing after online ads. It's not. It's a jump in category. It's a whole different world, and it has great potential. It could accelerate our understanding of many areas of study and research. But to paraphrase a famous Hollywood philosopher, "With prodigious potential comes prodigious risk."

Now let's look at a basic fact of our digital lives: online ads. Right? We kind of dismiss them. They seem crude, ineffective. We've all had the experience of being followed on the web by an ad based on something we searched or read. You know, you look up a pair of boots and for a week, those boots are following you around everywhere you go. Even after you succumb and buy them, they're still following you around. We're kind of inured to that kind of basic, cheap manipulation. We roll our eyes and we think, "You know what? These things don't work." Except, online, the digital technologies are not just ads.

Now, to understand that, let's think of a physical world example. You know how, at the checkout counters at supermarkets, near the cashier, there's candy and gum at the eye level of kids? That's designed to make them whine at their parents just as the parents are about to sort of check out. Now, that's a persuasion architecture. It's not nice, but it kind of works. That's why you see it in every supermarket.

Now, in the physical world, such persuasion architectures are kind of limited, because you can only put so many things by the cashier. Right? And the candy and gum, it's the same for everyone, even though it mostly works only for people who have whiny little humans beside them. In the physical world, we live with those limitations.

In the digital world, though, persuasion architectures can be built at the scale of billions, and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone's private phone screen, so it's not visible to us. And that's different. And that's just one of the basic things that artificial intelligence can do.

Now, let's take an example. Let's say you want to sell plane tickets to Vegas. Right? So in the old world, you could think of some demographics to target based on experience and what you can guess. You might try to advertise to, oh, men between the ages of 25 and 35, or people who have a high limit on their credit card, or retired couples. Right? That's what you would do in the past. With big data and machine learning, that's not how it works anymore.

So to imagine that, think of all the data that Facebook has on you: every status update you ever typed, every Messenger conversation, every place you logged in from, all your photographs that you uploaded there. If you start typing something and change your mind and delete it, Facebook keeps those and analyzes them, too. Increasingly, it tries to match you with your offline data. It also purchases a lot of data from data brokers. It could be everything from your financial records to a good chunk of your browsing history. Right? In the US, such data is routinely collected, collated and sold. In Europe, they have tougher rules.

So what happens then is, by churning through all that data, these machine-learning algorithms -- that's why they're called learning algorithms -- they learn to understand the characteristics of people who purchased tickets to Vegas before. When they learn this from existing data, they also learn how to apply this to new people. So if they're presented with a new person, they can classify whether that person is likely to buy a ticket to Vegas or not.
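To make the mechanics concrete, here is a minimal sketch of such a propensity classifier using scikit-learn. The features, data and weights are invented for illustration; no real ad platform's model is being reproduced here.

```python
# Toy propensity model: learn from people who bought Vegas tickets before,
# then score a new person. All features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is a person; columns stand in for behavioral signals a platform
# might hold (hypothetical: travel posts, late-night activity, purchases).
X_train = rng.random((1000, 3))
# Label: 1 = bought a ticket to Vegas before, 0 = did not (synthetic rule).
y_train = (0.8 * X_train[:, 0] + 0.4 * X_train[:, 2]
           + rng.normal(0, 0.1, 1000) > 0.7).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Presented with a new person, the model emits a purchase probability.
new_person = rng.random((1, 3))
print(model.predict_proba(new_person)[0, 1])
```

The point of the sketch: nothing in the fitted model says why a person scores high. It has simply generalized whatever patterns separated past buyers from non-buyers.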

Fine. You're thinking, an offer to buy tickets to Vegas. I can ignore that. But the problem isn't that. The problem is, we no longer really understand how these complex algorithms work. We don't understand how they're doing this categorization. It's giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers, and not anybody who looks at it, even if you have all the data, understands anymore how exactly it's operating -- any more than you'd know what I was thinking right now if you were shown a cross section of my brain. It's like we're not programming anymore, we're growing intelligence that we don't truly understand.
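The "giant matrices" image can be made literal. In this hedged sketch, even a small trained network gives you full access to its learned parameters, and what you see is shapes and numbers, not explanations:

```python
# Full access to a trained model shows you matrices of numbers, not reasons.
# A small, self-contained illustration with scikit-learn's MLP.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X, y)

for i, W in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix of shape {W.shape}")
# layer 0: (20, 64); layer 1: (64, 64); layer 2: (64, 1) --
# over five thousand coefficients, none of which reads as a human rule.
```

Production models are vastly larger, which is the talk's point: inspecting the matrices does not scale into understanding.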

And these things only work if there's an enormous amount of data, so they also encourage deep surveillance on all of us so that the machine learning algorithms can work. That's why Facebook wants to collect all the data it can about you. The algorithms work better.

So let's push that Vegas example a bit. What if the system that we do not understand was picking up that it's easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase? Such people tend to become overspenders, compulsive gamblers. They could do this, and you'd have no clue that's what they were picking up on.

I gave this example to a bunch of computer scientists once, and afterwards, one of them came up to me. He was troubled and he said, "That's why I couldn't publish it." I was like, "Couldn't publish what?" He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.

Now, the problem isn't solved if he doesn't publish it, because there are already companies that are developing this kind of technology, and a lot of the stuff is just off the shelf. This is not very difficult anymore.

Do you ever go on YouTube meaning to watch one video, and an hour later you've watched 27? You know how YouTube has this column on the right that says "Up next," and it autoplays something? It's an algorithm picking what it thinks that you might be interested in and maybe not find on your own. It's not a human editor. It's what algorithms do. It picks up on what you have watched and what people like you have watched, and infers that that must be what you're interested in, what you want more of, and just shows you more.
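The "people like you watched" step can be pictured with a textbook item-to-item co-occurrence recommender. This is a hedged sketch only -- YouTube's actual system is proprietary and far more elaborate -- and the video names are made up:

```python
# "What people like you watched": a toy item-to-item co-occurrence
# recommender. Only the textbook version of the idea.
from collections import Counter
from itertools import permutations

watch_histories = [
    ["boots_review", "vegas_vlog", "poker_tips"],
    ["boots_review", "poker_tips"],
    ["vegas_vlog", "poker_tips", "casino_doc"],
]

# Count how often each ordered pair of videos shares a viewer.
co_counts = Counter()
for history in watch_histories:
    for a, b in permutations(set(history), 2):
        co_counts[(a, b)] += 1

def up_next(just_watched, already_seen):
    # Recommend the unseen video most often co-watched with the last one.
    candidates = {b: n for (a, b), n in co_counts.items()
                  if a == just_watched and b not in already_seen}
    return max(candidates, key=candidates.get) if candidates else None

print(up_next("boots_review", {"boots_review"}))  # -> "poker_tips"
```

Even this toy version exhibits the dynamic the talk describes: it can only push you toward whatever correlates with what you already watched.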

It sounds like a benign and useful feature, except when it isn't.

So in 2016, I attended rallies of then-candidate Donald Trump to study as a scholar the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me, and autoplaying to me, white supremacist videos in increasing order of extremism. If I watched one, it served up one even more extreme and autoplayed that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays conspiracy left, and it goes downhill from there.

Well, you might be thinking, this is politics, but it's not. This isn't about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube, and YouTube recommended and autoplayed a video about being vegan. It's like you're never hardcore enough for YouTube.

(Laughter)

So what's going on? Now, YouTube's algorithm is proprietary, but here's what I think is going on. The algorithm has figured out that if you can entice people into thinking that you can show them something more hardcore, they're more likely to stay on the site, watching video after video, going down that rabbit hole while Google serves them ads.

Now, with nobody minding the ethics of the store, these sites can profile people who are Jew haters, who think that Jews are parasites and who have such explicit anti-Semitic content, and let you target them with ads. They can also mobilize algorithms to find for you look-alike audiences, people who do not have such explicit anti-Semitic content on their profile but who the algorithm detects may be susceptible to such messages, and lets you target them with ads, too.
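Mechanically, "look-alike" expansion is a similarity search: rank everyone by closeness to a seed audience. A minimal sketch on invented data follows; real platforms' definitions of similarity are proprietary.

```python
# Look-alike audiences in miniature: rank all users by similarity to the
# average profile of a "seed" audience. All data here is synthetic.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(1)
users = rng.random((10_000, 50))   # one behavioral feature vector per user
seed_ids = np.arange(100)          # users who engaged with the content
seed_profile = users[seed_ids].mean(axis=0, keepdims=True)

scores = cosine_similarity(users, seed_profile).ravel()

# The "broadened" audience: the closest users outside the seed itself.
seed_set = set(seed_ids.tolist())
ranked = np.argsort(-scores)
lookalikes = [i for i in ranked if i not in seed_set][:1000]
print(lookalikes[:5])
```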

Now, this may sound like an implausible example, but this is real. ProPublica investigated this and found that you can indeed do this on Facebook, and Facebook helpfully offered up suggestions on how to broaden that audience. BuzzFeed tried it for Google, and very quickly they found, yep, you can do it on Google, too. And it wasn't even expensive. The ProPublica reporter spent about 30 dollars to target this category.

So last year, Donald Trump's social media manager disclosed that they were using Facebook dark posts to demobilize people, not to persuade them, but to convince them not to vote at all. And to do that, they targeted specifically, for example, African-American men in key cities like Philadelphia, and I'm going to read exactly what he said. I'm quoting. They were using "nonpublic posts whose viewership the campaign controls so that only the people we want to see it see it. We modeled this. It will dramatically affect her ability to turn these people out." What's in those dark posts? We have no idea. Facebook won't tell us.

So Facebook also algorithmically arranges the posts that your friends put on Facebook, or the pages you follow. It doesn't show you everything chronologically. It puts the order in the way that the algorithm thinks will entice you to stay on the site longer.
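Ordering posts "the way the algorithm thinks will entice you" reduces, mechanically, to sorting by a predicted engagement score. Here is a hedged sketch; the scoring model and its weights are made-up stand-ins, not Facebook's:

```python
# Feed ranking in one line of logic: score each candidate post by predicted
# engagement, then sort. The scoring model here is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    minutes_old: int
    prior_clicks: float  # how often this user engaged with this author

def predicted_engagement(post: Post) -> float:
    # Invented weights: affinity matters far more than recency.
    return 0.8 * post.prior_clicks - 0.001 * post.minutes_old

feed = [
    Post("close_friend", minutes_old=300, prior_clicks=0.9),
    Post("acquaintance", minutes_old=5, prior_clicks=0.1),
    Post("page_you_follow", minutes_old=60, prior_clicks=0.5),
]

# Not chronological: whatever is predicted to keep you engaged goes first,
# so the five-hour-old post from a close friend outranks the newest one.
for post in sorted(feed, key=predicted_engagement, reverse=True):
    print(post.author)
```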

Now, this has a lot of consequences. You may be thinking somebody is snubbing you on Facebook. The algorithm may never be showing your post to them. The algorithm is prioritizing some of them and burying the others. Experiments show that what the algorithm picks to show you can affect your emotions. But that's not all. It also affects political behavior.

So in 2010, in the midterm elections, Facebook did an experiment on 61 million people in the US that was disclosed after the fact. So some people were shown "Today is election day," the simpler one, and some people were shown the one with that tiny tweak, with those little thumbnails of your friends who clicked on "I voted." This simple tweak. OK? So the pictures were the only change, and that post, shown just once, turned out an additional 340,000 voters in that election, according to this research, as confirmed by the voter rolls.

A fluke? No. Because in 2012, they repeated the same experiment. And that time, that civic message shown just once turned out an additional 270,000 voters. For reference, the 2016 US presidential election was decided by about 100,000 votes.

Now, Facebook can also very easily infer what your politics are, even if you've never disclosed them on the site. Right? These algorithms can do that quite easily. What if a platform with that kind of power decides to turn out supporters of one candidate over the other? How would we even know about it?

Now, we started from someplace seemingly innocuous -- online ads following us around -- and we've landed someplace else. As a public and as citizens, we no longer know if we're seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we're just at the beginning stages of this.

These algorithms can quite easily infer things like people's ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender, just from Facebook likes.
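The "just from likes" setup can be sketched as a standard classifier over a binary user-by-page matrix, which is the shape of the published academic studies this claim rests on. Everything below is synthetic and only illustrates that shape:

```python
# Inferring a hidden trait from nothing but binary "likes."
# The like matrix and the trait are both synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_users, n_pages = 5000, 300
likes = (rng.random((n_users, n_pages)) < 0.05).astype(int)

# Synthetic "trait": correlated with liking a handful of specific pages.
signal = likes[:, :10].sum(axis=1)
trait = (signal + rng.normal(0, 0.5, n_users) > 0.5).astype(int)

# A plain logistic regression recovers the trait well above chance --
# a probabilistic guess, exactly as the talk says, not a certainty.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, likes, trait, cv=5).mean())
```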

These algorithms can identify protesters even if their faces are partially concealed. These algorithms may be able to detect people's sexual orientation just from their dating profile pictures. Now, these are probabilistic guesses, so they're not going to be 100 percent right, but I don't see the powerful resisting the temptation to use these technologies just because there are some false positives, which will of course create a whole other layer of problems. Imagine what a state can do with the immense amount of data it has on its citizens. China is already using face detection technology to identify and arrest people.

And here's the tragedy: we're building this infrastructure of surveillance authoritarianism merely to get people to click on ads.

And this won't be Orwell's authoritarianism. This isn't "1984." Now, if authoritarianism is using overt fear to terrorize us, we'll all be scared, but we'll know it, we'll hate it and we'll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they're doing it at scale through our private screens so that we don't even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider's web, and we may not even know we're in it.

So Facebook's market capitalization is approaching half a trillion dollars. It's because it works great as a persuasion architecture. But the structure of that architecture is the same whether you're selling shoes or whether you're selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that's what's got to change.

Now, don't get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world.

But it's not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter, it's the structures and business models they're building. And that's the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don't work on the site, it doesn't work as a persuasion architecture, or its power of influence is of great concern. It's either one or the other.

It's similar for Google, too.

So what can we do? This needs to change. Now, I can't offer a simple recipe, because we need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to face and try to deal with the lack of transparency created by the proprietary algorithms, the structural challenge of machine learning's opacity, all this indiscriminate data that's being collected about us. We have a big task in front of us. We have to mobilize our technology, our creativity and, yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy. We might not even easily agree on what those terms mean.

But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. These structures are organizing how we function, and they're controlling what we can and we cannot do. And many of these ad-financed platforms, they boast that they're free. In this context, it means that we are the product that's being sold. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.

(Applause)

So to go back to that Hollywood paraphrase, we do want the prodigious potential of artificial intelligence and digital technology to blossom, but for that, we must face this prodigious menace, open-eyed and now.

Thank you.

(Applause)
