Transportation, healthcare, education, the arts, finance.
AI is quickly reaching into nearly every aspect of our lives, including what for many is a concerning development: AI's increasing proliferation in public safety.
Axon, one of the most prominent suppliers of public safety technology in the United States, has for years been steadily deploying AI through both existing and new products.
These products give police officers powerful new capabilities: live language translation through their body cameras, automatic license plate recognition across multiple lanes of traffic at once through in-vehicle cameras, and the transcription of audio and video evidence into court-ready documents in just minutes.
Their newest product, DraftOne, which can generate the first draft of a police report using body-worn camera audio, promises to reduce the strain of paperwork on officers' time, freeing them up to be out on the street.
But with these capabilities come concerns about ethical product development, bias, and accountability.
We sat down with Rick Smith, CEO and founder of Axon.
He makes the case that a tech company like Axon can make positive change on difficult problems in a responsible way.
Together, we can create a safer tomorrow.
We tend to look at ourselves as the tech company that'll do hardware, software, and AI, focused on: how do we make the world less dangerous?
How do we prevent and deter violence?
How do we hold people accountable?
And ultimately, how do we deal with the ugly problems in modern society to build the world that we want to live in?
The way we're generally approaching AI is we try to focus on areas that are important to us.
Areas where, okay, where can we guard-band the risks?
And how can we use AI not to autonomously do things that could adversely impact a person, but to do more of the mundane work, so that we can bring human ethics and judgment to bear where they're needed?
An enormous amount of the resources that are invested in policing are actually consumed with mundane, bureaucratic, inefficient tasks.
And as a result, we see officers that are burned out, agencies that are having a hard time retaining people, cases that go into the criminal justice system that are perhaps not really well-prepared.
You know, there's a lot of people who sit in jail just waiting for the system to catch up to them.
And that's where we think AI can really shine.
What kind of framework does Axon use, if any, to ensure products are being made responsibly with societal values and community input in mind?
We have an EEAC, which is an Equity and Ethics Advisory Council.
And its job is to give us feedback from those people we wouldn't naturally talk to.
Basically, ring the alarm bell for me and for our product managers on, hey, here are the risks.
And I'll tell you, when we first started doing this, it's a little frustrating because policing can be somewhat divisive.
And frankly, on our advisory council, we have a lot of people who are very critical of policing.
But it also allows us to really make our products legitimately better.
It's better for everybody, including us, if we build our products the right way so that they're accepted.
Tell me about some of the specific AI-enabled tools that Axon is making.
We've already launched our first major AI product called DraftOne.
DraftOne is an AI tool that takes the audio from a body camera; we turn that into a transcript, and then from that transcript we reformat the information into a police report narrative, which is the long description of what actually happened in an incident.
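In outline, that is a two-stage pipeline: speech-to-text, then reformatting the transcript into a narrative draft. A minimal Python sketch of that shape follows; `transcribe` and `draft_narrative` are hypothetical stand-ins, since Axon has not published DraftOne's implementation.

```python
# Sketch of the audio -> transcript -> draft-narrative flow described above.
# Both helpers are hypothetical stand-ins, not Axon APIs.

def transcribe(audio_path: str) -> str:
    """Stand-in for a speech-to-text pass over body-camera audio."""
    raise NotImplementedError("replace with a real speech-to-text service")

def draft_narrative(transcript: str) -> str:
    """Stand-in for the step that reformats a transcript into a
    first-draft report narrative for the officer to review."""
    raise NotImplementedError("replace with a constrained generation step")

def draft_report(audio_path: str) -> str:
    """End-to-end: body-camera audio in, draft narrative out."""
    return draft_narrative(transcribe(audio_path))
```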
Without DraftOne, police officers spend about half of their working days writing reports.
Now, that's a necessary task, but it does not add value to their mission of public safety.
It is a heavy overhead load.
And what we're doing is we're just helping them take the information from the words that they said and what they heard in a body camera, pre-format the draft, and prepare it for their review.
Another one we're working on is using AI to help investigators sort through the mountains of digital evidence.
So if you're a prosecutor and you're dealing with a critical case, you may have six body cameras, three in-car cameras, thousands of photos.
That's, again, a place where we see AI can help sort through all of that information, connect it conceptually to what's in the report.
How do we tag that to the moments in video that are most relevant and help those investigators know where to focus their time?
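One way to read "connect it conceptually" is as a relevance-matching problem: score each timestamped transcript segment against the report text and surface the best matches. The sketch below illustrates the idea with simple word overlap; it is not Axon's method, and the data shapes are assumptions. A real system would use semantic embeddings rather than bag-of-words overlap.

```python
# Toy relevance matching: rank timestamped transcript segments from video
# evidence by word overlap with a report's text. Illustrative only.

def tokens(text: str) -> set[str]:
    """Lowercased words with surrounding punctuation stripped."""
    return {w.strip(".,!?\"'").lower() for w in text.split()}

def rank_moments(report: str,
                 segments: list[tuple[float, str]]) -> list[tuple[int, float, str]]:
    """segments: (timestamp_seconds, text) pairs. Returns
    (score, timestamp, text), best matches first, so investigators
    know where in the video to focus their time."""
    report_words = tokens(report)
    scored = [(len(tokens(text) & report_words), ts, text)
              for ts, text in segments]
    return sorted(scored, key=lambda s: s[0], reverse=True)

# Example: surface the body-camera moment most relevant to the report.
segments = [(12.0, "He ran north on Main Street"),
            (95.5, "Weather is clear today")]
print(rank_moments("Suspect fled north along Main Street", segments)[0])
# -> (3, 12.0, 'He ran north on Main Street')
```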
What do you think some of the risks of deploying this kind of technology are and how are you working to mitigate those risks?
So I think the number one risk is that you get a police officer on the stand who says, look, I didn't write that, an AI wrote that.
That's the biggest concern that prosecutors are worried about.
And so we spend a lot of time building safeguards and, frankly, speed bumps that cause them to slow down and have to review things.
We tune this very carefully to make sure that the AI does not inject any imagination or hallucination.
It does not inject any information.
Everything must be based on words that were in the transcript.
And if not, we instruct the AI to be very conservative.
If it does not have high confidence that anything going in there is accurate, it is to insert a question.
They're going to see a bunch of insert statements, like "the subject was [insert height] tall," and they're bracketed in a way to help you find them.
We've designed this in a way where you cannot do anything with the draft report until you have fixed and answered all the questions in the insert statements.
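A minimal sketch of that gate, assuming a bracketed "[insert ...]" marker format (the exact DraftOne format isn't public): the draft is blocked until the officer has resolved every marker.

```python
import re

# Hypothetical marker format for low-confidence details, e.g.
# "the subject was [insert height] tall". The gating logic, not the
# marker syntax, is the point of this sketch.
INSERT_MARKER = re.compile(r"\[insert [^\]]*\]", re.IGNORECASE)

def unresolved_inserts(draft: str) -> list[str]:
    """All insert markers the officer has not yet replaced."""
    return INSERT_MARKER.findall(draft)

def finalize_report(draft: str) -> str:
    """Refuse to release the draft while any insert marker remains."""
    remaining = unresolved_inserts(draft)
    if remaining:
        raise ValueError(f"resolve {len(remaining)} insert(s) first: {remaining}")
    return draft  # safe to route onward for officer sign-off

# Example: this draft is blocked until "[insert height]" is answered.
try:
    finalize_report("The subject was [insert height] tall and fled on foot.")
except ValueError as err:
    print(err)  # -> resolve 1 insert(s) first: ['[insert height]']
```

The design choice here matches the "speed bump" framing: the system makes the low-confidence gaps impossible to ignore rather than silently filling them.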
How do you think AI could strengthen accountability and community trust in public safety?
When agencies roll out body cameras, one of the things they tell us is, okay, we now need a way to go through and spot check what people are doing.
So historically, they've just done this with random video audits.
What our priority-ranked video tool does is it allows the agency to tune the things they'd like to see.
A great example is, anytime the gun comes out of the holster, well, we can detect that.
Or we can use the transcripts now, thanks to AI, to look for keywords or even key concepts.
Things like, you know, if a video contains any sort of swearing or racial epithets, that could get a very high score.
And basically the highest-scoring videos are the ones that are going to be reviewed.
I think just between those two, we'd probably find the vast majority of where there's likely to be either bad behaviors that need to be corrected, or frankly, good behaviors that need to be rewarded and built upon and potentially shared with others in the agency on how to do it right.
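A toy version of that prioritization, assuming keyword weights an agency could tune (a holster-draw detection would come from a separate sensor signal; here it appears as just another weighted flag, and all terms and weights are invented examples):

```python
# Toy priority scoring for review queues: agencies tune term weights,
# every video's transcript gets a score, and the highest-scoring videos
# are the ones pulled for human review.
REVIEW_WEIGHTS = {
    "unholstered": 10,   # stand-in for a gun-draw signal
    "profanity": 8,      # stand-in for swearing / epithet detection
    "thank": 2,          # courteous conduct worth highlighting too
}

def score_transcript(transcript: str) -> int:
    """Sum the weights of every flagged term present in the transcript."""
    text = transcript.lower()
    return sum(w for term, w in REVIEW_WEIGHTS.items() if term in text)

def review_queue(videos: dict[str, str]) -> list[tuple[int, str]]:
    """videos: video id -> transcript. Highest scores reviewed first."""
    return sorted(((score_transcript(t), vid) for vid, t in videos.items()),
                  reverse=True)

print(review_queue({
    "cam-001": "Officer unholstered the weapon during the stop.",
    "cam-002": "Thank you for your patience, have a safe night.",
}))
# -> [(10, 'cam-001'), (2, 'cam-002')]
```

Note that positive-conduct terms score too, matching the point above about surfacing good behavior to reward, not just bad behavior to correct.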
I would say to anybody living in the world today, you should be playing with, experimenting with, and using AI.
It's really hard to have a good opinion on something you're ignorant of.
You know, challenge yourself, learn a little about it, because I don't think that there's any aspect of human society right now that is not going to be changed.
We spend a lot of time thinking about what can go wrong.
We also need to think about, hey, what could go right?
And the things we're talking about, let's not compare them to perfection.
Can we do better than today?
That feels like it can be done, and you can do it thoughtfully, and that's what we aspire to do.