  • Transportation, healthcare, education, the arts, finance.

  • AI is quickly reaching into nearly every aspect of our lives, including what for many is a concerning development, AI's increasing proliferation in public safety.

  • Axon, one of the most prominent suppliers of public safety technology in the United States, has for years been steadily deploying AI through both existing and new products.

  • These products give police officers powerful new capabilities.

  • Live language translation through their body cameras, automatic license plate recognition of multiple lanes of traffic at once through in-vehicle cameras, and the transcription of audio and video evidence into court-ready documents in just minutes.

  • Their newest product, DraftOne, which can generate the first draft of a police report using body-worn camera audio, promises to reduce the strain of paperwork on officers' time, freeing them up to be out on the street.

  • But with these capabilities come concerns about ethical product development, bias, and accountability.

  • We sat down with Rick Smith, CEO and founder of Axon.

  • He makes the case that a tech company like Axon can drive positive change on difficult problems in a responsible way.

  • Together, we can create a safer tomorrow.

  • We tend to look at ourselves as the tech company that'll do hardware, software, AI, focused on how do we make the world less dangerous?

  • How do we prevent, deter violence?

  • How do we hold people accountable?

  • And ultimately, how do we deal with the ugly problems in modern society to build the world that we wanna live in?

  • The way we're generally approaching AI is we try to focus on areas that are important to us.

  • Areas where, okay, where can we guard-band the risks?

  • And how can we use AI not to autonomously do things that could adversely impact a person, but to do more of the mundane work, so that we can focus human ethics and judgment where they're needed?

  • An enormous amount of the resources that are invested in policing are actually consumed with mundane, bureaucratic, inefficient tasks.

  • And as a result, we see officers that are burned out, agencies that are having a hard time retaining people, cases that go into the criminal justice system that are perhaps not really well-prepared.

  • You know, there's a lot of people who sit in jail just waiting for the system to catch up to them.

  • And that's where we think AI can really shine.

  • What kind of framework does Axon use, if any, to ensure products are being made responsibly with societal values and community input in mind?

  • We have an EEAC, which is an Ethics and Equity Advisory Council.

  • And its job is to give us feedback from those people we wouldn't naturally talk to.

  • Basically, ring the alarm bell for me and for our product managers on, hey, here are the risks.

  • And I'll tell you, when we first started doing this, it was a little frustrating, because policing can be somewhat divisive.

  • And frankly, on our advisory council, we have a lot of people who are very critical of policing.

  • It also allows us to really make our products legitimately better.

  • It's better for everybody, including us, if we build our products the right way so that they're accepted.

  • Tell me about some of the specific AI-enabled tools that Axon is making.

  • We've already launched our first major AI product called DraftOne.

  • DraftOne is an AI tool that takes the audio from a body camera, we turn that into a transcript, and then from that transcript, we reformat that information into the form of a police report narrative, which is the long description of what actually happened in an incident.

  • Without DraftOne, police officers spend about half of their working days writing reports.

  • Now, that's a necessary task, but it does not add value to their mission of public safety.

  • It is a heavy overhead load.

  • And what we're doing is we're just helping them take the information from the words that they said and what they heard on the body camera, pre-formatting the draft and preparing it for their review.
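
As a rough illustration of the two-step pipeline described here (transcribe the body-camera audio, then draft a narrative constrained to that transcript), here is a minimal Python sketch. Every function and value below is a hypothetical stand-in for illustration, not Axon's actual API.

```python
# Hypothetical sketch of an audio -> transcript -> draft-narrative pipeline.
# Neither function reflects Axon's real implementation; both are stand-ins.

def transcribe_audio(audio_path: str) -> str:
    """Stand-in for a speech-to-text step; a real system would run ASR here."""
    return "Subject stated he had been at the store since noon."

def draft_narrative(transcript: str) -> str:
    """Stand-in for a generation step constrained to the transcript.

    Per the constraint described in the interview: use only facts present
    in the transcript, and emit bracketed placeholders for anything missing.
    """
    return (
        "On [insert date], I responded to [insert location]. "
        "The subject stated he had been at the store since noon."
    )

def draft_from_audio(audio_path: str) -> str:
    return draft_narrative(transcribe_audio(audio_path))

print(draft_from_audio("bodycam_clip.wav"))
```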

  • Another one we're working on is using AI to help investigators sort through the mountains of digital evidence.

  • So if you're a prosecutor and you're dealing with a critical case, you may have six body cameras, three in-car cameras, thousands of photos.

  • That's, again, a place where we see AI can help sort through all of that information, connect it conceptually to what's in the report.

  • How do we tag that to the moments in video that are most relevant and help those investigators know where to focus their time?
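
To make that idea concrete, here is a toy sketch that ranks timestamped transcript segments by how much vocabulary they share with the report narrative. A production system would use semantic matching rather than word overlap, and all names and data here are invented.

```python
# Toy illustration of evidence triage: rank timestamped transcript segments
# by word overlap with the report narrative, so an investigator knows where
# to look first. A real system would use semantic matching, not bag-of-words.

def relevant_moments(report: str, segments: list[tuple[float, str]],
                     top_k: int = 2) -> list[tuple[float, str]]:
    report_words = set(report.lower().split())
    def overlap(text: str) -> int:
        return len(report_words & set(text.lower().split()))
    return sorted(segments, key=lambda seg: overlap(seg[1]), reverse=True)[:top_k]

segments = [
    (12.0, "dispatch reports a disturbance at the corner store"),
    (95.5, "the subject dropped the knife near the register"),
    (240.2, "units discussing shift change"),
]
report = "The subject was holding a knife at the store register."
for timestamp, text in relevant_moments(report, segments):
    print(f"{timestamp:7.1f}s  {text}")
```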

  • What do you think some of the risks of deploying this kind of technology are and how are you working to mitigate those risks?

  • So I think the number one risk is that you get a police officer on the stand who says, look, I didn't write that, an AI wrote that.

  • That's the biggest concern that prosecutors are worried about.

  • And so we spend a lot of time building safeguards and, frankly, speed bumps that cause them to slow down and have to review things.

  • We tune this very carefully to make sure that the AI does not inject any imagination or hallucination.

  • It does not inject any new information.

  • Everything must be based on words that were in the transcript.

  • And if it's not, we instruct the AI to be very conservative.

  • If it does not have high confidence that anything going into the report is accurate, it is to insert a question.

  • They're going to see a bunch of insert statements, like "the subject was [insert height] tall," and they're bracketed in a way to help you find them.

  • We've designed this in a way where you cannot do anything with the draft report until you have fixed and answered all the questions in the insert statements.
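
A minimal sketch of that review gate, assuming placeholders take the bracketed form "[insert ...]" (the exact markup is our assumption, not Axon's):

```python
import re

# Sketch of the review gate: a draft with any unresolved "[insert ...]"
# placeholder cannot be submitted. The bracket syntax is an assumption made
# for illustration; Axon's actual markup may differ.

INSERT_PATTERN = re.compile(r"\[insert[^\]]*\]", re.IGNORECASE)

def unresolved_inserts(draft: str) -> list[str]:
    """All bracketed placeholders still present in the draft."""
    return INSERT_PATTERN.findall(draft)

def can_submit(draft: str) -> bool:
    """True only once every placeholder has been answered by the officer."""
    return not unresolved_inserts(draft)

draft = "The subject was [insert height] tall and wore a red jacket."
assert not can_submit(draft)                     # blocked: placeholder left
fixed = draft.replace("[insert height]", "approximately six feet")
assert can_submit(fixed)                         # all questions answered
```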

  • How do you think AI could strengthen accountability and community trust in public safety?

  • When agencies roll out body cameras, one of the things they tell us is, okay, we now need a way to go through and spot check what people are doing.

  • So historically, they've just done this with random video audits.

  • What our priority-ranked video tool does is allow the agency to tune the things they'd like to see.

  • A great example is, anytime the gun comes out of the holster, well, we can detect that.

  • Or we can use the transcripts now, thanks to AI, to look for keywords or even key concepts.

  • Things like, you know, if a video's transcript contains any sort of swearing or racial epithets, that could get a very high score.

  • And basically the high scored videos are the ones that are going to be reviewed.

  • I think just between those two, we'd probably find the vast majority of where there's likely to be either bad behaviors that need to be corrected, or frankly, good behaviors that need to be rewarded and built upon and potentially shared with others in the agency on how to do it right.
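
A toy version of that scoring, combining the two signals just described: a detected unholstering event and flagged terms in the AI transcript. The weights and term list are invented for illustration, not Axon's real configuration.

```python
# Toy version of priority-ranked review: score each video from configurable
# signals (an unholstering event, flagged terms in the AI transcript) and
# review the highest-scoring ones first. Weights and terms are invented.

FLAGGED_TERMS = {"damn", "hell"}      # stand-in for an agency's real list
UNHOLSTER_WEIGHT = 10
FLAGGED_TERM_WEIGHT = 5

def score_video(transcript: str, gun_unholstered: bool) -> int:
    words = transcript.lower().split()
    score = UNHOLSTER_WEIGHT if gun_unholstered else 0
    score += FLAGGED_TERM_WEIGHT * sum(word in FLAGGED_TERMS for word in words)
    return score

videos = [
    ("clip_a", "license and registration please", False),
    ("clip_b", "what the hell are you doing", False),
    ("clip_c", "drop the weapon now", True),
]
review_queue = sorted(videos, key=lambda v: score_video(v[1], v[2]), reverse=True)
print([name for name, _, _ in review_queue])    # highest priority first
```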

  • I would say to anybody living in the world today, you should be playing with, experimenting, using AI.

  • It's really hard to have a good opinion on something you're ignorant of.

  • You know, challenge yourself, learn a little about it, because I don't think that there's any aspect of human society right now that is not going to be changed.

  • We spend a lot of time thinking about what can go wrong.

  • We also need to think about, hey, what could go right?

  • And for the things we're talking about, let's not compare them to perfection.

  • Can we do better than today?

  • That feels like it can be done, and you can do it thoughtfully, and that's what we aspire to do.
