
  • AI is being discussed a lot, but what does it mean to use AI responsibly?

  • Not sure?

  • That's great.

  • That's what I'm here for.

  • I'm Manny, and I'm a security engineer at Google.

  • I'm going to teach you how to understand why Google has put AI principles in place, identify the need for responsible AI practice within an organization, recognize that responsible AI affects all decisions made at all stages of a project, and recognize that organizations can design their AI tools to fit their own business needs and values.

  • Sounds good?

  • Let's get into it.

  • You might not realize it, but many of us already have daily interactions with artificial intelligence, or AI, from predictions for traffic and weather to recommendations for TV shows you might like to watch next.

  • As AI becomes more common, many technologies that aren't AI-enabled start to seem inadequate, like having a phone that can't access the internet.

  • Now AI systems are enabling computers to see, understand, and interact with the world in ways that were unimaginable just a decade ago, and these systems are developing at an extraordinary pace.

  • What we've got to remember, though, is that despite these remarkable advancements, AI is not infallible.

  • Developing responsible AI requires an understanding of the possible issues, limitations, or unintended consequences.

  • Technology is a reflection of what exists in society.

  • So without good practices, AI may replicate existing issues or biases and amplify them.

  • This is where things get tricky, because there isn't a universal definition of responsible AI, nor is there a simple checklist or formula that defines how responsible AI practices should be implemented.

  • Instead, organizations are developing their own AI principles that reflect their mission and values.

  • Luckily for us, though, while these principles are unique to every organization, if you look for common themes, you find a consistent set of ideas across transparency, fairness, accountability, and privacy.

  • Let's get into how we view things at Google.

  • Our approach to responsible AI is rooted in a commitment to strive towards AI that's built for everyone, that's accountable and safe, that respects privacy, and that is driven by scientific excellence.

  • We've developed our own AI principles, practices, governance processes, and tools that together embody our values and guide our approach to responsible AI.

  • We've incorporated responsibility by design into our products and, even more importantly, into our organization.

  • Like many companies, we use our AI principles as a framework to guide responsible decision-making.

  • We all have a role to play in how responsible AI is applied.

  • Whatever stage in the AI process you're involved with, from design to deployment or application, the decisions you make have an impact.

  • And that's why it's so important that you, too, have a defined and repeatable process for using AI responsibly.

  • There's a common misconception with artificial intelligence that machines play the central decision-making role.

  • In reality, it's people who design and build these machines and decide how they're used.

  • Let me explain.

  • People are involved in each aspect of AI development.

  • They collect or create the data that the model is trained on.

  • They control the deployment of the AI and how it's applied in a given context.

  • Essentially, human decisions are threaded through our technology products.

  • And every time a person makes a decision, they're actually making a choice based on their own values.

  • Whether it's a decision to use generative AI to solve a problem rather than other methods, or any other decision made throughout the machine learning lifecycle, that person introduces their own set of values.

  • This means that every decision point requires consideration and evaluation to ensure that choices have been made responsibly, from concept through deployment and maintenance.

  • Because there's the potential to impact many areas of society, not to mention people's daily lives, it's important to develop these technologies with ethics in mind.

  • Responsible AI doesn't mean focusing only on the obviously controversial use cases.

  • Without responsible AI practices, even seemingly innocuous AI use cases, or those with good intent, could still cause ethical issues or unintended outcomes, or not be as beneficial as they could be.

  • Ethics and responsibility are important, not just because they represent the right thing to do, but also because they can guide AI design to be more beneficial for people's lives.

  • So how does this relate to Google?

  • We've learned that building responsibility into any AI deployment makes better models and builds trust with our customers and our customers' customers.

  • If at any point that trust is broken, we run the risk of AI deployments being stalled, unsuccessful, or at worst, harmful to the stakeholders those products affect.

  • And tying it all together, this all fits into our belief at Google that responsible AI equals successful AI.

  • We make our product and business decisions around AI through a series of assessments and reviews.

  • These instill rigor and consistency in our approach across product areas and geographies.

  • These assessments and reviews begin with ensuring that any project aligns with our AI principles.

  • While AI principles help ground a group in shared commitments, not everyone will agree with every decision made about how products should be designed responsibly.

  • This is why it's important to develop robust processes that people can trust.

  • So even if they don't agree with the end decision, they trust the process that drove the decision.

  • So we've talked a lot about just how important guiding principles are for AI in theory, but what are they in practice?

  • Let's get into it.

  • In June 2018, we announced seven AI principles to guide our work.

  • These are concrete standards that actively govern our research and product development and affect our business decisions.

  • Here's an overview of each one.

  • One, AI should be socially beneficial.

  • Any project should take into account a broad range of social and economic factors and will proceed only where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

  • Two, AI should avoid creating or reinforcing unfair bias.

  • We seek to avoid unjust effects on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

  • Three, AI should be built and tested for safety.

  • We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

  • Four, AI should be accountable to people.

  • We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.

  • Five, AI should incorporate privacy design principles.

  • We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

  • Six, AI should uphold high standards of scientific excellence.

  • We'll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches.

  • And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

  • Seven, AI should be made available for uses that accord with these principles.

  • Many technologies have multiple uses, so we'll work to limit potentially harmful or abusive applications.

  • So those are our seven principles, but in addition to them, there are certain AI applications we will not pursue.

  • We will not design or deploy AI in these four application areas.

  • Technologies that cause or are likely to cause overall harm.

  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

  • Technologies that gather or use information for surveillance that violates internationally accepted norms.

  • And technologies whose purpose contravenes widely accepted principles of international law and human rights.

  • Establishing principles was a starting point rather than an end.

  • What remains true is that our AI principles rarely give us direct answers to our questions about how to build our products.

  • They don't and shouldn't allow us to sidestep hard conversations.

  • They are a foundation that establishes what we stand for, what we build, and why we build it.

  • And they're core to the success of our enterprise AI offerings.

  • Thanks for watching.

  • And if you want to learn more about AI, make sure to check out our other videos.
