Tech glossary

Defining IT & technology terms

Responsible AI

Responsible AI refers to a set of frameworks that promote accountable, ethical and transparent Artificial Intelligence (AI) development and adoption. From approving loan agreements to selecting job candidates, many AI use cases are sensitive in nature. Organizations adopt responsible AI practices to avoid biases, which can be ingrained in an AI system's design or in the data sources it uses.

As AI becomes more sophisticated and prevalent, ethics must be given serious consideration. Industry leaders and end users alike are calling for more AI regulation.

Responsible AI best practices generally apply the following guidelines:

  • Asking questions that evaluate why you’re using AI for each use case
  • Establishing management policies that address accountability and potential flaws
  • Committing to appropriate and secure data use
  • Ensuring that humans audit an AI’s decision-making and results
  • Recognizing that bias can be unintentionally built into AI design (see the sketch after this list)
  • Creating documentation that explains how the AI works
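
To make the auditing and bias points above concrete, here is a minimal sketch of one check a human reviewer might run: comparing a model's approval rates across demographic groups. It is plain Python with entirely made-up loan-decision data; the group names, numbers and function names are illustrative assumptions, not part of any specific framework.

```python
# A minimal sketch of one fairness check an auditor might run:
# demographic parity, i.e., comparing approval rates across groups.
# All data below is hypothetical; real audits use production predictions.

from collections import defaultdict

def approval_rates(records):
    """Return the fraction of positive predictions per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, prediction in records:
        totals[group] += 1
        positives[group] += prediction
    return {g: positives[g] / totals[g] for g in totals}

# (group, model_prediction) pairs: 1 = loan approved, 0 = denied
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

rates = approval_rates(predictions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Disparate impact ratio: a common rule of thumb (the "four-fifths
# rule") flags ratios below 0.8 for human investigation.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 -> investigate
```

A check like this does not prove a model is fair, but it gives the human auditors called for above a concrete, repeatable signal to review and document.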


Related terms

  • Artificial Intelligence (AI)
  • Big data

Featured content for responsible AI

  • Infographic: 10 Game-Changing Use Cases for Data and AI
  • eBook: How Healthcare Organizations Are Achieving Ambitious Goals With Intelligent Technology
  • Blog (Insight Voices): The 5 Stages of AI Adoption: How to Sustain Momentum at Every Level
  • eBook: Unlocking the Power of Data and AI
