What is ‘Responsible AI’?

With the sharp increase in the use of AI in business, interest in Responsible AI has grown rapidly in academic, industry, and government circles. Responsible AI is a framework and set of principles for holding AI applications accountable for the decisions they make, just as humans are held accountable for theirs. Interest in developing Responsible AI emerged in response to the range of individual and societal harms that misuse, abuse, poor design, or negative unintended consequences of AI systems may cause. While most organizations agree on a common set of ethical principles, many struggle to translate those principles into concrete actions that shape daily decisions, and to provide evidence of their effectiveness in order to demonstrate compliance.

Several relevant principles should be considered when designing AI models in order to mitigate ethical risks. Read the full story in FEI Daily.

Relevant Links:

  1. ITechLaw (itechlaw.org), a global lawyers' organization focused on technology since 1971. It has proposed Responsible AI Principles, with a two-page summary and a detailed Policy Framework.
  2. ACM: Governance and Oversight Coming to AI – Independent Audit of AI Systems
  3. MIT Sloan: What Does an AI Ethicist (at Microsoft) Do?
  4. HBR: When Machine Learning Goes Off the Rails
  5. ACM News: Holding Algorithms Responsible
  6. Point/Counterpoint: Should AI Technology Be Regulated? Yes, and Here's How.
  7. Draft Memorandum to the Heads of Executive Departments and Agencies, “Guidance for Regulation of Artificial Intelligence Applications”