Technology

What is ‘Responsible AI’?


There are several relevant principles that should be considered when designing AI models, allowing for mitigation of ethical risks.


What is Artificial Intelligence (AI) and why does it matter? And why is Responsible AI drawing so much attention from companies, governments, academics and the press? Understanding the basics of AI, analytics and automation is the first step on any successful AI journey.

AI is quickly reshaping the business landscape of many industries, including financial services. AI can produce real business value by increasing productivity and efficiency and reducing operational costs. Without a doubt, AI will continue to change how business gets done. According to Forbes, 65% of senior financial management professionals expect positive changes from the use of AI in financial services, including risk assessment, fraud prevention and process automation, as well as more specific applications such as determining creditworthiness at financial institutions.

AI is one of the many topics that the FEI Committee on Finance and Information Technology (CFIT) focuses on within the information technology space, creating related programs and services targeted to meet the strategic needs of FEI members. CFIT may also communicate positions to government agencies, legislators and professional and business organizations. As the use cases for AI continue to increase, the Emerging Technologies Subcommittee of CFIT has performed research on one of the more critical elements of a successful AI program: Responsible AI.

With the sharp increase in business use of AI, interest in Responsible AI has grown rapidly in academia, industry and government circles. Responsible AI is a framework and set of principles that holds AI applications responsible for the decisions they make, just as humans are. Interest in Responsible AI emerged in response to the range of individual and societal harms that misuse, abuse, poor design, or negative unintended consequences of AI systems may cause. While most organizations agree on a common set of ethical principles, many struggle to translate those principles into concrete actions that shape daily decisions, and to provide evidence of their effectiveness in order to demonstrate that the principles are not being violated.

Several relevant principles should be considered when designing AI models, each helping to mitigate ethical risk:

1. Fairness: Human biases within the data, the model design, and the methods for training and testing an algorithm can lead to outcomes that affect groups of people differently, producing an intrinsically unfair decision-making process. Both the algorithms and the data used in an AI application can unintentionally introduce bias, one outcome of which is to reinforce discrimination. Organizations can design AI systems with checks and balances in the design and approval processes to mitigate unwanted bias and reach decisions that are fair under a specific and clearly communicated governance model (a simple illustration of such a check appears after this list). Greater diversity among the people developing and deploying AI should also be a goal.

2. Referencing Lessons Learned by Top Technology Companies: According to an article by Gartner, two practices have emerged from lessons learned in building trust in AI: Algorithmic Impact Assessments (AIA) and the appointment of an external governance board. Such a board should be an oversight committee completely independent of executive pressure, with direct access to influence and drive the Responsible AI effort.

3. Transparency: Transparency allows humans to see that the models created have been thoroughly tested and make sense, and to understand why particular decisions are made. It provides the ability to know how and why a model performed the way it did in a specific context and, therefore, to understand the rationale behind its decision or behavior (the transparency sketch after this list shows one simple way to surface that rationale). It can also demonstrate that the design and implementation processes behind a particular decision, and the decision or behavior itself, are ethically permissible, unbiased, non-discriminatory and worthy of public trust.

4. Accountability: AI needs to be accompanied by a chain of accountability that holds the organization and/or the system's human operator responsible for the decisions of the algorithm. Accountability in AI is also about holding the people in your company responsible for AI development to regulatory, ethical and precedential standards.

5. Privacy: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data. This includes ensuring proper data governance and management for all data used and generated by the AI system throughout its lifecycle. Given AI systems' power to analyze and utilize that data, users should have the right to access, manage and control the data they generate.

6. Governance: Corporate governance is essential to develop and enforce policies, procedures and standards for AI systems. Chief ethics and compliance officers have an important role to play, including identifying ethical risks, managing those risks and ensuring compliance with standards. Governance structures and processes should be implemented to manage and monitor the organization's AI activities. The development of strong corporate standards may, in turn, influence the level and pace of development of governmental standards.
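
To make the fairness principle above more concrete, the following is a minimal sketch of a demographic-parity check, one common way to surface unequal outcomes across groups. The group names, decision data and tolerance threshold are all hypothetical, and a real fairness review would involve far more than a single metric.

```python
# Minimal demographic-parity sketch (hypothetical data and threshold).
# It compares the rate of favorable outcomes across groups; a large gap is
# a signal to investigate the data and model design, not proof of bias.

def approval_rate(decisions):
    """Share of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions (1 = approved) keyed by a protected attribute.
decisions_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

rates = {group: approval_rate(d) for group, d in decisions_by_group.items()}
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.2f}")
print(f"Demographic-parity gap: {gap:.2f}")

# Illustrative rule of thumb: gaps above some agreed tolerance (here 0.10)
# are routed for human review under the organization's governance model.
if gap > 0.10:
    print("Gap exceeds tolerance; route model for fairness review.")
```

A check like this is only one of the "checks and balances" the fairness principle calls for, but it shows how an abstract principle can become a concrete, testable gate in an approval process.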
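
For the transparency principle, here is a similarly minimal sketch of one way to expose the rationale behind a decision: a linear scoring model whose per-feature contributions can be reported alongside the outcome. The feature names, weights and applicant values are hypothetical, and many real AI models require more sophisticated explanation techniques.

```python
# Minimal transparency sketch: for a linear scoring model, each feature's
# contribution (weight * value) can be reported with the decision, giving
# a human-readable rationale. All weights and inputs are hypothetical.

weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
bias = 0.1

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())

print(f"score = {score:.2f} (illustrative rule: approve if >= 0.0)")
# List features by how strongly each pushed the decision, in either direction.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature:>15}: {c:+.2f}")
```

The design point is that a decision delivered with its reasons attached can be reviewed, challenged and audited, which is exactly what the transparency principle asks of an AI system.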

A federal executive order issued on December 3, 2020 puts the White House Office of Management and Budget in charge of drawing up a roadmap for how federal agencies use AI software. The executive order lays out nine principles, specifying that the ways in which federal agencies use AI should be:

  • lawful
  • purposeful and performance-driven
  • accurate, reliable and effective
  • safe, secure and resilient
  • understandable
  • responsible and traceable
  • regularly monitored
  • transparent
  • accountable

Work is also being done globally to develop standards and guidelines for Responsible AI. Regional and national governments have formed task forces to examine the issue, and standards and technical reports are being developed by international standards organizations such as the Institute of Electrical and Electronics Engineers (IEEE) and the International Organization for Standardization (ISO). Time will tell whether mandatory Responsible AI standards emerge, but there is clearly a growing consensus around basic ethical principles that should be reviewed and incorporated into your processes as you consider adopting or developing AI in your organization.