
Keep Tabs on Ethical Use of AI With a Chain of Responsibility

The potential for artificial intelligence (AI) to transform business in positive ways has been enthusiastically embraced by many forward-thinking enterprises. While AI promises to accelerate improvements in how we do business, enthusiasm for this technology needs to be tempered by thoughtful consideration of the ethical implications.

My respected colleague Neil Raden addresses this very issue in his recent article in Diginomica, “Trustworthy AI versus ethical AI—what’s the difference, and why does it matter?” He aptly points out that there’s some confusion about the terms “trustworthy AI” and “ethical AI.” As he states, “trustworthy AI is not necessarily ethical, and ethical AI is not necessarily trustworthy.” In his view, trustworthiness should be ascribed based on transparency and reputation. Ethical AI, on the other hand, is based on values and some sense that fundamental human rights should not be compromised for commercial gain. More practically, perhaps, ethical AI behaves in a way you wouldn’t mind seeing written up in a major newspaper.

If you’re not sure about the difference, these examples show AI models that are trustworthy but questionable in terms of ethics:

  • Predictive policing: These AI systems predict where more police staff should be assigned, increasing police presence in areas where serious crimes tend to occur. While police departments may come to trust these systems, placing more officers in a neighborhood tends to produce more arrests there, and those arrests feed back into the crime data the model learns from. If such a system initially directs more officers to neighborhoods of color, that feedback loop can become self-reinforcing and discriminatory.
  • Intrusive personalization: We’ve become all too accustomed to parting with our personal information when we use “trusted” applications and websites. Despite our trust, this data can be abused through profiling or used to cause unwanted societal disruption, both of which have ethical implications.

Neil argues that AI projects need to be guided by ethics first—or, more practically, by values. In a recent post, he also went on to talk about responsible AI: the idea that someone should be responsible for the AI and its outcomes, whether those outcomes are ethical or not.

The computer is not responsible—but then who is?

This sense of responsibility is critical when automating decisions with AI technologies. When a company—or a company’s representative—says that something was done “because the computer said to,” they are implying that the computer is responsible. This is clearly nonsense. The computer is running a program—it is not responsible in any meaningful way for the actions proposed or taken by the program it is running. This is true of any automated system, but the explosive growth of AI is going to bring this challenge front and center. If companies are to use AI-based automated decision-making systems, they are going to have to take responsibility for what those systems do, and be able to answer when customers, regulators, courts, and news media ask about their decisions.

This requires a Chain of Responsibility. When an individual makes a decision, they can be held responsible for it. They can be required to write an explanation, defend it in court, or explain it to regulators. Well-designed decision-making systems that leverage AI or machine learning (ML) record information about how each decision was made—including the AI algorithms used, the key factors influencing those algorithms, the decisions made, and the results achieved. This data is used to enable fine-tuning and continuous improvement and to provide an explanation of the decision. Such a system is transparent, but it is not yet responsible.
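As a minimal sketch of such a record, assuming a Python codebase (the field names and values below are illustrative, not taken from any particular decisioning product):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class DecisionRecord:
    """One entry in a decision log; all field names are illustrative."""
    decision_id: str
    timestamp: datetime
    model_name: str                 # which AI/ML algorithm was invoked
    model_version: str              # the exact version, so the decision is reproducible
    key_factors: dict               # the inputs that most influenced the outcome
    outcome: str                    # the decision that was made
    result: Optional[str] = None    # the observed result, filled in later for tuning

# Example: record a credit-line decision (values are invented)
audit_log: List[DecisionRecord] = []
audit_log.append(DecisionRecord(
    decision_id="D-0001",
    timestamp=datetime.now(timezone.utc),
    model_name="credit_risk_scorer",
    model_version="2.3.1",
    key_factors={"debt_to_income": 0.41, "payment_history_score": 712},
    outcome="approve_with_limit_5000",
))
```

In practice the log would live in a durable store, but even this shape is enough to explain a single decision and to feed continuous improvement.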

Similarly, there are emerging tools for evaluating machine learning and AI models for bias and fairness. Applied consistently, these can help a company ensure that its AI conforms to its values. Taking care with the data you use and the way you approach modeling can avoid exposing sensitive information or invading privacy. But this is still very internally focused and relies on individuals behaving ethically. 
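As one hedged illustration of what these tools check, the sketch below hand-rolls a demographic parity comparison in Python with pandas; the data, column names, and the question of what gap is acceptable are all assumptions for the example:

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Approval (positive-decision) rate per group."""
    return df.groupby(group_col)[decision_col].mean()

def demographic_parity_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Difference between the highest and lowest group selection rates;
    0.0 means every group is approved at the same rate."""
    rates = selection_rates(df, group_col, decision_col)
    return float(rates.max() - rates.min())

# Toy data: 1 = approved, 0 = declined (values are invented)
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
gap = demographic_parity_gap(decisions, "group", "approved")
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 here; each team sets its own threshold
```

Published fairness toolkits offer far richer metrics, but even a simple check like this turns fairness from an assumption into a measured, reviewable property.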

Changing behavior generally requires consequences: people must feel accountable for the behavior of these models. The outside world of regulators and public relations also demands accountability and responsibility. Ethical use of AI requires a record, at every step along the way, of why each decision was made the way it was. There needs to be a Chain of Responsibility for the decision-making executed by the computer program.

Create a Chain of Responsibility 

Think of this as similar to the concept of a “chain of custody” in law, which is the process of maintaining and documenting evidence. As defined by the Legal Dictionary, it entails “keeping a detailed log showing who collected, handled, transferred, or analyzed evidence during an investigation.” There are established protocols and procedures for this process. 

Leaning on this definition, we can define a Chain of Responsibility as a process for maintaining and documenting decision-making, showing who defined, modeled, trained, tested, and validated the decision-making approach that resulted in a specific decision.

An automated decisioning system with a fully defined Chain of Responsibility would not only produce a record of how each decision was made (how the specific customer or transaction was processed to arrive at the outcome) but could also identify the parties responsible for each element of decision-making the program applied.

The questions that need to be answered in a Chain of Responsibility for AI-based systems include the following (a sketch of how the answers might be recorded appears after the list):

  • Who defined the overall decision-making approach that the AI is part of? 
  • Who validated that decision model and confirmed it was legal, compliant, appropriate, and the company’s preferred approach?
  • Who approved the AI algorithm(s) used in the decision-making? 
  • Who designed the scenarios used to test the algorithm and validate the outcomes?
  • Who decided what kinds of bias should be checked for and what it means for the algorithm to be “fair”?
  • Who selected the data used to train the algorithm?
  • If any of that data was the result of other, precursor decisions, what is their Chain of Responsibility?
  • Finally, who tested the implementation of that decision model to ensure it matched the design? 
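As a minimal sketch, assuming Python (every field name below is invented to mirror the checklist above, not drawn from any standard or product), the answers could be captured in a simple record attached to each decision-making component:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChainOfResponsibility:
    """Records who answered each checklist question for one decision-making
    component. All field names are illustrative."""
    decision_approach_owner: str    # defined the overall decision-making approach
    decision_model_validator: str   # confirmed it is legal, compliant, and appropriate
    algorithm_approver: str         # approved the AI algorithm(s) used
    test_scenario_designer: str     # designed test scenarios and validated outcomes
    fairness_owner: str             # chose which biases to check and defined "fair"
    training_data_selector: str     # selected the data used to train the algorithm
    implementation_tester: str      # verified the implementation matched the design
    # Precursor decisions that produced the training data carry their own chains
    upstream_chains: List["ChainOfResponsibility"] = field(default_factory=list)
```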

Such a Chain of Responsibility begins with a well-defined decision model. A decision model breaks the decision-making down into its component parts and identifies an owner (a department or a role) for each part. Regardless of how that piece of the decision-making is implemented, that owner “owns” the definition.

When a sub-decision in this model is identified as being based on analysis of data, whether that’s simple statistical analysis or the result of a complex machine learning or AI algorithm, further ownership must be defined around the data and how it was processed to come up with the analytical insight being used.
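One hypothetical way to represent this (all names below are invented for illustration) is a decision model as a tree of owned sub-decisions, where an analytics-based sub-decision also records who owns its data and its processing:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SubDecision:
    """A node in a decision model; owners are departments or roles."""
    name: str
    owner: str                               # owns the definition of this piece
    data_owner: Optional[str] = None         # set when the sub-decision is analytics-based
    processing_owner: Optional[str] = None   # owns how the data became an insight
    children: List["SubDecision"] = field(default_factory=list)

# Example: a loan-approval decision broken into owned components
loan_decision = SubDecision(
    name="Approve loan application",
    owner="Credit Policy",
    children=[
        SubDecision(name="Verify applicant identity", owner="Compliance"),
        SubDecision(
            name="Score credit risk",
            owner="Risk Management",
            data_owner="Data Governance",    # who selected the training data
            processing_owner="Data Science", # who built and trained the model
        ),
    ],
)
```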

Once all this is done, deployment and operationalization need to include recording who owned the testing and validation, making sure that the computer does what the people in the chain intended it to do. Decision models really help here, as they create a visual blueprint that everyone can refer to while validating the results.

There’s no out-of-the-box solution for ensuring ethical and responsible use of AI. But there is tremendous pressure to get something out the door and adding value as soon as possible, and any automated system makes many individual decisions very quickly, allowing a poorly designed algorithm to go off the rails fast. If you want responsible AI, it’s essential to scrutinize every facet of your AI projects, from the planning and design stage through to the outcomes. Make sure you are applying ethical principles and your values to the AI algorithms you build, and that you are creating a Chain of Responsibility that allows you to be accountable for how they act on your behalf.

To learn more about how to responsibly incorporate AI into your business through decision modeling, download our white paper, “Building an AI Enterprise.”