5 min read

Ethical AI: using transparency for accountability

How transparency is vital for ethical AI.

This is part 4 of a 5-part series on Ethical AI. If you missed part 1, Ethical AI Standards: A Process Approach, you can find it here.

Accountability can only really be enforced through transparency. While it is nice to believe that people are good and wish to do good, the simple reality is this: what is hidden cannot be reprimanded, and what is never reprimanded may be repeated indefinitely. Transparency removes the possibility of hidden activity, which opens up the possibility of punishment and thus deters wrongdoing. The entire enterprise of auditing rests on this same concept: auditors sift through financial records, attempt to piece together a company's financial picture, and so determine what must be paid. But beyond this ethical, auditorial sense, there is a second sense in which transparent AI matters: it helps users make better decisions. This secondary meaning of transparency addresses epistemic and rational decision-making problems.

Activity Reporting

Human interaction with AI is the crux of ethical AI. Consider nugget's screening engine: an AI built to surface the best hiring options given the performance and skill profiles of potential candidates. In the hands of an HR department, real events are influenced by the findings the AI reports. A choice to hire person A over person B might occur only because a report suggests person A is right for the job. So the AI has deep real-world implications when utilized, even though it exerts no physical force itself. We therefore need to foster the right kind of relationship between humans and AI, one that ensures accountability is maintained.

Activity reports are perhaps the most important addition one could make to AI utilization. A reporting system provides a clear picture of how AI is used to make decisions. Say a user decides an AI's suggestion is bad: perhaps the user knows the AI is not accounting for certain variables, or simply suspects the training data is flawed. Logging this decision provides a myriad of benefits. It allows for review of user activity, and ideally the user would be able to add their own input alongside the logs, justifying their decisions prior to any investigation. Such information could then be relayed to the AI developers, who can determine whether the AI can be improved to remove the need for human intervention in similar circumstances. Here is a simplified idea of how a log could work (further elaborations are possible, including privatized logs to prevent inter-employee interference, different schemas for information processing depending on the AI's role, etc.):
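As a minimal sketch, assuming a hypothetical schema (the field names below are illustrative, not nugget's actual log format), a single entry might capture the AI's suggestion, the user's decision, and the user's stated justification:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    """One record of a human acting on (or overriding) an AI suggestion.

    Illustrative sketch only; the fields are assumptions, not a real schema.
    """
    user_id: str        # who made the final call
    ai_suggestion: str  # what the AI recommended
    user_decision: str  # what the user actually did
    followed_ai: bool   # did the user accept the suggestion?
    justification: str  # the user's own reasoning, recorded up front
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a recruiter overrides the screening engine and explains why.
entry = DecisionLogEntry(
    user_id="recruiter-42",
    ai_suggestion="rank candidate A above candidate B",
    user_decision="advanced candidate B to interview",
    followed_ai=False,
    justification="Model does not account for candidate B's recent certification.",
)
print(entry)
```

AI developers could later filter for entries where followed_ai is False to find the circumstances in which the model most needs improvement.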

AI Reasoning Made Clear

In regular human discourse, we can ask people how they arrived at their findings. Scientists can produce lab results with corresponding data and evidence. Mathematicians can produce deductive proofs. Speeders can offer a justification for their overzealous driving in a courtroom. Whatever the situation, humans can generate reasons, good or bad, for their actions and opinions. A term you may be familiar with is 'black box': some unknown process (i.e., a function) that takes an input and produces an output. The mechanism that transmutes the input into the corresponding output is obscured; we see what goes in and what comes out, but not how.

[Figure: a black box example]

The term 'white box' refers to code in which one can see the functions at play, but the functions are so complex that they are indecipherable to a human. This is very much the case for complex neural networks and heuristics-based programs. So we might say the justification is obscured: we cannot make sense of it (imagine a scientist trying to present their findings to you, but in a language so foreign and dense that you could not even attempt a translation). Luckily, AI can be set up to give easy-to-digest reasons, facts, and figures. That is, while AI can make easy-to-process judgments (like a nugget score), it can also provide justification for such scores (e.g., the benchmarks used and the respective performances of each member). The justification need not expose the entirety of the scoring function; salient facts are sufficient to inform users. This also lets companies keep the IP in how they score the raw data, protecting trade secrets.

[Figure: example of nugget scores]
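To make this concrete, here is a hypothetical sketch; the benchmarks, values, and scoring rule below are invented for illustration and are not nugget's actual method. It shows how a system might return a judgment together with its most salient supporting facts while keeping the full scoring function private:

```python
# Hypothetical illustration: a score plus its most salient supporting facts.
# A real system would keep the full scoring function private and surface
# only the top contributing factors to the user.

def score_with_justification(benchmarks: dict[str, float], top_n: int = 3):
    """Return an overall score and the top-N benchmark results behind it."""
    # Proprietary part (hidden from the user): how benchmarks combine.
    overall = sum(benchmarks.values()) / len(benchmarks)

    # Transparent part (shown to the user): the most salient facts.
    salient = sorted(benchmarks.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    return overall, salient

candidate = {
    "code_review_accuracy": 0.91,
    "communication_rating": 0.78,
    "on_time_delivery": 0.85,
    "test_coverage_habits": 0.62,
}

score, reasons = score_with_justification(candidate)
print(f"score: {score:.2f}")
for name, value in reasons:
    print(f"  supported by {name} = {value:.2f}")
```

The user sees the judgment and the strongest facts behind it; the exact way the raw data is combined stays with the company.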

The benefit here is that the reasoning is useful to the user in making a decision, because it is itself a factor for the user to deliberate upon. This can be illustrated like so:

An Agent is considering options 1 and 2. They have reasons to pick both, so they defer to the AI to generate a preference. The AI picks option 2. The Agent could go ahead with this option or look deeper into it. The Agent views some of the pertinent reasoning provided by the AI, laid out for them to inspect. In doing so, the Agent can form an approximation of what the AI is driving at and can assent to it if they see no glaring issue. If the Agent does see a glaring flaw in one of the reasons, they can reassess accordingly.
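A rough sketch of that review loop, with an invented flaw-spotting rule standing in for the Agent's own scrutiny:

```python
# Rough sketch of the human-in-the-loop review described above.
# The reasons and the "glaring flaw" check are invented for illustration.

ai_choice = "option 2"
ai_reasons = [
    "option 2 scored higher on delivery-time benchmarks",
    "option 2's cost estimate excludes maintenance",  # a flaw the Agent may spot
]

def agent_review(choice: str, reasons: list[str]) -> str:
    """Assent to the AI's pick unless a reason reveals a glaring flaw."""
    for reason in reasons:
        if "excludes" in reason:  # stand-in for the Agent noticing a problem
            return f"reassess: flawed reason found ({reason})"
    return f"assent: going with {choice}"

print(agent_review(ai_choice, ai_reasons))
```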

Conclusion

To summarize, transparency ensures responsible use of AI. Users can be fed a steady stream of information to aid their deliberations, and the ability to justify one's deliberations is always a net benefit. Where the AI fails, humans can spot the failure and apply fixes. From this, we can see that the ethical way forward is known via transparency.

Stay tuned for the final article in our Ethical AI Series on human autonomy.

Nicholas Tessier 🧠

Product Manager