Understanding AI outputs: study shows pro-western cultural bias in the way AI decisions are explained
- If you were affected, you might want an explanation of why an AI system produced the decision it did.
- Yet AI systems are often so computationally complex that not even their designers fully know how the decisions were produced.
- Explainable AI (XAI) systems help AI engineers to monitor and correct their models’ processing.
- We wanted to see to what extent researchers indicated awareness of cultural variations that were potentially relevant for designing satisfactory explainable AI.
Cultural differences in explanations
- Research on how people explain behaviour distinguishes two broad kinds of explanation. One is internalist, citing a person’s own mental states, such as beliefs and desires.
- The other is externalist, citing factors outside the person, such as social norms or rules.
- However, internalist explanations are not obviously preferred over externalist ones in “collectivist” societies, such as those commonly found across Africa or South Asia, where people often view themselves as interdependent.
- If people from different cultures prefer different kinds of explanations, this matters for designing inclusive systems of explainable AI.
- Our research, however, suggests that XAI developers are not sensitive to potential cultural differences in explanation preferences.
Overlooking cultural differences
- When we checked the cultural background of the people tested in the studies, we found that 48.1% of them did not report on cultural background at all.
- This suggests that researchers did not consider cultural background to be a factor that could influence the generalisability of results.
- Of those that did report on cultural background, 81.3% sampled only Western, Educated, Industrialised, Rich and Democratic (so-called WEIRD) populations.
- Yet 70.1% of the studies that reported on cultural background extended their conclusions beyond the study population – to users, people, or humans in general – and most showed no evidence of reflection on whether their sample was culturally representative of those groups.
Why the results matter
- To address this cultural bias in XAI, developers and psychologists should collaborate to test for relevant cultural differences.
- As AI is being used worldwide to make important decisions, systems must provide explanations that people from different cultures find acceptable.
- As it stands, large populations who could benefit from the potential of explainable AI risk being overlooked in XAI research.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.