Technology

A question of bias

Research has identified biases against minority groups when it comes to certain financial products. Phoebe Ward and Carol McNaughton Nicholls examine how AI could potentially exacerbate the issue.

Debates about how new technology impacts our lives are not new. However, the pace of this debate has changed recently, fuelled by increased access to generative artificial intelligence (AI) tools: technologies that raise fundamental questions about knowledge-making, decision-making, and what it means to be human.

As insight practitioners, we think about these challenges constantly. We need to understand how insight and evidence can help society navigate new challenges. We know how important a rich evidence base can be in cutting through complex landscapes. Crucially, evidence allows us to have informed discussions and truly examine who is impacted by change. New insights on complex topics can trigger the need for more research and debate.

In 2022, Citizens Advice explored how personal data and algorithms could be leading to discriminatory pricing for people of colour buying car insurance. The research shone a spotlight on the use of new practices in financial services specifically, and raised the question: are similar trends happening elsewhere?

The findings from the Citizens Advice research, published in the report Discriminatory pricing: Exploring the ethnicity penalty in the insurance market, raised important considerations for the Financial Services Consumer Panel, an independent statutory body that represents the interests of consumers in financial services through advice and challenge to the Financial Conduct Authority (FCA). The panel wanted to look at this issue in greater depth and understand whether the patterns identified exist elsewhere. To do so, it commissioned Thinks Insight & Strategy to examine the evidence base, exploring whether there is evidence that the use of personal data and AI in financial services decision-making is causing detriment to groups with protected characteristics.

To answer this complex question, we analysed almost 70 sources, from published academic pieces to thought-leadership articles. The sources in the evidence base originated from Canada, Australia, the US, Europe and the UK, and, to reflect the global nature of the debate, we also interviewed thought leaders across each of these locations.

You won't be surprised to hear that there is no straightforward answer to the panel's question. However, three findings shine a light on the pressing need to use this evidence base for further debate.

1. Some biased outcomes

Some groups are experiencing biased outcomes in financial services, as the Citizens Advice study shows, and it's concerning. A year after its initial research, Citizens Advice re-ran the study, finding again that customers from ethnic minority backgrounds in the UK pay more for their car insurance. Another report, Improving access to insurance for disabled people, by Scope, found disabled people sharing their experiences of travelling abroad without insurance because of the unaffordable prices they face. Ongoing legal cases in the US show black customers having to work harder to claim on their insurance. Something is happening.

2. More evidence required

It is strongly suspected that these biased outcomes occur as a result of the use of personal data and AI in decision-making. But that's the key word: suspected.
We are familiar with the argument that bias can be reinforced and embedded into new decision-making tools through historical and proxy data (the sketch at the end of this section illustrates the mechanism). These practices are part of the reason experts suspect bias is inherent in the technology financial services firms use to make decisions, and that these practices lead to different outcomes for certain groups in a way we might not be comfortable with as a society.

However, the challenge is categorically evidencing the link between efficient, technology-led practices and biased outcomes. The use of this technology has evolved so quickly that firms cannot always be sure how the AI is being trained to make these decisions or which data points are being used. And because companies do not always hold personal data on protected characteristics, they cannot reverse-check for biased outcomes. This is concerning, and it means experts are calling for greater emphasis on governance and transparency for those holding data and using AI.
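To make the proxy-data mechanism and the audit gap concrete, here is a minimal, hypothetical Python sketch. It is not drawn from any of the studies cited above: the data is synthetic, and the feature names (postcode_band, group) are invented for illustration. It shows how a model that never sees a protected characteristic can still reproduce a historical pricing penalty through a correlated proxy, and why that penalty cannot be reverse-checked if the protected characteristic was never recorded.

```python
# Hypothetical sketch of how a proxy variable can carry bias.
# All data is synthetic; "postcode_band" stands in for any feature
# correlated with a protected characteristic (here, "group").
import random

random.seed(0)

# Synthetic historical data: premiums in group B's postcode bands were
# historically set higher, so the bias is baked into the training labels.
def make_record():
    group = random.choice(["A", "B"])           # protected characteristic
    postcode_band = 1 if group == "B" else 0    # proxy: correlates with group
    risk = random.gauss(1.0, 0.1)               # underlying risk is identical
    premium = 500 * risk + 150 * postcode_band  # historical pricing penalty
    return group, postcode_band, premium

data = [make_record() for _ in range(10_000)]

# A "fair-looking" model: it never sees the protected characteristic,
# only the proxy. Here we simply fit a mean premium per postcode band.
by_band = {0: [], 1: []}
for _, band, premium in data:
    by_band[band].append(premium)
model = {band: sum(v) / len(v) for band, v in by_band.items()}

# The model reproduces the historical penalty via the proxy alone.
for group in ("A", "B"):
    band = 1 if group == "B" else 0
    print(f"Group {group}: predicted premium ~ {model[band]:.0f}")
```

Running the sketch prints a higher average premium for group B even though the model's only input is the proxy. Drop the group column, as firms that never collect protected characteristics effectively do, and the disparity in the model's outputs becomes impossible to reverse-check.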