Exploring the Challenges of Bias and Lack of Transparency in AI: Implications for the Legal Industry

This blog post is the first in a series adapted from a detailed article co-authored by Sophie Nappert and Sarah Chojecki entitled “Evidence in International Arbitration through the Looking-Glass of the Digital Economy”, and is intended to provide a summary of a number of the issues raised. — Editor

AI and the Law

Artificial Intelligence (AI) has the potential to automate repetitive tasks, increase efficiency, and reduce costs, making it an attractive option for boosting the productivity of lawyers and legal staff and for creating new roles. However, AI also challenges our notions of fairness and due process, which are cornerstones of the rule of law and essential aspects of international arbitration. In particular, bias in AI and the lack of transparency and explainability in AI models could affect the validity and reliability of evidence obtained through AI.


Validity and Reliability of AI: The Legal Context

This blog post explores the factors affecting the validity and reliability of AI applied in the legal context, focusing on the challenges posed by bias and by the lack of transparency and explainability in AI models. Bias is a significant challenge to both validity and reliability: AI's propensity to replicate or introduce human error or bias could lead to decisions that appear objective but are, in fact, discriminatory. Arvind Narayanan, a computer scientist and professor at Princeton University, put it this way: “Today’s AI/ML [machine learning] is uninterpretable, biased and fragile. When it works, we don’t understand why.” This is particularly relevant in the legal industry, where the use of AI in eDiscovery and arbitrator selection could result in skewed or unfair outcomes. The lack of transparency and explainability in the AI models used in arbitration exacerbates this issue, making it difficult to identify any biases or flaws in the algorithm.


Moreover, the lack of transparency and explainability in AI models used in arbitration could lead to challenges to the validity and reliability of evidence obtained through AI. If the decision-making process of AI is opaque, it can be difficult to evaluate the accuracy and reliability of the evidence it produces. James Dempsey, of the Berkeley Center for Law and Technology, even notes that “AI may replicate human error or bias or introduce new types of error or bias …. AI trained on data that reflects biases that infected past decisions could incorporate those biases into future decision-making, yet give such decisions the appearance of objectivity.”


Examples of areas where bias and lack of transparency in AI could affect the legal disputes industry:

  1. Bias in eDiscovery and AI analytics: AI tools are being used in eDiscovery to search through and analyze vast amounts of electronic data to find relevant evidence in a legal case. However, if the AI algorithm is trained on biased inputs, it could exclude relevant evidence or include irrelevant evidence, skewing the resulting analysis (a minimal illustration of this effect follows this list).

  2. Bias in arbitrator selection: AI tools can be used to select arbitrators based on certain attributes or previous decisions. However, if the AI model possesses hidden biases, it could produce a pool of arbitrators that is not diverse or inclusive, or that is poorly suited to the dispute at all. A related risk is the hallucination observed in certain large language models (LLMs) such as ChatGPT, which can generate plausible-sounding but inaccurate output.
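
To make the eDiscovery concern in item 1 concrete, here is a minimal, hypothetical sketch using synthetic data (NumPy and scikit-learn are assumed to be available; the "custodian" scenario is invented for illustration). If past human review under-marked one custodian's documents as relevant, a relevance classifier trained on those labels will go on to suppress that custodian's documents, even though their content is just as relevant:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_docs(n, custodian_flag):
    """Synthetic 'documents': two content features plus a custodian indicator."""
    content = rng.normal(size=(n, 2))
    relevant = (content[:, 0] + 0.5 * content[:, 1] > 0).astype(int)  # true relevance
    X = np.hstack([content, np.full((n, 1), custodian_flag)])
    return X, relevant

X_a, y_a = make_docs(2000, 0)  # custodian A
X_b, y_b = make_docs(2000, 1)  # custodian B

# Hypothetical biased history: 60% of custodian B's truly relevant
# documents were labelled "not relevant" during past review.
y_b_biased = y_b.copy()
flip = (y_b == 1) & (rng.random(len(y_b)) < 0.6)
y_b_biased[flip] = 0

model = LogisticRegression().fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b_biased])
)

# Judged against the *true* relevance labels, recall drops for custodian B,
# i.e. the tool quietly excludes relevant evidence from one source.
for X, y, name in [(X_a, y_a, "custodian A"), (X_b, y_b, "custodian B")]:
    print(name, "recall:", round(recall_score(y, model.predict(X)), 2))
```

The point is not the specific numbers but the mechanism: the model faithfully reproduces the skew in its training labels while presenting its output as neutral.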


It is worrisome, to say the least, that, as Maxi Scherer observed, the use of algorithms in criminal risk assessment in the US has led to racially biased outcomes, demonstrating the potential for AI to perpetuate societal biases and prejudices. To address the lack of transparency regarding how and why AI reaches a particular output, the so-called ‘black box’ feature of AI, an entire branch of AI research is dedicated to developing Explainable Artificial Intelligence (XAI): AI whose decisions can be explained to humans. Even with XAI, however, challenges remain, such as the trade-off between algorithmic accuracy and explainability, and the potential for AI providers to claim proprietary trade secrets and resist disclosure of data and algorithms.
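
One common model-agnostic XAI technique is permutation importance, which estimates how much a model relies on each input by measuring how its accuracy degrades when that input is shuffled. The sketch below is purely illustrative, using synthetic data and invented feature names such as "party_nationality_code" (NumPy and scikit-learn assumed available); it shows how such an explanation can surface an objectionable reliance that an opaque model would never volunteer:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 3000

# Hypothetical inputs to a decision-support model (names are illustrative).
X = rng.normal(size=(n, 3))
feature_names = ["claim_value", "document_length", "party_nationality_code"]

# The ground truth depends only on the first two features ...
y = (X[:, 0] - 0.7 * X[:, 1] > 0).astype(int)
# ... but the third feature happens to correlate with the outcome in the
# training data, e.g. because of historically skewed case intake.
X[:, 2] = y + rng.normal(scale=0.5, size=n)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: importance {score:.3f}")
# A high score on "party_nationality_code" exposes a reliance the opaque
# model alone would never volunteer -- a starting point for scrutiny.
```

Note, too, the trade-off mentioned above: an explanation of this kind approximates the model's behaviour from the outside; it is not a window into the model's internal reasoning.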


Addressing the Challenges of AI in Law


As the use of AI in the legal industry and other fields continues to expand, the challenges posed by bias and by the lack of transparency and explainability in AI models must be addressed to ensure that AI is used in a fair and unbiased manner, leading to more objective outcomes. This can be achieved, to some extent, by developing and implementing XAI and, in line with the European Union’s Artificial Intelligence Act, by subjecting AI to robust testing for validity and reliability, transparency and explainability, and accountability. Additionally, it is essential to recognize that AI should be viewed as a tool that can aid in the performance of legal work, not as a replacement for human judgment and expertise. By addressing these challenges, we can unlock the full potential of AI while ensuring that it is used in a way that upholds the principles of fairness and due process.

FAQ


What is AI bias, and how does it occur?

AI bias refers to systematic errors in an AI system that lead to unfair or discriminatory outcomes. It occurs when the data used to train the AI reflects existing biases, or when the algorithms themselves are designed in a way that unintentionally favors certain outcomes.
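
One simple check sometimes used to detect such bias is to compare an AI tool's favourable-outcome rates across groups, in the spirit of the disparate-impact ("four-fifths rule") ratio used in some US employment contexts. The sketch below is a hypothetical illustration with made-up predictions and group labels:

```python
# Hypothetical outputs of an AI screening tool (1 = favourable outcome)
# for two illustrative groups, "A" and "B".
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

def favourable_rate(group):
    """Share of group members who received the favourable outcome."""
    outcomes = [p for p, g in zip(preds, groups) if g == group]
    return sum(outcomes) / len(outcomes)

ratio = favourable_rate("B") / favourable_rate("A")
print(f"Disparate impact ratio (B vs. A): {ratio:.2f}")
# Ratios well below ~0.8 are commonly treated as a signal of possible
# adverse impact that warrants closer review of the data and the model.
```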

Why is transparency in AI important for the legal industry?

Transparency ensures that the decision-making processes of AI systems can be understood and scrutinized. In the legal industry, this is critical for:

·  Ensuring fair treatment in cases involving AI-generated evidence or recommendations.

·  Building trust in the AI systems being used.

·  Facilitating accountability when AI systems produce flawed or biased outcomes.

What are the risks of using biased AI in legal practice?

Using biased AI in legal practice can lead to:

·  Erosion of public trust in legal systems.

·  Legal challenges and reputational damage for firms relying on flawed AI tools.

How can AI bias impact arbitration or dispute resolution?

In arbitration or dispute resolution, biased AI could result in:

·  Favoring one party over another based on irrelevant factors.

·  Overlooking key evidence due to algorithmic flaws.

·  Reinforcing systemic inequalities present in training data.

What is the “black-box problem” in AI, and how does it affect the legal industry?

The "black-box problem" refers to the lack of interpretability in AI systems, where the internal processes that lead to a decision are not visible or understandable. In the legal industry, this can:

·  Complicate challenges to AI-generated outputs.

·  Raise ethical concerns about accountability and fairness.

·  Limit the adoption of AI tools in sensitive legal contexts.

What does the future hold for bias and transparency in AI within the legal industry?

The future will likely see:

·  Increased emphasis on explainable AI to address the transparency challenge.

·  Stricter legal and regulatory frameworks to prevent AI bias.

·  Advancements in technology that enable more equitable and accountable AI systems.

What does the EU AI Act have to say about AI bias?

The EU AI Act addresses AI bias through several key provisions aimed at ensuring that AI systems, particularly high-risk AI systems, are developed and used in a way that minimizes bias and its potential harmful effects.

  1. Risk Management System (Article 9): Providers must establish a risk management system that includes identifying and analyzing known and foreseeable risks, including those related to bias. This system must adopt appropriate risk management measures to address these risks.

  2. Data and Data Governance (Article 10): High-risk AI systems must be developed using training, validation, and testing data sets that meet quality criteria. These data sets must be relevant, representative, and, to the best extent possible, free of errors. Data governance practices must include measures to detect, prevent, and mitigate possible biases that could affect health, safety, or fundamental rights, or lead to discrimination.

  3. Transparency and Provision of Information (Article 13): Providers must ensure that high-risk AI systems are transparent, enabling deployers to understand and appropriately use the system. This includes providing information on the system's capabilities, limitations, and any known or foreseeable circumstances that may lead to risks, including bias.

  4. Human Oversight (Article 14): High-risk AI systems must be designed to allow effective human oversight to prevent or minimize risks, including those arising from bias.

  5. Accuracy, Robustness, and Cybersecurity (Article 15): High-risk AI systems with continuous learning must minimize the risk of possibly biased outputs influencing input for future operations (feedback loops).

  6. Post-Market Monitoring (Article 72): Providers must establish a post-market monitoring system to collect and analyze data on the AI system's performance, including any issues related to bias, to ensure continuous compliance with the requirements.

What potential changes in the regulation of artificial intelligence can we expect in the U.S. in 2025 following President Trump’s return to office?

On January 20, 2025, President Trump issued an executive order titled "Initial Rescissions of Harmful Executive Orders and Actions," revoking over 50 prior executive orders, including Executive Order 14110 of October 30, 2023, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The 2023 order had established standards for responsible AI development in both public and private sectors, explicitly addressing concerns about AI bias and its potential to perpetuate discrimination and civil rights violations.

Three days later, on January 23, 2025, President Trump signed a new executive order: "Removing Barriers to American Leadership in Artificial Intelligence." This order prioritizes U.S. global AI dominance by eliminating policies perceived as obstacles to innovation. It mandates a comprehensive review of existing AI regulations and requires an action plan within 180 days to accelerate AI advancement.

Critics express concern that this approach may prioritize rapid AI development at the expense of addressing issues like algorithmic bias and discrimination. The revocation of prior safeguards could lead to the deployment of AI systems without adequate measures to prevent biased outcomes.

Next

How Arbitral Institutions and International Organisations Are Paving the Way to New Technologies