MEDIATION AND AI. IS IT POSSIBLE TO ANONYMISE DATA IN LARGE PREDICTIVE ALGORITHMS TO COMPLY WITH THE REQUIREMENTS OF CONFIDENTIALITY?

Philip Corsano, May 2024

Artificial Intelligence (AI) is transforming dispute resolution in the United Kingdom in several significant ways, particularly through the use of predictive models and machine learning algorithms. These technologies have the potential to improve the way disputes are resolved, making the process more efficient, cost-effective, and data-driven. However, the use of AI in dispute resolution also raises important ethical questions, particularly around the issue of confidentiality in mediation.

One of the key ways AI is being used in dispute resolution in the UK is through the development of predictive models of human risk tolerances. These models use machine learning algorithms to analyse vast amounts of data on human behaviour and decision-making, in order to predict how individuals are likely to respond to different types of risks and uncertainties.

For example, researchers at the University of Cambridge have developed a predictive model of human risk tolerances[1] that takes into account a wide range of factors, including an individual’s age, gender, personality traits, and previous experiences with risk and uncertainty. By analysing this data, the model can predict, with accuracy well above chance (up to 72% in the cited study), how an individual is likely to respond to different types of risks, such as financial risks, health risks, or social risks.
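
To make this concrete, the following is a minimal sketch in Python of how such a risk-tolerance classifier might be built. The feature set, training data, and choice of logistic regression are illustrative assumptions for exposition; they are not the Cambridge model itself.

    # Minimal sketch of a risk-tolerance classifier (illustrative only).
    # Features and labels are hypothetical; scikit-learn is used for brevity.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: one row per person.
    # Columns: age, extraversion (0-1), openness (0-1), prior risk experience (0-1)
    X = np.array([
        [62, 0.2, 0.3, 0.1],
        [29, 0.8, 0.7, 0.9],
        [45, 0.4, 0.4, 0.2],
        [33, 0.9, 0.8, 0.7],
    ])
    y = np.array([0, 1, 0, 1])  # 0 = risk-averse, 1 = risk-tolerant

    model = LogisticRegression().fit(X, y)

    # Estimate how a new party is likely to respond to risk.
    new_party = np.array([[51, 0.3, 0.5, 0.2]])
    print(model.predict_proba(new_party))  # [P(risk-averse), P(risk-tolerant)]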

This type of predictive modelling has significant implications for dispute resolution in the UK, particularly in cases where the parties have different risk tolerances and are struggling to reach a mutually acceptable agreement. By using AI-powered predictive models, mediators and other dispute resolution professionals can gain a deeper understanding of each party’s risk tolerances and tailor their approach accordingly.

For example, if a predictive model indicates that one party is highly risk-averse and is likely to be very cautious in their decision-making, the mediator may need to take a more gradual and incremental approach to building trust and finding common ground. On the other hand, if the model suggests that the other party is more risk-tolerant and is willing to take bold and decisive action, the mediator may need to be more assertive in their approach and push for a more rapid resolution.
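
Illustratively, a mediator-facing tool might translate a predicted risk-tolerance score into a suggested pacing strategy along these lines. The thresholds and wording below are assumptions for the sketch, not an established standard.

    def suggest_approach(risk_tolerance: float) -> str:
        # risk_tolerance: 0.0 = highly risk-averse, 1.0 = highly risk-tolerant
        if risk_tolerance < 0.35:
            return "gradual: build trust incrementally before testing settlement options"
        if risk_tolerance > 0.65:
            return "assertive: table bolder package deals and press towards resolution"
        return "balanced: standard facilitative approach"

    print(suggest_approach(0.2))  # gradual pacing for a risk-averse party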

Another way AI is transforming dispute resolution in the UK is through the use of predictive analytics and machine learning algorithms to analyse historical data and identify patterns and trends that can inform decision-making. This approach is particularly valuable in cases where there is a large volume of data available, such as in complex commercial disputes or class action lawsuits.

For example, a team of researchers at University College London (UCL) and other institutions[2] has developed an AI-powered predictive analytics tool that can analyse thousands of court decisions and identify key factors that are likely to influence the outcome of a case. By analysing factors such as the type of case, the parties involved, the legal arguments presented, and the previous decisions of the judge or arbitrator, the tool can predict how a case is likely to be decided, with a reported accuracy of 79% in the European Court of Human Rights study cited below.
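
In the same spirit, a hedged sketch of outcome prediction from structured case features is given below. The features, data, and model are illustrative assumptions, not the published tool.

    # Illustrative outcome predictor over structured case features.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical historical decisions: features -> outcome (1 = claim upheld).
    cases = [
        {"case_type": "contract", "forum": "high_court", "judge": "A"},
        {"case_type": "contract", "forum": "county_court", "judge": "B"},
        {"case_type": "tort", "forum": "high_court", "judge": "A"},
        {"case_type": "tort", "forum": "county_court", "judge": "B"},
    ]
    outcomes = [1, 0, 1, 0]

    # One-hot encode the categorical features, then fit a simple classifier.
    pipeline = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
    pipeline.fit(cases, outcomes)

    new_case = {"case_type": "contract", "forum": "high_court", "judge": "B"}
    print(pipeline.predict_proba([new_case]))  # estimated [claim fails, claim upheld]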

This type of predictive analytics has the potential to transform the way disputes are resolved in the UK, by providing parties and their legal representatives with a more data-driven and evidence-based approach to decision-making. Rather than relying solely on intuition and experience, lawyers and mediators can use AI-powered tools to analyse historical data and identify the most effective strategies for resolving a particular type of dispute.

However, the use of AI in dispute resolution also raises important ethical questions, particularly around the issue of confidentiality in mediation. Mediation is a confidential process, where the parties are encouraged to speak openly and honestly in order to find a mutually acceptable solution to their dispute. The success of mediation often depends on the parties’ willingness to share sensitive information and explore creative solutions without fear of that information being used against them in future legal proceedings.

The use of AI-powered predictive models and machine learning algorithms in mediation may pose a challenge to this fundamental principle of confidentiality. If a predictive model is able to analyse data from previous mediation sessions and identify patterns and trends, there is a risk that this information could be used to gain an unfair advantage in future disputes or legal proceedings. At present, such analysis is confined to previously published court decisions, so confidentiality is not an issue; whether that publicly available court information is relevant to parties mediating their own dispute will depend on the distinctive facts of the mediating parties’ case.

For example, if a party in a mediation session shares sensitive information about their financial situation or business practices, and that information is then used to train a predictive model, there is a risk that the model could be used to gain insights into that party’s negotiating strategy or vulnerabilities in future disputes. This could undermine the trust and confidence that is essential for successful mediation, and could ultimately lead to parties being less willing to engage in the process.

To address these ethical concerns, it is important for dispute resolution professionals in the UK to develop clear guidelines and protocols around the use of AI in mediation and other confidential processes. This may involve establishing strict data privacy and security measures to ensure that sensitive information is not shared or used inappropriately, as well as developing transparent and accountable processes for the development and deployment of AI-powered tools.

One approach that has been suggested is the use of “confidentiality by design” principles in the development of AI-powered dispute resolution tools. This involves building privacy and security safeguards into the design of the tools from the outset, rather than trying to add them on as an afterthought. For example, predictive models could be trained using anonymised data that has been stripped of any identifying information, and the models could be designed to only provide aggregate insights rather than revealing information about individual cases or parties.
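
The sketch below illustrates what these two safeguards might look like in code: direct identifiers are stripped before a record is stored or used for training, and insights are released only as aggregates over groups above a minimum size, a k-anonymity-style threshold. The field names and the threshold of five are assumptions for the example.

    from statistics import mean

    # Hypothetical direct identifiers to strip before any record is retained.
    IDENTIFYING_FIELDS = {"party_name", "company", "mediator", "case_ref"}
    MIN_GROUP_SIZE = 5  # never report on fewer than five cases

    def anonymise(record: dict) -> dict:
        """Drop direct identifiers before the record enters a training set."""
        return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

    def aggregate_settlement_rate(records: list, dispute_type: str):
        """Release only an aggregate figure, and only for sufficiently large groups."""
        group = [r for r in records if r.get("dispute_type") == dispute_type]
        if len(group) < MIN_GROUP_SIZE:
            return None  # suppressed: too few cases to report safely
        return mean(r["settled"] for r in group)  # settled is 1 or 0 per record

Suppressing small groups matters because an aggregate over one or two mediations can reveal almost as much about an individual party as the raw record itself.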

Another approach is to ensure that parties in mediation and other confidential processes are fully informed about the use of AI-powered tools, and are given the opportunity to opt out if they have concerns about the privacy or security of their information. This could involve providing clear and concise explanations of how the tools work, what data is being collected and analysed, and how that data will be used and protected.
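
A simple consent gate of this kind might sit in front of the analytics pipeline, as in the hypothetical sketch below; the record and consent structures are assumptions, not an existing API.

    def admit_to_analytics(record: dict, consents: dict) -> bool:
        """Admit a record only if every party to the mediation has opted in."""
        return all(consents.get(party, False) for party in record["parties"])

    record = {"parties": ["party_a", "party_b"], "dispute_type": "commercial"}
    consents = {"party_a": True, "party_b": False}  # party_b has opted out
    print(admit_to_analytics(record, consents))  # False: record is excluded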

Ultimately, the use of AI in dispute resolution in the UK has the potential to bring significant benefits in terms of efficiency, cost-effectiveness, and data-driven decision-making. However, it is important for the legal and dispute resolution community to carefully consider the ethical implications of these technologies, particularly around the issue of confidentiality in mediation and other confidential processes.

By developing clear guidelines and protocols for the use of AI in dispute resolution, and by building privacy and security safeguards into the design of these tools from the outset, we can help to ensure that the benefits of AI are realised while also protecting the fundamental principles of trust, confidence, and confidentiality that are essential for successful dispute resolution. As the use of AI in dispute resolution continues to evolve in the UK and around the world, all stakeholders will need to work together to balance the potential benefits of these technologies against the need to uphold confidentiality and the highest ethical standards in the delivery of legal and dispute resolution services.

[1] Researchers at the University of Cambridge, in collaboration with the University of Zurich and the London School of Economics, have conducted a study on the use of machine learning to predict risk preferences: Kutbay, H., Sgroi, D., & Starkey, N. J. (2022). Predicting Risk Attitudes From Personality, Time, and Money. Frontiers in Psychology, 13. https://doi.org/10.3389/fpsyg.2022.806915

In this study, the researchers used machine learning algorithms to predict risk attitudes based on an individual’s personality traits, time preferences, and financial literacy. The study involved 1,122 participants who completed a series of online surveys and experiments.

The researchers found that their machine learning model could predict risk attitudes with an accuracy of up to 72%, which is significantly better than chance. They also identified several key factors that were predictive of risk attitudes, including personality traits such as extraversion and openness to experience, as well as time preferences and financial literacy.

While this study does not specifically focus on the application of AI in dispute resolution, it does demonstrate the potential for machine learning algorithms to predict human risk tolerances, which could have implications for dispute resolution processes.

[2] The University College London (UCL) Centre for Artificial Intelligence has conducted research on using machine learning to predict the outcomes of cases in the European Court of Human Rights, with a reported accuracy of 79% (Aletras et al., 2016).

The legal technology start-up CaseCrunch, founded by Cambridge law students, developed an AI system called “CaseCruncher Alpha” that achieved 86.6% accuracy in predicting the outcomes of real claims, compared with 66.3% accuracy for the human lawyers it was tested against. Related academic work by Medvedeva et al. (2020) used machine learning to predict decisions of the European Court of Human Rights.

The Jur-E project, funded by the European Commission, aims to develop an AI-based decision support system for legal professionals, including predictive analytics capabilities (Jur-E Project, 2021).