
An artificial intelligence algorithm has predicted the outcomes of human rights trials with 79 percent accuracy, according to a study published today in PeerJ Computer Science. Developed by researchers from University College London (UCL), the University of Sheffield, and the University of Pennsylvania, the system is the first of its kind to be trained solely on case text from a major international court, the European Court of Human Rights (ECtHR).

“Our motivation was twofold,” co-author Vasileios Lampos of UCL Computer Science told Digital Trends. “It first starts with scientific curiosity.” In other words, would it even be possible to create such an AI judge? “And on the more practical side of things, we thought that AI could be a complementary assistive tool for various administrative tasks in a pretrial phase, reducing the long waiting time before a decision is made,” he added.

The algorithm analyzed texts from nearly 600 cases concerning human rights issues including fair trials, torture, and privacy in an effort to identify patterns. It determined that text language, topics, and circumstances (i.e., a case's factual background) were the most reliable indicators of whether a case would be deemed a violation of the European Convention on Human Rights.
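For readers curious what classification of this kind looks like in practice, here is a minimal sketch: word n-gram features feeding a linear classifier. This is one common recipe for predicting outcomes from case text, not necessarily the researchers' exact pipeline, and the tiny dataset below is purely illustrative.

```python
# Minimal sketch of outcome prediction from case text:
# TF-IDF n-gram features plus a linear SVM (one common approach,
# not necessarily the study's exact method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder case summaries and outcomes (1 = violation found, 0 = none).
case_texts = [
    "applicant held in pretrial detention without review for two years",
    "domestic court heard the applicant promptly and in public",
    "correspondence with counsel intercepted by prison authorities",
    "national proceedings concluded within a reasonable time",
]
labels = [1, 0, 1, 0]

model = make_pipeline(
    # Word n-grams capture the "text language" signal described above.
    TfidfVectorizer(ngram_range=(1, 3)),
    LinearSVC(),
)
model.fit(case_texts, labels)

# Classify an unseen case description.
print(model.predict(["applicant detained without judicial review"]))
```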

Earlier predictive systems were built by analyzing the alleged crimes and the policy positions of the judges involved, rather than the case texts themselves. In that sense, this AI judge is unique.

Lampos explained that, within the context of the ECtHR, the study supports the theory that judges practice legal realism rather than formalism.

“In very lay terms, this means that the judges of the court might weight more the facts and circumstances of a case — rather than the respective laws — during their decision-making process,” he said. “This could be related to the fact that applicants need to exhaust all effective remedies available to them in their domestic legal systems, before submitting an application to ECtHR.”

“I don’t expect judges to be replaced [by AI],” Lampos said, “at least [not] in the near future.” Instead, he sees the AI judge as a tool for the court, albeit currently a blunt one. To refine the algorithm, the researchers will need to feed it much larger quantities of text while maintaining privacy and ensuring that the system isn’t used for potentially unethical practices.

“A major caveat that needs to be mentioned here is that law firms could exploit such software tools, biasing the language used in their applications to ECtHR in order to get favorable treatment,” Lampos cautioned. “Hence, we need to be very careful when discussing potential commercial exploitations of the findings.”

Before the system is made available, the researchers will need to understand how it might be used and what unintended consequences it could have. That means making some predictions of their own.