
The transfer of the human rights framework in the AI era – Anastasia Karagianni

Anastasia Karagianni is a Ph.D. student in AI & Law at the University of Athens and co-founder of DATAWO.


According to the European Commission, AI-based systems (or AI systems) have the power to create an array of opportunities for European society and the economy, but they also pose new challenges to the human rights framework. This blog post examines how the human rights framework can be implemented in the AI era. The first part provides a definition of AI and briefly explains how an AI system is designed. The second part studies the challenges that AI systems pose to human rights in five key areas for the enjoyment of human rights in the digital era: a) the recruitment process; b) education; c) content moderation; d) predictive policing; and e) the digital welfare state. The impact of such technology on human rights is scrutinised in each area through the analysis of a relevant case study. The third and last part examines the transfer of the human rights framework in the AI era.


1. What is AI?

Artificial intelligence (AI) refers to the modelling of human intelligence in machines and systems through programming. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving. Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”. Poole and Mackworth define AI as “the field that studies the synthesis and analysis of computational agents that act intelligently”. An agent is something (or someone) that acts. An agent is intelligent when: i) its actions are appropriate for its circumstances and its goals; ii) it is flexible to changing environments and changing goals; iii) it learns from experience; and iv) it makes appropriate choices given its perceptual and computational limitations.

How is an AI system designed?

AI systems can be purely software-based, acting in the virtual world (voice assistants, image analysis software, search engines, speech and face recognition systems), or AI can be embedded in hardware devices (advanced robots, autonomous cars, drones, or Internet of Things applications), as the EU High-Level Expert Group on AI has flagged.

AI essentially functions through a set of rules called algorithms. These algorithms are fed with huge datasets so that they identify patterns, make predictions, and recommend a course of action. Over time, AI improves automatically through experience; this phenomenon is called machine learning. It is commonly thought that AI can improve decision-making and remove bias, because computers are associated with logic and algorithms are imagined to be devoid of human biases or limitations. However, the algorithms used in AI are developed by humans, who inevitably translate and replicate their biases in the algorithmic design process.

AI decision-making can also have discriminatory results if the system “learns” from discriminatory training data. There are two ways in which this can happen. First, the training data itself may be biased: biases can be found in statistical data (when the sample used does not represent all genders or races) or in historical data (data collected about past events that does not reflect the current situation). Second, the sampling procedure may be biased (for instance, when the data was gathered only from specific groups of people), or societal factors such as class or race may not have been taken into account in the filtering procedure.
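To make this mechanism concrete, here is a minimal sketch in Python, using scikit-learn and invented synthetic data, of how a model trained on historically biased decisions reproduces that bias in its own recommendations. The groups, variables, and numbers are purely hypothetical and are only meant to illustrate the point made above.

```python
# Illustrative sketch only: hypothetical synthetic data showing how a model
# trained on biased historical decisions reproduces the bias it learned.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two groups (0 and 1) with identical underlying qualification scores.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historical labels: past decision-makers favoured group 0, so the "hired"
# outcome depends on group membership as well as on skill.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, size=n)) > 0.5

# Train on the biased history (here group is an explicit feature; in practice
# the same effect often occurs indirectly through proxy features).
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The trained model now recommends group 0 far more often, even though the
# skill distributions of the two groups are identical by construction.
preds = model.predict(X)
for g in (0, 1):
    print(f"selection rate for group {g}: {preds[group == g].mean():.2f}")
```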

 

2. Where are AI systems used?

For the sake of clarity, the key areas I will focus on, namely the recruitment process, education, content moderation, predictive policing, and the digital welfare state, are defined first. a) The recruitment process refers to the process of identifying a job vacancy, analysing the job requirements, reviewing applications, screening, shortlisting, and selecting the right candidate. b) Education refers to the discipline concerned with methods of teaching and learning in schools or school-like environments, as opposed to various non-formal and informal means of socialisation. c) Content moderation is when an online platform screens and monitors user-generated content, based on the platform’s specific rules and guidelines, to determine whether the content should be published on the platform or not. d) Predictive policing refers to the use of predictive analytics based on mathematical models and other analytical techniques in law enforcement to identify potential criminal activity. e) Digital welfare states may be defined as having systems of social protection and assistance that are “driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish”.

The case of Amazon’s experimental recruitment tool

Since 2014, Amazon had been working on a recruitment tool designed to review job applicants’ resumes and automatically identify the candidate who best matched a position. More particularly, this tool was based on AI and was trained on data submitted by applicants over the previous ten years, the majority of whom were men. In 2015, the team building the tool realised that it mostly preferred male candidates, as the system had effectively been taught largely from men’s data. To be more specific, according to Maude Lavanchy, “the algorithm learned to systematically downgrade women’s CVs for technical jobs such as software developer”. This can be explained by the fact that the majority of Amazon’s employees in technical roles, such as software developers, are men.
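The mechanism Lavanchy describes can be illustrated with a small, hypothetical sketch: a text-based resume scorer trained on historically male-dominated hiring outcomes ends up assigning a negative weight to a gender-correlated token (here the word “women’s”, as in “women’s chess club”). The data and vocabulary below are invented for illustration and do not represent Amazon’s actual system.

```python
# Hypothetical illustration: a resume scorer trained on skewed historical
# outcomes learns a negative weight for a gender-correlated token.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented training set: resume snippets and the historical hiring decision.
resumes = [
    "software engineer python backend",
    "software engineer java distributed systems",
    "python developer open source contributor",
    "software engineer women's chess club captain",
    "java developer women's coding society member",
    "backend developer women's hackathon organiser",
]
hired = [1, 1, 1, 0, 0, 0]  # past decisions favoured the first group

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the token "women" receives a negative weight,
# so any new resume containing it is scored lower, replicating the old bias.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```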

The use of algorithms in education 

The problem of bias in educational testing has been documented since the 1960s and anticipated many aspects of the modern literature on algorithmic bias and fairness, according to Ryan S. Baker and Aaron Hawn. “Algorithms have become applied in educational practices at scale for a range of applications, often high-stakes, including dropout prediction, automated essay scoring, graduate admissions, and knowledge inference. Academics have been warning about possible uneven effectiveness and lack of generalisability across populations in educational algorithms for several years”. The following case makes it evident that, under these circumstances, the rights to education and to non-discrimination are violated.

In 2020, the University of Texas at Austin’s computer science department was using a machine learning program to evaluate applicants to its Ph.D. program. The program’s database consisted of past admission decisions. The problematic issue was that most of the rejection decisions concerned students from diverse backgrounds, thereby reducing the chances of current candidates from those backgrounds receiving a positive decision. However, this was not an isolated case. Many public universities in the USA were advised by the firm EAB, which provided the advising software Navigate. It was found that this software identified Black students as at “high risk” of not graduating in their selected major at four times the rate of white students.

AI in content moderation processes 

Social media platforms host problematic or harmful content, such as misinformation and hate speech, on a daily basis. In an attempt to restrain this content and improve the quality of the information shared within their user communities, platforms publish sets of community guidelines that explain the types of content they prohibit, and remove or hide such content. With regard to the impact that content moderation has on the limitation of freedom of speech, what matters is what the algorithmic systems operating behind this process are trained to tackle.

Most content moderation tools used by platforms focus on certain types of content, such as copyright-infringing material. In the case of extremist content and hate speech, however, speech varies across different groups and regions and depends heavily on context, so the decision of whether a given piece of content should be removed or not can be very difficult. One study, for example, found that AI models trained to process hate speech online were 1.5 times more likely to identify tweets as offensive or hateful when they were written by African-American users, according to Pillsbury’s Internet & Social Media Team.
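The kind of disparity that study reports can be checked with a simple audit. The sketch below assumes a hypothetical trained classifier and a labelled sample of posts annotated with the author's group, and compares how often non-offensive posts from each group are wrongly flagged; none of the names refer to a real model or dataset.

```python
# Hypothetical bias audit: compare false-positive rates of a content
# classifier across author groups. `classify` and the data are placeholders.
from collections import defaultdict

def false_positive_rates(samples, classify):
    """samples: iterable of (text, group, is_actually_offensive)."""
    flagged = defaultdict(int)
    innocuous = defaultdict(int)
    for text, group, is_offensive in samples:
        if not is_offensive:                 # only harmless posts can be false positives
            innocuous[group] += 1
            if classify(text) == "offensive":
                flagged[group] += 1
    return {g: flagged[g] / innocuous[g] for g in innocuous if innocuous[g]}

# Example usage with a toy keyword-based stand-in for a real model:
toy_model = lambda text: "offensive" if "slur" in text else "ok"
data = [
    ("nice day today", "group_a", False),
    ("nice day today slur", "group_b", False),  # dialect term misread as a slur
    ("actual slur attack", "group_a", True),
]
print(false_positive_rates(data, toy_model))    # higher rate for group_b
```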

The case of ProKid in predictive policing 

According to Fair Trials, the Dutch police have used an automated risk assessment tool called ProKid since 2011, which purports to assess the risk of future criminality of children and young people from 12 to 23 years old. ProKid is a statistical risk assessment tool used by police to assess the risk of young children and adolescents being involved in “future violent and property offending”, based on data variables that have been shown to be associated with such behaviour. More particularly, data gathered from 31,769 children (20,141 boys and 11,628 girls) between 12 and 18 years of age, and later up to 23 years, was used to train the algorithm. The outcome was that the risk of one-third of the 2,444 children was assessed as red, orange, or yellow, while only 1,542 of the 2,444 assessments were deemed correct. In reality, ProKid was not actually designed to predict the likelihood of criminality, but to predict the likelihood of a child being registered in a police system in relation to a crime.

“If a child has been subject to a ProKid risk assessment, it results in police registering them on their systems and monitoring them, then referring them and their families to youth care services and child abuse protection services”. 
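Fair Trials does not publish ProKid's internals, so the following is only a schematic, invented illustration of how a threshold-based risk tool of this general kind might turn a handful of registered variables into a red/orange/yellow/white band; the variables, weights, and thresholds are all hypothetical and are not ProKid's actual logic.

```python
# Purely schematic, invented illustration of threshold-based risk banding.
# Variables, weights, and thresholds are hypothetical, not ProKid's.

def risk_band(prior_registrations: int, victim_registrations: int,
              address_incidents: int) -> str:
    # Weighted sum of registered variables said to be "associated" with offending.
    score = (2.0 * prior_registrations
             + 1.0 * victim_registrations
             + 0.5 * address_incidents)
    if score >= 6:
        return "red"
    if score >= 4:
        return "orange"
    if score >= 2:
        return "yellow"
    return "white"

# A child can be banded on registrations alone, before any offence is proven.
print(risk_band(prior_registrations=1, victim_registrations=1, address_incidents=2))  # "orange"
```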

The case of SyRI in the digital welfare state 

In 2014, the Dutch government introduced an AI system called SyRI (for “system risk indication”), used when a government agency suspects fraud (with benefits, allowances, or taxes) in a specific neighbourhood. Despite major objections from the Dutch Data Protection Authority and the Council of State, SyRI was implemented. The municipality has no obligation to inform the citizens of a neighbourhood that they are being investigated, according to Ronald Huissen from the Platform Bescherming Burgerrechten (Platform for Civil Rights Protection). The agency that requested the analysis has to investigate whether an actual case of fraud took place for the citizens who are flagged because of an unlikely combination of data. The Ministry of Social Affairs and Employment examines the data for false positives, which are not handed over to the agency that requested the analysis. As a result, due to this lack of transparency, citizens cannot know how, on the basis of what data, and why SyRI decided to flag them.

On 5 February 2020, the Dutch court of The Hague ordered the immediate halt of SyRI because it violates Article 8 of the European Convention on Human Rights (ECHR), which protects the right to respect for private and family life. Article 8 requires that any legislation strikes a “fair balance” between societal interests and any interference with the private life of citizens.

 

3. Transferring the human rights framework in the AI era

AI has created new forms of oppression and disproportionately affects the most powerless and vulnerable, as the cases analysed above show. There are several lenses through which we can examine AI; however, the use of international human rights law and its well-developed standards can counteract the power differences.

More particularly, human rights law provides the proper framework for addressing the challenges posed by AI systems, and it can serve as a basis for discussions on formulating an AI regulatory framework. Moreover, human rights norms are universal and binding, and they oblige not only states but also companies to respect human rights and comply with the standards they set. Additionally, they provide for remedies in case of violation and articulate how human rights law applies to changing circumstances, including technological developments. This is why the transfer of the human rights framework in the AI era is needed.

At this point, I would like to clarify that the term ‘transfer’ is used because the human rights framework already exists: hundreds of international treaties, national constitutions, laws, and so on cover various fields in different ways. As such, what is needed is for the rules of law and principles enshrined in them to be mirrored in AI regulatory frameworks. With the term ‘transfer’, I also mean respect for the human rights framework as an abstract political statement in decision-making about the development and deployment of AI.

To sum up, taking into consideration the aforementioned cases, the only thing that is certain to me is that substantive equality efforts are required in order to overcome the impact of AI systems on human rights. This means that efforts aimed at both equality of results and equality of opportunities are needed. Moreover, legally binding texts regulating AI, in both the public and the private sector, are of great importance as well. Last but not least, a human rights impact assessment and a control mechanism that can be activated at all stages of the AI process, systematically investigating and overseeing it, would be very helpful for the mitigation of bias.

 
