
The Role of Impact Assessments for the Creation of Trustworthy AI – Elizabeth Quinn

Elizabeth Quinn

Elizabeth Quinn is a graduate of International Human Rights Law (LL.M) from Lund University, Sweden, and of Law (LL.B) from Trinity College Dublin, Ireland. She has worked as a Blue Book Trainee at the European Commission in the area of fundamental rights policy. Her main interest is in business and human rights, and in digital rights in particular.

https://www.linkedin.com/in/elizabeth-quinn-083030206/


Artificial Intelligence (AI) poses both significant opportunities and threats to human rights. Systems that can analyse, predict and nudge behaviour, and make decisions, may be used by both public and private actors in a range of areas affecting our lives. Getting the balance of risks and opportunities right is key to maintaining public confidence and trust in AI.

This blog post will argue that ex ante human rights impact assessments (HRIAs) of AI systems could be one tool for creating and building public trust. It will do so by examining impact assessments within the broader trend towards a new governance framework in which certain responsibilities are delegated to the private sector.

 

Introduction 

Impact assessment as a regulatory tool has historical roots in environmental impact assessments and has expanded into other areas such as privacy, data protection and human rights. In particular, HRIAs are identified as a tool for conducting due diligence under the UN Guiding Principles on Business and Human Rights (UNGPs). Within the logic of the UNGPs, conducting impact assessments is an ongoing process that aims to ‘identify, prevent, mitigate and account for how companies address their impacts on human rights.’ This process should ‘involve meaningful consultation with potentially affected groups and other relevant stakeholders’, taking into account the size and nature of the company’s operations.

 

AI is a broad field that includes machine learning. Machine learning has been described as computer algorithms that learn and improve their performance on a task over time. This requires the detection and prediction of patterns, and decisions are made on the basis of the patterns that are formed. As AI systems may be entrusted with important decisions that can have detrimental consequences for people, public trust should not be taken for granted.

 

Business and human rights has focused on certain industries, generally those with direct or indirect human rights impacts in their supply chains. The analysis developed for these industries does not easily transfer to the technology sector. This is reflected in the establishment of the UN’s B-Tech project, which aims to provide a roadmap for applying the UNGPs in the technology sector. The project shows that the sector can learn from other industries but also has distinct features, including the development and deployment of AI systems.

 

The New Governance Approach to Regulatory Frameworks

The move towards considering the role of impact assessments for AI aligns with the new governance approach to regulation, in which certain responsibilities within legislative frameworks are delegated to the private sector. This has been argued to prompt an ongoing conversation between companies and regulators. In the context of AI, collaborative governance is needed in part because innovation in the field generally happens at a fast pace and within the private sector.

 

As the social and technical factors to be examined are inherently intertwined, it has also been argued that HRIAs for AI should analyse a sociotechnical system. Huq has argued that ‘without taking a systemic perspective that attends to the suite of human design decisions [associated with the use of AI], it will often not be feasible to identify how or why inaccuracies or systemic biases occur.’ An HRIA would allow such a dynamic approach to be operationalised.

 

Two fundamental elements of conducting HRIAs are the early engagement of all stakeholders who may be affected, and transparency about how systems work. These elements are essential to the effectiveness and legitimacy of the new governance approach, as they allow public feedback to correct aberrant systems and help build public trust. What counts as adequate stakeholder engagement and transparency is a matter of debate and will evolve as assessments are conducted and published. An obligation to review the implementation of the assessment could create incentives for companies to take assessments seriously rather than treating them as a box-ticking exercise; such a review mechanism could also help build public trust.

 

The new governance framework has been operationalised in regulation at EU level. Data protection impact assessments (DPIAs) are required under the General Data Protection Regulation (GDPR) when data processing is likely to result in a high risk to individuals’ rights and freedoms. A DPIA, unlike an HRIA, requires neither stakeholder engagement nor publication of the assessment. It has been argued that this failure to mandate transparency and stakeholder engagement hinders the public’s ability to provide feedback, an essential component of a functioning collaborative governance regime.

 

The New Governance Approach in the Proposed EU AI Act 

The approach is also reflected in the proposed EU AI Act. Providers (developers) of high-risk AI systems are required to undertake ex ante assessments, including establishing, implementing, documenting and maintaining a risk management system. This is a continuous process that must be conducted throughout the life cycle of the system. It has been argued that these obligations do not comprehensively take account of fundamental rights: the proposal does not require providers to engage with stakeholders or to consider the impact a system may have on particular vulnerable groups.

 

Civil society has also called for the proposed EU AI Act to oblige users (deployers) of high-risk AI systems to conduct a fundamental rights impact assessment. The logic of placing responsibility for the assessment on users (deployers) is that the human rights risks posed by AI systems are context-dependent. Such a requirement could focus attention on the impacts a system may have on a particular group of people, include those people in the process and, if deployers are required to publish their assessments, ensure transparency. All of these elements are seen as fundamental to building public trust and could also work as a tool for ensuring accountability.

 

One acknowledged risk of creating ex ante HRIA requirements for AI systems is that they may be costly and serve to consolidate the market position of dominant technology companies. However, it has been argued that economics needs to be balanced against the need to create public trust in AI systems so that AI can be taken up in society. Support for smaller companies could include guidance and partnerships on conducting HRIAs.

 

Case Study: Facebook’s (Meta) Human Rights Impact Assessment in Myanmar 

There are some limited examples of HRIAs being conducted by technology companies. Facebook engaged in the HRIA process in relation to its role in Myanmar, with the assessment conducted by the NGO Business for Social Responsibility (BSR). In 2018, the Independent International Fact-Finding Mission on Myanmar concluded that there was no doubt that the prevalence of hate speech on Facebook contributed to increased tension and to calls for violence against the Rohingya minority in the country.

 

The impact assessment was an ex post analysis of Facebook’s role in Myanmar. It does not discuss Facebook’s newsfeed algorithm in detail or how people were targeted on the platform, and whether AI systems fuelled the dissemination of hate speech is not questioned or analysed. The assessment also focuses on the platform being used by bad actors, a focus that shifts blame onto people using the platform and away from an examination of how the company’s algorithms may have worked to amplify hate speech.

 

The lack of examination of the algorithmic curation of Facebook’s newsfeed has been criticised on the ground that the proposed recommendations will not lead to systemic change. Facebook’s response to the report was to state that it was acting to mitigate human rights impacts in Myanmar; the actions taken included adopting a human rights policy and advocating for law reform in Myanmar. Facebook acknowledged that systemic changes are needed in the country to protect human rights, but did not question whether systemic change is also needed in the company’s own AI systems, which may have had spill-over effects.

 

This example clearly shows that HRIAs are best performed on an ex ante basis. It also demonstrates that an external organisation conducting an assessment needs access to information about, and transparency on, the AI systems in question. Impactful assessments will require multi-disciplinary teams working together to ensure that a holistic approach is taken. Impact assessment methodologies are being developed in this way through case studies, and those case studies, and the challenges that arise from them, will provide practical guidance.

 

Conclusion 

HRIAs are not the only tool that can be used to enhance the trustworthiness of AI, and they will need to be complemented by transparent accountability mechanisms. Their advantage is that they can identify potential risks on an ex ante basis, which, in theory, should allow mitigation measures to be put in place to keep those risks under control. These assessments should not only address human rights risks to individuals but also examine design choices and create a dialogue between developers, users and end users of the systems in question. This dialogue has the potential to become an ongoing conversation, building public trust and potentially shaping norms and expectations of what counts as trustworthy AI over time.
