
AI Applications and Liability Issues: The Case of Virtual Influencers – Anna Moraiti

Anna Moraiti

Anna Moraiti is a PhD candidate at the University of Luxembourg. Prior to her arrival in Luxembourg, she studied Law at the National and Kapodistrian University of Athens. Her main fields of research are European Criminal Law, Philosophy of Criminal Law and the Intersection of Criminal Law with New Technologies.

https://www.linkedin.com/in/anna-moraiti-50459b224/



 

In Her, Spike Jonze's 2013 film, the main character (played by Joaquin Phoenix) is a depressed middle-aged man who falls in love with his artificially intelligent virtual assistant. The film convinces us that such a scenario could be possible in the relatively near future, even though the assistant is personified only through her voice. A few clicks on Instagram these days suggest that this future lies closer than expected.

Image: Instagram post by Imma

Imma is a virtual influencer from Japan with more than 350,000 followers on Instagram. She is interested in Japanese culture, film, and art. She even got a glimpse of the real world while posing for Ikea during the Covid-19 pandemic, trying out common household activities to pass the time without neglecting her physical and mental well-being. Miquela, with more than 3 million followers, is by her own (misleading) description a ’19-year-old robot living in LA’. Like Imma, she is in fact a computer-generated image (CGI) created by Brud, an American startup that builds social media personas. These are only two examples out of the 35 verified virtual influencers that have been operating on Instagram for some years now.

 

While the phenomenon itself has not yet received the attention it merits, a scroll through their posted photos is enough to discern the ways in which they can outperform human influencers: they can easily adapt their appearance to illustrate their interests and the values of the brands they promote – indeed, any apparent imperfections are tailored to form part of an alluring persona – they never age, and they do not fall ill or die. Of course, depending on the content they promote, their activities on social media platforms can be anything from beneficial, indeed inspiring, to outright harmful. Their role in creating effective social media strategies suggests that they will be more broadly deployed in the future. Their scope of autonomous action currently amounts to zero (that is, it is humans who create their posts and shape their personalities), but this may not remain the case for long. In a 2019 study conducted by the social content company Fullscreen among young social media users, 23% of the respondents stated that they would describe virtual influencers as ‘authentic’. Even more alarmingly, 42% of them were found to have followed a virtual influencer without realising it was computer-generated. So the question posed is: if these influencers were to cause harm through their own actions, who should bear the cost and who should be blamed?

 

This question relates to the major ethical and legal debate taking place over a potential civil and criminal liability regime for certain AI systems (AIs), as risk control in this area seems increasingly problematic. In the event that fault-based (negligence) liability and strict liability rules prove insufficient (due to the need to establish the breach of a duty in the first case and, in any event, due to the problem of individual causation), the attribution of legal personhood to certain such systems presents itself as a possible means of better managing the risks inherent in their deployment and avoiding a potential ‘responsibility gap’. Harmful behaviour that cannot be traced back to any human agent involved (such as the user, the owner, or the programmer of the system) could occur on numerous occasions. Conferring legal personhood on an AI system could then be conceptualised as a symbol of the joint efforts of the parties involved in its production and distribution to the market.

 

In 2017, the European Parliament issued a resolution with recommendations to the Commission on Civil Law Rules on Robotics, suggesting that a separate legal status could be created at least for the most sophisticated autonomous robots. Since then, the concept of an ‘electronic personhood’ has been examined from different angles. Some maintain that a human can always be identified as responsible for a machine’s output; yet even if we were to agree that this is mostly the case for now, it is likely to change within a few years or decades of AI development. The recognition of a new legal person and the granting of certain rights—most importantly, the right to property—seems feasible and effective if legal personhood is seen as a cluster property, meaning that a legal person qualifies as such because it happens to unite a number of incidents. From the perspective of civil law, a distinct personality would protect the legal interests of programmers and users, as autonomous agents could then cover any damage caused to third parties out of their own assets.

 

What may seem possible for the purposes of civil law is much more debatable when criminal law sanctions enter the picture. A potential criminal liability regime for AI agents is more rigorously opposed in legal theory, as personhood for the purposes of criminal law has historically been linked to the human capacity for reflection, more specifically the capacity to pursue proper goals based on conscious beliefs. At the outset, the proposition seems incompatible with the principle of culpability. When it comes to criminal law, some authors attempt to draw an analogy between AIs and corporations, the only existing example of criminal liability being extended to artificial agents. Sanctions that have been suggested include the confiscation or destruction of harmful AIs, or the imposition of fines on assets owned by them. Drawing from corporate criminal liability, the doctrinal tool that seems pertinent is respondeat superior, which allows the mental states of employees to be imputed to the corporation, provided that they were acting within the scope of their employment. While this analogy is certainly interesting to explore, it is much more difficult to identify culpable human agents in the case of AI agents than within a corporate structure, especially because multiple individuals and/or corporations may be involved at different stages of their production; this is the case, for instance, with open-source software. An AI agent exhibiting a rough equivalent of mens rea remains a speculative scenario; directly criminalising such agents would require a major revision of the general part of the criminal law and is therefore not a solution that any European state can easily accept. As argued by Abbott and Sarch, a moderate response, such as the establishment of an expanded civil and/or criminal liability regime for certain individuals behind their design and operation, would be more reasonable from an overall legal point of view. Additionally, an insurance scheme could be specifically designed to provide compensation to victims.

 

Simply put, creating a new legal person—especially for the purposes of criminal law—is no simple task. In the case of virtual influencers, the prospect of granting them legal personality for compensation purposes runs up against the practical argument that they are fully controlled by their designers or operators and therefore do not qualify as autonomous software agents. This feature is their greatest commercial benefit right now, as the brands that choose them wish to avoid employing their human counterparts, who may act impulsively and easily generate public controversy with an ill-advised act or statement. However, technological advances related to the autonomy of AI applications could easily upend this situation in the future. Companies may be tempted to launch AI-powered virtual influencers onto the market, that is, autonomous creations that would resonate with their audience on a deeper level and develop—for better or for worse—as they interact with their environment. Betaworks, an American startup that invests in media businesses, has expressed its intention to invest specifically in intelligent virtual creators. According to Danika Laszuk, the general director of Betaworks’ startup bootcamp, this prospect marks the future of influencers.

 

The capacity of virtual influencers to shape their followers’ overall behaviour and concrete acts for the worse should not be underestimated. An intelligent influencer of this kind could, if let loose, easily commit Internet-enabled offences such as ‘public calls for criminal acts’ (§111 of the German Criminal Code), ‘incitement to hatred’ (§130) or ‘defamation’ (§186). Indeed, its actions, shaped by unfortunate interactions with the public, could lead to the commission of crimes. This would come as no surprise: there is the precedent of Tay, an AI chatbot launched by Microsoft on Twitter in 2016, which was deactivated within hours for spreading racist remarks that it had picked up while interacting with users. But the risk in this case is not comparable to the malfunction of a chatbot. Virtual influencers’ projected personality, statements, and appearance blur the line between reality and fiction. Considering the preventive measures described in the European Commission’s draft AI Act, an AI agent operating from such a position would probably qualify as a ‘high-risk AI system’ and could easily engage in prohibited practices if left unsupervised (see, notably, Articles 5b and 6 of the AI Act). Regulation will not fully prevent unpredictable failures, and an effective liability regime will soon become a necessity to address the harm caused. If autonomous software agents, such as the hypothesised AI-powered virtual influencers, are to thrive in the future, increased transparency (see Article 13 of the AI Act) as to their existence and the aims they pursue is the absolute minimum we should ask for.

 
