Lessons and consequences of the lack of regulation of artificial intelligence for women’s human rights


A visualization of artificial intelligence. Remko Koopman / Flickr


The global human rights community has awakened to the obvious risks that artificial intelligence (AI) poses to human rights. It seems that every day a new report is published on the widespread dangers of the digital world, especially for minority and excluded groups, and on new harms ranging from biometric surveillance to discrimination. Human rights experts now generally agree that, given the risks at stake, states need to regulate this technology.

In our research, we take this a step further. We argue that the absence of adequate regulation of artificial intelligence may in itself be a violation of human rights norms. This gap reflects a situation in which governments have abdicated their obligations to protect, fulfill and remedy.

Despite the gendered implications of how AI works and affects our lives, it remains largely under-analyzed from a women’s rights perspective, even though such a perspective provides a useful analytical lens for extracting lessons for other groups of rights holders.

Let us first look at the issue of gender stereotypes associated with voice-controlled devices like Alexa or Siri. These voice-based technologies promote a limiting stereotype of women: ‘she assists rather than directs, she pacifies rather than excites’. For example, a now-abandoned project called ‘Nadia’ used the voice of Australian actress Cate Blanchett to develop an AI-based assistant providing virtual support services to people living with disabilities.

By reinforcing such stereotypes, gendered AI risks violating Article 5 of the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), which requires States Parties to modify practices based on the “idea of the inferiority or the superiority of either of the sexes or on stereotyped roles for men and women.” ‘Nadia’ is a particularly interesting example because it was driven by the well-meaning goal of facilitating ‘accessible information and communication technologies’, as set out in the Convention on the Rights of Persons with Disabilities, yet it was designed in a way that arguably violated women’s rights as described in CEDAW.

Continuing our research, we look next at the issue of gender-based violence (GBV). For more than 10 years, GBV units of the Spanish police have been assisted by a computer-based system (VioGén), which assesses the level of protection and assistance that GBV victims need, based (among other factors) on the risk that the perpetrator will reoffend. However, many questions have arisen about the use of AI in the public sector and about who should be held accountable when AI goes wrong.

The system currently in place does not yet use an AI-based algorithm; that version is still being tested. Instead, it relies on an older risk-assessment method based on traditional statistics. In 2020, the Spanish government compensated the family of a woman who was murdered by a former partner after the Spanish police, relying on the non-AI system, denied her request to have him detained.

It is possible that the new AI-based model would have identified the risks better. But without adequate regulation of AI, how do governments decide when it is the right time to move to AI-based decisions? If an AI system protects victims better overall, how does one choose between prioritizing the rights of victims (necessarily women, by law in Spain) and the rights of perpetrators (mainly men) to, for example, a family life? If an AI model is in place and operates automatically, how can the victim or the perpetrator challenge its decision with information that the algorithm does not take into account? Finally, who is responsible (and how is this determined) if things go wrong?
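To make the contestability problem concrete, the sketch below contrasts a transparent, checklist-style risk score (the kind of approach traditional actuarial tools tend to resemble) with an opaque learned model. It is a minimal illustration only: the factor names, weights and functions are hypothetical and are not drawn from VioGén’s actual protocol.

```python
# Purely illustrative sketch. Factor names and weights are hypothetical and
# are NOT taken from VioGén or any real risk-assessment protocol.

CHECKLIST_WEIGHTS = {
    "prior_violence_reported": 3,
    "escalating_threats": 2,
    "access_to_weapons": 3,
    "recent_separation": 1,
}

def checklist_risk(case):
    """Transparent score: every factor's contribution is visible, so the
    assessment can be inspected and contested item by item."""
    contributions = {f: w for f, w in CHECKLIST_WEIGHTS.items() if case.get(f)}
    return sum(contributions.values()), contributions

def learned_model_risk(case):
    """Opaque score: only a number comes out, and information the model was
    never trained on cannot easily be raised against its prediction."""
    raise NotImplementedError("stand-in for a trained model's prediction")

case = {"prior_violence_reported": True, "escalating_threats": True}
score, reasons = checklist_risk(case)
print(score, reasons)
# 5 {'prior_violence_reported': 3, 'escalating_threats': 2}
```

The point is not the particular factors but the difference in reviewability: under the first approach a victim or perpetrator can point to a specific item in the assessment, while under the second they can only dispute an output.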

Finally, possibly one of the best-known examples of the abuse of AI technology is the use of deepfakes. A particular site (not identified here to avoid promoting it) generates pornographic images and videos by superimposing an uploaded image onto pre-filmed videos and images of women (and a small number of men) in its database. Pornographic attacks of this kind on people who are not public figures are more likely to target women than men. This is partly because deepfakes are an AI tool most often developed with women’s bodies in mind, and partly because a higher proportion of pornographic images involve women rather than men.

The question here is not whether society considers pornography acceptable. Rather, it is the unauthorized use of women’s images for financial and personal gain. Because the website mainly stores images of women’s bodies, it is primarily aimed at women. Should such blatant discrimination by a private entity, made possible only by AI-powered technology, be allowed? Would the answer be different if it damaged the reputations of politicians, among whom men are overrepresented around the world?

These three examples illustrate how a gendered lens can help us identify the inherent risks that AI poses to women, but we can imagine groups of rights holders such as ethnic and linguistic minorities and children facing similar challenges and being left unprotected by the law.

The current absence of adequate regulation – that is, a state’s failure to establish legally binding standards to protect human rights from the roll-out of AI systems – is in itself a violation of human rights. We believe that all UN review procedures should begin to ask countries whether their legal regimes are ready for these challenges and what they are doing about it.

It is possible that some of these human rights violations can be resolved within existing legal regimes. Many others, however, cannot. For example, it is difficult to attribute responsibility for errors made by machines that keep learning. If states lack an adequate legal regime to prevent and remedy these problems, they are abdicating their obligations under international human rights law. This would not be the first case recognized under human rights law in which a failure to regulate constitutes a violation of states’ positive obligations to protect.

Change could, in theory, be achieved through an international treaty. Some regional bodies are already trying to address the issue – for example, a proposal for an EU regulation on AI is currently under way. But whether the approach is international, regional, national or all of the above, there is ultimately no fair or legitimate case for ‘wait and see’.
