AI is a Girl’s Best Friend? A Gender-Based Analysis of the Legal and Ethical Challenges of AI Systems

Nov 2, 2020 | Women Input in Digitalisation

Never forget that it will only take a political, economic or religious crisis for women’s rights to be called into question. These rights are never acquired for good. You will have to stay vigilant your whole life. ~ Simone de Beauvoir

This warning by Simone de Beauvoir still hits home so many years after the writing of The Second Sex, now in the era of AI and pandemics. Yes, the global gender gap index is still improving according to the World Economic Forum’s reports, but our time poses new challenges of its own, not to mention the old ghosts of the past still hanging around. This is especially evident in the context of the COVID-19 pandemic: in a single year we witnessed a sharp increase in violence against girls and women, as well as a disproportionate economic toll on them.

But let’s go back a little. AI technologies have been at the centre of policymakers’ attention, at national and international level, for quite a while. In 2019 the appointment of the new President of the European Commission, Ursula von der Leyen, became memorable on two counts. She was the first woman in the position, breaking another layer of the glass ceiling, but she also put a remarkably strong emphasis on digital technologies, AI in particular, and their transformative power over European society. She went even further by promising a ‘coordinated European approach on the human and ethical implications of Artificial Intelligence’ within her first 100 days in office. This was more or less delivered through the White Paper on Artificial Intelligence at the beginning of 2020, which was met with mixed reactions from different stakeholders. It is important to note that the White Paper is part of a broader set of political objectives and needs to be considered together with other policy documents such as the Data Strategy, the Industrial Strategy for Europe and so on. We can go as far as identifying AI, its utilisation and regulation, as a focal point around which the rest of the Commission’s policy actions are built. Even in the EU Gender Equality Strategy 2020–2025 we see the link between the development and deployment of AI and gender equality, as a ‘key driver of economic progress’. What is notable, however, is the dual nature of AI with respect to human rights, and women’s rights in particular.

This is recognised in the Gender Equality Strategy itself, but it is already evident in practice too. We have been battling the underrepresentation of women in tech for years. While there is some improvement, the numbers are far from impressive. When we talk about women in tech, however, we should remember this is a very broad field: while women might be underrepresented in general, the gap is even wider when we look at female participation in developing AI technologies. The Global Gender Gap Report for 2020 indicates that women make up around 26% of workers in data and AI globally. Even more worrisome is the data showing that women account for as little as 12% of AI researchers worldwide.

What does this mean? To quote Meredith Broussard, describing the dire history (and present state) of AI: ‘…we have a small, elite group of men who tend to overestimate their mathematical abilities, who have systematically excluded women and people of colour in favour of machines for centuries,…who have unused piles of government money sitting around, and who have adopted the ideological rhetoric of far-right libertarian anarcho-capitalists. What could possibly go wrong?’ Well, as a matter of fact, quite a few things.

First, it is important to point out that there are numerous ways AI can negatively affect women. The issues are usually related to discrimination, but they may very well lead to even more severe consequences, such as undermining women’s right to health. An AI model is only as good as its input data, and if there is bias in the data, it will carry over into the model. To illustrate: a biased data set that disregards heart disease symptoms in women, on the grounds that men are statistically more susceptible to such conditions, will produce a diagnostic support system that fails to recognise heart conditions in female patients at a sufficient rate, endangering their lives.
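To make the mechanism concrete, here is a minimal sketch in Python, using entirely synthetic, hypothetical data: the underlying condition is equally common in both sexes, but half of the female cases are missing from the recorded labels, and the trained model duly assigns lower risk to women with identical symptoms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
is_female = rng.integers(0, 2, n)            # 0 = male, 1 = female (synthetic)
chest_pain = rng.integers(0, 2, n)           # one simplified symptom

# The underlying condition is equally likely for both sexes...
has_disease = (chest_pain == 1) & (rng.random(n) < 0.6)
# ...but the *recorded* label misses half of the female cases (biased data).
recorded = has_disease & ~((is_female == 1) & (rng.random(n) < 0.5))

X = np.column_stack([is_female, chest_pain])
model = LogisticRegression().fit(X, recorded)

# Identical symptoms, different predicted risk: the model learned the bias.
print(model.predict_proba([[0, 1]])[0, 1])   # man with chest pain
print(model.predict_proba([[1, 1]])[0, 1])   # woman with chest pain
```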

Even if the data is not the problem, it is possible for an AI model to discriminate against women based on protected attributes that have deliberately been dropped from the data but have nevertheless still played a role in the discriminatory outcome. To give an example: gender, as a protected attribute, might not be included as a feature in the training data set of an AI resumé-screening system for software development jobs. Yet the model could determine that men are preferred, based on the larger historical number of male resumés, and downgrade resumés containing the word ‘woman’ or ‘women’, for instance in the name of an educational establishment a candidate attended. To put it as simply as possible: you do not teach your AI what ‘gender’ is or whether it matters, but it determines that it does matter, based on the higher success rate of male candidates. This was a real case with an AI recruiting tool used by Amazon.
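A hedged sketch of this proxy effect, again on invented data (the ‘women’s college’ flag is a hypothetical stand-in for any feature correlated with gender): the gender column is excluded from the training features, yet the proxy picks up a negative weight because the historical hiring decisions it correlates with favoured men.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
is_woman = rng.random(n) < 0.2                      # historically few women applied
womens_college = is_woman & (rng.random(n) < 0.4)   # proxy correlated with gender
skill = rng.random(n)                               # the only legitimate signal

# Historical hiring decisions favoured men regardless of skill.
hired = (skill > 0.5) & ~(is_woman & (rng.random(n) < 0.6))

X = np.column_stack([skill, womens_college])        # gender itself is excluded
model = LogisticRegression().fit(X, hired)

# The proxy feature ends up with a negative coefficient: the bias survived
# even though the protected attribute was dropped from the data.
print(model.coef_)
```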

From a legal perspective, another problem appears when a victim of discrimination decides to protect their rights and seek a remedy. The traditional legal concepts of liability and accountability are not always flexible enough to provide victims with a satisfactory solution, because it is increasingly difficult to determine who is responsible, and thus liable, for discrimination when the one discriminating is an AI system. Indeed, the system could even be an off-the-shelf product bought by a company that never participated in its development.

Are we screwed then? Not necessarily. AI does not create issues for women that did not exist before; it merely reflects and amplifies the stereotypes and flaws our society harbours towards women. Why did female digital assistants like Siri or Alexa use to reply to sexual harassment with phrases such as ‘I’d blush if I could’? For me and every other woman out there the answer is clear, and the issue is not AI. After all, in the end it is nothing more than a bunch of 1s and 0s.

We should use AI to our advantage. Its benefits are too substantial to be disregarded because of problems we can work out. Monitoring of high-risk AI systems, certification and trustworthiness by design are just some of the solutions. There are toolkits designed to test systems for bias and fix the problem before it can cause any harm in the real world, as sketched below. Putting technical measures aside, there are plenty of organisational ones we can adopt to improve the situation. The number one priority is involving more women in researching and developing AI. Mentorship, support and increasing the visibility of women who are already in the field are essential, but we should also invest in young girls and break the stigma in education. Finally, we need to recognise the diversity of women as a group and invest more effort where it is needed. A good example is women of colour, who are often subject to a double bias. AI is said to be here to transform the future, and we have the power to make sure that future is the one we want for ourselves: equal and full of opportunities.
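As an illustration of what such toolkits (real examples include IBM’s AI Fairness 360 and Microsoft’s Fairlearn) automate, here is a hand-rolled, hypothetical stand-in rather than any particular library’s API: one of the most basic checks is comparing a model’s positive-outcome rate across gender groups before the system is deployed.

```python
import numpy as np

def selection_rates(y_pred, group):
    """Positive-prediction rate for each value of a sensitive attribute."""
    return {g: float(y_pred[group == g].mean()) for g in np.unique(group)}

# Toy model decisions and the gender of each applicant (invented data).
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
gender = np.array(["F", "F", "M", "M", "M", "F", "M", "F"])

rates = selection_rates(y_pred, gender)
print(rates)                                    # {'F': 0.25, 'M': 0.75}
print("disparity:", max(rates.values()) - min(rates.values()))
```

A large gap between the groups’ rates is a red flag worth investigating before the system ever reaches a real hiring pipeline.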

Katerina Yordanova, Researcher in law at KU Leuven, Centre for IT and IP Law (CITIP).

The views expressed in this paper are solely the author’s own.
