AI and Gender Equality
The term AI has recently become a buzzword; a marketing eye-catcher that jumps out at you from everywhere these days and is, frankly, tiring. I googled synonyms for AI and found one suggestion: the development of ‘thinking’ computer systems, or DoCS – but I am not sure that abbreviation would have the impact AI does. So, let’s stick with AI. After all, we are creatures of habit, and habit will play a particular role in this article.
This article will look at a few risks associated with AI and some potential solutions. AI technology is already transforming the labor market, changing both the types of jobs and their number. Undeniably, automation hugely affects the employment structure and dictates whether existing jobs disappear or new jobs emerge. According to a UNESCO report, this increasing adoption of technology is driving the development of new jobs. This is where AI has the potential either to reduce gender bias or to reinforce it and hinder DEI goals.
According to the EU, in order to be considered ethical, any AI technology must ensure respect for the fundamental rights of EU citizens. The EU wants to avoid the potential harm the misuse of AI can cause its citizens and find solutions to the major ethical concerns (bias, discrimination, algorithmic opacity, lack of transparency, privacy issues, technological determinism, etc.). Many could say that automation is likely to affect both female-dominated and male-dominated occupations, which is true. However, women are more likely to work in occupations that involve a high degree of routine and repetitive tasks (e.g., clerical support work or retail jobs) (Lawrence, 2018; Schmidpeter and Winter-Ebmer, 2018; Brussevich et al., 2019).
The Habit of Data
The data fed into algorithms determines how they function, and thus gender bias can be embedded in AI by those who design the systems. Whatever data AI systems are provided with or consume, they will use it, pick up on its patterns, and often even amplify them. One recent problem with consumed data was that AI was trained as a unimodal system, meaning it was trained for one very specific task (such as processing images), which happened to be one of the underlying causes of AI being biased. Only recently have many of these algorithms been trained as so-called multimodal systems. While such systems were previously used mainly in research, they are becoming more commercial. Just as humans process information from various sources, multimodally trained algorithms draw on multiple sources, so no context is lost when processing data, allowing them to integrate different modalities and synthesize them.

While the new approach is better, it is not ideal, as it often relies on data sets drawn mainly from open-source frameworks, which themselves exhibit biases. Another unaddressed challenge, published by Stanford’s Institute for Human-Centered Artificial Intelligence, is that multimodal models can produce higher-quality, machine-generated content that is easier to personalize for misuse. So it is utopian and unrealistic to think that multimodal training systems alone can give us unbiased technologies, as even we human beings are not free of bias. However, our biases and habits can be lessened by providing diverse data and information. One advantage of AI is that it uncovers and mirrors back to us some of the biases that humans hold. Furthermore, new algorithmic accountability policies stress prioritizing public participation to develop more democratic and equal systems. Just recently, Amsterdam and Helsinki launched AI registries that detail how each city government uses algorithms to deliver services.
The registries also offer citizens an opportunity to give feedback on the algorithms and to ensure that these AI systems work in favor of society rather than against it. This is hopefully one of many steps toward using AI to achieve gender equality.
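The bias-amplification dynamic described above can be made concrete with a minimal sketch. The data below is entirely hypothetical, and the "model" is deliberately naive (it just learns the majority outcome per group), but it shows the mechanism: a system trained on skewed historical records turns that skew into a rule.

```python
# Toy illustration (hypothetical data): a naive model trained on skewed
# historical hiring records reproduces the skew as a decision rule.
from collections import Counter

# Hypothetical historical records: (group, hired?)
records = ([("M", True)] * 80 + [("M", False)] * 20
           + [("F", True)] * 30 + [("F", False)] * 70)

def train_majority_model(data):
    """For each group, learn the most common outcome in the training data."""
    outcomes = {}
    for group, hired in data:
        outcomes.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train_majority_model(records)
print(model)  # {'M': True, 'F': False} -- the historical bias, learned as a rule
```

Real machine-learning models are far more sophisticated than this majority rule, but the underlying point is the same: patterns present in the training data, including discriminatory ones, are what the system learns.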
The Habit of Education
It cannot be stressed enough how important it is to innovate within education. Education is a driver of a progressive society, but not when it lags behind. The number of women entering STEM fields is increasing, but that does not mean we are anywhere close to closing the gender divide in digital skills. According to the World Economic Forum, women represent less than 15% of ICT professionals across the G20 countries, and this gender and skills gap is widening every year. The European Institute for Gender Equality reports that the gender gap in the AI workforce widens with career length: women with more than 10 years of work experience in AI represent 12% of all professionals in the industry, compared with 20% among those with 0–2 years’ experience.
So, while more women are entering computer science roles, their numbers drop radically over time because female workers lack support, face discrimination, or hit the glass ceiling, which ultimately pushes them into another field. In other words, simply increasing the number of women entering the technology field is not enough. This is not to say that women who enter engineering must stay in it for 20 years; rather, we need to empower women who decide to change careers later in life to retrain in data science or computing and enter those fields at a later stage. Unfortunately, many fields treat their human capital as something elitist and exclusive, refusing to bring in and train newcomers who would make society more polyvalent. Efforts to improve education should therefore come from all sides: individuals, stakeholders, government, and the private sector alike.
Finally, there is no doubt that some AI algorithms reinforce gender biases while others uncover them. However, AI itself is not the one to blame. It merely mirrors the issues of our society, and the fundamental work and improvements are still to be done among us, humans.
Technology and Human Rights Consultant