
Diversity and AI: More Human, Less Machine

How must we shape Artificial Intelligence (AI) so that it reduces inequalities rather than reproducing them? Kinga Schumacher travelled to Barcelona for the German government’s lecture programme, where she spoke about the opportunities and risks of AI as part of the “Women in Tech” lecture series.

Bellaterra, a small community not far from Barcelona. The chairs in the conference room at the Artificial Intelligence Research Institute of the Spanish National Research Council are still empty; the smell of coffee wafts through the corridors. In one hour, Kinga Schumacher will give a keynote speech here on fair, non-discriminatory AI technologies, so-called “diversity-aware AI”. This is the last stop on her journey. During the past few days, she has discussed the topic of AI intensively with representatives from the fields of medicine, politics, education, climate research, and the economy. “I am taking many ideas and new perspectives back with me,” she says, “and that is exactly what is important for our work: different points of view.”

Making the unconscious conscious

For three years, Kinga Schumacher has been doing research in the Berlin laboratory of the German Research Center for Artificial Intelligence (Deutsches Forschungszentrum für Künstliche Intelligenz / DFKI) on “diversity-aware AI”, in other words on AI systems which take the diversity of society into consideration and, instead of reproducing inequalities, reduce them. To do so, Kinga Schumacher and her team develop guidelines, for example questions which scientists and developers should ask themselves throughout the entire design and development process: Which dimensions of diversity play a role in my project: gender, age, origin, culture? How can these criteria be taken into consideration? Which data can I select, and in what way?

“This is about making ourselves aware of unconscious assumptions and prejudices, documenting them, discussing them with others and thus avoiding biases, i.e. prejudices and distortions,” she states. Different stakeholders and potential users should be included in the design and development process.

Diverse teams, diverse AI

She remembers a doctoral student who collected mobility data for his dissertation, based on the assumption that people carry their smartphone in their trouser pocket. “If he had written down his assumptions and consciously included women in his research, he would probably have realised that not all people transport their phone in their trouser pocket: women in particular hardly do so at all. An exchange of ideas in a diverse team would have highlighted this issue at an early stage.”

Diverse teams are the basis for AI systems that work equally well for everyone. For this reason, diversity also plays a role at the DFKI, one of the world’s largest not-for-profit research institutes for AI. “Our teams are both international and interdisciplinary; women are now represented more frequently than they were a few years ago.”

“Good morning, Ms Schumacher, good morning, gentlemen!”

Kinga Schumacher’s own path into AI research was by no means predetermined. “As a child, I wanted to become a marine biologist,” she says and laughs. “When I took my first computer course at the age of ten, I thought, marine biologist is a beautiful dream, but my home country, Hungary, has no access to the sea. So this could be difficult – but computer science is also fun.” She had already gained a place at a university in Hungary when she went to Germany in the mid-1990s to learn a third foreign language. There, she not only learned the language but also met the man who is now her partner – and stayed.

Women join other women. There are fewer reservations.

It was only in Germany that she became aware that her career aspirations were unusual for a woman at that time. “In Hungary, it was normal for women to study STEM¹ subjects. In Germany, my decision met with astonishment.” At the beginning of her studies at the University of Mannheim, there were only three women in her year of the programme. “After just a few months, there were only two of us, and from the second semester onwards I was on my own,” she remembers. “So then I heard, ‘Good morning, Ms Schumacher, good morning, gentlemen.’ Simply staying away, like my fellow students, was therefore not an option – or only for a very good reason.” Nevertheless, she never felt that her gender caused her any disadvantages.

Did Kinga Schumacher have role models? People who encouraged her on her path in a male-dominated profession? “At that time, I looked around a bit and then simply followed my interests. My parents supported me, but I didn’t have any role models because there simply weren’t any,” she says. Thankfully, the situation is different today: quotas, funding programmes and initiatives to attract women to technical professions are starting to take effect.

Meanwhile, at the DFKI, Kinga Schumacher heads a team which consists mainly of women. “Women join other women,” she says. “There are fewer reservations.” She advises young women and girls who are considering a profession in the STEM fields to pay less attention to the opinions of others and more to their own gut feeling. “Often, we know intuitively what is right for us and really interests us. And exactly this intuition is something which no AI system can replace.”

Human feelings: the last bastion?

But this is where the challenge lies. Developers are working to make AI systems “more human” and to imitate emotional intelligence, which can create emotional dependencies and influence our empathy. People falling in love with their AI chatbots have long been a reality. Kinga Schumacher views this development critically. “It is important that AI systems clearly disclose what they are: a technology. The difference between humans and machines should remain visible,” she says. These risks were also raised again and again at the events during the last few days of her lecture tour. The participants agreed: it is important to strengthen society’s AI competences and skills.

“Today, many people equate AI with ChatGPT, i.e. with AI systems that produce new content. This so-called generative AI has made the topic accessible to the general public, especially through its easy-to-use tools. Nevertheless, there are numerous non-generative AI systems which receive less attention in public debates but have been doing meaningful work for a long time: image recognition, speech correction programmes, weather forecasts, AI-supported diagnoses. ‘Predictive policing’, for example, is an AI-based method which uses data analyses to predict potential criminal activities and make police operations more efficient.”

“The great risk is not AI itself”

To teach more people how AI works, the DFKI developed the ‘AI Campus’ together with other research institutes and partners. This is a free learning platform which provides online courses, videos and podcasts for different target groups, and Kinga Schumacher was involved in its implementation. “You don’t just learn the basics there: how does AI work? What methods are available? How is AI applied in different professional groups? You also learn, for example, to develop a chatbot yourself,” she explains. Ethical issues are also discussed.

The great risk is not AI itself, but rather humans.

“To use the opportunities provided by AI and minimise the risks, we must learn to take a critical look at these technologies and apply them responsibly. While AI has many positive effects, it is up to us to maintain the balance between technological progress and human values. The great risk is not AI itself, but rather humans.” This is why she also supports regulations such as the AI Act recently passed by the EU Parliament. “In the end, this Act serves to protect human rights.”

The gender data gap and the potential of AI

Kinga Schumacher sees the greatest opportunities for AI in medicine and the healthcare sector: telemedicine, personalised medicine, improved diagnoses and the development of medicines. “AI systems can be used purposefully to overcome existing inequalities such as the gender data gap. Until recently, AI applications did not recognise the risk of heart attacks in women because the training data came mainly from male test subjects. Instead of demonising AI for that reason, however, modern AI applications were developed which take gender-specific differences into account, for example in the symptoms, and these have since become better at recognising the risks for women. This is exactly the kind of constructive action we need to develop the full potential of this technology for the good of all.”
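The gender data gap she describes is, at its core, a question of who is represented in a training set. A minimal sketch (the record layout, field names and threshold here are invented for illustration, not taken from DFKI’s guidelines) shows how a diversity-aware pipeline might measure group shares in a dataset before any model is trained:

```python
# Illustrative sketch: flag under-represented groups in training data
# before model development. All names and thresholds are hypothetical.
from collections import Counter

def group_shares(records, key):
    """Return each group's share of the dataset for a given attribute."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical cardiology training records, skewed toward male samples
records = [{"sex": "m"}] * 80 + [{"sex": "f"}] * 20

shares = group_shares(records, "sex")
print(shares)  # {'m': 0.8, 'f': 0.2}

# A diversity-aware review would flag this imbalance before training
underrepresented = [g for g, s in shares.items() if s < 0.4]
print(underrepresented)  # ['f']
```

A real audit would of course look at more than raw counts – symptom distributions, label quality and outcome rates per group – but even this simple check makes the skew visible and documentable, which is the point of the guidelines described above.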

But medicine is not the only field in which AI applications can help to reduce inequalities. The main issue at an event at the Barcelona Supercomputing Center was how to make climate models more equitable. “Until now, long-term climate calculations have been based almost solely on data from the northern hemisphere. This creates systematic gaps and makes forecasts for countries in the southern hemisphere less accurate. AI systems can help to close these data gaps,” says Kinga Schumacher.

Are there AI systems which are completely bias-free? “An AI system can be designed to be largely bias-free for a specific purpose, a certain region or culture,” she says. “But what exactly ‘bias-free’ means depends on the context. There is no universal benchmark. AI applications for women’s health, for example, have little difficulty with gender bias. Image-generating systems in Europe can show photos of people with bare shoulders, while in other countries and cultures this is not part of the cultural standard or is even forbidden.” To develop bias-free AI systems, it is important to agree on specific criteria for diversity and include them in the guidelines for diversity-aware AI.

“The larger and more diverse the group of people, the better”

Meanwhile, the conference room in Bellaterra has filled up. Kinga Schumacher is standing on the stage; the light from her PowerPoint presentation illuminates the faces in the audience, among them younger and older researchers who, like her, focus their work on ethical issues. At the end of her keynote speech she quotes the AI expert Amit Ray: “As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership.” Heads nod in the audience; then follows a round of questions and discussion.

By the end of this event, Kinga Schumacher has made further contacts and gained ideas for future collaborations which she will take back to Germany with her. “It is important that we make a joint effort to develop fair AI systems,” she had previously said. “The larger and more diverse the group of people, the better.”

 

¹ Science, Technology, Engineering and Mathematics

About Kinga Schumacher
Kinga Schumacher
Senior Researcher at the German Research Center for AI

Kinga Schumacher is a senior researcher and Deputy Research Group Leader in the Cognitive Assistants research department at the German Research Center for Artificial Intelligence. She completed her doctorate in AI at the University of Potsdam. Her research focuses on “diversity-aware AI” and human-machine interaction. Her analyses of the methods and capabilities of AI systems are incorporated in the regulatory activities of Germany and the EU.

Lecture Programme of the German Federal Government

Experts from politics, academia, culture and the media provide up-to-date and multi-faceted information about Germany in lectures and panel discussions. The ifa organises the Federal Government's lecture programme together with the German embassies and consulates abroad. It is aimed at opinion leaders from civil society in these countries. Find out more on the ifa website.