Why are marginalized groups disproportionately affected by the negative aspects of Artificial Intelligence (AI)? And how can we move towards the other side of the coin: using data-driven AI to benefit vulnerable groups in society – vulnerable or marginalized meaning perceived and made vulnerable by institutionalized discrimination and by the outdated misconception of survival of the fittest? One example of the negative side of AI comes from the Dutch city of Roermond (source: OneWorld), where automated license-plate recognition was misused to flag people from South-Eastern and Central Europe as suspects of petty crime. This automated prejudice does not account for (marginalized) groups living legally in the area, or for anyone else driving a white van with a white license plate. This is just one simplified example of the large-scale biased impact the misuse of AI can have on society. More importantly: how can we move towards a Human-Centric Approach that positively affects marginalized groups in society?

Unbiased Data-Driven Decision Making

The short, logical answer is: flip the coin. Break the current third wave of commercialized AI mania by using its potential for positive impact on people who have been institutionally discriminated against for centuries. Take Amsterdam, which reports high crime rates in certain municipal areas whose residents often combine a low socioeconomic status with a migrant background. In specific crime statistics, non-Western migrants outnumber both native Dutch people and the socially accepted, highly skilled (often Western and Northern) migrants. While migration is of great added value to the Dutch economy and to cultural exchange, these informal double standards between non-Western migrants on the one hand and Western migrants and native Dutch people on the other have downsides. In other, more expensive or high-end areas of Amsterdam, for instance, reports of certain crimes, such as theft and tax avoidance, are lacking. This does not mean that criminal activity is characteristic of a particular municipal area of the city; it just means that no other crimes have been reported there. And even where certain crime rates are higher among people of non-Western minority backgrounds, this is often due to a limiting socioeconomic status caught in a downward spiral that is exceptionally difficult to escape. Combined with unfortunate self-stigma, this often damages these people's mental health and their ability to raise their quality of life and living standards. These discrepancies are well-known data gaps among data scientists and other policy influencers. If police officers on the ground or decision-making officials have no incentive to increase surveillance in a more expensive area of the city, or even in remote areas outside it, the data gap will only widen. Data-driven AI with a human-centric approach could guide us towards unbiased data-driven decision-making put into practice.

What is a Human-Centric Approach?

Zooming out from Amsterdam to the global scale, the Sustainable Development Goals are permeated by the Human-Centric Approach: to reach sustainable solutions for more and more people, we need a people-centered approach. This approach offers marginalized groups equal opportunities, equal chances to develop competences, and equal treatment in the workplace, and it stimulates growth. We can already apply it to many essential necessities of life (water, food, health, education). Yet it is at least equally applicable to data-driven AI solutions in our local lives, especially during the current pandemic, for economic prosperity and safety measures. While sustainability is great and should be upheld within any type of organization, entity, or agency, achieving a sustainable world is a work in progress, to say the least. More, much more, needs to be done.

Developing fair access to data-driven AI solutions that do good is a first step towards the Human-Centric Approach to data-driven AI. In more tangible terms, the approach puts the human being at the center. Individual data control, transparency, accountability, equality, and sustainability are the essential principles of data ethics (source: dataethics.eu). Unfortunately, on the road to success it is tempting to lose sight of the Human-Centric Approach, whether because of a commercialized goal or simple forgetfulness.

Data-Driven Public-Private-Partnerships: start the dialogue

The full answer to how we can move towards benefiting vulnerable groups in society through and with data-driven AI is a long one. The first step, however, is a call to action: round-table discussions between small and medium-sized enterprises, government officials, and civil society organizations. These discussions should focus on how to flip the negative aspects of data-driven AI that enable and reinforce discrimination – discrimination based on race, gender, age, political affiliation, sexual orientation, hair color, or any other biased lens. ADC invites you to this dialogue. We also encourage you to bring people into your work, company, study group, or other gathering who differ from your own frame of reference – then see how much the sustainable development of your business, organization, work, school, or other social field benefits. A win-win-win situation powered by data-driven AI in the 21st century. Long overdue.

Would you like to know more?

Would you like to know more about the Human-Centric Approach, or about what Amsterdam Data Collective can do for you?
Get in touch with Casper Rutjes at casper@amsterdamdatacollective.com, or check our contact page.

Casper Rutjes

Casper is Amsterdam Data Collective’s public sector lead. He is driven by a strong motivation to help people, teams and society work together and move forward. Through his in-depth knowledge of data and computer science technologies and strong interpersonal skills, he embeds innovation in a positive, constructive and effective manner.