After a record-setting ten months of negotiation and formation talks, the new Dutch government was sworn in on Monday, 10 January. The political parties VVD, D66, CDA, and CU have presented their coalition agreement (Source: Coalitieakkoord), which sets out what they plan to work on and achieve in the 2021–2025 period. Unlike in previous years, data governance plays a more explicit role in the new coalition agreement.
The increased interest in the use and impact of data is also visible at the European level. The European Union (EU) is in the process of developing regulation on the use of AI. In April 2021, the European Commission presented its proposal for the EU Artificial Intelligence Act. (Source: EU AI regulation: Is your AI algorithm approved by the European Union?) This act prescribes rules for the development, commercialisation, and use of AI-driven products, services, and systems. If the AI Act is passed, it will apply to all EU member states.
In this article, we highlight parts of the coalition agreement that are relevant to organisations working with data science and AI applications. We explain the implications and place the coalition plans in the context of the EU developments in the field of AI regulation.
The new coalition presents an outspoken ambition to stimulate (digital) innovation. At the same time, there is a strong emphasis on the need for more stringent regulation of the application of AI, online civil rights, and privacy. Overall, digitalisation is now more prominent, which is reflected in the appointment of the first-ever State Secretary for Digitalisation. Three plans related to data governance are worth a closer look.
1. European collaboration and human-centred AI
“We take the lead in strengthening the collaboration between member states of the EU in the field of digitalisation, among which are the application of human-centred AI, digital ethics, the development of a digital identity, cybersecurity and ‘open source’.” (Source: Coalitieakkoord)
Explainer | Human-centred AI
Human-centred AI resolves the apparent conflict between controlling the development and application of AI systems and stimulating digital innovation. The manifesto “Mensgerichte Artificiële Intelligentie” by the Dutch AI Coalition presents human-centred AI as an alternative to both the capitalist “Big Tech” approach in the West and the centralistic approach in the East. In this human-centred approach, systems that make use of AI are built with human, ethical principles embedded at their core. The Dutch AI Coalition states that the Netherlands is a perfect breeding ground for human-centred AI: the Dutch “polder model” should enable collaboration between the public sector, science, business, and society.
According to the European Investment Bank (EIB), a digitalisation gap between the United States (US) and the EU already exists, even though eight EU countries outperform the US in this regard. So, while some EU countries are at the global forefront of digitalisation, others are at risk of being left behind. The slow adoption of digital technologies threatens to impede European business competitiveness in the long term. Moreover, digital companies tend to perform better than non-digital companies (Source: EIB). Given the diversity and the ever-increasing speed of technological development, active dialogue between EU member states is therefore a vital part of preparing for the challenges of the digital future.
Ricardo Mourinho Félix, Vice-President of the EIB, has said that a weak European digital sector means the EU lacks ownership of its own data. Moreover, a weak digital sector leads to underdeveloped regulation and governance of data, making it difficult to protect, control, and audit data access and use. It is therefore important that EU member states work together to set high standards for digital technologies from which all member states can profit. By doing so, the EU can ensure that every member state reaches a certain level of maturity in digitalisation, data governance, digital ethics, and cybersecurity. As stated in the coalition agreement, the Dutch coalition plans to take a leading role in driving this collaboration among EU member states.
2. Regulation and supervision of algorithms
“[We] will not apply facial recognition software without strict regulation and control. […] Through regulation, algorithms will be checked for transparency, discrimination, bias, and arbitrariness. This will be supervised by a national algorithm supervisor.” (Source: Coalitieakkoord)
The European Commission recently published a proposal to regulate AI algorithms in Europe. One of its initiatives is to set up a European Artificial Intelligence Board (EAIB), comprising one representative from each member state. The EAIB will supervise and facilitate the implementation of the legislation and share best practices among member states, stimulating cooperation between them. Meanwhile, enforcement remains the responsibility of the member states themselves. It is therefore important that member states begin to build governing bodies able to enforce such legislation, and this is exactly what the coalition agreement proposes: setting up a national supervisor of algorithms.
While it is good that the EU and the Dutch coalition have set high ambitions regarding the governance of algorithms, it will prove challenging to define and, above all, enforce legislation. As we have seen in the financial sector, putting regulations in place can take time, and it requires significant capacity and investment from both the supervisor and the supervised parties. A complicating factor is that the field of AI is innovating rapidly, making it particularly difficult for supervisory institutions to keep pace.
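To make concrete what “checking an algorithm for bias” could involve in practice, here is a minimal, hypothetical sketch of one commonly used fairness metric, the demographic parity difference: the gap in positive-decision rates between two groups. The data and the metric choice are illustrative assumptions on our part; they are not prescribed by the coalition agreement or the AI Act, and a real audit would examine many more aspects.

```python
# Illustrative bias check: demographic parity difference.
# All data below is synthetic; a real supervisor would audit far more
# than a single metric (e.g. equalised odds, calibration, documentation).

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        member_preds = [p for p, m in zip(predictions, groups) if m == g]
        rates[g] = sum(member_preds) / len(member_preds)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Synthetic example: binary decisions (1 = approved) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")
```

A value close to zero suggests both groups receive positive decisions at similar rates; how large a gap is acceptable is exactly the kind of question a regulator would have to answer. Open-source libraries such as Fairlearn implement this and related metrics for production use.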
3. Make big tech responsible for countering disinformation and hate speech on their platforms
“Big online platforms will be made responsible for countering disinformation and hate speech on their platforms. We protect children against inappropriate online advertising and marketing, give them the right to not be tracked online and prohibit the creation and use of data profiles of children.” (Source: Coalitieakkoord)
The role of large online platforms as spreaders of disinformation became clear during the 2016 US presidential election and the Brexit referendum. Since then, many other events have drawn public attention to the political, social, and economic risks arising from online environments, which are still largely unregulated and dominated by a handful of powerful companies. The European Commission has made it clear through the Digital Markets Act (DMA) and the Digital Services Act (DSA) that it will put additional oversight and regulation in place for large online platforms, aiming to give the users of these platforms more rights. The plan in the Dutch coalition agreement to make online platforms responsible for countering the spread of disinformation and hate speech responds to the European Commission’s ambitions. Moreover, it sets out to tame the power of big tech companies and provide a safer online space.
As the field of data and AI is rapidly evolving, it is key to keep up with its pace. The Dutch coalition plans to do so by, firstly, appointing a new State Secretary for Digitalisation; secondly, focusing on, among other things, European collaboration and setting up a supervisory institution for algorithms; and thirdly, countering the spread of disinformation and hate speech on online platforms. The Dutch ambitions are in line with the plans of the EU, marking the start of AI regulation. For companies that use, or want to use, AI, we recommend not only preparing for compliance and risk mitigation, but also incorporating the principles of human-centred AI into their algorithms.
Do you want to know more about how to assess your algorithms, mitigate risks or ensure trustworthy and ethical AI? Amsterdam Data Collective can help you with this. Please contact Elianne Anemaat (Public Sector expert) at firstname.lastname@example.org, or check our contact page.