AlgoRail: On the trail of algorithms

Summer has finally arrived, but the coronavirus pandemic has already put a damper on many people’s holiday plans. That’s why we want to go on a journey in our new blog series “AlgoRail” – all over Europe and always on the trail of algorithmic systems. Where are algorithmic systems already in use in other countries? […]

Digital Sovereignty in the EU

“Digital Sovereignty” is a term frequently used in political debates at the national and European levels. What does it mean? How can the EU assert its own values and legal systems in digital policy? At present, the debate on “Digital Sovereignty” is not sufficiently grounded in evidence. To […]

The Ethics of Responses to Covid-19: A Twitter-Chat by Women in AI Ethics

How ethical are technological responses to Covid-19? Should AI systems be used as part of public health strategies? And if so, how do we protect fundamental rights and freedoms, such as privacy and data protection? In a Twitter chat hosted by Women in AI Ethics, Carla Hustedt, project lead of “The Ethics of Algorithms”, tackled these […]

The Urgent Need for Robust Trust – Cultivating an environment in which algorithmic decision-making serves society

On February 19, 2020, the European Commission published its “White Paper on Artificial Intelligence – A European approach to excellence and trust” and launched a consultation. We welcome the European Commission’s effort to harmonize AI regulation and create an ecosystem in which algorithmic decision-making systems work for people and become a force for good in society. In […]

We Humans and the Intelligent Machines

The use of algorithms has long since ceased to be science fiction; it is reality. Seemingly intelligent machines are part of our lives. They surround us and are reaching into ever more sensitive areas of life. They analyze human behavior and shape modern societies. We should engage with them before we simply get used to them. We […]

Blog parade: Tech for good? Really, now? Or now more than ever?!

What contribution can digital technologies make to public welfare? Can we place algorithms in the service of society? Is “tech for good” a clichéd idea? Or can we, in fact, manage to use technology sensibly – especially now – without treating it as a silver bullet for complex societal problems? Do you have something to […]

From principles to practice: How can we make AI ethics measurable?

Discussions about the societal consequences of algorithmic decision-making systems are omnipresent. A growing number of guidelines for the ethical development of so-called artificial intelligence (AI) have been put forward by stakeholders from the private sector, civil society, and the scientific and policymaking spheres. The Bertelsmann Stiftung’s Algo.Rules are part of this body of proposals. However, it […]

3 questions for … Marina Jirotka and Helena Webb

Marina Jirotka (Professor of Human Centred Computing) and Helena Webb (Senior Researcher) work at the University of Oxford’s Department of Computer Science, in the Human Centred Computing theme. From 2016 to 2018 they participated in the research project “UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy”, a collaboration with the Universities of […]

Automated Decisions: Europe must speak with one voice

Automated decisions have become a part of many Europeans’ daily lives. Whether for job searches in Finland, healthcare functions in Italy, or the identification of neglected children in Denmark, such systems are coming into use in many EU countries – often for core public-administration functions. Will the EU be able to develop a common response […]

The Self-Policing Effect of Automated Data Processing

The General Data Protection Regulation (GDPR) rests on the premise that protecting personal data guarantees the protection of privacy. This premise has drawn substantial criticism: it has been convincingly argued that algorithm-based processing of large data sets makes it possible to infer knowledge about individuals without violating GDPR provisions. […]