Impact assessments, quality seals certifying the origin and integrity of data, class-action rights for federal anti-discrimination agencies, an agency for algorithmic oversight – there are several ways to ensure that algorithmic systems are used for the well-being of everyone in society. The working paper “Machines in the Service of Humankind” outlines the range of strategies discussed to date among scientists, economists, civil society actors and policymakers.

Which secondary schools are children allowed to attend? Which areas do police decide to patrol most intensively? Whose tax returns are processed by humans, and whose entirely by software? Which passersby in train stations are deemed suspicious? And which court defendants are viewed as particularly high risks? Around the world, states and businesses are using algorithmic systems to answer exactly these types of questions and to make these kinds of predictions. In discussions about the use of such systems and so-called artificial intelligence (AI), it is easy to cut conceptual corners and claim that algorithms rule our lives or that technology is impervious to social needs.

In our working paper “Machines in the Service of Humankind,” we (Konrad Lischka and Julia Krüger) counter this kind of fatalistic tech-determinism by drawing on dozens of ideas circulating among stakeholders in science, civil society, business and policymaking that demonstrate what can be done to ensure algorithmic processes support social needs. Many of these ideas are already concrete and practicable. Our paper answers two questions: What challenges do algorithmic systems pose to society? And what are our options for addressing them?

The following four action areas are the subject of closer analysis in our working paper (see also the figure below).

  1. Verifying that algorithmic systems’ goals are compatible with social norms.
  2. Reviewing the implementation of goals within systems.
  3. Facilitating a diversity of systems, goals and operators.
  4. Creating the framework conditions necessary for the inclusion-promoting use of algorithmic systems, which includes developing algorithmic literacy among data subjects and users as well as establishing state regulatory oversight.

We would like to thank Dr. Ulf Buermeyer, Dr. Andreas Dewes, Prof. Dr.-Ing. Florian Gallwitz, Lorena Jaume-Palasí, Dr. Nicola Jentzsch and Philipp Otto for their critical and inspiring comments.

Due to rapid technological development and the resulting need to distribute knowledge quickly among political, business and civil society leaders and stakeholders, we are publishing this discussion paper under a Creative Commons license (CC BY-SA 3.0 DE). We are happy to receive suggestions regarding additional areas of analysis, improvements or further analytical avenues, as well as any constructive criticism.


This text is licensed under a Creative Commons Attribution 4.0 International License.