Quality Criteria for Algorithms – Lessons from Existing Compendia
What operations should algorithms be allowed to perform? What standards of quality should they be held to? For what purposes may they be used? Although these questions are deeply relevant to society, we have yet to reach a social consensus on how to answer them. A number of international organizations seek to do so by formulating quality criteria intended to hold the use and development of algorithmic processes to high ethical standards. In our working paper “Quality Criteria for Algorithmic Processes”, we take a closer look at three existing proposals and analyze their strengths and weaknesses.
The effects of algorithms on our individual and social realities can no longer be denied: they not only collect, sort, weigh and examine our data – they are increasingly relied upon in decision-making processes that can have far-reaching implications for those affected by them. Processes with such powerful impact should be subject to soft regulation, and the concept of quality standards for socially relevant algorithms offers a promising approach.
Technical innovations do not emerge from a vacuum. The creation of algorithmic systems involves the participation of a large variety of stakeholders: clients, programmers and implementers, to name but a few. It is this heterogeneity of the actors involved, and the ways in which their diverse professions overlap, that make it difficult for a uniform professional ethics to develop.
Several international organizations have therefore begun tackling this issue by focusing on quality criteria for the process that represents the common denominator of all participants: the creation of algorithmic processes. As encouraging as this strategy may be, we are now faced with several different compendia that exist in isolation from each other. New documents are being drawn up without exploiting possible synergies or tapping into the knowledge offered by existing catalogues.
Three quality criteria compendia under the microscope
In our working paper, we examine three current quality criteria compendia for algorithmic processes in order to gain a better understanding of their strengths and weaknesses. The proposals are the “Principles for Accountable Algorithms and a Social Impact Statement for Algorithms” by the FAT/ML collective, the “Asilomar AI Principles” by the Future of Life Institute, as well as the “Principles for Algorithmic Transparency and Accountability” of the ACM US Public Policy Council.
Ranging from the usefulness of individual criteria to (hidden) premises and formal as well as linguistic aspects, we take a closer look at the spectrum of each catalogue’s characteristics and compare them with each other. Finally, we offer a summary of transferable strengths and weaknesses to be avoided in future endeavors. With our work, we hope to facilitate the discourse and provide a reference for future work in the field.
Many sensible criteria – too little attention to ethical questions and implementation
The key findings of our study are represented in the infographic below (see figure 1). Several vital aspects have already been incorporated into existing compendia. These include, for example, sensitizing actors in the process to relevant pre- or post-programming steps, as well as acknowledging a large group of addressees that reflects the heterogeneity of responsible actors. However, there is also room for improvement. Criteria concerned directly with ethical questions are rare, as is any reference to the role of politics in this context. Moreover, the compendia neither call for bans on certain uses of algorithms that are inappropriate nor indicate how the respective quality criteria catalogues might be implemented in practice. This is where new projects (including our own #algorules) must begin: by drawing on the lessons of previous efforts and developing them further.