Lajla Fetic

Lajla Fetic works as a researcher and consultant on the social impact of technology, develops solutions for the regulation of algorithmic systems and advises the Bertelsmann Stiftung’s “Ethics of Algorithms” team. Previously, she served as a project manager there, notably for the further development of the Algo.Rules. She has authored several practice guidelines on implementing AI ethics principles for companies and digital public administrations. She also served as co-author and project manager of a study by the AI Ethics Impact Group on the development of an AI ethics label. Together with Carla Hustedt, she is responsible for the list of female technology (ethics) experts.

Prior to this, Lajla Fetic worked on the topics of digitalisation and automation, among other roles for the Berlin think tank “Stiftung Neue Verantwortung” and for an international public-sector consultancy within the “Plattform Industrie 4.0”.

She is currently completing a Master’s programme in Public Policy at the Hertie School in Berlin and Sciences Po in Paris, with a focus on digital governance and public-sector innovation.

Photo credit: Sebastian Heise

Latest posts

From principles to practice: How can we make AI ethics measurable?

Discussions about the societal consequences of algorithmic decision-making systems are omnipresent. A growing number of guidelines for the ethical development of so-called artificial intelligence (AI) have been put forward by stakeholders from the private sector, civil society, and the scientific and policymaking spheres. The Bertelsmann Stiftung’s Algo.Rules are among this body of proposals. However, it […]

Automated Decisions: Europe must speak with one voice

Automated decisions have become a part of many Europeans’ daily lives. Whether for job searches in Finland, healthcare functions in Italy, or the identification of neglected children in Denmark, such systems are coming into use across many EU countries – often for core public-administration functions. Will the EU be able to develop a common response […]