From educational institutions to healthcare providers, from employers to governing bodies, artificial intelligence technologies and algorithms are increasingly used to assess people and make decisions about various aspects of our lives. But are these systems truly impartial and just in their judgments when they read humans and their behaviour? Our answer is that they are not. Despite their purported aim of enhancing objectivity and efficiency, these technologies paradoxically harbour systemic biases and inaccuracies, particularly in the realm of human profiling. The Human Error Project has investigated how journalists, civil society organisations and tech entrepreneurs in Europe make sense of AI errors, and how they negotiate and coexist with the human rights implications of AI. With the aim of fostering debate between academia and the public, the “Machines That Fail Us” podcast series hosts the voices of some of the most engaged individuals in the fight for a better future with artificial intelligence.
“Machines That Fail Us” is made possible thanks to a grant provided by the Swiss National Science Foundation (SNSF)’s “Agora” scheme. The podcast is produced by The Human Error Project team. Dr. Philip Di Salvo, the main host, works as a researcher and lecturer at the HSG’s Institute for Media and Communications Management.
https://mcm.unisg.ch/
https://www.unisg.ch/