This session is facilitated by Ania Calderon and Hong Qu
About this session
We seek to ground-truth the AI Blindspot discovery process through practical application in a workshop setting, in order to better understand the value of, and gaps in, each of nine possible oversights in a team's workflow that can generate harmful unintended consequences: purpose, representative data, abusability, privacy, discrimination by proxy, explainability, optimization criteria, generalization error, and right to contest.
We will facilitate a guided feedback session to gather and generate ideas from workshop participants. By assigning each group a prompt card covering one of the blindspots, we will guide the discussions toward understanding how that blindspot may arise in a real-world scenario and how teams can guard against it. In addition, we will simulate a case study that applies the AI Blindspot discovery process to assess a specific machine learning system.
Goals of this session
Launch and test the value of the AI Blindspot project, a nine-step discovery process that helps technology practitioners spot unconscious biases and structural inequalities that can arise in planning, building, and deploying AI systems. This project was incubated at the Berkman Klein Center and the MIT Media Lab during the 2019 Assembly program.