This session is facilitated by Cori Crider, Joshua Franco
About this session
The facilitators will encourage a lively discussion among participants, aiming to ask the hard questions and encourage deep thinking, as activists and as consumers. Human rights groups like ours often say, for example, that it is not acceptable for content moderation to be wholly automated. But until workplace standards are raised, that is tantamount to saying this underclass has to exist. Should the cost of a healthy internet be the mental health of its guardians? If content moderation has, at some level, to be done by humans, how do we enforce minimum workplace standards to protect those viewing the content? Who is responsible for setting and enforcing those standards?
This session will explore what a workplace standard of safety would look like in the content moderation context. It will unpack how we, as activists, technologists, interested parties and members of the public, can begin to make that vision real.
Goals of this session
A workshop and discussion confronting how better to apply workplace safety and labour protections to one of the most exploited underclasses of the digital age: content moderators.
Content moderators screen and remove offensive material—hundreds of times a day. Extremist videos, child sexual exploitation, appalling violence. Content moderators help protect the eyes of the public, but who protects theirs?
The working conditions facing most content moderators are the 21st-century equivalent of the factory floor a hundred years ago: under-regulated and hazardous to health. Content moderators are leaving their jobs over appalling working conditions, often presenting with symptoms of PTSD, and once their employment ends they receive no onward care. These conditions are unsafe. Existing labour and human rights protections can, and should, be brought to bear.
This interactive workshop is led by Amnesty Tech Director Tanya O’Carroll and Cori Crider, Director of Foxglove, a new non-profit focused on tech accountability.