Date/Time

Date(s) - 04/11/2018
2:30 pm - 3:30 pm

Location

3910 15th Ave NE
Seattle, WA 98195

Harm and beyond: What the data science community is doing to address ethical concerns … and why it’s necessary but insufficient. 

In recent years, it has become increasingly hard to ignore the propensity for data-intensive computational technologies to do harm by violating privacy, codifying bias, and facilitating malfeasance. In this session, Bernease Herman, a data scientist who specializes in interpretable machine learning, will help us understand recent developments in data science tools, techniques, and norms that address some of these concerns. From algorithmic audits to differential privacy to statistical definitions of fairness, Bernease will explain what these approaches are capable of doing, and what their limitations are. Then, Anna Lauren Hoffmann, a scholar of technology, culture, and ethics, will help us see why all those developments are necessary but insufficient. Anna looks beyond the materialization of specific harms and invites us to think more broadly about how the underlying logics of data-intensive computational systems perpetuate cultural violences against marginalized communities.

Bernease Herman’s talk, “Countering Harm: Computational Approaches to a More Ethical Data Science,” gives an accessible primer on select computational methods popular in the Fairness, Accountability, and Transparency in Machine Learning (FATML) community for addressing data science ethics. Bernease will present the advantages, disadvantages, and efficacy of each method as practiced today.

Anna Lauren Hoffmann’s talk, “Amplifying Harm: When Data, Algorithms, and Cultural Violence Collide,” addresses how many conversations around data and discrimination focus on problems of biased datasets or unfair algorithms that produce unjust material outcomes. But we also need better ways of grappling with cultural violences, that is, the discursive and symbolic harms reproduced and amplified by researchers. Hoffmann argues that these harms are not secondary to, or even merely concurrent with, other forms of discrimination; rather, they are foundational, as they create the social conditions against which other harms can occur. Just like physical violence in the real world, this kind of violence, dubbed ‘data violence,’ occurs as the result of choices that underwrite other harmful or even fatal outcomes produced by data-driven, algorithmically mediated systems.