In recent years, it has become increasingly hard to ignore the propensity for data-intensive computational technologies to do harm by violating privacy, codifying bias, and facilitating malfeasance. In this session, Bernease Herman, a data scientist who specializes in interpretable machine learning, will help us understand recent developments in data science tools, techniques, and norms that address some of these concerns. From algorithmic audits to differential privacy to statistical definitions of fairness, Bernease will explain what these approaches are capable of doing and what their limitations are. Then, Anna Lauren Hoffman, a scholar of technology, culture, and ethics, will help us see why all those developments are necessary but insufficient. Anna looks beyond the materialization of specific harms and invites us to think more broadly about how the underlying logics of data-intensive computational systems perpetuate cultural violences against marginalized communities.
Bernease Herman's talk, “Countering Harm: Computational Approaches to a More Ethical Data Science,” offers an accessible primer on select computational methods popular in the Fairness, Accountability, and Transparency in Machine Learning (FATML) community for addressing data science ethics. Bernease will present the advantages, disadvantages, and current efficacy of each method as practiced today.