Abstract: The implementation of machine learning algorithms in decision making is, for the most part, (in)famously accompanied by the reproduction of social bias and discrimination, often based on race or gender, causing further harm to already disadvantaged populations. The dominant paradigm of "fair machine learning" proposes tweaking loss minimization or processing observed data to guarantee fairness. However, this approach is limited: it fails to capture the ecology in which technology unfolds, thereby masking the social and institutional structures that cement such discrimination independently and in spite of algorithmic tweaking. I will provide examples of racial discrimination in judicial and financial systems and argue for the need for contextual analysis. I will then discuss the promising avenue of causal inference, through which the relationship between system structure and unethical algorithm deployment can be better understood and leveraged for policy making. In the spirit of the title, I will also explain the panoply of failures standard causal inference exhibits, notably its overreliance on observational data (at the expense of a generative mechanism) and its over-overreliance on variable attributes as accurate representations of an individual (at the expense of the network of social relationships such an individual is embedded in and defined by). I will then discuss ways in which complexity science can help resolve these shortcomings. Time permitting, I will try to say something good about uncritically and blindly applying machine learning to all aspects of life, but I don't guarantee it.
Complex Systems Seminar hosted by the Center for the Study of Complex Systems, the College of Literature, Science, and the Arts, and the Michigan Institute for Data Science