FAT* Be Wilin’

J. Khadijah Abdurahman
Feb 25, 2019


A Response to “Racial Categories in Machine Learning” by Sebastian Benthall and Bruce Haynes

Having one less than a half dozen kids, I had neither the time nor the financial resources to attend the recent Conference on Fairness, Accountability and Transparency (FAT* 19) in Atlanta, Georgia. Stumbling through Twitter, I discovered “Racial Categories in Machine Learning”, a paper co-authored by NYU Fellow Sebastian Benthall and UC Davis Sociology Professor Bruce D. Haynes. I didn’t know whether to laugh or cry reading the breathtaking abstract claiming to “mitigate the root causes of social disparities” through “preceding group fairness interventions with unsupervised learning to dynamically detect patterns of segregation”. In one line, Benthall and Haynes circumvent the complexity of systemic racism and the epistemic limitations of classification by focusing on the technical challenges faced by computer scientists designing “machine learning systems that guarantee fairness”. The paper notably omits the social context in which algorithmic decision making is enacted and ignores whether computational thinking acolytes like Benthall ought to be in a position to determine civic decisions.

Benthall composed a lengthy essay in response to my subtweeted challenge; my initial thoughts are below.

FAT* be wilin’. Listen, this is not a grammatical error or a populist demand for plain speak from jargon-laden academics. It’s arguing we pause the code switch to celebrate African American Vernacular English’s (AAVE) rich ontological understanding, the kind that never makes it to the conferences but been understood the limitations of a fairness grounded in identifying technical shortcomings of algorithmic decision making. It’s unclear if Benthall has mis-parsed my critique or is letting his bias show, but the sequencing of his rebuttal, from “my co-author is Black, and from Harlem, and wrote a book about it” to “computer science jargon is alienating to basically everybody who is not trained in computer science, whether they live in the hood or not,” is a familiar hermeneutic injustice: the injustice of having some significant area of one’s social experience obscured from collective understanding owing to a structural identity prejudice in the collective hermeneutical resource.

I’m not worried about whether the hood can decipher what you’re trying to say. I’m concerned that you’re getting play from the type of people “who would have voted for Obama 3 times if they could” and wear their reading of Letter to My Son by Ta-Nehisi Coates like a badge of Woke 2.0 honor. You don’t never be listening to Trap Music? We call it the trap because we know the only way out is the way you get caught up. We threw the Clinton Crime Bill over some 808s and danced to it; that’s worth mentioning because the rhythmic bobbing of your head induced by the bass helps regulate the autonomic nervous system in the face of novel risk.

Despite the Paul E. Meehl-inspired refutations of human decision making, which emphasize the limitations of cognition in favor of stochastic methods, the foundation of human risk modeling is actually complex neurological processing of visceral afferent information through the senses. A crying baby is rocked; Jewish people daven in prayer; we doth not Milly Rock just to get lit. We access higher-order thought processes by attuning to the primal needs of our mammalian vagus with vestibular input, kid.

“I challenge algorithmic designers to form peer relationships with the communities they research.” This sounds like participatory design, as Benthall alludes, except it is not, because of the questions around power, language, and positionality. The key word is “peer.” It is not just that relationships are formed but how they are enacted: whose language is used, whose viewpoint and values are privileged, whose agency is extended, and who has the right to frame the “problem”.

If you come to the hood (please note the hood is not a proxy variable for Black; it’s the colloquial label for redlined neighborhoods where poverty is concentrated and which, as a result of our country’s failure to fully reckon with the 13th Amendment, are disproportionately Black) to present the paper’s solutions, you would immediately be contested by people whose concern with predictive policing has nothing to do with how protected classes within algorithms are generated, but who viscerally reject the notion that human beings should be placed in cages in the first place.

Here comes the rub: essentially, I’m arguing that the problem is framed wrongly. It is not just that classification systems are inaccurate or biased; it is who has the power to classify, to determine the repercussions and policies that follow from classification, and how those decisions relate to historical and accumulated injustice.

Thank you to my friend, colleague, and Cornell Tech Professor Tap Parikh for his helpful insights and encouragement. I look forward to collaborating with him on a more comprehensive exploration of the themes outlined above.
