Why I Don't Defend Professional Philosophy

As a former professional philosopher who now does research on responsible AI, I’ve been to multiple events where someone working on the social impacts of AI made a negative remark about philosophy. Typically, their criticism implicitly targets the dominant strain of academic philosophy in the Anglophone tradition, encompassing the various analytic approaches to ethics. As an ex-philosopher who finds my background in philosophy of science sometimes useful in my AI research, I might be expected to defend my former field. In fact, I largely agree with the criticisms, and I think that on the whole it would not be helpful to respond to them by listing the various “exceptions” to the rule.

Many of these critics have seen attempts to apply existing philosophical frameworks directly to AI ethics problems, and they are frustrated with how this leads to problems being framed in undesirable ways. Here are some of the properties of mainstream philosophy that I think people are getting at:

  • Ignoring context and attempting to be universal
  • Being race/gender/class-blind
  • Dominated by white men
  • Ignoring the role of power and privilege
  • Using abstract frameworks to seem objective and feel non-threatening to other privileged folks

For examples of criticisms along these lines, see Abeba Birhane’s tweet and Emily Bender’s interview at Radical AI. Another instance I witnessed was at a talk where a black tech activist mentioned that she’s chary of philosophy because whenever she’s invited to a philosophy event it’s full of white men who ignore her. (In case you’re wondering, so far the responsible AI events I’ve attended have been waaaayy more diverse than philosophy events.)

There are exceptions, but the problem is structural

Now of course there exist published works of philosophy that attempt to overcome the shortcomings mentioned above. I’ve published a paper that is largely a response to one of them, so I’m well aware that this strand of analytic philosophy exists. But the criticisms from folks in the FAccT community ring true nonetheless as pointing at a structural problem. It remains the case that work on “general” theories gets more professional respect, that racism, sexism, and transphobia are rampant, and that professional philosophy is very white, cis, and male. Pointing out the exceptions in the face of someone describing a structural problem with the field would be analogous to responding “not all men” to a criticism of the patriarchy.

Given that the problem is structural, on the whole I’d be very nervous about any blanket move to have more professional philosophers involved in the field. Given the amount of grant money floating about for AI-related problems, I think there’s a very high risk of mainstream philosophers who take traditional context-free approaches to problems successfully Columbusing the field while ignoring all the much more important work already done by marginalized philosophers and by STS researchers who have paid more attention to actual (rather than hypothetical) social impacts. Of course, if there were a way to draw from philosophy only the more nuanced, socially aware approaches pushed largely by marginalized folks, that would be nice, but I don’t think that’s how it would work in practice.

My own beefs

I have related beefs about how disciplinary norms force professional philosophers to frame problems, which make me reluctant to defend the idea of philosophy (as it’s currently practiced) contributing to FAccT. One big source of relief in being in the responsible AI field now, instead of professional philosophy, is that most practitioners begin with particular cases of harm, opaque algorithms, etc., rather than with general abstract principles or thought experiments. This way of approaching things lends itself better to considering actual power relations on the ground. To the extent that people from more traditional areas of analytic philosophy have tried to contribute to FAccT-related subjects, I’ve found it frustrating that they don’t engage with existing (more empirically grounded) STS work, and that their work is grounded first and foremost in traditional philosophical frameworks rather than the practical problems themselves.

I don’t necessarily blame the individuals taking these approaches, because the social structures of professional philosophy incentivize them to justify their work by grounding it in some established philosophical tradition that’s considered “serious”. To ground their work primarily in concrete practical problems, while paying close attention to empirical facts, would make their work appear “too specialized” or “not philosophical” to the people who evaluate their tenure cases (see Kristie Dotson on boundary-drawing in philosophy). But so long as professional philosophy has those incentives, which are of course related to its existing race, class, and gender composition, I’m pessimistic about how much I can learn from its practitioners relative to STS folks.