I want to expand on a point I first heard from Dr. Timnit Gebru about the use of the humanities and social sciences, and particularly critical race and feminist theories, as tools to design and analyze engineered systems. These tools are not traditionally used in engineering and CS papers. However, as Dr. Gebru relates in an interview on the Radical AI Podcast [1], there is a reason they are not used, and it has nothing to do with improving the design of engineered systems.
Image credit: Nathan Yau, "Most Female and Male Occupations Since 1950", https://flowingdata.com/2017/09/11/most-female-and-male-occupations-since-1950/

There is a long history of fields of study being valued more or less based on which group dominates the field [2]. In US employment contexts, men as a group have more power than women, and white workers have more power than workers of other races. When a group with less power is the majority in a profession, workers in that profession are paid less and treated with less respect [2]. One example is the significant pay gap between (mostly Black) faculty employed at historically Black colleges and universities and (mostly white) faculty employed at historically white colleges and universities [4]. Fields that are majority women are devalued in much the same way that "women's work" has historically been devalued: professions that shift from majority women to majority men increase in pay and prestige, and then enact policies that favor men [2]. Computing made exactly this shift between 1950 and today, in part by emphasizing its ties to mathematics, a field dominated by men [3].
In a similar manner, research in CS and engineering tends to favor tools that emerge from fields that are white- and men-dominated, and to disfavor tools from other fields. In the same interview [1], Dr. Gebru describes the tendency of CS to measure a paper's research contribution by its use of math as limiting:
There are many cases where you apply a concept from a different field, e.g. physics, and you apply the modeling technique, or some math, or some understanding, and you apply it to your setting. That's always something that's welcome. That's always something that people accept. Except the same kind of respect is not afforded to the disciplines that are not considered technical. And what does that even mean? If you bring ideas from critical race theory into ML, for example, that is not respected in the ML community, because they'd be like, where is the technical component? Why do you have to see math in there? Math is a tool just like everything else. What we're trying to do is advance a particular field. Why does it matter so much how you're doing it? In my opinion this is gatekeeping. Similar to how something loses status or gains status depending on who in the majority is doing it. In my opinion this is a way people are shut out. For me I don't see the difference if I'm bringing ideas from, for example, from my prior background, analog circuit design, into ML, and the thing that I found most compelling was something as simple as data sheets. That's not math. That's process. That's what I really think is important. Or if it's history. Or if it's physics. It doesn't matter, right? You can bring in different components from different disciplines, and if it's really advancing the field, I don't really know why it matters whether it has some mathematical component to it versus not.
— Dr. Timnit Gebru [1]
As Dr. Gebru explains, this value system kept her, for a time, from doing research in the areas in which she wanted to work. I believe it results in less research and development within computer science and engineering that uses tools like feminist theory and critical race theory. I would hypothesize that it impedes the development of computing and engineered systems that apply theories from, for example, nursing, early childhood development, or communications, thus resulting in system designs that do not perform as well as they could.
I am not saying this bias is typically conscious. I am arguing that engineers and computer scientists should consciously examine, for any particular system goal, which tools from which disciplines are valuable. (An incidental problem: one cannot know whether a tool is valuable without knowing it exists, so in general we need a broad base of expertise and/or wide collaborations to succeed at this.)
Why does this matter for the individual CS / engineering graduate student or researcher? Understanding that a useful tool is undervalued only because of bias will help you avoid that bias, use the tool, and make your contribution to CS / engineering. In fact, swimming against the current to use a tool that others unfairly devalue may help you avoid doing exactly the same research as someone else, and more importantly, may allow you to solve a problem better.
References:
[1] The Radical AI Podcast, hosted by Dylan Doyle-Burke and Jessie J Smith, "Confronting Our Reality: Racial Representation and Systemic Transformation with Dr. Timnit Gebru," June 10, 2020. The quoted section is between 26:00 and 28:00.
[2] Asaf Levanon, Paula England, and Paul Allison, "Occupational Feminization and Pay: Assessing Causal Dynamics Using 1950-2000 U.S. Census Data," Social Forces, 88(2): 865-892, December 2009.
[3] Brenda D. Frink, "Researcher reveals how 'Computer Geeks' replaced 'Computer Girls'," The Clayman Institute for Gender Research, Stanford University, June 1, 2011. Based on an interview with Nathan Ensmenger, author of The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise.
[4] Linda A. Renzulli, Linda Grant, and Sheetija Kathuria, "Race, Gender, and the Wage Gap: Comparing Faculty Salaries in Predominately White and Historically Black Colleges and Universities," Gender & Society, 20(4): 491-510, 2006.