Tuesday, October 11, 2022

Future Conferences on Anti-Oppression and CS / Engineering

We need conferences that truly value papers that apply an understanding of intersectional oppression in their work.  By “value” I mean that submissions are judged for cultural competency: for their critical awareness of racism, sexism, ableism, heterosexism, classism, casteism, capitalism, and colonialism, and of how these impact engineered, automated, and educational systems.  I want conferences in which submissions that show a pattern of disfavoring citations to authors in marginalized groups are not accepted.  This call goes beyond the words “fairness” or “diversity” or “broadening participation”.  I want more than an ethics statement or checklist, which sets only a minimum ethical bar for papers. Instead, I want to see the papers in the field that best contribute "to the people" and to "studying up" get accepted. I want to read papers good at "recognizing and rectifying structural inequality".  Even when social justice words and goals are used to advertise a conference, current ideals of “merit” and “technical depth” in CS and engineering broadly discount actual expertise and experience with, and the ability to accurately represent, subjects related to social justice. Ironically, then, the researchers most capable of directing high-quality and effective research, education, and advocacy with anti-oppression as a goal are less able to publish and present their work.


Standard CS & engineering conferences employ a kind of gatekeeping, enforced during review, to preserve the status quo.  Many of us have seen this in our paper reviews. The best case is when reviewers outright reject a submission for discussing “racial problems” (shout out to reviewer #1!): at least we know the real reason.  Other reviewers hide their distaste for submissions related to oppression and instead criticize some made-up concerns; sometimes they inadvertently leave evidence, a “Hook” if you will.  Relatedly, Dr. Nicki Washington shouldn’t have been denigrated for suggesting a K-12 educator for a panel on identity-inclusive K-16 CS education!  In general, our field artificially defines “technical” to preserve discriminatory ideas of “merit” (as Dr. Timnit Gebru describes during an interview for the Radical AI Podcast) created by and for the benefit of the dominant group.  Instead, I want a conference that allows us to see what happens, as Audre Lorde asks, “if we listen to the content of what is said with at least as much intensity as we defend ourselves from the manner of saying.”


How should we do this?  I suggest we start with the review process.  A conference should use reviewers with technical depth in historical and present-day systemic oppression: in white supremacy, neocolonialism, critical disability studies, intersectionality, and related fields.  Such reviewers exist in engineering, CS, and many other fields.  A person does not need expertise in every critical field or topic; to qualify as a reviewer, they must have some demonstrated ability to apply ideas from at least one.  These reviewers should critically evaluate each submission on how it shifts power; papers which the reviewers conclude preserve the status quo and insufficiently address issues of oppression, both in the field and broadly in society, would receive a low rating.  Reviews have always judged a paper on its contribution; here I’m simply saying that a paper’s contribution also lies along multiple axes of power and oppression, and that how it shifts power among readers, users, and society should also be evaluated.


This is not a call for a single new conference, nor is it a call to abolish any existing conference. I believe there should be a range of conferences across STEM subfields which actively value critical perspectives, in contrast to the current state of the art: conferences which, broadly speaking, enforce white supremacy.


I’m looking for your feedback and ideas. But please don’t comment to tell me that this idea is “reverse racism” or “reverse sexism”.  If you don’t understand that oppression requires both bias and power, you should not be submitting to a conference of the type being proposed.  And that’s not because of your identity; it’s because you don’t have the required technical depth.


Saturday, August 29, 2020

Whose Tools? There's a reason we narrow our set of tools in CS & engineering

I want to expand on a point I first heard from Dr. Timnit Gebru about the use of the humanities and social sciences, and particularly critical race and feminist theories, as tools to design and analyze engineered systems.  These tools are not traditionally used in engineering and CS papers.  However, as Dr. Gebru relates during an interview on the Radical AI Podcast [1], there is a reason they are not used, and it has nothing to do with improving the design of engineered systems.

Image credit: Nathan Yau, "Most Female and Male Occupations Since 1950", https://flowingdata.com/2017/09/11/most-female-and-male-occupations-since-1950/

There is a long history of fields of study being valued more or less based on which group dominates the field [2]. In US employment contexts, men as a group have more power than women, and white workers have more power than workers of other races.  When a group with less power is the majority in a profession, workers in that profession are paid less and treated with less respect [2]. One can observe the significant gap in pay between the (mostly Black) faculty employed at historically Black colleges and universities and the (mostly white) faculty employed at historically white colleges and universities [4]. Fields that are majority women are devalued in the same way that "women's work" has historically been devalued.  Professions which change from majority women to majority men increase in pay and prestige, and then enact policies which favor men [2].  As an example, computing as a profession switched from majority women to majority men between 1950 and today, in part by emphasizing its ties to mathematics, a profession dominated by men [3].

In a similar manner, research in CS and engineering tends to favor tools that emerge from fields that are white- and men-dominated, and to disfavor tools from other fields.  In an interview with the Radical AI Podcast [1], Dr. Timnit Gebru describes how the tendency of CS to measure a paper's research contribution by its use of math is limiting:

There are many cases where you apply a concept from a different field, e.g. physics, and you apply the modeling technique, or some math, or some understanding, and you apply it to your setting.  That's always something that's welcome. That's always something that people accept. Except the same kind of respect is not afforded to the disciplines that are not considered technical.  And what does that even mean? If you bring ideas from critical race theory into ML, for example, that is not respected in the ML community, because they'd be like, where is the technical component? Why do you have to see math in there?  Math is a tool just like everything else.  What we're trying to do is advance a particular field.  Why does it matter so much how you're doing it?  In my opinion this is gatekeeping.  Similar to how something loses status or gains status depending on who in the majority is doing it. In my opinion this is a way people are shut out.  For me I don't see the difference if I'm bringing ideas from, for example, from my prior background, analog circuit design, into ML, and the thing that I found most compelling was something as simple as data sheets.  That's not math.  That's process.  That's what I really think is important.  Or if it's history.  Or if it's physics.  It doesn't matter, right? You can bring in different components from different disciplines, and if it's really advancing the field, I don't really know why it matters whether it has some mathematical component to it versus not.

Dr. Timnit Gebru [1]

As Dr. Gebru explains, this value system kept her, for a time, from doing research in the areas in which she wanted to work.  I believe it results in less research and development within computer science and engineering that uses tools like feminist theory and critical race theory.  I would hypothesize that it impedes the development of computing and engineered systems that apply theories from, for example, nursing, early childhood development, or communications, thus resulting in system designs that do not perform as well as they could.  

I am not saying this bias is typically conscious.  I am arguing that engineers and computer scientists should consciously examine, for any particular system goal, which tools from which disciplines are valuable.  (An incidental problem: one can't know whether a tool is valuable without knowing it exists.  Generally, we need a broad base of expertise and/or wide collaborations to succeed in this goal.)

Why does this matter for the individual CS / engineering graduate student or researcher?  Understanding that a useful tool is undervalued only because of bias will help you avoid that bias, use the tool, and make your contribution to CS / engineering.  In fact, swimming against the current to use a tool that others unfairly devalue may help you avoid doing exactly the same research as someone else, and more importantly, may allow you to solve a problem better. 


References:

[1] The Radical AI Podcast, hosted by Dylan Doyle-Burke and Jessie J Smith, Confronting Our Reality: Racial Representation and Systemic Transformation with Dr. Timnit Gebru, June 10, 2020.  The quoted section is between 26:00 and 28:00.

[2] Asaf Levanon, Paula England, and Paul Allison, Occupational Feminization and Pay: Assessing Causal Dynamics Using 1950-2000 U.S. Census Data, Social Forces, 88(2):865-892, Dec. 2009.

[3] Brenda D. Frink, Researcher reveals how "Computer Geeks" replaced "Computer Girls", June 1, 2011.  Based on an interview with Nathan Ensmenger, author of "The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise".

[4] Linda A. Renzulli, Linda Grant, and Sheetija Kathuria, Race, Gender, and the Wage Gap: Comparing Faculty Salaries in Predominately White and Historically Black Colleges and Universities, Gender & Society, 20(4):491-510, 2006.


Thursday, March 19, 2020

Video lectures for my past Probability & Random Processes course


I'm in the midst of madly recording my lecture material for my current course (Digital Communication Theory) as I prepare for online delivery of the rest of my semester.

Recording my videos reminded me that I have four lectures of similarly recorded videos from when I was teaching ECE 5510: Random Processes at the University of Utah. If you are looking for resources to learn (or teach) topics from a graduate random processes course in electrical engineering, feel free to help yourself.  Or perhaps you're a student taking an online course and you just need some additional material.

My video lectures are recordings of my screen and microphone as I use a graphics tablet and a paint application as a whiteboard.  Each "lecture" is a YouTube playlist:

Wednesday, July 17, 2019

My paper is unlike any other paper, how can I write about related work?

I've heard this argument far too many times, so I wanted to address it broadly.  The essential argument is:
I found this obscure lucky wonderful impactful topic to investigate, and now that I'm done and have written my 12+ page conference paper on it, I can't write about any related work.  First, I can't find any other paper on this topic; and second, the idea came to me in a flash of brilliance, so it's not like I thought of it because of some other published papers.
First, this comes across as some kind of "10x engineer" BS, and you're going to want to nip that in the bud.  You didn't live in a bubble while becoming the person who could write this paper.

Second, if you're going to get your paper accepted, and more importantly, make a difference in your research community, you're going to want to do a good job of writing about related work.  People's minds don't hold ideas in islands, isolated from everything else they know.  (Actual islands aren't even islands...)  You want people to place your topic in context and connect it to other things they know, so that they will remember your research and results: when talking with other researchers, or when planning a future project, you want them to recall your work.  Something unconnected to anything else doesn't get brought up in conversation.

Third, writing related work sections can actually be formulaic.  There's an algorithm I think works in most cases:

  1. Figure out what topics / techniques / applications / theories define the work contained in your research paper.  These are your keywords.  There might be 2-3 really critical keywords, and maybe a couple more that help define your paper's contribution.  Let's call my keywords A, B, C, and D.
  2. Consider the subsets of the keywords.  (Perhaps the theory folks are tuning out now, realizing that this algorithm is 2^n in complexity.  But bear with me; a quick sketch of this enumeration appears after the list.)  Limit yourself to the few most important subsets; don't try to cover every combination.  For example, you might consider AB, ACD, BCD, and ABC.
  3. What research is published that uses that subset of keywords? Describe the research in one subset; it becomes a paragraph (more or less) of your related work.  This paragraph can start with "The area of XX research uses A, B, and C to achieve its goals.  <more detail here>.  This paper further adds D, which is awesome because YY."
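
For the curious, here is a minimal Python sketch of the subset enumeration in step 2.  The keywords are placeholders, and keeping only subsets of size two or more is my assumption; your judgment picks the few subsets actually worth a paragraph.

  from itertools import combinations

  keywords = ["A", "B", "C", "D"]  # placeholders; use your paper's keywords

  # Enumerate every subset of size >= 2 -- the 2^n blowup the theory
  # folks noticed -- then, in practice, keep only the few that matter.
  subsets = [combo
             for size in range(2, len(keywords) + 1)
             for combo in combinations(keywords, size)]

  for combo in subsets:
      print("".join(combo))  # AB, AC, ..., ABC, ..., ABCD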

Yes, LaTeX is awesome, and yet is at the intersection of multiple areas of research. 
Image credit: Stefan Kottwitz, http://www.texample.net/tikz/examples/venn/

Just a note that every research community has its own technical writing style: the norm readers expect to see on the page as they read a paper.  My perspective is shaped by ACM/IEEE conferences like MobiCom, IPSN, SenSys, SECON, and others, for which a paper is 10-14 pages plus references; as a result, there is room to use a page for the discussion of related work.  Your conference or journal writing norms may vary, so please consult with your PhD advisor before submission. Blah blah blah, right?

Finally, I want to give lots of credit to the authors who write papers, submit them, and get them accepted to conferences.  It's your related work sections that have taught me how to write one.
Hope this helps someone out there write a great related work section and position their hugely awesome idea on paper to be influential in their research community.


Saturday, December 24, 2016

Writing a good related work section

I assigned the following as a writing assignment in my graduate course, "Applications of Fading Channels".  We had read about 30 papers in the research area of RF sensing, and I asked the students to write a related work section that would then go in their final report. I wanted to give them specific instructions on how to write a good section, and a very quantitative description of how I would grade their submissions.  I think it helped some of them write very good related work sections that I would have had no complaints about had they appeared in a submission to a good conference or journal.
Here's what I told them.
Write a related work section describing the area of RF sensing.  I want you to focus on following these two tips.
  1. Tell a story.  Your story is about the state of the art in the area, and you cite papers to back up your story.  The papers themselves are not the story; don't cite papers just so that you can cite them all.  One example of this: DO NOT start a sentence with a reference -- good citations normally come at the end of a sentence rather than at the start. For example, compare these two: (a) "Objects moving near a transmitter or receiver cause fading [6]", or (b) "Reference [6] shows that objects moving near a transmitter or receiver cause fading". Form (b) makes "Reference [6]" the subject, which leads the reader to believe that the paper [6] is doing the action, not "objects moving near a transmitter"; it also makes it seem like you just want to check off a reference you need to cite.  Form (a) tells part of a story about what object motion does to the channel, which primes the reader to think about detecting that motion using fading measurements.  I expect that the papers you refer to will not appear as the subject of any sentence in your paper.
  2. Divide or subdivide the area of research.  For each subcategory, describe generally this category of research, giving detail about one or two example papers in this area.  Your final sentence (or two) would then describe why your research is similar or dissimilar to this subcategory, or what the limitations / drawbacks of this category are.  To make up a formula for these paragraphs:
    1. Divide or subdivide the area of research:  Example: "Device-free localization may be subdivided into two categories of algorithms: model-based or fingerprint-based."  In some way, you are categorizing.
    2. For each subcategory, describe generally this category of research, giving detail about one or two example papers in this area.  Example:  "Model-based methods assume that the change in channel on any link can be described by a statistical and/or spatial model as a function of the link's endpoints and other measurable channel characteristics [1,2,3,4,5].  For example, [1] assumes that RSS will be attenuated when a person stands in an ellipse with foci at the transmitter and receiver coordinates.  Fingerprint-based methods measure a database of channel changes as a function of person position during a calibration period, and later match measured channel changes to the database."
    3. Your research in comparison, or limitations / drawbacks: For example, in the above, if my project introduced a new method that did neither, I would say, "This paper explores a third category which neither requires a calibration database nor makes model assumptions."  If I just wanted to explain the drawbacks, I would say, "The model-based methods are inaccurate given any mismatch between the model and reality; the fingerprint-based methods require significant time to record the calibration database."
I will quantitatively judge how well you are following these two guidelines:
  1. Count how many references are used as the subject of a sentence.  Try to eliminate these.  (A rough automated check for this appears after the list.)
  2. Count how many divisions and subdivisions are used.  Try to maximize this number.
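
As an illustration only, here is a naive Python heuristic for check 1.  The sentence-splitting and citation pattern below are simplifications I'm assuming for the sketch; a real draft deserves a human read.

  import re

  def count_reference_subjects(text):
      # Naive heuristic: split into sentences at ., !, or ?, then flag
      # sentences that open with a citation like "[6]" or "Reference [6]".
      # Multi-citation openers like "[1,2,3]" and trickier subjects will
      # slip past this; it only catches the obvious cases.
      sentences = re.split(r'(?<=[.!?])\s+', text)
      pattern = re.compile(r'^(Reference\s*)?\[\d+\]')
      return sum(1 for s in sentences if pattern.match(s.strip()))

  sample = ("Objects moving near a transmitter or receiver cause fading [6]. "
            "Reference [6] shows that objects moving near a transmitter "
            "or receiver cause fading.")
  print(count_reference_subjects(sample))  # prints 1: only form (b) is flagged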

I'm interested in comments or suggestions.  I've always had a hard time providing specific mentoring for research writing; my typical approach has been "I know it when I see it".  I'm hoping this is one step toward being specific.

Wednesday, April 20, 2016

Let's stop using engineering as an insult.

I've had this conversation way too often when discussing funding proposals, paper submissions, candidates, and talks. After reading a paper or hearing a talk, one person says,
I like XX about the work, but this is just engineering. Where is the science?
I think the language is way wrong:

  • The sentiment is usually that the work lacks technical novelty or innovation. I often agree with this sentiment. However: in what field is innovation sought and technical challenges overcome? Engineering.
  • Do you really think that we should instead be doing science? That is, coming up with a hypothesis, developing an experiment to test that hypothesis, performing that experiment, and then reporting the statistical significance of our results?
  • Talk about self-hatred and low self-esteem. Are we really going to say "just engineering" when we call ourselves engineers? How is this as an outreach and retention strategy?
  • Do you really think scientists say: "I like XX about the work, but this is just science.  Where is the engineering?"

Thursday, April 14, 2016

Cheap channel sounding

In 2000, when I was a research engineer at Motorola, we bought a state-of-the-art channel sounder.  It came with a transmitter that sent a wideband (80 MHz) spread spectrum signal in the 2.4 GHz band, and a receiver that sampled the signal and computed the complex-valued channel impulse response.  It cost $150,000 USD from a small custom software-defined radio company called Sigtek.  And it was worth it: it allowed me to conduct measurement campaigns to determine what accuracy was possible from a time-of-arrival (TOA) indoor localization system in that band and with that bandwidth.  This was valuable information at the time for my employer.

Today, for $600 USD, we put together a channel sounder whose capabilities significantly exceed that system, using off-the-shelf parts and the Atheros CSI Tool, developed by Yaxiong Xie and Mo Li at NTU Singapore.  Anh Luong and Shuyu Shi got the system up and running in our lab.  The Atheros CSI Tool is a hacked driver for several Atheros WiFi cards that allows the channel state information (CSI), calculated on the card by the standard 802.11n receiver, to be exported from the card.  We used an Intel NUC, which essentially puts low-end laptop components into a 4 x 4 x 1 inch box.  It has two PCI Express slots, and we use one to plug in an Atheros AR9462 card (Fig. 1, left).  The NUC has two antennas on the inside of its case, but internal PCB antennas like these are typically poor for propagation research (because of a non-uniform and unknown radiation pattern), so we instead set the system up to accept our own external antennas by snaking a 200 mm uFL-to-SMA adapter cable from the Atheros card to the side of the NUC case (via two holes we drilled, on the right side of Fig. 1).

Fig. 1: Inside the NUC-based Splicer channel sounding system
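
For readers new to CSI: for each antenna pair, the receiver reports the complex frequency response of the channel on each OFDM subcarrier.  As background (this is not the Atheros CSI Tool's own API, and the subcarrier count is my assumption for a 20 MHz 802.11n channel), a minimal sketch with synthetic numbers shows how an inverse FFT turns that frequency response into a channel impulse response estimate:

  import numpy as np

  # Synthetic stand-in for one antenna pair's CSI: the complex frequency
  # response sampled on each OFDM subcarrier.
  num_subcarriers = 56
  k = np.arange(num_subcarriers)

  # Fake a two-path channel: a direct path plus a weaker echo 3 samples later.
  delays = [0, 3]
  gains = [1.0, 0.4]
  csi = sum(g * np.exp(-2j * np.pi * k * d / num_subcarriers)
            for g, d in zip(gains, delays))

  # The inverse FFT of the frequency response is the (circular) channel
  # impulse response; its strongest taps land at the path delays.
  cir = np.fft.ifft(csi)
  print(np.argsort(np.abs(cir))[::-1][:2])  # strongest taps: samples 0 and 3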

For one of the projects we're going to use it for, we wanted directional antennas.  The Atheros is a 2x2 MIMO transceiver, so we need two antennas.  The Atheros card is also dual-band, capable of 2.4 and 5.8 GHz.  But directional antennas tend to be big and bulky, and too many antennas hanging off of this unit would make it look like Medusa.  So instead we attached a dual-band, dual-polarization antenna, the HG2458-10DP from L-Com: a single box that contains two antennas, one vertically polarized and one horizontally polarized.  The Splicer tool measures the channel between each pair of antennas, so we can measure the H-pol channel, the V-pol channel, and the propagation of signals that change polarization in the channel.

Fig. 2: Two transceiver systems

Plus, it looks like a scaled model of a 1981 IBM PC.  Or a Minecraft character.  I'm not sure.

Why is this $600 system better than the $150,000 Sigtek channel sounder from 2000?

  • It's dual-band, so we can measure at either 5.8 or 2.4 GHz, instead of only at 2.4 GHz.  In fact, it can measure up to 200 MHz of bandwidth in the 5.8 GHz band, wider than the Sigtek system was capable of.
  • It's MIMO: we can measure four channels simultaneously.  Actually, if we had used a 3-antenna Atheros card, we could have measured nine channels simultaneously.  The Sigtek used one transmit and one receive antenna.
  • It can make multiple measurements per second, significantly faster than the Sigtek system.
  • It is smaller and uses less power.  The Sigtek system had to be pushed around on a cart, and when it needed to be battery powered, we had to use 80-pound marine batteries to power it.
Fundamentally, this is just another example of technology scaling over time.  The reduced cost ensures that many more people are able to perform research and test new communications, localization, and other applications of radio channel sensing.  I hope that this increased access will lead to new research discoveries, new products, and even further reductions in the cost of radio channel research.