Wednesday, April 20, 2016

Let's stop using engineering as an insult.

I've had this conversation way too often when discussing funding proposals, paper submissions, candidates, and talks. After reading a paper or hearing a talk, one person says,
I like XX about the work, but this is just engineering. Where is the science?
I think this language is all wrong:

  • The sentiment is usually that the work lacks technical novelty or innovation. I often agree with this sentiment. However: what field is it where innovation is sought and technical challenges are overcome? Engineering.  
  • Do you really think that we should be doing science? That is, coming up with a hypothesis, developing an experiment to test that hypothesis, performing that experiment, and then reporting the statistical significance of your results? 
  • Talk about self-hatred and low self-esteem. Are we really going to say "just engineering" when we call ourselves engineers? How is this as an outreach and retention strategy?
  • Do you really think scientists say: "I like XX about the work, but this is just science.  Where is the engineering?"

Thursday, April 14, 2016

Cheap channel sounding

In 2000, when I was a research engineer at Motorola, we bought a state-of-the-art channel sounder. It came with a transmitter that sent a wideband (80 MHz) spread spectrum signal in the 2.4 GHz band, and a receiver that sampled the signal and computed the complex-valued channel impulse response. It was $150,000 USD from a small custom software-defined radio company called Sigtek. And it was worth it; it allowed me to conduct measurement campaigns to determine what accuracy was possible from a time-of-arrival (TOA) indoor localization system in that band and with that bandwidth. This was valuable information at the time for my employer.

Today we put together a channel sounder with capabilities that significantly exceed that system for $600 USD, using off-the-shelf parts and the Atheros CSI Tool, developed by Yaxiong Xie and Mo Li at NTU Singapore. Anh Luong and Shuyu Shi got the system up and running in our lab. The Atheros CSI Tool is a hacked driver for several Atheros WiFi cards that allows the channel state information (CSI) calculated on the card by the standard 802.11n receiver to be exported off the card. We used an Intel NUC, which essentially puts low-end laptop components into a 4 x 4 x 1 inch box. It has two PCI Express slots, and we use one to plug in an Atheros AR9462 card (Fig. 1, left). The NUC has two antennas on the inside of its case, but internal PCB antennas like these are typically poor for propagation research (because of a non-uniform and unknown radiation pattern), so we instead set it up to attach our own external antennas by snaking a 200mm uFL-to-SMA adapter cable from the Atheros card to the side of the NUC case (via two holes we drilled, on the right side of Fig. 1).
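The CSI exported by a tool like this is a set of complex frequency-domain channel estimates, one per OFDM subcarrier; an inverse FFT turns it into the channel impulse response that a sounder like the Sigtek reported directly. Here is a minimal sketch in Python using simulated CSI; the subcarrier count and spacing follow the 802.11 20 MHz OFDM numerology, and the two-path channel is an invented example, not the Atheros tool's actual export format:

```python
import numpy as np

# Hypothetical example: converting CSI (complex frequency-domain channel
# estimates per OFDM subcarrier) into a time-domain channel impulse response.
n_subcarriers = 56            # 802.11n 20 MHz HT mode reports 56 subcarriers
subcarrier_spacing = 312.5e3  # Hz, per the 802.11 OFDM numerology

# Simulate a two-path channel: a direct path and one weaker delayed echo.
delays = np.array([0.0, 100e-9])   # path delays in seconds
gains = np.array([1.0, 0.4])       # linear path amplitudes
freqs = np.arange(n_subcarriers) * subcarrier_spacing
csi = np.sum(gains[None, :] *
             np.exp(-2j * np.pi * freqs[:, None] * delays[None, :]), axis=1)

# The impulse response is the inverse FFT of the frequency-domain CSI.
h = np.fft.ifft(csi)
power_delay_profile = np.abs(h) ** 2

# With 56 x 312.5 kHz = 17.5 MHz of bandwidth, each delay bin is ~57 ns wide,
# so the direct path lands in bin 0 and the echo smears across bins 1-2.
strongest_tap = int(np.argmax(power_delay_profile))
```

Real exported CSI would replace the simulated `csi` array; everything after that line is the same.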

Fig. 1: Inside the NUC-based Splicer channel sounding system

For one of the projects we're going to be using it for, we wanted directional antennas. The Atheros is a 2x2 MIMO transceiver, so we need two antennas. The Atheros card is also dual-band, capable of 2.4 and 5.8 GHz. But directional antennas tend to be big and bulky, and too many antennas hanging off of this unit would make it look like Medusa. So instead we attached a dual-band, dual-polarization antenna, the HG2458-10DP from L-Com. It is a box that contains two antennas, one vertically polarized and one horizontally polarized. The Splicer tool measures the channel between each pair of antennas, so we can measure the H-pol channel, the V-pol channel, and the propagation of signals that change polarization in the channel.
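To make the polarization measurement concrete, here is a hedged sketch of how such data might be organized and summarized. The 2x2 indexing convention (index 0 = vertical, 1 = horizontal) and the simulated gain values are my assumptions for illustration, not the tool's actual output format:

```python
import numpy as np

# Hypothetical layout: per subcarrier, the CSI is a 2x2 matrix H where
# H[i, j] is the channel from transmit antenna/polarization j to receive
# antenna/polarization i (0 = vertical, 1 = horizontal, by assumption).
rng = np.random.default_rng(0)
n_subcarriers = 56

# Simulated CSI: strong co-polarized channels, weaker cross-polarized leakage.
H = np.zeros((n_subcarriers, 2, 2), dtype=complex)
H[:, 0, 0] = 1.0 * np.exp(2j * np.pi * rng.random(n_subcarriers))  # V -> V
H[:, 1, 1] = 0.9 * np.exp(2j * np.pi * rng.random(n_subcarriers))  # H -> H
H[:, 0, 1] = 0.1 * np.exp(2j * np.pi * rng.random(n_subcarriers))  # H -> V
H[:, 1, 0] = 0.1 * np.exp(2j * np.pi * rng.random(n_subcarriers))  # V -> H

# Cross-polarization discrimination (XPD): average co-polarized power over
# average cross-polarized power, in dB.
co_pol = np.mean(np.abs(H[:, 0, 0]) ** 2 + np.abs(H[:, 1, 1]) ** 2)
cross_pol = np.mean(np.abs(H[:, 0, 1]) ** 2 + np.abs(H[:, 1, 0]) ** 2)
xpd_db = 10 * np.log10(co_pol / cross_pol)
```

With measured CSI in place of the simulated `H`, the same four-channel matrix gives the H-pol, V-pol, and cross-polarization channels directly.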

Fig. 2: Two transceiver systems

Plus it looks like a scale model of a 1981 IBM PC.  Or a Minecraft character.  I'm not sure.

Why is this $600 system better than the $150,000 Sigtek channel sounder from 2000?

  • It's dual band, so we can measure either at 5.8 or 2.4 GHz, instead of being only at 2.4 GHz.  In fact, it can measure up to 200 MHz in the 5.8 GHz band, which is a wider bandwidth than the Sigtek system was capable of.
  • It's MIMO: we can measure four channels (two transmit x two receive antennas) simultaneously.  Actually, if we had used a 3-antenna Atheros card, we could have measured nine channels simultaneously.  The Sigtek used one transmit and one receive antenna.
  • It can make multiple measurements per second, significantly faster than the Sigtek system.
  • It is smaller and uses less power.  The Sigtek system had to be pushed around on a cart, and when it needed to be battery powered, we had to use 80-pound marine batteries to power it.
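The bandwidth comparison in the first bullet matters because the measurement bandwidth sets the sounder's delay resolution, and hence the ranging resolution of a TOA localization system. A quick back-of-the-envelope in Python; the 1/B figure is the standard nominal resolution, ignoring windowing and super-resolution methods:

```python
# Nominal delay and ranging resolution of a channel sounder vs. bandwidth.
C = 299_792_458.0  # speed of light, m/s

def delay_resolution_ns(bandwidth_hz):
    """Nominal multipath delay resolution, 1/B, in nanoseconds."""
    return 1e9 / bandwidth_hz

def range_resolution_m(bandwidth_hz):
    """Corresponding distance resolution for TOA ranging, c/B, in meters."""
    return C / bandwidth_hz

old_res = range_resolution_m(80e6)    # 80 MHz Sigtek system: ~3.7 m
new_res = range_resolution_m(200e6)   # 200 MHz at 5.8 GHz: ~1.5 m
```

So going from 80 MHz to 200 MHz shrinks the nominal ranging cell from roughly 3.7 m to 1.5 m.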
Fundamentally, this is just another example of technology scaling over time.  The reduced costs mean that many more people are able to perform research and test new communications, localization, and other applications of radio channel sensing.  I hope that this increased access will lead to new research discoveries, new products, and even further reductions in the costs of radio channel research.

Wednesday, April 6, 2016

Should you do an I-Corps program?

Should you take part in an I-Corps Team program?  When I signed up for the NSF I-Corps, I found almost nothing written from a PI's perspective on the grant program.  From the solicitation, all I could see was that the mentor and I, two of the three participants, would be paid nothing, and yet do a ton of work.  This post is about why I chose to do the program, and what I see as its benefits and challenges.

First, some background on I-Corps: The NSF “Innovation Corps” or I-Corps Teams program is a grant that allows a team of three people to take part in an entrepreneurial training program and to apply that training to find a product-market fit for their newly developed engineering or science technology. The grant provides enough funding to pay for the travel required for the team to attend the training sessions, and to support one of the team, typically a grad student or postdoc, as the “entrepreneurial lead”.  The training and the product-market fit work are done during an intensive 7-8 week period during which the entrepreneurial lead works 40 hours a week, and the other team members see a workload of 15-20 hours per week.

Anyone who thinks faculty have 15-20 hours free in a week is definitely not faculty.

So why would two people sign up for lots of extra work and no pay?  First, I was able to do it because I planned it into my sabbatical; I used the time I wasn’t spending teaching and prepping to do the I-Corps program.  In general, I recommend I-Corps only during a semester or summer in which you are not teaching. Second, I wanted to pay my graduate student, I didn’t have another way to do so, and he was strongly interested in the commercialization of our technology.  If your student or postdoc is fascinated with your technology only because of the theory or the academic papers that could be written, and you have the funding to support that, then don’t make them the entrepreneurial lead for an I-Corps team.

I’d like to say here that I was genuinely interested in the I-Corps course material and Steve Blank’s startup philosophy.  But I wasn’t.  That’s just not why I signed up.  Honestly, I had no idea who he was or what material was being taught.  However, it turned out to be the best thing I got out of my participation.

In the past, whenever I talked with business folks (potential investors, business faculty, university technology commercialization staff), there was two-way frustration.  I was frustrated because they just didn’t believe me, or because, after two minutes of talking with me about a technology, they made it clear that they knew better than I did the potential for the technology and the right target market.  They were frustrated (my guess is) because I couldn’t get to the point and give them the information that matters about the technology and its market potential.

Here’s where the course material comes in.  First, taking his course and doing the work to apply it to your tech will make you learn the lingo: words that have meaning to investors and business types.  Second, and more importantly, you will learn and practice a revolutionary new way of gathering market evidence and finding a product-market fit.  The secret?  Ask people.  Lots of them.  (I'm kidding about the "revolutionary" part, but stay with me.)

Here are these two secret methods in slightly more detail:

  • “Ask people”: You’ve got a technology or product idea that solves a problem.  Identify who has this problem (potential users, customers, or decision makers).  So that you don’t bias their answers, forget about your technology for a while and ask them what their biggest pain points are (problems they wish they could solve).  Get details: what they do now, what solutions they’ve tried, how much the problem costs them, who makes the decisions, etc.  At the end of this interview: (a) If the problem you identified isn’t on their list, that is evidence that it isn’t important to them.  You can still ask them about the problem you thought they might have, and ask whether they know anyone who might have it.  (b) If it was on their list, you can describe how your technology might solve it, and ask whether it would solve their problem, and if so, how much they’d pay for it, who would be paying for it, etc.
  • “Lots of them”: Repeat step 1 dozens of times.  Write down a specific hypothesis and question(s) that you can ask to test it, and keep a tally of the answers.  As part of I-Corps, our team did 100 interviews.  At the end of each interview, ask for people who you might be able to interview next.  Use Google.  Get creative.  Show up in person to ask them to talk with you if they don’t respond or call back.  Yes, I’ve been kicked out of buildings.  Mostly, though, people are happy to talk with someone who cares about their problems.  

Here’s why this simple method is big news for academics.  We’re not good at knowing who a technology is good for.  We’re used to reading what other people say it is good for, in particular what problem they think it will solve, and we simply cite their paper and believe them.  We don’t generally talk to the people who are directly involved in the problem (the one we think we’re going to solve) on a day-to-day basis.  If we did talk to them, we might see that the problem is more complicated than we knew and our solution isn’t a good one.  Or, we might find out that there just aren’t many people with the problem we’re trying to solve.  I suspect that one of these is true for many of the problems we collectively are working on.  Yet we will continue, without asking people, to describe the “broader impacts” of our work in our proposals and papers.

I suspect that this leads to the bandwagon / “hot topic” effect within research communities.  Not many people are actually finding out what the problems are that match with the particular tools and capabilities of a research community.  So when someone does find a good fit, and they are able to show evidence that it is in fact a good problem for a community to address, lots of researchers pile on and work on that problem.  The problem being, of course, that the topic gets oversold, overworked, tired, and spread thin to the point that published “solutions” don’t address the real problem, and meanwhile other real problems get ignored.

Now that I’ve participated in an I-Corps program, I can’t imagine running my lab the same way again.  I will make connections with people outside of our building and ask them questions that don’t bias their answers, and I will require that people in my lab do the same.  Frequently.  The benefits, hopefully, are both more good ideas for research papers and proposals, and more direct answers about the broader impact of our work.

Further, I think we should teach engineering students these tools; they provide a straightforward way to decide whether their next idea has commercial impact, and if not, what might.  Let’s move our engineers up the entrepreneurial value chain by teaching them how to show up to investors with a good product-market fit, and not just a super-cool engineering trick.

Have you done an I-Corps team?  Or thought about proposing one?  Let me know what you think.