Crying with an audience, or How to criticize your labmates

[Photo: Bullfight, by Mait Jüriado on Flickr]

Presenting your ideas and work in science can be really intimidating. Your audience is skeptical and skilled, and you know just enough to know how much you don’t know. I’m afraid of crying if I’m ever stridently criticized during a presentation. That fear was recently somewhat alleviated: to get upset enough to cry, I’d have to notice that someone was being mean, and it turns out that I’m pretty oblivious.

During my recent lab meeting presentation, someone higher up in the academic hierarchy (let’s call them Shuah) expressed some confusion over why I was using [method x] instead of the easier [method y]. I’d actually considered [method y] and rejected it because it would give meaningless output in my situation. But I forgot that in the moment and instead sort of agreed with Shuah.

After thinking about Shuah’s criticism and the valid need for a backup simplification plan in case [method x] is too hard, I discussed my ideas with my advisor. When I mentioned Shuah’s comments as the inspiration for [method x version 2], she told me that Shuah’s comments were unnecessarily harsh. I hadn’t noticed at all. My takeaways from the interaction had been:

  1. Explain [problem aspect z] better so that people don’t think [method y] is an appropriate simplification
  2. Stop and think for a minute before agreeing with criticism

I wasn’t upset by Shuah’s comments and didn’t notice anything harsh in their tone at all. That leads to a third, and perhaps more important, lesson from the interaction (and my advisor’s comment on it): if I didn’t notice a criticism presented harshly enough that my advisor thought it worth commenting on, I’m probably not noticing when my own comments are unnecessarily harsh.

So from now on, I’m going to try to think a bit more about how I present my criticism. While crying in front of an audience would be awful, so would making someone cry.

You’re wrong, but thanks!

I’m getting closer to defending my thesis proposal, and I recently presented my questions and methods to my lab group. I was really excited that they had so many ideas and questions about my project. One of the things the group was most concerned about was that I didn’t have a plan for validating one of the models I’m building. This is something I’ve also been struggling with, and it was awesome to throw around a few validation strategies with the group.

But when I got home and started thinking through their suggestions, I realized that none of the validation approaches people had proposed would actually work. I spent a few minutes with that horrible feeling in the pit of my stomach, wondering if everything I’d worked on so far was a waste and a terrible idea. What use is a model if you can’t tell whether it’s right or wrong?

To stave off the panic, I sat down with a pencil and a piece of paper and wrote, at the top of the page, the question I designed the model to address. Then I wrote down all of the suggestions and carefully explained to myself why each one wouldn’t validate my model.

A very exciting thing happened during this process. I realized that measuring the process I’m modeling is possible, just not with the techniques my lab recommended. The reason it hasn’t really been measured before is that finding the things to measure is like looking for needles in a haystack. The results from my model tell you where to start looking for the needles in your haystack and how many you should expect to find. While I can’t include the actual needle searching in my project, for the first time we’ll have testable predictions for the process I’m interested in. And that’s pretty cool!