You’re wrong, but thanks!

I’m getting closer to defending my thesis proposal, and I presented my questions and methods to my lab group recently. I was really excited that they had so many ideas and questions about my project. One of the things the group was most concerned about was that I don’t have a plan for validating one of the models I’m building. This is something I’ve also been struggling with, and it was awesome to throw around a few validation strategies with the group.

But when I got home and started thinking through their suggestions, I realized that all of the ways people had suggested validating the model wouldn’t work. I spent a few minutes with that horrible feeling in the pit of my stomach, wondering if everything I’d worked on so far was a waste and a terrible idea. What use is a model if you can’t tell if it’s right or wrong?

To stave off the panic, I sat down with a pencil and a piece of paper and wrote the question I designed the model to address at the top of the page. Then I wrote down all of the suggestions and carefully explained to myself why they wouldn’t validate my model.

A very exciting thing happened during this process. I realized that measuring the process I’m modeling is possible, just not with the techniques my lab recommended. The reason it hasn’t really been measured before is that finding the things to measure is like looking for needles in a haystack. The results from my model tell you where to start looking for the needles in your haystack and how many you should find. While I can’t include actual needle searching in my project, for the first time we’ll have testable predictions for the process I’m interested in. And that’s pretty cool!