It's difficult to admit defeat. Surprisingly, at least in science, it's harder to know if you are truly defeated. The scientific method, or whatever bastardized version of it is implemented by the droves of present-day benchtop and desktop scientists, depends on the proposal and evaluation of competing hypotheses. In the course of the scientific method, many (indeed, most) hypotheses should be defeated. When you are feeling around in a dark room for the light switch to illuminate the truth, you'll more often than not be tripping over the rug. The strange thing is that, in my experience, I frequently ask questions for which I really, really expect an affirmative outcome. I can't say that I don't fail, but I rarely fail big.
So it's interesting to write a blog about giving up on a project. For the last three months, I've spent a great deal of time asking a very specific question about cancer metabolism. The upside of asking this question was that I knew nothing about how to answer it. I had to learn about chemotherapy, about generalized linear models, about coding in Python. I had to learn how to read cancer literature, and about the caveats of the cancer literature. And as I asked this question, I had a repeated series of ups and downs, of very small p-values and very large p-values. I learned about the details of my data, and adjusted my models, frequently shifting from one modeling framework to another. In short, I was a cancer ninja.
Then, after describing my results to a lab mate, I realized that they were, in the fine words of Mr. Kurt Vonnegut, doodley-squat. Don't get me wrong, the results were as good as they were going to get. I know the ins and outs of this thing, and my models are good. The problem was that, for the question I was asking, the data weren't good enough. This became clear to me in an interaction that is probably quite familiar to many computational scientists working alongside experimentalists: we make a prediction, think it's great, and show it to someone whom we hope will run an experiment to validate it. The experimentalist takes one look and says…"That's it? Really?" Maybe in not-so-few words, but you know the tone of voice.
Now comes the inevitable disappointment. Why did I spend so much time and effort working on a project doomed to fail? Why didn't I see this coming before? Eventually: how can I learn from this so I never fall into this trap again? What I've come out of this experience with is a bit of old-timer, veteran knowledge: sometimes you just can't know something is doomed until you try. It seems obvious, but it is hard to swallow when you are the one with your lab notebook covered in doodley-squat. So it goes...