My buddy Ernie Davis just sent me an article, published in Nature, called "How scientists fool themselves – and how they can stop". It's really pretty great: a list of ways scientists fool themselves, essentially through cognitive biases, and another list of ways they can try to get around those biases.
There's even an accompanying graphic which summarizes the piece:
I've actually never heard of "blind data analysis" before, but I think it's an interesting idea. However, it's not clear exactly how it would work in a typical data science situation, where you perform exploratory data analysis to see what the data looks like and form a "story" based on that. A sketch of one possible approach is below.
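To make the idea concrete, here's a minimal sketch of how blinding could be bolted onto a standard exploratory workflow. Everything here is assumed for illustration: the made-up treatment/outcome columns, the synthetic data, and the choice to blind by injecting a hidden offset into the outcome (just one of several blinding variants). The point is only the shape of the workflow: explore and freeze your pipeline on blinded data, then rerun the same code once on the real data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

# Hypothetical dataset: a binary treatment flag and a numeric outcome.
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, size=200),
    "outcome": rng.normal(loc=0.0, scale=1.0, size=200),
})

def blind(data, outcome_col="outcome"):
    """Return a blinded copy: add a hidden random offset to the treated
    group's outcome, so exploratory plots and model tweaks can't chase
    the real effect."""
    offset = rng.normal(loc=0.0, scale=5.0)  # kept secret until unblinding
    blinded = data.copy()
    blinded[outcome_col] += offset * blinded["treatment"]
    return blinded, offset

blinded_df, hidden_offset = blind(df)

# Do all exploratory analysis and pipeline decisions on `blinded_df`, e.g.:
# blinded_df.groupby("treatment")["outcome"].describe()

# Only after the analysis code is frozen do you "unblind" by rerunning
# the exact same pipeline on the original `df`.
real_effect = df.groupby("treatment")["outcome"].mean().diff().iloc[-1]
print(f"Estimated treatment effect after unblinding: {real_effect:.3f}")
```

The tension I mentioned is visible even in this toy version: all the "story forming" has to happen against the blinded outcome, which feels unnatural if your habit is to stare at the raw data first.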
One thing they mention in the article but not in the graphic is the importance of making your research open source, which I think is how the "devil's advocacy" and "team of rivals" approaches can actually happen in practice.
It's all the rage nowadays to do meta-analyses. I'd love for someone to somehow measure the effectiveness of the above debiasing techniques to see which ones work well, and under what circumstances.