This article from Nature on the null-result (“file drawer”) problem reminds me of a note I posted, apparently four years ago: Analyzing data to do nothing
The article focuses on academic publishing in the natural sciences, but the problem is widespread, from business schools to corporations small and large. Positive, statistically significant results with large effect sizes are perceived and rewarded as superior to inconclusive (and seemingly negative!) results.
While absence of evidence is not always evidence of absence, testing an intervention (e.g., a new promotion, a change in ad placement, a revision to the return policy) and finding that it has no effect is valuable information that should be appreciated as much as finding an effect. As I seem to have noted four years ago, “Rarity (of any effect) is expected simply because the probability of noise is often disproportionately higher.” To remember this is to recognize unintended consequences.
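To make that last point concrete, here is a minimal simulation sketch. All the numbers are illustrative assumptions (a 5% base rate of real effects, a 0.3 standard-deviation lift, 200 observations per arm), not figures from the article or my old note. The point it illustrates: when most tested interventions truly do nothing, a large share of the “significant” results that do appear are noise.

```python
# Hypothetical simulation: 1,000 A/B tests, only 5% of interventions
# have a real effect. Count how many "significant" results are noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_per_arm = 1000, 200
true_effect = rng.random(n_tests) < 0.05   # assumed 5% base rate of real effects
lift = np.where(true_effect, 0.3, 0.0)     # assumed effect size, in SD units

false_pos = true_pos = 0
for i in range(n_tests):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(lift[i], 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:                           # conventional significance threshold
        if true_effect[i]:
            true_pos += 1
        else:
            false_pos += 1

print(f"significant results: {true_pos + false_pos}")
print(f"of which noise (false positives): {false_pos}")
```

Under these assumptions, roughly half of the significant results are false positives: with ~950 null interventions, a 5% threshold alone produces about as many spurious hits as the ~50 real effects produce genuine ones.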