The article discusses a phenomenon called the "Decline Effect": astonishing new results are published (often in high-profile journals with lots of media coverage) and appear to be highly statistically significant, yet subsequent studies never replicate them as strongly, and after numerous additional investigations the original astonishing results are essentially debunked. Why is this? Likely because the original results were in fact an anomaly. Perhaps 100 tests were carried out, and only the case that was significant at the 99% level was published; since the 99% level corresponds to a 1% false-positive rate, roughly one such "significant" result is expected by chance alone among 100 tests of a null effect, so the published case may simply have been that outlier. Bayesian statistics, or simply a healthy willingness to test and retest, might help us avoid this trap.
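To make that intuition concrete, here is a minimal sketch in Python (the sample size and significance level are illustrative choices, not taken from the article) that runs 100 tests on data with no real effect and counts how many come out "significant" at the 99% level purely by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_tests = 100    # number of independent studies, all testing a true null effect
n_samples = 30   # observations per study (illustrative choice)
alpha = 0.01     # "significant at the 99% level"

false_positives = 0
for _ in range(n_tests):
    # Each study's data are pure noise: the true mean is exactly zero.
    x = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    # One-sample t-test of the (true) null hypothesis that the mean is zero.
    if stats.ttest_1samp(x, 0.0).pvalue < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} null tests came out 'significant' at the {alpha:.0%} level")
# On average about n_tests * alpha = 1 test passes by chance; if only that one is
# published, the literature records an "effect" that later studies fail to replicate.
```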
In physical oceanography, examples of the decline effect often arise when processes inferred from short time series turn out not to persist as clearly once the time series are extended. Some examples: the Antarctic Circumpolar Wave, the Southern Ocean EKE response to the SAM (and help me think of some others....). To be fair, we learn a lot from preliminary studies, even if early inferences don't hold up to further scrutiny. Nonetheless, when possible, let's strive for robust results.
Note, however, that journalist Jonah Lehrer has had some integrity issues of his own; Wikipedia provides quite a thorough accounting. Perhaps that makes him especially well qualified to discuss scientific integrity, or perhaps it means we should doubt his analysis too.
This article discusses shoddy statistics and makes a case for verifying other authors' published results and for publishing negative results (if only to provide full data for meta-studies that amalgamate the results of all published work). While most of the anguish about science and statistics is directed at biomedical research, the issues undoubtedly matter to us too, which underscores how urgent it is that we understand statistics ourselves.
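As a rough, hedged illustration of the meta-study point (toy numbers, and the assumption that only positive, statistically significant results get published), the sketch below pools a set of simulated studies of a zero effect in two ways: using every study, and using only the "published" subset, showing how omitting negative results biases the amalgamated estimate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_studies = 500  # simulated studies, all measuring an effect whose true value is zero
n_samples = 30
alpha = 0.05

all_means, published_means = [], []
for _ in range(n_studies):
    x = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    all_means.append(x.mean())
    # Assume only positive, statistically significant results get published.
    if x.mean() > 0 and stats.ttest_1samp(x, 0.0).pvalue < alpha:
        published_means.append(x.mean())

print(f"Pooled estimate from all {n_studies} studies: {np.mean(all_means):+.3f}")
print(f"Pooled estimate from {len(published_means)} 'published' studies: {np.mean(published_means):+.3f}")
# The published-only estimate is biased well above the true value of zero, because the
# negative (non-significant) results never made it into the meta-analysis.
```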