The article discusses a phenomenon called the "Decline Effect": astonishing new results are published, often in high-profile journals with lots of media coverage. The results appear to be highly statistically significant, but subsequent studies never replicate them as strongly, and after numerous additional investigations the original astonishing results are essentially debunked. Why is this? Likely because the original results were in fact an anomaly. Perhaps 100 tests were carried out, and only the one that happened to be significant at the 99% level was published; in fact it was just an outlier. Bayesian statistics might help us avoid this trap, or simply a healthy willingness to test and retest.
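As a quick illustration of how easily that trap arises (a toy sketch of my own, not anything from the article): simulate 100 experiments drawn from a population with no real effect, "publish" the most significant-looking one, and then try to replicate it. In Python, assuming numpy and scipy are available:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_experiments, n_samples, alpha = 100, 30, 0.01   # the "99% level"

    # 100 experiments, each drawn from a null population (true mean of zero).
    pvals = [stats.ttest_1samp(rng.normal(0.0, 1.0, n_samples), popmean=0.0).pvalue
             for _ in range(n_experiments)]

    n_false = sum(p < alpha for p in pvals)
    print(f"{n_false} of {n_experiments} null experiments look 'significant' at the "
          f"{100 * (1 - alpha):.0f}% level (about {n_experiments * alpha:.0f} expected by chance)")
    print(f"smallest p-value, i.e. the result that gets published: {min(pvals):.4f}")

    # A replication is just another draw from the same null population,
    # so the headline effect usually evaporates.
    replication_p = stats.ttest_1samp(rng.normal(0.0, 1.0, n_samples), popmean=0.0).pvalue
    print(f"replication p-value: {replication_p:.2f}")

The particular numbers don't matter; the point is that selecting the best result out of many small-sample tests manufactures "discoveries" that the underlying data never contained.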
In physical oceanography, examples of the decline effect often arise when a process is inferred from a short time series and then turns out not to persist as clearly once the record has been extended. Some examples: the Antarctic Circumpolar Wave, the Southern Ocean eddy kinetic energy (EKE) response to the Southern Annular Mode (SAM) (and help me think of some others....). To be fair, we learn a lot from preliminary studies, even if early inferences don't hold up to further scrutiny. Nonetheless, when possible, let's strive for robust results.
Note, however, that journalist Jonah Lehrer has had some integrity issues of his own; Wikipedia provides quite a thorough accounting. Perhaps that means he's especially well qualified to discuss scientific integrity, or perhaps it means we should doubt his analysis too.
This article discusses shoddy statistics and makes a case for verifying other authors' published results and for publishing negative results (if only to provide complete data for meta-studies that amalgamate the results of all published work). While most of the anguish about science and statistics is directed at biomedical research, the issues undoubtedly matter to us too, and that underscores how urgent it is that we make sure we understand statistics ourselves.
Ian Eisenman pointed me to this wonderfully cynical view of shoddy statistics in health studies and of the lax habits of journalists covering science. The hoax study exploited the basic principle that, merely by chance, about 1 in 20 of the cases you examine will appear statistically significant at the 95% level, so if you look at roughly 20 cases, something is very likely to look real.
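For concreteness (my own back-of-the-envelope sketch, not a calculation from the hoax article itself), here is how quickly the chance of at least one spurious "significant" result grows with the number of independent cases examined at the 95% level:

    # Chance of at least one false positive among k independent null cases
    # tested at the 95% significance level (alpha = 0.05).
    alpha = 0.05
    for k in (1, 5, 10, 20, 60):
        print(f"k = {k:3d}: P(at least one spurious hit) = {1 - (1 - alpha)**k:.2f}")

With 20 cases the chance is already about 64%, and by 60 it exceeds 95%, which is why trawling through enough health variables all but guarantees a headline.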