Yes to this:
A sloppy attitude towards statistics has led to exaggerated and unjustified claims becoming commonplace in science, according to one of Britain’s most eminent statisticians.
Speaking ahead of his presidential address to the Royal Statistical Society, Prof Sir David Spiegelhalter said that questionable practices such as cherry-picking data and “hacking statistics” to make findings appear more dramatic threaten to undermine public trust in science.
“We’re not concerned with lies, utter falsehoods or fabrications. It’s to do with exaggerations or misleading claims,” he said. “It’s more difficult to deal with.”
The lecture, entitled Trust in Numbers, draws parallels between concerns around the reliability of published scientific research and the rise of fake news and alternative facts in politics.
“Both are associated with claims of a decrease in trust in expertise, and both concern the use of numbers and scientific evidence,” he said.
By the time a scientific result reaches the general public, often after being filtered through a university press office and a media outlet, readers would be right to treat it with suspicion, Spiegelhalter argues.
“In my darkest moods I follow what could be called the ‘Groucho principle’: because stories have gone through so many filters that encourage distortion and selection, the very fact that I am hearing a claim based on statistics is reason to disbelieve it,” he will say in the address on Wednesday evening.
He said that an overwhelming pressure to publish even marginal and mediocre work was partly to blame for dragging down the reliability of published scientific findings.
Half my time as a referee is spent trying to figure out if the authors are cherry-picking results. It's worse at the top journals, as authors feel the need to publish there or someone in their department will give them a hard time for publishing in a [gasp!] second-tier field journal. There is plenty of room in lower-ranked journals for honest research with interesting results that are not totally robust to alternative specifications. If you present one, and only one, result (which reviewers know is likely an extreme), skip the robustness checks, and/or argue against doing robustness checks, then this is a red flag. If you then exaggerate the implications of your cherry-picked result, then "the very fact that I am hearing a claim based on statistics is reason to disbelieve it."
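To see why a single, un-robust headline result deserves that suspicion, here's a quick simulation I put together (my own illustrative sketch, nothing from the lecture or the article): an analyst facing pure noise who tries many specifications and reports only the most significant one will "find" an effect most of the time.

```python
# Illustrative sketch: how trying many specifications on pure noise and
# reporting only the "best" one manufactures statistical significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 1000  # simulated "studies"
n_obs = 100       # observations per study
n_specs = 20      # alternative specifications tried per study
alpha = 0.05

honest_hits = 0   # significant under the single pre-specified test
hacked_hits = 0   # significant under the best of n_specs tries

for _ in range(n_studies):
    # Null world: the outcome is unrelated to any regressor.
    y = rng.normal(size=n_obs)
    # Honest analyst: one pre-specified regressor, one test.
    x = rng.normal(size=n_obs)
    _, p = stats.pearsonr(x, y)
    honest_hits += p < alpha
    # Cherry-picker: try n_specs regressors, keep the smallest p-value.
    best_p = min(stats.pearsonr(rng.normal(size=n_obs), y)[1]
                 for _ in range(n_specs))
    hacked_hits += best_p < alpha

print(f"Honest false-positive rate:        {honest_hits / n_studies:.3f}")
print(f"Cherry-picked false-positive rate: {hacked_hits / n_studies:.3f}")
# Expected: about 0.05 for the honest analyst, but roughly
# 1 - (1 - 0.05)**20 ≈ 0.64 for the cherry-picker.
```

Under the null, the honest analyst flags about 5% of studies, while picking the best of twenty specifications flags roughly 64% of them, even though there is nothing there. That asymmetry is exactly why a lone result with no robustness checks is a red flag.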
And now, I'll try to dismount from my high horse and write a referee report.