Andrew Gelman:
Dear Major Academic Publisher,
You just sent me, unsolicited, an introductory statistics textbook that is 800 pages and weighs about 5 pounds. It’s the 3rd edition of a book by someone I’ve never heard of. That’s fine—a newcomer can write a good book. The real problem is that the book is crap. It’s just the usual conventional intro stat stuff. The book even has a table of the normal distribution on the inside cover! How retro is that?
The book is bad in so many many ways, I don’t really feel like going into it. There’s nothing interesting here at all, the examples are uniformly fake, and I really can’t imagine this is a good way to teach this material to anybody. None of it makes sense, and a lot of the advice is out-and-out bad (for example, a table saying that a p-value between 0.05 and 0.10 is “moderate evidence” and that a p-value between 0.10 and 0.15 is “slight evidence”). This is not at all the worst thing I saw; I’m just mentioning it here to give a sense of the book’s horrible mixture of ignorance and sloppiness.
I could go on and on. But, again, I don’t want to do so.
I can’t blame the author, who, I’m sure, has no idea what he is doing in any case. It would be as if someone hired me to write a book about, ummm, I dunno, football. Or maybe rugby would be an even better analogy, since I don’t even know the rules to that one.
Who do I blame, then? I blame you, the publisher.
You bastards.
Out of some goal of making a buck, you inflict this pile of crap on students, charging them $200—that’s right, the list price is just about two hundred dollars—for the privilege of ingesting some material that is both boring and false.
And, the worst thing is, this isn’t even your only introductory statistics book! You publish others that are better than this one. I guess you figure there’s a market for anything. It’s free money, right?
And then you go the extra environment-destroying step of printing a copy just for me and mailing it over here, just so that I can throw it out.
Please do me a favor. Shut your business down and go into something more productive to the world. For example, you could run a three-card monte game on the street somewhere. Three-card monte, that’s still a thing, right?
via andrewgelman.com
I mostly agree with the p-value criticism (attaching ambiguous qualifiers like "moderate" or "slight" evidence to significance levels seems silly), but p-values can be useful. I like them as a continuous measure of statistical significance (I don't see anything special about the .01, .05, and .10 cutoffs) when paired with a discussion and recognition of economic significance. I just recently had one of these situations. I tried some alternative specifications of a voting model and found a result I thought might be interesting, supported by statistically significant coefficients (e.g., p = 0.08*). After a couple of late-night tweets boasting about my accomplishments, I realized that the marginal effect was nothing. Sad.
*And don't tell me that a coefficient with p = 0.08 is not statistically significant. It is statistically significant at the p = 0.08 level.
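To make the statistical-versus-economic-significance point concrete, here is a minimal sketch in Python with made-up numbers (not the voting model above): with a large enough sample, a substantively negligible effect will clear any conventional p-value cutoff.

```python
# Hypothetical illustration (not the voting model above): statistical
# significance with essentially no economic significance.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 500_000                              # big sample -> tiny standard errors

x = rng.normal(size=n)                   # standardized predictor
y = 0.01 * x + rng.normal(size=n)        # true slope is 0.01 of a std. dev.

fit = linregress(x, y)
print(f"slope = {fit.slope:.4f}, p = {fit.pvalue:.2g}")
# Prints a p-value far below 0.05, yet a one-standard-deviation change in x
# moves y by only about 0.01 standard deviations: significant, but negligible.
```

Run as-is it reports a slope near 0.01 with a p-value well under 0.001; the p-value tells you the effect is distinguishable from zero, not that it matters.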