http://rsos.royalsocietypublishing.org/content/1/3/140216
"Never, ever, use the word ‘significant’ in a paper. It is arbitrary, and, as we have seen, deeply misleading."
"[under rather common conditions] if you declare that you have made a discovery when you observe a p-value close to 0.05, you have at the least a 26% chance of being wrong"
"If you want to avoid making a fool of yourself very often, do not regard anything greater than p<0.001 as a demonstration that you have discovered something."
"One [contributor to the lack of reproducibility in science] is the self-imposed publish-or-perish culture... which values quantity over quality, and which has done enormous harm to science... The mis-assessment of individuals by silly bibliometric methods has contributed to this harm... ‘altmetrics’ is demonstrably the most idiotic... Another cause of problems is scientists’ own vanity, which leads to the public relations department issuing disgracefully hyped up press releases"
A little off our central topic, but this commentary is illuminating and important. Under quite common conditions, claiming that a discovery has been made when you see p < 0.05 means that you will "make a fool of yourself" almost a third of the time (have a particular look at Figure 2). Compare the p < 0.05 convention in biology to particle physics, where the 5-sigma level is required for a discovery -- roughly corresponding to p < 0.0000006.
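The ~26% figure can be reproduced with a quick Monte Carlo sketch of the "p-equals" reasoning behind Figure 2. The assumptions below are mine, chosen to match the paper's "rather common conditions": a 50:50 prior that the effect is real, and a two-sample t-test with roughly 80% power (n = 16 per group, true effect of 1 SD).

```python
# Monte Carlo sketch (assumptions mine): among experiments whose p-value
# lands just under 0.05, what fraction had no real effect?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 400_000, 16                      # experiments, sample size per group
null = rng.random(n_sims) < 0.5              # assumed prior: half the hypotheses are null
delta = np.where(null, 0.0, 1.0)             # true effect: 0 or 1 SD (~80% power at n = 16)
a = rng.normal(0.0, 1.0, (n_sims, n))
b = rng.normal(0.0, 1.0, (n_sims, n)) + delta[:, None]
_, pvals = stats.ttest_ind(a, b, axis=1)

# Restrict to "discoveries" with p just below 0.05 and count the false ones.
window = (pvals > 0.045) & (pvals < 0.05)
fdr_near_005 = null[window].mean()
print(f"fraction of 'discoveries' near p = 0.05 that are false: {fdr_near_005:.2f}")
```

With these assumptions the fraction typically comes out in the neighborhood of the paper's "at least 26%", and it only gets worse if real effects are rarer than 50:50.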
Of course, there are lots of reasons why just worrying about significance (as opposed to size) of results is also misleading ( http://www.deirdremccloskey.com/docs/jsm.pdf is a classic).
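To make the significance-versus-size point concrete, here is a toy example of my own (not from either linked paper): with a large enough sample, a practically negligible effect still produces an arbitrarily small p-value.

```python
# Illustration (my own toy example): "significant" does not mean "large".
# A true effect of 0.01 SD is practically negligible, but with a million
# observations per group the t-test flags it with a tiny p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1_000_000
a = rng.normal(0.00, 1.0, n)     # control group
b = rng.normal(0.01, 1.0, n)     # tiny true effect: 0.01 SD
t, p = stats.ttest_ind(a, b)
print(f"effect size d = 0.01, p = {p:.2g}")
```

The p-value answers "is there any difference at all, given this much data?", not "is the difference big enough to matter?", which is exactly McCloskey's complaint.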