[ABE-L] Fwd: {MEDSTATS} Re: "Basic and Applied Psychology" Bans The Use of P Values and Confidence Intervals

Basilio De Braganca Pereira basiliopereira em gmail.com
Sun Mar 8 11:58:08 -03 2015



Sent from my iPhone

Begin forwarded message:

> From: Marc Schwartz <marc_schwartz em me.com>
> Date: March 8, 2015, 11:18:31 BRT
> To: MedStats MedStats <medstats em googlegroups.com>
> Subject: Re: {MEDSTATS} Re: "Basic and Applied Psychology" Bans The Use of P Values and Confidence Intervals
> Reply-To: medstats em googlegroups.com
> 
> 
>> On Mar 8, 2015, at 8:08 AM, John Whittington <John.W em mediscience.co.uk> wrote:
>> 
>> At 12:46 08/03/2015 +0000, 'Martin Holt' via MedStats wrote:
>>> One comment:  You do not "accept your research hypothesis."  You do not reject it.
>> 
>> Very true - I hadn't noticed that when I responded!
>> 
>> It's just like the English criminal legal system.  The 'null hypothesis' is that everyone is innocent, and that is the default.  If a jury feels that the evidence of guilt is sufficiently strong, they may reject that null hypothesis and return a 'guilty' verdict, but that does not mean that the alternative hypothesis ('guilty') has been 'accepted' (with 100% certainty) - i.e. the possibility of a Type I error always exists.  Similarly, if they don't think that the evidence against the null hypothesis is sufficiently strong, they will _not_ reject it, leaving the accused, by default, with a 'not guilty' verdict.  However, again, that certainly does not mean that one has 'accepted' that they are not guilty - there will be a (probably very substantial) number of Type II errors.
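The jury analogy above can be sketched as a small simulation (a hypothetical illustration in plain Python, standard library only, not from the original thread): when the null hypothesis is true ("the defendant is innocent"), testing at alpha = 0.05 will still "convict" in roughly 5% of cases - that is precisely the Type I error rate.

```python
import random
from statistics import NormalDist

random.seed(42)

def p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value for a one-sample z-test with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / n ** 0.5)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Under the null ("innocent"): the data truly have mean 0,
# so every rejection below is a wrongful conviction (Type I error).
trials = 10_000
false_convictions = sum(
    p_value([random.gauss(0, 1) for _ in range(30)]) < 0.05
    for _ in range(trials)
)
print(false_convictions / trials)  # should be close to 0.05 by construction
```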
> 
> 
> That is a great parallel and one that I often use when teaching basic statistics to new employees at my company.
> 
> Recognizing that the legal system is not infallible is critical to understanding that statistics is not infallible either.
> 
> That we would rather let a guilty person go free (allowing a higher probability of a Type II error, typically 10% to 20%) than find an innocent person guilty (a tighter Type I error threshold of 5%) is an important concept.
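The 5% versus 10-20% asymmetry mentioned above corresponds to the standard alpha/power framing: power is one minus the Type II error rate. A minimal sketch (hypothetical, standard-library Python, using the usual normal-approximation power formula for a one-sample z-test) shows how sample size moves the Type II error from ~20% toward ~10% at a fixed alpha of 0.05:

```python
from statistics import NormalDist

Z = NormalDist()

def power_one_sample_z(delta, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test
    when the true standardized effect size is `delta` and
    the sample size is `n` (normal approximation)."""
    z_crit = Z.inv_cdf(1 - alpha / 2)
    shift = delta * n ** 0.5
    # Probability of rejecting H0 when the effect is real
    return (1 - Z.cdf(z_crit - shift)) + Z.cdf(-z_crit - shift)

# With a standardized effect of 0.5, n = 32 gives ~80% power
# (Type II error ~20%), and n = 44 gives ~90% power (~10%).
for n in (32, 44):
    print(n, round(power_one_sample_z(0.5, n), 3))
```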
> 
> There are innocent people in jail and, in the U.S., on "death row". Sometimes that can be due to the nature of the evidence presented, or evidence that was manipulated, a poor defense attorney, the past lack of technology such as DNA analysis, or even prosecutorial overreach.
> 
> There are guilty people who have gone free, due to lack of evidence or legal processes that may have, in fact, prevented certain evidence from being presented.
> 
> The members of the jury are also a critical part and can be manipulated by a variety of factors.
> 
> So, what is "truth"?
> 
> It is also critical to note that there are different thresholds/standards of evidence in finding guilt in criminal trials as compared to civil trials. The standard is much higher in the former ("beyond a reasonable doubt") than it is in the latter ("the preponderance of evidence"), because of the consequences of making a Type I error.
> 
> Those are all appropriate parallels to clinical research and also why, in clinical research, there are different levels of evidence associated with different types of trial designs, RCTs being at the top of the pyramid.
> 
> Poor study designs, sample sizes that are too small, unmeasured confounders, narrow inclusion/exclusion criteria and the like all affect the reliability, reproducibility and generalizability of research findings.
> 
> BTW, the process described by Paul from Mindless Statistics in a prior post is frankly absurd:
> 
> 1. You don't set up a formal study design in the absence of an alternative hypothesis. 
> 
> 2. You don't just blindly use an alpha of 0.05, because that threshold may or may not be apropos. 
> 
> 3. You don't publish p values as <0.05 or "N.S.", because you don't provide the reader with enough information and you are dichotomizing a continuous variable. You publish p values to meaningful precision (e.g., 3 or 4 decimal places). You might use <0.001 or <0.0001, because below those thresholds the number is largely meaningless relative to other information, like effect sizes and their precision. Not to mention, you should not be looking at p values in isolation either.
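The reporting convention in point 3 can be captured in a few lines (a hypothetical helper, not from the original post): report the p-value to a fixed precision, and only fall back to a "<" floor below the threshold where further digits stop being informative.

```python
def format_p(p, floor=0.001, digits=3):
    """Report a p-value to meaningful precision instead of
    dichotomizing it as '<0.05' or 'N.S.'."""
    if p < floor:
        return f"p < {floor}"
    return f"p = {p:.{digits}f}"

for p in (0.2374, 0.0491, 0.00003):
    print(format_p(p))
# p = 0.237
# p = 0.049
# p < 0.001
```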
> 
> 4. You don't just repeat the cycle blindly.
> 
> 
> If you are conducting research in that manner, then you should not be conducting research. The problem is not with NHST, the problem is with those implementing it inappropriately and interpreting it poorly, or in a black and white manner as if it came down from the mountain top.
> 
> The best part of the BASP editorial, despite it being a problematic decision, is that it may serve as a catalyst for the important conversations that we are having in the field. It will be interesting to see what the ASA comes out with later in the year, as per their recent blog post:
> 
> http://community.amstat.org/blogs/ronald-wasserstein/2015/02/26/asa-comment-on-a-journals-ban-on-null-hypothesis-statistical-testing
> 
> 
> 
> Regards,
> 
> Marc
> 
> -- 
> -- 
> To post a new thread to MedStats, send email to MedStats em googlegroups.com .
> MedStats' home page is http://groups.google.com/group/MedStats .
> Rules: http://groups.google.com/group/MedStats/web/medstats-rules
> 
> --- 
> You received this message because you are subscribed to the Google Groups "MedStats" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to medstats+unsubscribe em googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.


More information about the abe mailing list