Answers cited below are from Zhihu.
Why is the statistical significance level conventionally set at 0.05?
1.
First, what is the P value?
The P value is the probability, assuming the null hypothesis is true, of the test statistic computed from the sample observations falling in the rejection region (i.e., of seeing a result at least as extreme as the one observed). A small P value indicates that such an outcome is very unlikely; if it nonetheless occurs, then by the small-probability principle we have grounds to reject the null hypothesis. The smaller the P value, the stronger our grounds for rejecting the null hypothesis.
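As a minimal illustration of this definition (assuming a two-sided z-test; the function names here are my own, not from the answer), the p-value is the tail probability of the null distribution beyond the observed statistic:

```python
import math

def normal_cdf(x):
    # CDF of the standard normal distribution, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sided_p_value(z):
    # Probability, under H0, of a test statistic at least as extreme as z
    return 2.0 * (1.0 - normal_cdf(abs(z)))

# Example: an observed z statistic of 2.5 gives p ≈ 0.0124,
# small enough to reject H0 at the 0.05 level
p = two_sided_p_value(2.5)
```

A smaller p corresponds to a more extreme observed statistic, which is exactly the "stronger grounds for rejection" described above.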
Second, the significance level is usually set at one of two values (0.01 and 0.05):
- If P < 0.01, the result is strong evidence, and we reject the null hypothesis.
- If 0.01 < P < 0.05, the result is weaker evidence, but we can still reject the null hypothesis.
- If P > 0.05, the result cannot reject the null hypothesis.
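The three-way rule above can be sketched as a small helper (a hypothetical function of my own, just encoding the thresholds from the answer):

```python
def interpret_p(p):
    # Map a p-value to the verdicts described above (thresholds 0.01 and 0.05)
    if p < 0.01:
        return "strong evidence: reject H0"
    elif p < 0.05:
        return "weak evidence: reject H0"
    else:
        return "cannot reject H0"
```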
link: https://www.zhihu.com/question/38096459/answer/77078761
The difference between “significant” and “non-significant” is not itself statistically significant.
- Andrew Gelman
------------------------------------
The answer above by @Xu, citing @Phil Zeng, has already explained this very clearly. Let me just add to it:
At a university statistics teacher training course, Professor Jia Junping of Renmin University of China once said: "Why take a significance level of 0.05 (a 95% confidence level)? Because whoever first proposed this convention (it was probably the great Fisher) did it this way, so we all follow suit. In fact, a significance level of 0.1 is enough."
Consider interval estimation as well: when an estimate is already significant at the 0.10 level (a 90% confidence level), we usually regard it as almost certain. Compared with a 95% confidence interval, a 90% interval has a smaller margin of error; that is, we trade away some certainty in the estimate in exchange for greater precision.
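This certainty-versus-precision trade can be checked numerically (a sketch assuming a mean with known standard deviation; `sigma=10` and `n=100` are illustrative values of my own):

```python
import math
from statistics import NormalDist

def margin_of_error(confidence, sigma, n):
    # Half-width of a confidence interval for a mean with known sigma
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    return z * sigma / math.sqrt(n)

m90 = margin_of_error(0.90, sigma=10, n=100)  # uses z ≈ 1.645
m95 = margin_of_error(0.95, sigma=10, n=100)  # uses z ≈ 1.960
# m90 < m95: the 90% interval is narrower (more precise, less certain)
```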
In hypothesis testing, compressing the probability of a Type I error to 0.10, so that under the null the observed sample would occur at most 1 time in 10, is usually sufficient evidence of a small-probability event to overturn the null hypothesis (when writing a paper, this is enough to earn one "*"). Of course, in practice we still need to trade off Type I error against Type II error. If, compared with committing a Type I error (rejecting a true null), we cannot tolerate the consequences of a Type II error (failing to reject a false null), then given the inverse relationship between these two error probabilities, a significance level of 0.1 clearly does more than 0.05 to reduce the chance of a Type II error. Conversely, if you want a lower Type I error rate and set the significance level too small, you will increase the chance of a Type II error.
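The inverse relationship between the two error rates can be made concrete with a textbook one-sided z-test (a sketch under illustrative assumptions: H0: mu = 0 versus a true mean of 0.5, sigma = 1, n = 16; all values are my own, not from the answer):

```python
import math
from statistics import NormalDist

def type_ii_error(alpha, effect, sigma, n):
    # One-sided z-test of H0: mu = 0 against H1: mu = effect > 0.
    # Returns beta = P(fail to reject H0 | true mean = effect).
    z_crit = NormalDist().inv_cdf(1 - alpha)
    shift = effect * math.sqrt(n) / sigma
    return NormalDist().cdf(z_crit - shift)

beta_05 = type_ii_error(0.05, effect=0.5, sigma=1.0, n=16)
beta_10 = type_ii_error(0.10, effect=0.5, sigma=1.0, n=16)
# Raising alpha from 0.05 to 0.10 lowers beta, the Type II error rate
```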
In conclusion, I humbly believe that a default significance level of 0.10 might well be better than 0.05. But in the end we still use 0.05, because habits are just so hard to change.
------------------
Appendix: α = 0.1 is generally sufficient, but the smaller the p-value, the better.
What Does it Mean When p-value < α?
(a) 0.10: we have some evidence that H0 is not true.
(b) 0.05: we have strong evidence that H0 is not true.
(c) 0.01: we have very strong evidence that H0 is not true.
(d) 0.001: we have extremely strong evidence that H0 is not true.
link: https://www.zhihu.com/question/38096459/answer/77104959
Author: Anonymous User
5.
My simple understanding: the P value is the probability of wrongly rejecting the null hypothesis.
1 − 0.95 = 0.05, and 0.95 is approximately the probability that a standard normal variable falls within two standard deviations of its mean. An outcome within two standard deviations is in the normal range, and so is acceptable.
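For the record, the "two standard deviations" figure is an approximation: the probability within exactly two standard deviations is about 0.9545, while 0.95 corresponds to about 1.96 standard deviations. This is easy to verify:

```python
from statistics import NormalDist

def within_k_sd(k):
    # P(|Z| <= k) for a standard normal variable Z
    nd = NormalDist()
    return nd.cdf(k) - nd.cdf(-k)

p2 = within_k_sd(2)       # about 0.9545
p196 = within_k_sd(1.96)  # about 0.9500
```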
In other words, it is like the old saying "shi bu guo san" (nothing should happen more than three times).