
Assessing Equivalence: An Alternative to the Use of Difference Tests for Measuring Disparities in Vaccination Coverage

Abstract

Eliminating health disparities in vaccination coverage among various groups is a cornerstone of public health policy. However, the statistical tests traditionally used cannot prove that a state of no difference between groups exists. Instead of asking, "Has a disparity (or difference) in immunization coverage among population groups been eliminated?," one can ask, "Has practical equivalence been achieved?" A method called equivalence testing can show that the difference between groups is smaller than a tolerably small amount. This paper demonstrates the method and introduces public health considerations that have an impact on defining tolerable levels of difference.

Keywords: epidemiologic methods; ethnic groups; hypothesis testing; immunization; statistics; vaccination; vaccines

Abbreviations: DTP, diphtheria and tetanus toxoids and pertussis; NIS, National Immunization Survey; TOST, two one-sided test.
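To make the TOST idea concrete, here is a minimal sketch of an equivalence test for the difference between two coverage proportions, using a normal approximation. The function name, the tolerance margin delta, and the data are invented for illustration; this is not the paper's exact procedure.

```python
import math
from scipy.stats import norm

def tost_two_proportions(x1, n1, x2, n2, delta, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of two proportions.

    Concludes practical equivalence at level alpha if the difference
    p1 - p2 can be shown to lie inside (-delta, +delta). Normal
    approximation; data and margin below are hypothetical.
    """
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    # Test 1: reject H0a: diff <= -delta in favor of diff > -delta
    p_lower = 1 - norm.cdf((diff + delta) / se)
    # Test 2: reject H0b: diff >= +delta in favor of diff < +delta
    p_upper = norm.cdf((diff - delta) / se)
    # Equivalence is shown only if BOTH one-sided tests reject
    p_tost = max(p_lower, p_upper)
    return diff, p_tost, p_tost < alpha

# Hypothetical coverage data: 800/1000 vs. 790/1000, tolerance of 5 points
diff, p, equivalent = tost_two_proportions(800, 1000, 790, 1000, delta=0.05)
print(f"difference = {diff:.3f}, TOST p = {p:.4f}, equivalent: {equivalent}")
```

Note that a conventional difference test can only fail to reject on such data, which, as the abstract stresses, does not prove equivalence; TOST instead makes the "tolerably small amount" an explicit margin.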

Received for publication April 16, 2002; accepted for publication July 19, 2002.

Effect size

In statistics, an effect size is a measure of the strength of a phenomenon[1] (for example, the change in an outcome after an experimental intervention). An effect size calculated from data is a descriptive statistic that conveys the estimated magnitude of a relationship without making any statement about whether the apparent relationship in the data reflects a true relationship in the population. In that way, effect sizes complement inferential statistics such as p-values.

Among other uses, effect size measures play an important role in meta-analysis studies that summarize findings from a specific area of research, and in statistical power analyses. The concept of effect size already appears in everyday language. For example, a weight loss program may boast that it leads to an average weight loss of 30 pounds; in this case, 30 pounds is the claimed effect size. As with other statistical quantities, the population effect size (a parameter, often written θ) is distinguished from the sample effect size computed from the data (a statistic θ̂), the latter being the estimate of the parameter.
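Continuing the weight-loss example, a common standardized effect size is Cohen's d: the difference in group means divided by the pooled standard deviation. The sketch below is illustrative only; the data are invented.

```python
import math
import statistics

def cohens_d(sample_a, sample_b):
    """Cohen's d: difference in means over the pooled standard deviation."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a, mean_b = statistics.fmean(sample_a), statistics.fmean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Invented pounds-lost data for a program group and a control group
program = [32, 28, 35, 30, 27, 33]
control = [5, 8, 2, 6, 4, 7]
print(f"Cohen's d = {cohens_d(program, control):.2f}")
```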

Type I and type II errors

In statistics, a null hypothesis is a statement that the thing being studied produces no effect or makes no difference. An example of a null hypothesis is the statement "This diet has no effect on people's weight." Usually an experimenter frames a null hypothesis with the intent of rejecting it: that is, intending to run an experiment that produces data showing that the thing under study does make a difference.[1] A type I error (or error of the first kind) is the incorrect rejection of a true null hypothesis; it is a false positive. A type I error typically leads one to conclude that a supposed effect or relationship exists when in fact it does not.
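A quick simulation makes the type I error rate tangible: when the null hypothesis is true by construction, a test run at the 5% level should falsely reject about 5% of the time. This is a minimal sketch; the sample sizes, trial count, and seed are arbitrary.

```python
import random
from scipy.stats import ttest_ind

random.seed(42)
trials, false_positives = 2000, 0
for _ in range(trials):
    # Both groups drawn from the same distribution: the null is true
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if ttest_ind(a, b).pvalue < 0.05:  # a rejection here is a type I error
        false_positives += 1
print(f"empirical type I error rate: {false_positives / trials:.3f}")  # ~0.05
```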

A type II error (or error of the second kind) is the failure to reject a false null hypothesis; it is a false negative. All statistical hypothesis tests carry some probability of making type I and type II errors.

Statistical power

The power of a statistical test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false (i.e., the probability of not committing a type II error).

That is, power = 1 − β, where β is the probability of a type II error. Power can equivalently be thought of as the probability of correctly accepting the alternative hypothesis when the alternative hypothesis is true: the ability of a test to detect an effect, if the effect actually exists. Power is in general a function of the possible distributions under the alternative hypothesis, often indexed by a parameter. Power analysis can be used to calculate the minimum sample size required so that one is reasonably likely to detect an effect of a given size. Relatedly, the power function of a test gives the probability of rejecting the null hypothesis as a function of the true parameter value; evaluated at parameter values satisfying the null, it gives the test's size.[1] Statistical power may depend on a number of factors.
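As an illustration of such a power analysis, the sketch below computes the minimum per-group sample size for a two-sided, two-sample z-test using the standard normal-approximation formula. The function name and the chosen effect size, alpha, and power are assumptions made for the example.

```python
import math
from scipy.stats import norm

def min_sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Minimum n per group for a two-sided, two-sample z-test to detect
    a standardized effect of the given size with the given power.
    Standard normal-approximation formula; inputs here are illustrative."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value of the test
    z_beta = norm.ppf(power)           # quantile matching the target power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Detecting a medium effect (d = 0.5) with 80% power at alpha = 0.05
print(min_sample_size_per_group(0.5))  # about 63 per group
```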

Methods for Equivalence and Noninferiority Testing