Statistical power is the probability that a statistical test correctly rejects a false null hypothesis (H0) when a specific alternative hypothesis (H1) is true. H0, the null hypothesis, states that there is no effect or no difference; H1, the alternative hypothesis, states that there is a real effect or difference. Alpha (α) is the probability of a Type I error (a false positive): the risk of rejecting H0 when it is actually true. You set this value before the experiment, commonly at 0.05. Beta (β) is the probability of a Type II error (a false negative): the risk of failing to reject H0 when it is actually false.
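A minimal simulation sketch of these two error rates, assuming a two-sample t-test with NumPy and SciPy; the group size of 30 and the 0.5-standard-deviation effect are illustrative choices, not values from the text:

```python
# Monte Carlo estimate of alpha (Type I error rate) and beta (Type II error
# rate) for a two-sample t-test. Group size and effect size are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sims, alpha = 30, 10_000, 0.05

# Type I error: both groups share the same distribution (H0 is true),
# so every rejection is a false positive.
false_pos = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
    for _ in range(n_sims)
)

# Type II error: the groups differ by 0.5 SD (H1 is true),
# so every failure to reject is a false negative.
false_neg = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue >= alpha
    for _ in range(n_sims)
)

print(f"estimated alpha: {false_pos / n_sims:.3f}")  # should land near 0.05
print(f"estimated beta:  {false_neg / n_sims:.3f}")
```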
Power is calculated as 1 − β, so increasing power means decreasing the probability of making a Type II error.
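As a sketch of that relationship, statsmodels' TTestIndPower can compute power analytically for a two-sample t-test; the effect size and group size below are the same illustrative values used in the simulation above:

```python
# Analytical power for a two-sample t-test: power = 1 - beta.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"power = 1 - beta = {power:.3f}")
```

This closed-form value should agree closely with 1 minus the beta estimated by the simulation.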
Several factors can be adjusted to increase the power of a statistical test; each is illustrated numerically in the sketch after this list:
Effect Size: This is the magnitude of the difference you are trying to detect. A larger effect size is easier to detect, thus increasing power.
Sample Size: The number of observations in a study. A larger sample size provides more information about the population, reducing the margin of error and increasing the power to detect a true effect.
Variation: Refers to the spread or standard deviation of the data within the population. Less variation makes it easier to distinguish a real effect from random noise, thereby increasing power.
Alpha (α): Increasing the alpha level (e.g., from 0.05 to 0.10) also increases power, but at the cost of a higher risk of a Type I error. This trade-off is often undesirable.
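A hedged sketch of these trade-offs, again using statsmodels' TTestIndPower; each call varies one factor while holding the illustrative baseline (d = 0.5, 30 per group, α = 0.05) fixed:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
baseline = dict(effect_size=0.5, nobs1=30, alpha=0.05)

print(f"baseline power:    {analysis.power(**baseline):.3f}")

# Larger effect size -> higher power. Because effect size here is Cohen's d
# (raw difference / standard deviation), less variation also raises d and
# therefore power, even when the raw difference stays the same.
print(f"effect size 0.8:   {analysis.power(**{**baseline, 'effect_size': 0.8}):.3f}")

# Larger sample size -> higher power.
print(f"n = 100 per group: {analysis.power(**{**baseline, 'nobs1': 100}):.3f}")

# Looser alpha -> higher power, at the cost of more Type I errors.
print(f"alpha = 0.10:      {analysis.power(**{**baseline, 'alpha': 0.10}):.3f}")

# Solving in the other direction: the per-group sample size needed for 80%
# power at d = 0.5, a commonly cited figure of roughly 64 per group.
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.1f}")
```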
