The Power Analysis module implements statistical power analysis, sample size estimation, and advanced techniques for confidence interval estimation. The main goal of the first two techniques is to let you decide, while designing an experiment, (a) how large a sample is needed to support statistical judgments that are accurate and reliable, and (b) how likely your statistical test is to detect effects of a given size in a particular situation. The third technique is useful both in pursuing objectives (a) and (b) and in evaluating the size of experimental effects in practice.
Performing power analysis and sample size estimation is an important aspect of experimental design because, without these calculations, the sample size may be too small or too large. If the sample size is too small, the experiment will lack the precision to provide reliable answers to the questions under investigation. If the sample size is too large, time and resources will be wasted, often for minimal gain.
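As a sketch of the kind of calculation involved (not the module's own code), the power of a two-sided, two-sample t-test can be computed from the noncentral t distribution, and the smallest per-group sample size reaching a target power found by simple search. The effect size d = 0.5 and target power of 80% below are illustrative values:

```python
from scipy import stats

def t2_power(d, n, alpha=0.05):
    """Power of a two-sided, two-sample t-test with n subjects per group,
    standardized effect size d (Cohen's d), and Type I error rate alpha."""
    df = 2 * n - 2
    nc = d * (n / 2) ** 0.5                  # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
    # P(reject H0) when the test statistic follows the noncentral t
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

# Smallest per-group n achieving 80% power for a medium effect (d = 0.5)
n = 2
while t2_power(0.5, n) < 0.80:
    n += 1
print(n, round(t2_power(0.5, n), 3))
```

Running the search with a larger assumed effect size shows how quickly the required sample size drops, which is exactly the trade-off the module lets you explore.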
Suppose you are planning a 1-way ANOVA to study the effect of a drug. While planning the study, you find that a similar study has been conducted previously. That study had 4 groups, with N = 50 subjects per group, and obtained an F statistic of 15.4. From this information, you can (a) gauge the population effect size with an exact confidence interval, and (b) use this estimate to set a lower bound on the appropriate sample size for your study.
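A minimal sketch of step (a), using scipy rather than the module itself: invert the noncentral F distribution to find the noncentrality parameters that place the observed F = 15.4 at the 97.5th and 2.5th percentiles, then convert the resulting interval to Cohen's f via the standard one-way ANOVA relation λ = N·f²:

```python
from scipy import stats
from scipy.optimize import brentq

# Prior study: k = 4 groups, n = 50 subjects per group, observed F = 15.4
k, n, F = 4, 50, 15.4
df1, df2, N = k - 1, k * (n - 1), k * n

def nc_at_percentile(p):
    """Noncentrality parameter lambda that places the observed F at the
    100*p-th percentile of the noncentral F(df1, df2, lambda) distribution."""
    return brentq(lambda nc: stats.ncf.cdf(F, df1, df2, nc) - p, 0, 1000)

# Exact 95% CI for lambda: observed F at the 97.5th (lower bound)
# and 2.5th (upper bound) percentiles of the noncentral F
lam_lo, lam_hi = nc_at_percentile(0.975), nc_at_percentile(0.025)

# Convert to Cohen's f using lambda = N * f**2
f_lo, f_hi = (lam_lo / N) ** 0.5, (lam_hi / N) ** 0.5
print(round(f_lo, 3), round(f_hi, 3))
```

The lower endpoint f_lo is the conservative effect size referred to in (b): planning the new study to detect f_lo with adequate power yields a defensible lower bound on its sample size.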
Other features available with this module:

calculates power as a function of sample size, effect size, and Type I error rate for the tests listed below:
 1-sample t-test
 2-sample independent-sample t-test
 2-sample dependent-sample t-test
 planned contrasts
 1-way ANOVA (fixed and random effects)
 2-way ANOVA
 Chi-square test on a single variance
 F-test on 2 variances
 Z-test (or chi-square test) on a single proportion
 Z-test on 2 independent proportions
 McNemar's test on 2 dependent proportions
 F-test of significance in multiple regression
 t-test for significance of a single correlation
 Z-test for comparing 2 independent correlations
 Log-rank test in survival analysis
 Test of equal exponential survival, with accrual period
 Test of equal exponential survival, with accrual period and dropouts
 Chi-square test of significance in structural equation modeling
 Tests of "close fit" in structural equation modeling confirmatory factor analysis

calculates probability distributions that are of special value in performing power and sample size calculations
 these routines, which include the noncentral t, noncentral F, noncentral chi-square, binomial, Pearson correlation, and the exact distribution of the squared multiple correlation coefficient, are characterized by their ability to solve for an unknown parameter and to handle "non-null" cases
 the noncentral distribution routines can also calculate the noncentrality parameter that places a given observation at a given percentage point of the noncentral distribution; this calculation is essential to the technique of "noncentrality interval estimation"
For additional information on noncentrality interval estimation see Steiger and Fouladi (1997).
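To illustrate how the noncentral chi-square routines support the structural equation modeling tests listed above, here is a sketch of a power calculation for the test of close fit in the style of MacCallum, Browne, and Sugawara, under which the fit statistic is noncentral chi-square with λ = (N − 1)·df·ε² for RMSEA ε. The model degrees of freedom, sample size, and RMSEA values below are hypothetical:

```python
from scipy import stats

# Hypothetical scenario: a model with df = 40 degrees of freedom, N = 200 cases
df, N, alpha = 40, 200, 0.05
rmsea0, rmsea_a = 0.05, 0.10   # "close fit" null RMSEA vs. assumed true RMSEA

# Noncentrality parameters under the null and alternative RMSEA values
lam0 = (N - 1) * df * rmsea0 ** 2
lam_a = (N - 1) * df * rmsea_a ** 2

# Critical value under close fit, then power against the alternative
crit = stats.ncx2.ppf(1 - alpha, df, lam0)   # reject close fit above this
power = stats.ncx2.sf(crit, df, lam_a)       # P(reject close fit | true RMSEA)
print(round(power, 3))
```

The same pattern, a critical value from one noncentral distribution and a tail probability from another, underlies most of the power calculations the module performs for "non-null" cases.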