Shapiro–Wilk test
The Shapiro–Wilk test is a test of normality in frequentist statistics. It was published in 1965 by Samuel Sanford Shapiro and Martin Wilk.^{[1]}
Theory
The Shapiro–Wilk test evaluates the null hypothesis that a sample x_{1}, ..., x_{n} came from a normally distributed population. The test statistic is:
 W = {\left(\sum_{i=1}^n a_i x_{(i)}\right)^2 \over \sum_{i=1}^n (x_i - \overline{x})^2}
where
 x_{(i)} (with parentheses enclosing the subscript index i) is the ith order statistic, i.e., the ith-smallest number in the sample;
 \overline{x} = \left( x_1 + \cdots + x_n \right) / n is the sample mean;
 the constants a_i are given by^{[1]}

 (a_1,\dots,a_n) = {m^{\mathsf{T}} V^{-1} \over (m^{\mathsf{T}} V^{-1} V^{-1} m)^{1/2}}
 where

 m = (m_1,\dots,m_n)^{\mathsf{T}}\,
 and m_1,\ldots,m_n are the expected values of the order statistics of independent and identically distributed random variables sampled from the standard normal distribution, and V is the covariance matrix of those order statistics.
The user may reject the null hypothesis if W is below a predetermined threshold.
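In practice, W rarely needs to be computed by hand from the coefficients a_i. As an illustrative sketch (using SciPy's implementation rather than a manual calculation), `scipy.stats.shapiro` returns both the W statistic and a p-value:

```python
import numpy as np
from scipy import stats

# Draw a sample from a standard normal population (seeded for repeatability).
rng = np.random.default_rng(0)
x = rng.normal(size=50)

# shapiro() returns the W statistic and the corresponding p-value.
w, p = stats.shapiro(x)
print(f"W = {w:.4f}, p-value = {p:.4f}")

# W lies in (0, 1]; values close to 1 indicate the ordered sample agrees
# well with the expected order statistics of a normal distribution.
```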
Interpretation
The null hypothesis of this test is that the population is normally distributed. If the p-value is less than the chosen alpha level, the null hypothesis is rejected and there is evidence that the data tested are not from a normally distributed population; in other words, the data are not normal. Conversely, if the p-value is greater than the chosen alpha level, the null hypothesis that the data came from a normally distributed population cannot be rejected. For example, at an alpha level of 0.05, a data set with a p-value of 0.02 leads to rejection of the null hypothesis that the data are from a normally distributed population.^{[2]} However, since the test's sensitivity grows with sample size,^{[3]} even trivial departures from normality can be flagged as statistically significant in large samples. A Q–Q plot is therefore recommended in addition to the test.
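The decision rule above can be illustrated with a hypothetical comparison of a normal sample against a strongly skewed one (the seed, sample size, and distributions here are arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

alpha = 0.05
rng = np.random.default_rng(42)

samples = {
    "normal": rng.normal(size=200),            # drawn from a normal population
    "exponential": rng.exponential(size=200),  # strongly skewed, non-normal
}

results = {}
for name, sample in samples.items():
    w, p = stats.shapiro(sample)
    results[name] = p
    verdict = "reject H0: not normal" if p < alpha else "cannot reject H0"
    print(f"{name:12s} W={w:.3f} p={p:.2e} -> {verdict}")
```

With a sample of 200 exponential draws, the p-value is far below any conventional alpha level, so the null hypothesis of normality is rejected.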
Power analysis
In a Monte Carlo simulation comparing the Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors, and Anderson–Darling tests, Shapiro–Wilk had the best power for a given significance level, followed closely by Anderson–Darling.^{[4]}
Approximation
Royston proposed an alternative method of calculating the coefficient vector by providing an algorithm for approximating its values, which extended the maximum sample size to 2,000.^{[5]} This technique is used in several software packages including R,^{[6]} Stata,^{[7]}^{[8]} SPSS and SAS.^{[9]}
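Such approximations are what let modern implementations handle samples far larger than the small-n tables of the original paper. As a quick sketch, SciPy's `shapiro` (one of the implementations in this family) accepts a sample of 2,000 points without difficulty:

```python
import numpy as np
from scipy import stats

# A sample of size 2000 -- well beyond the originally tabulated coefficients.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)

w, p = stats.shapiro(x)
print(f"n = {x.size}: W = {w:.4f}, p-value = {p:.4f}")
```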
See also
 Anderson–Darling test
 Cramér–von Mises criterion
 Kolmogorov–Smirnov test
 Normal probability plot
 Ryan–Joiner test
 Watson test
 Lilliefors test
References
 ^ ^{a} ^{b} Shapiro, S. S.; Wilk, M. B. (1965). "An analysis of variance test for normality (complete samples)". Biometrika 52 (3–4): 591–611, p. 593.
 ^ "How do I interpret the Shapiro–Wilk test for normality?". JMP. 2004. Retrieved March 24, 2012.
 ^ Field, Andy (2009). Discovering statistics using SPSS (3rd ed.). Los Angeles [i.e. Thousand Oaks, Calif.]: SAGE Publications. p. 143.
 ^ Razali, Nornadiah; Wah, Yap Bee (2011). "Power comparisons of Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors and Anderson–Darling tests" (PDF). Journal of Statistical Modeling and Analytics 2 (1): 21–33. Retrieved 5 June 2012.
 ^ Royston, Patrick (September 1992). "Approximating the Shapiro–Wilk W test for non-normality". Statistics and Computing 2 (3): 117–119.
 ^ Korkmaz, Selcuk. "Package 'royston'" (PDF). cran.r-project.org. Retrieved 26 February 2014.
 ^ Royston, Patrick. "Shapiro–Wilk and Shapiro–Francia Tests". Stata Technical Bulletin, StataCorp LP 1 (3).
 ^ Shapiro–Wilk and Shapiro–Francia tests for normality
 ^ Park, Hun Myoung (2002–2008). "Univariate Analysis and Normality Test Using SAS, Stata, and SPSS" (PDF). [working paper]. Retrieved 26 February 2014.
External links
 Samuel Sanford Shapiro
 Algorithm AS R94 (Shapiro Wilk) FORTRAN code
 Exploratory analysis using the Shapiro–Wilk normality test in R
