OLS Fundamentals
This page covers the statistical theory behind the Linear Regression tab. See that page for usage instructions.
Model Formulation
The linear regression model is formulated as:

$$y = X\beta + \varepsilon$$

where $y$ is the response vector, $X$ is the design matrix (predictors and intercept), $\beta$ is the coefficient vector, and $\varepsilon$ is the error term.
The OLS estimator minimizes the residual sum of squares and is obtained from the normal equations:

$$\hat{\beta} = (X^\top X)^{-1} X^\top y$$
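As an illustration, here is a minimal NumPy sketch (synthetic data; variable names are hypothetical, not part of this application) that fits OLS both by solving the normal equations directly and via NumPy's least-squares routine:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Design matrix: intercept column plus two random predictors
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# Solve the normal equations (X'X) beta = X'y directly
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically preferred in practice: least squares via np.linalg.lstsq
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Both approaches give the same estimate; the least-squares route avoids explicitly forming $X^\top X$, which matters when predictors are nearly collinear.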
The properties of this estimator depend on the assumptions placed on $\varepsilon$.
Assuming $E[\varepsilon \mid X] = 0$: The OLS estimator is consistent. Homoscedasticity and uncorrelated errors are not required.
Further assuming $\operatorname{Var}(\varepsilon_i) = \sigma^2$ and $\operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0$ for $i \neq j$ (homoscedastic and uncorrelated): By the Gauss-Markov theorem, the OLS estimator is the Best Linear Unbiased Estimator (BLUE).
Further assuming $\varepsilon \sim N(0, \sigma^2 I)$ (normality): Exact finite-sample distributions for $t$-tests and $F$-tests are obtained.
Without normality, the central limit theorem ensures the test statistics are asymptotically normal in large samples. The required sample size depends on the true error distribution, so no universal threshold applies.
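The $t$-statistics above can be computed from the estimated coefficient covariance $\hat{\sigma}^2 (X^\top X)^{-1}$. A hedged sketch with synthetic data (names are illustrative, not this application's API):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one predictor
y = X @ np.array([0.5, 2.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
p = X.shape[1]
sigma2_hat = resid @ resid / (n - p)            # unbiased estimate of error variance
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)  # Var(beta_hat) under homoscedasticity
se = np.sqrt(np.diag(cov_beta))                 # standard errors
t_stats = beta_hat / se                         # t-statistics for H0: beta_j = 0
```

Under normality each $t$-statistic follows a $t_{n-p}$ distribution exactly; without it, the same statistic is only asymptotically normal.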
OLS is a special case of GLM (Gaussian family with identity link).
Standardized Residuals and Diagnostic Statistics
Residual diagnostics in OLS use the internally studentized residual $r_i$:

$$r_i = \frac{e_i}{\hat{\sigma}\sqrt{1 - h_{ii}}}$$

where $e_i$ is the residual, $\hat{\sigma}$ is the error standard deviation estimated from all observations, and $h_{ii}$ is the diagonal element of the hat matrix $H = X(X^\top X)^{-1}X^\top$ (leverage). $p$ is the number of columns in the design matrix $X$, including the intercept. Leverage measures how far an observation's predictor values are from the others. Since $\sum_i h_{ii} = p$, the average leverage is $p/n$, and $2p/n$ is the conventional threshold for high leverage.
Cook's Distance combines residual magnitude and leverage into a single influence measure:

$$D_i = \frac{r_i^2}{p} \cdot \frac{h_{ii}}{1 - h_{ii}}$$
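The diagnostics above can be sketched in a few lines of NumPy. This is an illustrative example on synthetic data with one planted outlier, not this application's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 1.0, 1.0]) + rng.normal(size=n)
y[0] += 8.0  # plant an outlier at observation 0

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta_hat                      # raw residuals
H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
h = np.diag(H)                            # leverages; trace(H) equals p
sigma_hat = np.sqrt(e @ e / (n - p))      # error SD from all observations
r = e / (sigma_hat * np.sqrt(1 - h))      # internally studentized residuals
cooks_d = (r**2 / p) * (h / (1 - h))      # Cook's distance

high_leverage = h > 2 * p / n             # conventional leverage threshold
```

The planted outlier shows up as a large studentized residual and an elevated Cook's distance, which is exactly what these diagnostics are designed to flag.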
Multicollinearity and VIF
When predictors are highly correlated, $X^\top X$ approaches singularity and coefficient estimates become unstable.
VIF (Variance Inflation Factor) $= 1/(1 - R_j^2)$ is computed from $R_j^2$, the R-squared obtained by regressing $x_j$ on all other predictors. A high $R_j^2$ means most of the variation in $x_j$ is already explained by other variables, leaving little unique information. VIF tells you how many times the variance of $\hat{\beta}_j$ is inflated as a result. For example, VIF = 5 means the standard error of $\hat{\beta}_j$ is $\sqrt{5} \approx 2.2$ times wider than it would be with uncorrelated predictors. $\hat{\beta}_j$ itself remains unbiased, but the estimate becomes less precise and the confidence interval widens.
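The definition translates directly into code. A minimal sketch (synthetic predictors; the `vif` helper is hypothetical, not part of this application) that computes $\mathrm{VIF}_j$ by regressing column $j$ on the remaining columns:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)  # nearly collinear with x1
x3 = rng.normal(size=n)                   # independent predictor
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """VIF_j = 1 / (1 - R_j^2), from regressing column j on the others."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])   # add intercept
    coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ coef
    r2 = 1 - resid @ resid / ((X[:, j] - X[:, j].mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)
```

Here `vif(X, 0)` is large because `x1` is almost fully explained by `x2`, while `vif(X, 2)` stays near 1 because `x3` carries independent information.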
See also
- Linear Regression - How to run OLS regression and interpret results
- GLM Fundamentals - Generalized linear model theory, which includes OLS as a special case
- Glossary - Statistical term definitions