GLM Fundamentals
This page covers the statistical theory behind the GLM tab. See that page for usage instructions.
Model Formulation
GLM generalizes the normal linear model to the exponential family of distributions, as introduced by Nelder & Wedderburn (1972). A GLM is defined by three components:
- Distribution family: The response variable $y$ follows a distribution in the exponential family
- Linear predictor: $\eta = X\beta$ (a linear combination of explanatory variables)
- Link function: A monotonic function $g$ such that $g(\mu) = \eta$, connecting the linear predictor to the mean $\mu = E[y]$
OLS is a special case of GLM (Gaussian family with identity link). In this case, IRLS converges in a single iteration to the normal equations solution. Because MIDAS estimates $\phi = \sigma^2$ from the data, the Wald statistic follows $t_{n-p}$ exactly, so the Wald test is equivalent to the OLS $t$-test in finite samples. For other family/link combinations, the Wald test is only an asymptotic approximation.
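This equivalence is easy to verify numerically. The following sketch uses statsmodels and simulated data purely for illustration (MIDAS is not involved); a Gaussian-identity GLM reproduces the OLS coefficients and standard errors, so the Wald statistics coincide with the OLS $t$-statistics:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(50, 2)))   # design matrix with intercept
y = X @ [1.0, 2.0, -0.5] + rng.normal(size=50)  # simulated Gaussian response

ols = sm.OLS(y, X).fit()
glm = sm.GLM(y, X, family=sm.families.Gaussian()).fit()

# Coefficients and standard errors agree to machine precision,
# so the GLM Wald statistics equal the OLS t-statistics.
print(np.allclose(ols.params, glm.params))  # True
print(np.allclose(ols.bse, glm.bse))        # True
```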
Exponential Family
A family of distributions is called an exponential family if its density (or mass) function can be written as:

$$f(y;\, \theta, \phi) = \exp\!\left( \frac{y\theta - b(\theta)}{a(\phi)} + c(y, \phi) \right)$$

where $\theta$ is the natural (canonical) parameter, $\phi$ is the dispersion parameter, and $b(\theta)$ is the log-partition function. The mean and variance are derived from $b$:

$$E[y] = \mu = b'(\theta), \qquad \mathrm{Var}(y) = a(\phi)\, b''(\theta)$$

By restricting distributions to this form, GLM expresses the mean-variance relationship uniformly through $b''(\theta)$ and derives a common estimation algorithm (IRLS) applicable across families.

Rewriting $b''(\theta)$ as a function of $\mu$ rather than $\theta$ gives the variance function $V(\mu)$, so $\mathrm{Var}(y) = a(\phi)\, V(\mu)$. For example, Poisson has $b(\theta) = e^\theta$, giving $\mu = b'(\theta) = e^\theta$ and $b''(\theta) = e^\theta$, hence $V(\mu) = \mu$.
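The Poisson derivation can be checked symbolically. The following sketch (using SymPy, purely illustrative) recovers $V(\mu) = \mu$ from the log-partition function:

```python
import sympy as sp

theta = sp.symbols("theta")
mu = sp.symbols("mu", positive=True)

b = sp.exp(theta)                 # Poisson log-partition function b(theta)
mean = sp.diff(b, theta)          # b'(theta) = exp(theta), i.e. mu
var = sp.diff(b, theta, 2)        # b''(theta) = exp(theta)

# Rewrite b''(theta) in terms of mu via theta = log(mu):
V = sp.simplify(var.subs(theta, sp.log(mu)))
print(V)                          # mu, i.e. V(mu) = mu
```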
Exponential family parameters for each distribution family:

| Family | $\theta$ (natural parameter) | $b(\theta)$ | $\mu = b'(\theta)$ | $a(\phi)$ |
|---|---|---|---|---|
| Gaussian | $\mu$ | $\theta^2/2$ | $\theta$ | $\sigma^2$ |
| Binomial | $\log\dfrac{\pi}{1-\pi}$ | $\log(1+e^\theta)$ | $\dfrac{e^\theta}{1+e^\theta}$ | $1/m$ |
| Poisson | $\log\mu$ | $e^\theta$ | $e^\theta$ | $1$ |
| Gamma | $-1/\mu$ | $-\log(-\theta)$ | $-1/\theta$ | $\phi$ |
| Negative Binomial | $\log\dfrac{\alpha\mu}{1+\alpha\mu}$ | $-\dfrac{1}{\alpha}\log(1-e^\theta)$ | $\dfrac{e^\theta}{\alpha(1-e^\theta)}$ | $1$ |
- In the Binomial row, $y$ is the proportion of successes ($y = s/m$, $\mu = \pi$), $s$ is the number of successes, $m$ is the number of trials, and $\pi$ is the success probability. When $m = 1$, it reduces to the Bernoulli distribution
- The Negative Binomial dispersion parameter is displayed as $\theta$ in the MIDAS UI; on this page we use $\alpha$ to avoid confusion with the exponential family natural parameter $\theta$. The Negative Binomial belongs to the exponential family only when $\alpha$ is known. In MIDAS's automatic estimation mode, $\alpha$ is estimated by maximizing the profile likelihood (with $\beta$ profiled out) in an outer loop (see GLM usage); a sketch of this outer loop follows below. The standard errors for $\beta$ reported in this mode are computed from the information matrix with $\alpha$ treated as known, so uncertainty in $\alpha$ is not reflected
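A minimal sketch of such a profile-likelihood outer loop, written against a statsmodels-style API for illustration only (MIDAS's actual implementation may differ):

```python
import numpy as np
import statsmodels.api as sm
from scipy.optimize import minimize_scalar

def estimate_alpha_profile(y, X):
    """Estimate the NB dispersion alpha by profile likelihood (sketch).

    For each candidate alpha, the coefficients beta are profiled out by
    fitting the GLM with alpha held fixed; the resulting maximized
    log-likelihood is then optimized over alpha (log scale for positivity).
    """
    def neg_profile_loglik(log_alpha):
        family = sm.families.NegativeBinomial(alpha=np.exp(log_alpha))
        return -sm.GLM(y, X, family=family).fit().llf

    result = minimize_scalar(neg_profile_loglik,
                             bounds=(-10.0, 5.0), method="bounded")
    return np.exp(result.x)
```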
Link Functions
The link function $g$ is a monotonic function connecting the linear predictor $\eta$ to the expected value $\mu$ of the response via $g(\mu) = \eta$. A link function satisfying $g(\mu) = \theta$ (the natural parameter) is called the canonical link.
| Link Function | Formula | Canonical Link For |
|---|---|---|
| Identity | $g(\mu) = \mu$ | Gaussian |
| Logit | $g(\mu) = \log\dfrac{\mu}{1-\mu}$ | Binomial |
| Log | $g(\mu) = \log\mu$ | Poisson |
| Inverse | $g(\mu) = 1/\mu$ | Gamma |
| Probit | $g(\mu) = \Phi^{-1}(\mu)$ | — |
The canonical link has important properties: since $\theta_i = \eta_i = x_i^\top \beta$, $X^\top y$ becomes a sufficient statistic for $\beta$, and the log-likelihood is concave in $\beta$. When the design matrix has full rank and the MLE exists, the solution is unique and IRLS converges stably. However, complete separation (when a linear combination of predictors perfectly separates the response) leaves the log-likelihood concave and the design full-rank but without a finite maximum, so the MLE does not exist. Binary logistic regression is the canonical example, and similar cases arise in other discrete-response models such as multinomial logit (see the convergence issues section in GLM usage).
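A tiny numerical illustration of complete separation, on hypothetical data for a no-intercept binary logistic model: the log-likelihood keeps rising toward 0 as the slope grows, so no finite maximizer exists and IRLS-style fitting pushes the coefficient toward infinity.

```python
import numpy as np

# Completely separated data: y = 0 whenever x < 0, y = 1 whenever x > 0.
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])

def loglik(beta):
    """Numerically stable Bernoulli log-likelihood: sum of y*eta - log(1+e^eta)."""
    eta = beta * x
    return np.sum(y * eta - np.logaddexp(0.0, eta))

for beta in [1.0, 10.0, 100.0]:
    print(beta, loglik(beta))  # increases monotonically toward 0: no finite MLE
```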
Non-canonical links forfeit these properties but may be chosen for easier coefficient interpretation. For example, the canonical link for Gamma is Inverse ($g(\mu) = 1/\mu$), which puts coefficients on a scale that is hard to interpret. The Log link ($e^{\beta_j}$ as a multiplicative effect on the mean) is more commonly used in practice.
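For instance (hypothetical coefficient, illustrative only), under a log link a fitted coefficient translates directly into a relative change in the mean:

```python
import numpy as np

beta_dose = 0.12          # hypothetical log-link coefficient for a "dose" predictor
print(np.exp(beta_dose))  # ~1.127: each unit of dose scales the mean by about +12.7%
```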
Parameter Estimation (IRLS)
GLM parameters are estimated by maximum likelihood. Under regularity conditions, the estimator is consistent, asymptotically normal, and asymptotically efficient. In general no closed-form solution exists, so IRLS (Iteratively Reweighted Least Squares) is used (Gaussian + Identity is an exception: $V(\mu) = 1$ and $g'(\mu) = 1$ make $w_i = 1$ and $z_i = y_i$ in the formulas below, so the weights do not depend on the data and IRLS reaches the OLS solution in a single iteration from any starting point).
At each iteration, working weights $W = \mathrm{diag}(w_1, \dots, w_n)$ and an adjusted dependent variable $z$ are computed, then the weighted least squares problem:

$$\beta^{(t+1)} = (X^\top W X)^{-1} X^\top W z$$

is solved to update $\beta$. The entries of $W$ and $z$ are computed from the current estimates $\mu_i = g^{-1}(\eta_i)$ and the link function as:

$$w_i = \frac{1}{V(\mu_i)\, g'(\mu_i)^2}, \qquad z_i = \eta_i + (y_i - \mu_i)\, g'(\mu_i)$$

where $V(\mu)$ is the variance function and $g'(\mu)$ is the derivative of the link function. See Nelder & Wedderburn (1972) for the original formulation of IRLS for GLMs. Iteration stops when the maximum absolute change in coefficients falls below the convergence threshold.
With the canonical link, the concavity of the log-likelihood ensures stable convergence. Non-canonical links may lead to slower convergence or convergence failure.
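The following is a minimal sketch of this update for Poisson regression with the canonical log link, assuming NumPy; it is illustrative only, not MIDAS's implementation:

```python
import numpy as np

def irls_poisson(X, y, tol=1e-8, max_iter=100):
    """IRLS for Poisson regression with the canonical log link (sketch).

    With g(mu) = log(mu): V(mu) = mu and g'(mu) = 1/mu, so
    w_i = 1 / (V(mu_i) * g'(mu_i)**2) = mu_i and
    z_i = eta_i + (y_i - mu_i) * g'(mu_i) = eta_i + (y_i - mu_i) / mu_i.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        eta = X @ beta
        mu = np.exp(eta)
        w = mu                          # working weights
        z = eta + (y - mu) / mu         # adjusted dependent variable
        XtW = X.T * w                   # X^T W as a (p, n) array
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new             # converged on coefficient change
        beta = beta_new
    return beta
```

The stopping rule here mirrors the criterion described above: iteration ends once the maximum absolute change in the coefficients falls below the threshold.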
Variance Functions and Overdispersion
As described in the Exponential Family section, the variance function $V(\mu)$ is the second derivative $b''(\theta)$ of the log-partition function rewritten in terms of $\mu$. Through the relationship $\mathrm{Var}(y) = a(\phi)\, V(\mu)$, it defines the mean-variance relationship for each family.
| Family | $V(\mu)$ | $\phi$ | $a(\phi)$ | $\mathrm{Var}(y) = a(\phi)\,V(\mu)$ |
|---|---|---|---|---|
| Gaussian | $1$ | $\phi$ (= $\sigma^2$), estimated | $\phi$ | $\sigma^2$ |
| Binomial | $\mu(1-\mu)$ | $1$, fixed | $1/m$ | $\mu(1-\mu)/m$ |
| Poisson | $\mu$ | $1$, fixed | $1$ | $\mu$ |
| Gamma | $\mu^2$ | $\phi$, estimated | $\phi$ | $\phi\mu^2$ |
| Negative Binomial | $\mu + \alpha\mu^2$ | $1$, fixed | $1$ | $\mu + \alpha\mu^2$ |
Poisson and Binomial assume a dispersion parameter $\phi = 1$. When the actual data variance exceeds this assumption, the condition is called overdispersion. Overdispersion leads to underestimated standard errors and confidence intervals that are too narrow. To diagnose overdispersion, check the Pearson statistic divided by its residual degrees of freedom, $\hat\phi = X^2 / (n - p)$. Under the assumption $\phi = 1$, $\hat\phi$ should be close to 1, so a value far from 1 suggests overdispersion. Note that $\hat\phi$ fluctuates more around 1 when $n - p$ is small. The Deviance Goodness-of-Fit chart in the GLM tab is also available for diagnosis.
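A minimal sketch of this diagnostic for a Poisson fit (where $V(\mu) = \mu$), illustrative only:

```python
import numpy as np

def pearson_dispersion(y, mu, n_params):
    """Pearson X^2 / (n - p) for a Poisson fit, where V(mu) = mu.

    y        : observed counts
    mu       : fitted means from the GLM
    n_params : number of estimated coefficients p
    """
    X2 = np.sum((y - mu) ** 2 / mu)   # Pearson chi-square statistic
    return X2 / (len(y) - n_params)

# A value well above 1 suggests overdispersion and motivates
# switching to a Negative Binomial model.
```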
When overdispersion is detected with Poisson data, switching to Negative Binomial adds an $\alpha\mu^2$ term to the variance ($\mathrm{Var}(y) = \mu + \alpha\mu^2$), explicitly modeling the extra dispersion. When $\alpha$ is estimated, overdispersion is already captured by the variance function, so $\phi = 1$. When $\alpha$ is fixed, $\phi$ is estimated as the dispersion parameter.

However, for binary data with $m = 1$ (logistic regression), each observation follows a Bernoulli distribution, and once the mean is fixed the marginal variance is determined as well: $\mathrm{Var}(y) = \mu(1-\mu)$. There is no degree of freedom in the per-observation variance, so there is simply nothing to compare against to say "the data variance exceeds the theoretical variance." This is why the Pearson and deviance statistics cannot detect overdispersion at the individual level. This does not mean overdispersion is absent, only that it cannot be detected from the same data; extra dispersion arising from clusters or repeated measurements can still exist and must be handled separately (see the glossary). Classical overdispersion diagnostics and remedies are meaningful only for grouped Binomial data with $m > 1$.
For grouped Binomial overdispersion, MIDAS does not currently support quasi-binomial or Beta-Binomial alternatives. If the extra dispersion arises from cluster structure, introducing random effects via GLMM is an option. When overdispersion is suspected, check the estimated dispersion parameter and consider that standard errors and confidence intervals may be underestimated.
Prediction Interval Methods
Mathematical background for prediction intervals computed by the GLM prediction feature.
In the formulas below, $\hat\phi$ denotes the estimated dispersion parameter. For Gaussian, this is the residual deviance divided by $n - p$ (identically equal to the Pearson estimate for Gaussian). For Gamma, this is the Pearson statistic divided by $n - p$ (the deviance-based estimator is not consistent for Gamma, so Pearson is used). For Poisson/Binomial, $\hat\phi = 1$. $h_0$ is the leverage of the prediction point, measuring how far the new observation is from the center of the training data in predictor space.
A plug-in method treats the estimated parameters as if they were the true values and computes the interval from those values. Unlike a confidence interval, it does not account for parameter estimation uncertainty.
Prediction interval computation depends on the family:
- Gaussian with identity link: The analytical formula $\hat\mu_0 \pm t_{n-p}\,\sqrt{\hat\phi\,(1 + h_0)}$ accounts for both the variance of a new observation ($\hat\phi$) and the estimation uncertainty of the mean ($\hat\phi\, h_0$)
- Gaussian with non-identity link: A plug-in method is used: $\hat\mu_0 \pm t_{n-p}\sqrt{\hat\phi}$, where $t_{n-p}$ is the $t$-distribution quantile at the selected confidence level. The non-linear link transformation prevents exact incorporation of estimation uncertainty on the $\mu$ scale in closed form. Alternatives such as (a) building the interval on the link scale as $\hat\eta_0 \pm t_{n-p}\,\mathrm{SE}(\hat\eta_0)$ and back-transforming via $g^{-1}$, or (b) a first-order delta-method approximation $\mathrm{Var}(\hat\mu_0) \approx \{(g^{-1})'(\hat\eta_0)\}^2\,\mathrm{Var}(\hat\eta_0)$, both weaken coverage guarantees under non-linear links or in small samples, so MIDAS does not use them. As a result, prediction intervals for this combination are a simplified form that does not reflect estimation uncertainty, meaning predictions at the center of the data and at extrapolation points receive the same interval width
- Poisson, Binomial, Gamma, Negative Binomial: A plug-in quantile method is used. The estimated distribution parameters are treated as true values, and distribution quantiles are computed directly:
- Poisson: quantiles of Poisson($\hat\mu_0$)
- Binomial: quantiles of Binomial($m_0$, $\hat\pi_0$), where $m_0$ is the trial count specified at the prediction point
- Gamma: quantiles of the Gamma distribution with mean $\hat\mu_0$ and shape $1/\hat\phi$
- Negative Binomial: quantiles of the Negative Binomial distribution with mean $\hat\mu_0$ and $\alpha$ ($\hat\alpha$ in automatic estimation mode, the specified value in fixed mode)
For discrete distributions (Poisson, Binomial, Negative Binomial), quantiles are rounded conservatively: each bound is the smallest integer $q$ satisfying $F(q) \geq p$ for the target quantile level $p$ (no randomized intervals). The actual coverage probability therefore meets or exceeds the nominal level. For individual Binomial data ($m_0 = 1$), the only quantile candidates are $\{0, 1\}$, limiting the informativeness of the interval. A sketch of this quantile rule appears below.
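As an illustration using SciPy, whose discrete `ppf` already returns the smallest integer $q$ with $F(q) \geq p$; the parameter values here are hypothetical:

```python
from scipy import stats

mu_hat = 4.2                  # hypothetical plug-in Poisson mean at the prediction point
level = 0.95
lo = stats.poisson.ppf((1 - level) / 2, mu_hat)
hi = stats.poisson.ppf(1 - (1 - level) / 2, mu_hat)
print(lo, hi)                 # 1.0 9.0; actual coverage >= 95% by construction

# Continuous families need no rounding; e.g. a Gamma interval with
# mean mu_hat and shape 1/phi_hat (so scale = mu_hat * phi_hat):
phi_hat = 0.5
g_lo = stats.gamma.ppf(0.025, a=1 / phi_hat, scale=mu_hat * phi_hat)
g_hi = stats.gamma.ppf(0.975, a=1 / phi_hat, scale=mu_hat * phi_hat)
```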
The plug-in methods do not account for parameter estimation uncertainty, so the actual coverage probability may fall below the stated confidence level, particularly in small samples or for predictions far from the observed data range. When coverage matters in small samples, consider increasing the sample size or using a more computationally expensive method such as bootstrapping (not currently supported in MIDAS).
See also
- Generalized Linear Model (GLM) - How to run GLM analysis and interpret results
- OLS Fundamentals - Mathematical background of OLS, a special case of GLM
- GLMM Fundamentals - Generalized linear mixed model theory with random effects
- Glossary - Statistical term definitions
References
- Nelder, J. A., & Wedderburn, R. W. M. (1972). Generalized linear models. Journal of the Royal Statistical Society: Series A, 135(3), 370-384. https://www.jstor.org/stable/2344614