GLM Fundamentals

This page covers the statistical theory behind the GLM tab. See that page for usage instructions.

Model Formulation

GLM generalizes the normal linear model to the exponential family of distributions, as introduced by Nelder & Wedderburn (1972). A GLM is defined by three components:

  1. Distribution family: The response variable $Y$ follows a distribution in the exponential family
  2. Linear predictor: $\eta = X\beta$ (a linear combination of explanatory variables)
  3. Link function: A monotonic function $g$ such that $\eta = g(\mu)$, connecting the linear predictor to the mean $\mu = E[Y]$

OLS is a special case of GLM (Gaussian family with identity link). In this case, IRLS converges in a single iteration to the normal equations solution. Because MIDAS estimates $\phi$ from the data, the Wald statistic follows $t_{n-p}$ exactly, so the Wald test is equivalent to the OLS $t$-test in finite samples. For other family/link combinations, the Wald test is only an asymptotic approximation.

Exponential Family

A family of distributions is called an exponential family if its density (or mass) function can be written as:

$$f(y \mid \theta, \phi) = \exp\!\left\{\frac{y\theta - b(\theta)}{a(\phi)} + c(y, \phi)\right\}$$

where $\theta$ is the natural (canonical) parameter, $\phi$ is the dispersion parameter, and $b(\theta)$ is the log-partition function. By restricting distributions to this form, GLM expresses the mean-variance relationship uniformly through $b(\theta)$ and derives a common estimation algorithm (IRLS) applicable across families.

The mean and variance are derived from $b(\theta)$:

  • $E[Y] = b'(\theta) = \mu$
  • $\operatorname{Var}(Y) = b''(\theta) \cdot a(\phi)$

Rewriting $b''(\theta)$ as a function of $\mu$ rather than $\theta$ gives the variance function $V(\mu)$, so $\operatorname{Var}(Y) = V(\mu) \cdot a(\phi)$. For example, Poisson has $b(\theta) = e^\theta$, giving $b'(\theta) = e^\theta = \mu$ and $b''(\theta) = e^\theta = \mu$, hence $V(\mu) = \mu$.
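The Poisson derivation above can be checked numerically. The sketch below (illustrative only, not MIDAS code) differentiates the log-partition $b(\theta) = e^\theta$ by central finite differences and confirms $b'(\theta) = b''(\theta) = \mu$:

```python
import math

# Numerical check of E[Y] = b'(theta) and Var(Y) = b''(theta)*a(phi) for
# Poisson, where b(theta) = exp(theta) and theta = log(mu). Sketch only.
def b(theta):
    return math.exp(theta)

mu = 3.0
theta = math.log(mu)           # canonical parameter: theta = log(mu)
h = 1e-4                       # step for central finite differences

b1 = (b(theta + h) - b(theta - h)) / (2 * h)               # ~ b'(theta) = mu
b2 = (b(theta + h) - 2 * b(theta) + b(theta - h)) / h**2   # ~ b''(theta) = V(mu) = mu

print(round(b1, 4), round(b2, 4))  # both ≈ 3.0
```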

Exponential family parameters for each distribution family:

| Family | $\theta$ (natural parameter) | $a(\phi)$ | $b(\theta)$ | $c(y, \phi)$ |
|---|---|---|---|---|
| Gaussian | $\mu$ | $\phi$ | $\theta^2/2$ | $-\dfrac{y^2}{2\phi} - \dfrac{\log(2\pi\phi)}{2}$ |
| Binomial | $\log\!\bigl(\mu/(1-\mu)\bigr)$ | $1/n$ | $\log(1+e^\theta)$ | $\log\binom{n}{k}$ |
| Poisson | $\log\mu$ | $1$ | $e^\theta$ | $-\log(y!)$ |
| Gamma | $-1/\mu$ | $\phi$ | $-\log(-\theta)$ | $(1/\phi - 1)\log y + (1/\phi)\log(1/\phi) - \log\Gamma(1/\phi)$ |
| Negative Binomial | $\log\!\bigl(\mu/(\mu+r)\bigr)$ | $1$ | $-r\log(1-e^\theta)$ | $\log\Gamma(y+r) - \log\Gamma(r) - \log(y!)$ |
  • In the Binomial row, $y$ is the proportion of successes ($y = k/n$, $0 \le y \le 1$), $k$ is the number of successes, $n$ is the number of trials, and $\mu$ is the success probability. When $n = 1$, it reduces to the Bernoulli distribution
  • The Negative Binomial $r$ is displayed as $\theta$ in the MIDAS UI; on this page we use $r$ to avoid confusion with the exponential family natural parameter. The Negative Binomial belongs to the exponential family only when $r$ is known. In MIDAS's automatic estimation mode, $r$ is estimated by maximizing the profile likelihood $L_p(r) = \max_\beta L(\beta, r)$ (with $\beta$ profiled out) in an outer loop (see GLM usage). The standard errors for $\hat\beta$ reported in this mode are computed from the information matrix with $r = \hat r$ treated as known, so uncertainty in $r$ is not reflected

The link function is a monotonic function $\eta = g(\mu)$ connecting the linear predictor $\eta$ to the expected value $\mu$ of the response. A link function satisfying $g(\mu) = \theta$ (the natural parameter) is called the canonical link.

| Link Function | Formula | Canonical Link For |
|---|---|---|
| Identity | $\eta = \mu$ | Gaussian |
| Logit | $\eta = \log\!\bigl(\mu/(1-\mu)\bigr)$ | Binomial |
| Log | $\eta = \log(\mu)$ | Poisson |
| Inverse | $\eta = 1/\mu$ | Gamma |
| Probit | $\eta = \Phi^{-1}(\mu)$ | — |
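Each link in the table is invertible on its domain, which is what lets predictions on the $\eta$ scale map back to the $\mu$ scale. A small numeric sketch (hypothetical values, not MIDAS code) verifies the round-trip $g^{-1}(g(\mu)) = \mu$:

```python
import math

# Link functions g(mu) paired with their inverses g^{-1}(eta).
links = {
    "identity": (lambda mu: mu,                      lambda eta: eta),
    "logit":    (lambda mu: math.log(mu / (1 - mu)), lambda eta: 1 / (1 + math.exp(-eta))),
    "log":      (lambda mu: math.log(mu),            lambda eta: math.exp(eta)),
    "inverse":  (lambda mu: 1 / mu,                  lambda eta: 1 / eta),
}

mu = 0.25                       # a value valid for every link above
for name, (g, ginv) in links.items():
    eta = g(mu)
    assert abs(ginv(eta) - mu) < 1e-12   # round-trip recovers mu exactly
```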

The canonical link has important properties: since $\eta = \theta$, $X'y$ becomes a sufficient statistic for $\beta$, and the log-likelihood is concave in $\beta$. When the design matrix $X$ has full rank and the MLE exists, the solution is unique and IRLS converges stably. However, complete separation (when a linear combination of predictors perfectly separates the response) leaves the log-likelihood concave and $X$ full-rank but without a finite maximum, so the MLE does not exist. Binary logistic regression is the canonical example, and similar cases arise in other discrete-response models such as multinomial logit (see the convergence issues section in GLM usage).

Non-canonical links forfeit these properties but may be chosen for easier coefficient interpretation. For example, the canonical link for Gamma is Inverse ($\eta = 1/\mu$), which puts coefficients on a $1/\mu$ scale that is hard to interpret. The Log link ($\exp(\beta)$ as a multiplicative effect) is more commonly used in practice.
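The multiplicative interpretation under a log link can be seen directly: with $\eta = \beta_0 + \beta_1 x$, a one-unit increase in $x$ multiplies the mean by $\exp(\beta_1)$. A tiny sketch with made-up coefficient values:

```python
import math

# Log link: mu = exp(b0 + b1*x) = exp(b0) * exp(b1)**x, so a one-unit
# increase in x multiplies the mean by exp(b1). b0, b1 are made up.
b0, b1 = 1.2, 0.3

def mu(x):
    return math.exp(b0 + b1 * x)

ratio = mu(3.0) / mu(2.0)   # equals exp(b1) regardless of where x sits
```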

Parameter Estimation (IRLS)

GLM parameters $\beta$ are estimated by maximum likelihood. Under regularity conditions, the estimator is consistent, asymptotically normal, and asymptotically efficient. In general no closed-form solution exists, so IRLS (Iteratively Reweighted Least Squares) is used. Gaussian + Identity is an exception: $V(\mu) = 1$ and $d\eta/d\mu = 1$ make $W = I$ and $z = y$ in the formulas below, so the weights do not depend on the data and IRLS reaches the OLS solution $\hat\beta = (X'X)^{-1}X'y$ in a single iteration from any starting point.

At each iteration, working weights $W$ and an adjusted dependent variable $z$ are computed, then the weighted least squares problem:

$$\hat\beta^{(t+1)} = (X'W^{(t)}X)^{-1}X'W^{(t)}z^{(t)}$$

is solved to update $\beta$. The entries of $W$ and $z$ are computed from the current $\hat\mu^{(t)}$ and the link function as:

$$W_{ii} = \frac{1}{V(\mu_i)\,(d\eta/d\mu)_i^2}, \qquad z_i = \eta_i + (y_i - \mu_i)\,\Bigl(\frac{d\eta}{d\mu}\Bigr)_i$$

where $V(\mu)$ is the variance function and $d\eta/d\mu$ is the derivative of the link function. See Nelder & Wedderburn (1972) for the original formulation of IRLS for GLMs. Iteration stops when the maximum absolute change in coefficients falls below the convergence threshold.

With the canonical link, the concavity of the log-likelihood ensures stable convergence. Non-canonical links may lead to slower convergence or convergence failure.
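The update above can be made concrete for Poisson regression with the log link: $V(\mu_i) = \mu_i$ and $(d\eta/d\mu)_i = 1/\mu_i$, so $W_{ii} = \mu_i$ and $z_i = \eta_i + (y_i - \mu_i)/\mu_i$. The following is a minimal sketch on made-up data (not MIDAS's implementation), with an intercept and one predictor so the weighted normal equations reduce to a 2×2 solve:

```python
import math

# Poisson regression with log link, intercept + one predictor.
# Made-up data in which y doubles per unit of x, so the true slope is log 2.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 2.0, 4.0, 8.0, 16.0]

mu = [yi + 0.5 for yi in y]          # mu = y + 0.5: a common GLM starting value
eta = [math.log(mi) for mi in mu]
beta = [0.0, 0.0]                    # [intercept, slope]

for _ in range(50):
    w = mu[:]                        # W_ii = 1 / (V(mu)(deta/dmu)^2) = mu_i here
    z = [e + (yi - mi) / mi for e, yi, mi in zip(eta, y, mu)]
    # Solve the 2x2 weighted normal equations (X'WX) beta = X'Wz by hand.
    s0 = sum(w)
    s1 = sum(wi * xi for wi, xi in zip(w, x))
    s2 = sum(wi * xi * xi for wi, xi in zip(w, x))
    t0 = sum(wi * zi for wi, zi in zip(w, z))
    t1 = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
    det = s0 * s2 - s1 * s1
    new = [(s2 * t0 - s1 * t1) / det, (s0 * t1 - s1 * t0) / det]
    converged = max(abs(n - b) for n, b in zip(new, beta)) < 1e-10
    beta = new
    eta = [beta[0] + beta[1] * xi for xi in x]
    mu = [math.exp(e) for e in eta]
    if converged:
        break

print(beta)  # slope ≈ log 2 ≈ 0.693, intercept ≈ 0
```

Because the data lie exactly on $\mu = e^{x \log 2}$, the score equations $X'(y - \mu) = 0$ are satisfied at $\beta = (0, \log 2)$, which the loop recovers.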

Variance Functions and Overdispersion

As described in the Exponential Family section, the variance function $V(\mu) = b''(\theta)$ is the second derivative of the log-partition function rewritten in terms of $\mu$. Through the relationship $\operatorname{Var}(Y) = V(\mu) \cdot a(\phi)$, it defines the mean-variance relationship for each family.

| Family | $b''(\theta)$ | $V(\mu)$ | $a(\phi)$ | $\operatorname{Var}(Y)$ |
|---|---|---|---|---|
| Gaussian | $1$ | $1$ | $\phi$ | $\phi$ (= $\sigma^2$) |
| Binomial | $\dfrac{e^\theta}{(1+e^\theta)^2}$ | $\mu(1-\mu)$ | $1/n$ | $\mu(1-\mu)/n$ |
| Poisson | $e^\theta$ | $\mu$ | $1$ | $\mu$ |
| Gamma | $1/\theta^2$ | $\mu^2$ | $\phi$ | $\mu^2\phi$ |
| Negative Binomial | $\dfrac{re^\theta}{(1-e^\theta)^2}$ | $\mu + \mu^2/r$ | $1$ | $\mu + \mu^2/r$ |

Poisson and Binomial assume a dispersion parameter $\phi = 1$. When the actual variance of the data exceeds this assumption, the condition is called overdispersion. Overdispersion leads to underestimated standard errors and confidence intervals that are too narrow. To diagnose overdispersion, check $\hat\phi = \text{Pearson } \chi^2/(n-p)$. Under the assumption, $\hat\phi$ should be close to 1, so a value well above 1 suggests overdispersion. Note that $\hat\phi$ fluctuates more around 1 when $n - p$ is small. The Deviance Goodness-of-Fit chart in the GLM tab is also available for diagnosis.
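The diagnostic is straightforward to compute by hand. A sketch with hypothetical counts and fitted means (not output from MIDAS), using the Poisson variance function $V(\mu) = \mu$:

```python
# Pearson dispersion estimate for a Poisson fit:
# phi_hat = sum((y_i - mu_i)^2 / V(mu_i)) / (n - p), with V(mu) = mu.
# y and mu below are hypothetical observed counts and fitted means.
y = [0, 5, 1, 9, 2, 12, 0, 7]
mu = [2.0, 3.0, 2.5, 4.0, 3.0, 5.0, 2.0, 4.5]
p = 2                          # number of estimated coefficients

pearson = sum((yi - mi) ** 2 / mi for yi, mi in zip(y, mu))
phi_hat = pearson / (len(y) - p)

print(round(phi_hat, 2))       # well above 1: overdispersion suspected
```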

When overdispersion is detected with Poisson data, switching to Negative Binomial adds a $\mu^2/r$ term to the variance, explicitly modeling the extra dispersion. When $r$ is estimated, overdispersion is already captured by the variance function, so $\phi = 1$. When $r$ is fixed, $\hat\phi = \text{Pearson }\chi^2/(n-p)$ is estimated as the dispersion parameter.

However, for binary data with $n_i = 1$ (logistic regression), each observation follows Bernoulli$(\mu_i)$, and once the mean $\mu_i$ is fixed the marginal variance $\mu_i(1-\mu_i)$ is determined as well. There is no degree of freedom in the per-observation variance, so there is simply nothing to compare against to say "the data variance exceeds the theoretical variance." This is why Pearson $\chi^2$ and deviance cannot detect overdispersion at the individual level. This does not mean overdispersion is absent, only that it cannot be detected from the same data; extra dispersion arising from clusters or repeated measurements can still exist and must be handled separately (see the glossary). Classical overdispersion diagnostics and remedies are meaningful only for grouped Binomial data with $n_i > 1$.

For grouped Binomial overdispersion, MIDAS does not currently support quasi-binomial or Beta-Binomial alternatives. If the extra dispersion arises from cluster structure, introducing random effects via GLMM is an option. When overdispersion is suspected, check the estimated dispersion parameter and consider that standard errors and confidence intervals may be underestimated.

Prediction Interval Methods

Mathematical background for prediction intervals computed by the GLM prediction feature.

In the formulas below, $\hat\phi$ denotes the estimated dispersion parameter. For Gaussian, this is the residual deviance divided by $n - p$ (identically equal to Pearson $\chi^2$ for Gaussian). For Gamma, this is Pearson $\chi^2$ divided by $n - p$ (the deviance-based estimator is not consistent for Gamma, so Pearson $\chi^2/(n-p)$ is used). For Poisson/Binomial, $\hat\phi = 1$. $h_i = x_\text{new}^T (X^T \hat W X)^{-1} x_\text{new}$ is the leverage of the prediction point, measuring how far the new observation is from the center of the training data in predictor space.

A plug-in method treats the estimated parameters as if they were the true values and computes the interval from those values. Unlike a confidence interval, it does not account for parameter estimation uncertainty.

Prediction interval computation depends on the family:

  • Gaussian with identity link: The analytical formula $\hat\mu \pm t_{n-p} \sqrt{\hat\phi(1 + h_i)}$ accounts for both the variance of a new observation ($\hat\phi$) and the estimation uncertainty of the mean ($\hat\phi \cdot h_i$)
  • Gaussian with non-identity link: A plug-in method is used: $\hat\mu \pm t_{n-p} \sqrt{\hat\phi}$, where $t_{n-p}$ is the $t$-distribution quantile at the selected confidence level. The non-linear link transformation prevents exact incorporation of estimation uncertainty on the $\mu$ scale in closed form. Alternatives such as (a) building the interval on the link scale as $\hat\eta \pm t_{n-p}\sqrt{\hat\phi(1+h_i)}$ and back-transforming via $g^{-1}$, or (b) a first-order delta-method approximation $\operatorname{Var}(\hat\mu) \approx (d\mu/d\eta)^2 \cdot \hat\phi h_i$, both weaken coverage guarantees under non-linear links or in small samples, so MIDAS does not use them. As a result, prediction intervals for this combination are a simplified form that does not reflect estimation uncertainty, meaning predictions at the center of the data and at extrapolation points receive the same interval width
  • Poisson, Binomial, Gamma, Negative Binomial: A plug-in quantile method is used. The estimated distribution parameters are treated as true values, and distribution quantiles are computed directly:
    • Poisson: quantiles of Poisson$(\hat\mu)$
    • Binomial: quantiles of Binomial$(n_\text{new}, \hat\mu)$, where $n_\text{new}$ is the trial count specified at the prediction point
    • Gamma: quantiles of the Gamma distribution with mean $\hat\mu$ and shape $\alpha = 1/\hat\phi$
    • Negative Binomial: quantiles of the Negative Binomial distribution with mean $\hat\mu$ and $r$ ($\hat r$ in automatic estimation mode, the specified value in fixed mode)

For discrete distributions (Poisson, Binomial, Negative Binomial), quantiles are rounded conservatively to the smallest integer $k$ satisfying $P(X \le k) \ge \alpha$ (no randomized intervals). The actual coverage probability therefore meets or exceeds the nominal level. For individual Binomial data ($n_i = 1$), the only quantile candidates are $\{0, 1\}$, limiting the informativeness of the interval.
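The conservative rounding rule can be sketched for a Poisson prediction (illustrative code with a hypothetical fitted mean, not MIDAS's implementation). Building the Poisson CDF term by term avoids any special-function library:

```python
import math

# Smallest integer k with P(X <= k) >= level for X ~ Poisson(mu):
# the conservative quantile used for discrete plug-in intervals.
def poisson_quantile(mu, level):
    k = 0
    term = math.exp(-mu)       # P(X = 0)
    cdf = term
    while cdf < level:
        k += 1
        term *= mu / k         # P(X = k) from P(X = k - 1)
        cdf += term
    return k

# 95% plug-in interval at a hypothetical fitted mean of 5.0
lo = poisson_quantile(5.0, 0.025)
hi = poisson_quantile(5.0, 0.975)
print(lo, hi)
```

Since $P(X \le \text{lo} - 1) < 0.025$ and $P(X \le \text{hi}) \ge 0.975$, the interval $[\text{lo}, \text{hi}]$ covers at least 95%, illustrating why actual coverage meets or exceeds the nominal level.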

The plug-in methods do not account for parameter estimation uncertainty, so the actual coverage probability may fall below the stated confidence level, particularly in small samples or for predictions far from the observed data range. When coverage matters in small samples, consider increasing the sample size or using a more computationally expensive method such as bootstrapping (not currently supported in MIDAS).

See also

References

  • Nelder, J. A., & Wedderburn, R. W. M. (1972). Generalized linear models. Journal of the Royal Statistical Society: Series A, 135(3), 370-384. https://www.jstor.org/stable/2344614