Numerical Computing Fundamentals
This page covers the concepts behind the Numerical Accuracy page.
Floating-Point Numbers and Significant Digits
Computers represent real numbers as finite-precision approximations. JavaScript, and therefore every browser application including MIDAS, uses IEEE 754 double-precision floating-point numbers.
A double-precision number represents a real number as $(-1)^s \times 1.f \times 2^{e-1023}$. $s$ is a 1-bit sign, $e$ is an 11-bit exponent encoding the scale, and $f$ is the 52-bit significand representing the fractional part, where $0 \le 0.f < 1$. For normal numbers[^1], the integer part of the significand is always 1, so this 1 is not stored but implied. The 52 stored bits plus the implicit 1 bit give 53 bits of effective precision, corresponding to about 15.9 decimal digits:

$$53 \log_{10} 2 \approx 15.95$$
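Both constants can be checked directly in any IEEE 754 environment (a Python sketch; the same results hold in JavaScript):

```python
import math

# 53 bits of significand: the odd integer 2**53 + 1 cannot be stored
# exactly, so it rounds to the nearest representable value, 2**53.
print(float(2**53) == float(2**53 + 1))  # True

# 53 binary digits correspond to about 15.95 decimal digits.
print(53 * math.log10(2))  # 15.95...
```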
A single double-precision number can therefore represent at most about 15.9 significant digits. On the Numerical Accuracy page, LRE (Log Relative Error: a measure of how many significant digits match) close to this value means the computation is accurate to the limit of double precision.
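The LRE computation can be sketched as follows (a minimal illustration; the full NIST convention also covers the case of a certified value of zero, omitted here):

```python
import math

def lre(computed: float, certified: float) -> float:
    """Log Relative Error: roughly the number of matching significant
    digits, capped at double precision's ~15.9-digit limit."""
    if computed == certified:
        return 15.9
    return min(15.9, -math.log10(abs(computed - certified) / abs(certified)))

print(lre(3.14159265358, math.pi))  # ~11.5: the input matches pi to ~11 digits
```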
When converting a real number to a floating-point number, any fraction that cannot be represented in a finite number of bits is rounded. This rounding error is tiny for a single operation, but can accumulate over successive computations. The extent of accumulation depends on the algorithm and data; catastrophic cancellation and condition numbers, discussed below, are the primary factors.
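A familiar illustration of both points, a single rounding and its accumulation (a Python sketch):

```python
# A single rounding: 0.1 has no finite binary representation.
print(0.1 + 0.2)             # 0.30000000000000004
print(0.1 + 0.2 == 0.3)      # False

# Accumulation over repeated operations:
total = 0.0
for _ in range(10):
    total += 0.1
print(total == 1.0)          # False: the ten rounding errors did not cancel
```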
Catastrophic Cancellation
Catastrophic cancellation occurs when subtracting two nearly equal floating-point numbers, causing a large loss of significant digits.
Consider $x = 1.23456789012345$ and $y = 1.23456789012300$. Both have 15 significant digits, but their difference $x - y = 0.00000000000045$ has only 2. The leading zeros merely indicate position and are not significant, so only the trailing 45 carries meaningful information. The 13 digits that were shared between $x$ and $y$ cancel out, leaving only the least reliable bits.
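The effect is easy to reproduce (a Python sketch; the second example shows that rounding already present in the inputs dominates the small difference):

```python
x = 1.23456789012345
y = 1.23456789012300
print(x - y)  # about 4.5e-13: the 13 shared digits are gone

# The surviving digits are the least reliable ones: the inputs were
# rounded on entry, and that rounding error is large relative to the
# difference itself.
print((1.0 + 1e-13) - 1.0)  # not 1e-13: off in the 4th significant digit
```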
In statistics, variance computation is a classic example. The definitional form

$$s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2$$

computes deviations from the mean, so the subtracted values are not close to each other. The algebraically equivalent form

$$s^2 = \frac{1}{n-1} \left( \sum_{i=1}^{n} x_i^2 - n\bar{x}^2 \right)$$

is vulnerable to cancellation when $\sum x_i^2$ and $n\bar{x}^2$ are close in magnitude. This happens when the mean is large relative to the standard deviation.
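The two forms can be compared directly (a sketch; the offset `1e8` plays the role of a mean that is large relative to the spread):

```python
def var_two_pass(xs):
    """Definitional form: square the deviations from the mean."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def var_naive(xs):
    """Sum-of-squares form: cancels badly when the mean dwarfs the spread."""
    n = len(xs)
    return (sum(x * x for x in xs) - sum(xs) ** 2 / n) / (n - 1)

data = [1e8 + d for d in (4.0, 7.0, 13.0, 16.0)]  # true variance: 30
print(var_two_pass(data))  # 30.0
print(var_naive(data))     # visibly wrong: the leading digits cancelled
```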
Condition Number
The condition number measures how much small perturbations in the input are amplified in the output. For a nonsingular matrix $A$, the condition number is defined as:

$$\kappa(A) = \|A\|_2 \, \|A^{-1}\|_2$$
$\|\cdot\|_2$ is the matrix 2-norm (largest singular value). Since $\|A\|_2 \|A^{-1}\|_2 \ge \|A A^{-1}\|_2 = \|I\|_2 = 1$, the condition number has a lower bound of 1. A matrix with a condition number close to 1 is called well-conditioned; one with a large condition number is called ill-conditioned. There is no sharp threshold; the distinction depends on the precision required.
When solving $Ax = b$ in floating-point arithmetic, a relative perturbation of $\varepsilon$ in $b$ can produce a relative error up to $\kappa(A)\,\varepsilon$ in $x$. A condition number of $10^k$ can destroy up to $k$ digits of precision, so starting from double precision's roughly 15.9 digits, the number of surviving significant digits is at least approximately:

$$15.9 - \log_{10} \kappa(A)$$
This lower bound assumes a numerically stable algorithm. An unstable algorithm may lose more digits than this.
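A toy system makes the bound concrete (a sketch using NumPy rather than MIDAS's own routines; the matrix is contrived so that $\kappa \approx 4 \times 10^{10}$):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])  # nearly singular on purpose
b = np.array([2.0, 2.0 + 1e-10])    # exact solution: x = [1, 1]

kappa = np.linalg.cond(A)           # 2-norm condition number
x = np.linalg.solve(A, b)

print(f"kappa ~ {kappa:.0e}, so expect up to ~{np.log10(kappa):.0f} digits lost")
print(x)  # close to [1, 1], but with far fewer correct digits than 15.9
```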
Design Matrix
In the linear regression model $y = X\beta + \varepsilon$, $X$ is the design matrix. $y$ is the response vector, $\beta$ is the coefficient vector, and $\varepsilon$ is the error term. The design matrix has $n$ rows (one per observation) and $p$ columns (one per predictor). When an intercept is included, a column of ones is added.
The OLS (ordinary least squares) estimator is defined as $\hat{\beta} = (X^\top X)^{-1} X^\top y$, where $X^\top$ denotes the transpose of $X$. A large condition number in the design matrix causes rounding errors to be amplified, reducing the accuracy of computing $\hat{\beta}$. This is a numerical issue, distinct from the statistical problem of inflated variance due to multicollinearity. Design matrices become ill-conditioned when predictors are highly correlated; VIF in OLS Fundamentals measures a related phenomenon for individual predictors.
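A sketch of the estimator in NumPy (the data here are invented for illustration; a QR-based solver such as `np.linalg.lstsq` works with $X$ directly and avoids forming $X^\top X$, whose condition number is the square of $X$'s):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x, x**2])  # intercept + two predictors
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true                           # noise-free for clarity

# Textbook formula: beta = (X'X)^{-1} X'y, via a linear solve.
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# QR-based least squares, applied to X itself.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_normal)
print(beta_lstsq)
```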
See OLS Fundamentals for the full mathematical treatment of the design matrix and OLS estimator properties.
Polynomial Regression
In polynomial regression, the columns of the design matrix are $1, x, x^2, \ldots, x^d$. As the degree $d$ increases, the power columns become strongly correlated and the design matrix becomes ill-conditioned.
The NIST StRD datasets on the Numerical Accuracy page show this effect. The simple regression dataset Norris achieves a coefficient LRE of 12.3, but the 10th-degree polynomial Filip drops to 7.3. Filip's design matrix is severely ill-conditioned, and its condition number, which grows rapidly with polynomial degree, limits the attainable precision.
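The growth is easy to observe (a NumPy sketch; the x-values here are equispaced on $[-1, 1]$, a mild setting — raw, unscaled data like Filip's is far worse):

```python
import numpy as np

x = np.linspace(-1, 1, 100)
conds = {}
for degree in (2, 5, 10):
    V = np.vander(x, degree + 1)       # monomial columns x^degree, ..., x, 1
    conds[degree] = np.linalg.cond(V)
    print(degree, f"{conds[degree]:.1e}")
```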
To reduce the condition number, replace the monomial basis with an orthogonal polynomial basis. With an orthogonal basis, the columns of the design matrix are mutually orthogonal, reducing the condition number to approximately 1. The Orthogonal Polynomials tab in MIDAS applies this transformation.
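One way to see the effect (a sketch; `np.linalg.qr` orthonormalizes the monomial columns, which is one way to build an orthogonal polynomial basis over the sample points — MIDAS's exact construction may differ):

```python
import numpy as np

x = np.linspace(-1, 1, 100)
V = np.vander(x, 11)     # degree-10 monomial basis
Q, R = np.linalg.qr(V)   # same column space, orthonormal columns

print(f"{np.linalg.cond(V):.1e}")  # large
print(f"{np.linalg.cond(Q):.1e}")  # ~1.0
```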
See also
- Numerical Accuracy - MIDAS accuracy verification using NIST datasets
- OLS Fundamentals - Mathematical foundations of linear regression and design matrix formulation
- Glossary - Statistical term definitions
Footnotes
[^1]: Normalization adjusts the exponent so that the integer part of the significand is 1, making the representation unique. For example, $0.75$ is represented as $1.5 \times 2^{-1}$, not as $0.375 \times 2^{1}$ or $3 \times 2^{-2}$. However, the exponent has a minimum value. For numbers close enough to zero that this minimum is reached, the integer part can no longer be kept at 1, producing subnormal numbers, where the implicit leading 1 does not apply. Subnormal numbers have reduced precision, but the values encountered in statistical computations are never in this range.