Numerical Computing Fundamentals
This page covers the concepts behind the Numerical Accuracy page.
Floating-Point Numbers and Significant Digits
Computers represent real numbers as finite-precision approximations. JavaScript, and therefore every browser application including MIDAS, uses IEEE 754 double-precision floating-point numbers.
A double-precision number represents a real number as $(-1)^s \times 1.f \times 2^{e-1023}$, where $s$ is a 1-bit sign, $e$ is an 11-bit exponent encoding the scale, and $f$ is the 52-bit significand representing the fractional part, with $0 \le f < 1$. For normal numbers[^1], the integer part of the significand $1.f$ is always 1, so this 1 is not stored but implied. The 52 stored bits plus the implicit 1 bit give 53 significant bits, corresponding to about 15.9 decimal significant digits:

$$53 \log_{10} 2 \approx 15.95$$
A single double-precision number can therefore represent at most about 15.9 significant digits. On the Numerical Accuracy page, LRE (Log Relative Error: a measure of how many significant digits match) close to this value means the computation is accurate to the limit of double precision.
When converting a real number to a floating-point number, any fraction that cannot be represented in a finite number of bits is rounded. This rounding error is tiny for a single operation, but can accumulate over successive computations. The extent of accumulation depends on the algorithm and data; catastrophic cancellation and condition numbers, discussed below, are the primary factors.
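This accumulation is easy to see in any IEEE 754 environment. A minimal sketch (Python, which uses the same double-precision numbers as the browser) sums 0.1 ten times:

```python
# 0.1 has no finite binary representation, so each addition
# carries a small rounding error that accumulates.
total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)       # False
print(abs(total - 1.0))   # tiny, on the order of 1e-16
```

Each individual error is about one part in $2^{53}$; the point is that they do not cancel to zero, and longer computations can accumulate much larger errors.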
Catastrophic Cancellation
Catastrophic cancellation occurs when subtracting two nearly equal floating-point numbers, causing a large loss of significant digits.
Consider $a = 1.23456789012345$ and $b = 1.23456789012300$. Both have 15 significant digits, but their difference $a - b = 0.00000000000045$ has only 2. The leading zeros merely indicate position and are not significant, so only the trailing 45 carries meaningful information. The 13 digits that were shared between $a$ and $b$ cancel out, leaving only the least reliable bits.
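A short sketch of the effect in IEEE 754 doubles (the values are illustrative, not taken from the MIDAS test suite):

```python
# Subtracting two nearly equal doubles: the shared leading digits
# cancel, and only the least reliable trailing bits remain.
a = 1.23456789012345
b = 1.23456789012300
diff = a - b
print(diff)  # close to 4.5e-13, but only the first digits are trustworthy

# A starker case: the spacing between adjacent doubles near 1e15
# is 0.125, so adding 0.1 rounds to the nearest representable step,
# and the subtraction exposes the rounding instead of recovering 0.1.
print((1e15 + 0.1) - 1e15)  # 0.125, not 0.1
```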
In statistics, variance computation is a classic example. The definitional (two-pass) form $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$ can suffer cancellation in each deviation $x_i - \bar{x}$ when the mean is large relative to the standard deviation. However, the squared deviations are all non-negative, so their sum involves no further subtraction; significant digits lost in each individual deviation are not recovered by squaring, so precision does degrade, but no single catastrophic loss occurs. The algebraically equivalent form $s^2 = \frac{1}{n-1}\left(\sum_{i=1}^{n} x_i^2 - n\bar{x}^2\right)$ instead concentrates the cancellation into one subtraction between the large, nearly equal quantities $\sum x_i^2$ and $n\bar{x}^2$, which can cause a catastrophic loss of precision. MIDAS uses Welford's online algorithm[^2], which incrementally updates the mean and the sum of squared deviations for each data point, avoiding the subtraction of large intermediate values that occurs in the algebraic form and achieving precision comparable to the two-pass definitional form in a single pass.
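The contrast can be sketched directly. In the snippet below, `naive_variance` is the algebraic form and `welford_variance` is a textbook implementation of Welford's update; these are illustrative helpers, not MIDAS's actual code:

```python
def naive_variance(xs):
    # Algebraic form: sum(x^2) - n*mean^2 in one catastrophic subtraction.
    n = len(xs)
    mean = sum(xs) / n
    return (sum(x * x for x in xs) - n * mean * mean) / (n - 1)

def welford_variance(xs):
    # Welford's online update: running mean plus running sum of
    # squared deviations; no large intermediate values are subtracted.
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return m2 / (n - 1)

# Data with a mean that is huge relative to the spread; true variance is 30.
data = [1e9 + v for v in (4.0, 7.0, 13.0, 16.0)]
print(welford_variance(data))  # very close to 30.0
print(naive_variance(data))    # badly wrong; can even come out negative
```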
Condition Number
The condition number measures how much small perturbations in the input are amplified in the output. For a nonsingular matrix $A$, the condition number is defined as:

$$\kappa(A) = \|A\|_2 \, \|A^{-1}\|_2$$
$\|\cdot\|_2$ is the matrix 2-norm (largest singular value). Since $\|A\|_2 \|A^{-1}\|_2 \ge \|A A^{-1}\|_2 = \|I\|_2 = 1$, the condition number has a lower bound of 1. A matrix with a condition number close to 1 is called well-conditioned; one with a large condition number is called ill-conditioned. There is no sharp threshold; the distinction depends on the precision required.
When solving $Ax = b$ in floating-point arithmetic, a relative perturbation of $\varepsilon$ in $b$ can produce a relative error up to $\kappa(A)\,\varepsilon$ in $x$. A condition number of $10^k$ causes a loss on the order of $k$ significant digits. Subtracting $\log_{10} \kappa(A)$ from double precision's roughly 15.9 digits gives an approximate estimate:

$$\text{expected digits} \approx 15.9 - \log_{10} \kappa(A)$$
This is an order-of-magnitude estimate, not a strict lower bound. Actual precision also depends on the problem dimension and algorithm details. When computing $\hat{\beta}$ in linear regression, the relevant condition number depends on the solver: QR decomposition works directly with $X$, so $\kappa(X)$ applies; the normal equations solve $X^\top X \beta = X^\top y$, so $\kappa(X^\top X) = \kappa(X)^2$ applies. MIDAS uses QR decomposition for coefficient estimation, so the dominant factor in precision is $\kappa(X)$.
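The cost of the squared condition number can be read straight off the digit estimate. A small sketch, where `expected_digits` is an illustrative helper (not a MIDAS function) and the condition number is a hypothetical value:

```python
import math

def expected_digits(kappa):
    # Order-of-magnitude estimate: digits ~ 15.9 - log10(kappa),
    # with 15.9 taken as log10(2^53) for double precision.
    return 53 * math.log10(2) - math.log10(kappa)

kappa_x = 1e5  # hypothetical condition number of a design matrix X
print(expected_digits(kappa_x))       # ~10.95 digits solving via QR on X
print(expected_digits(kappa_x ** 2))  # ~5.95 digits via the normal equations
```

Squaring the condition number doubles the digit loss, which is why QR-based solvers are preferred for ill-conditioned regression problems.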
Design Matrix
In the linear regression model $y = X\beta + \varepsilon$, $X$ is the design matrix, $y$ is the response vector, $\beta$ is the coefficient vector, and $\varepsilon$ is the error term. The design matrix has $n$ rows and $p$ columns, with each row corresponding to one observation. When an intercept is included, one of the columns is a constant column of ones, and the remaining columns correspond to the predictors ($p$ is the total column count including the intercept).
The OLS (ordinary least squares) estimator minimizes the residual sum of squares; when $X$ has full column rank, it can be written as $\hat{\beta} = (X^\top X)^{-1} X^\top y$ (see derivation), where $X^\top$ denotes the transpose of $X$. A large condition number in the design matrix causes rounding errors to be amplified, reducing the accuracy of the computed $\hat{\beta}$. When predictors are highly correlated, the design matrix tends to be ill-conditioned.
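For a single predictor plus an intercept, the estimator can be written out by hand, which makes a compact illustration. The sketch below solves the 2×2 normal equations in closed form (fine at this size; for larger, ill-conditioned problems a QR-based solve is preferred, as discussed above). `ols_line` is a hypothetical helper, not MIDAS code:

```python
def ols_line(x, y):
    # Solves the normal equations (X^T X) b = X^T y for X = [1, x]:
    # returns the intercept b0 and slope b1 in closed form.
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    det = n * sxx - sx * sx  # zero only if x is constant
    b1 = (n * sxy - sx * sy) / det
    b0 = (sy - b1 * sx) / n
    return b0, b1

# Data lying exactly on y = 1 + 2x:
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
print(ols_line(x, y))  # (1.0, 2.0)
```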
See OLS Fundamentals for the full mathematical treatment of the design matrix and OLS estimator properties.
Polynomial Regression
In polynomial regression, the columns of the design matrix are $1, x, x^2, \ldots, x^d$. As the degree $d$ increases, the power columns become strongly correlated and the design matrix becomes ill-conditioned.
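The near-collinearity of high powers is easy to check directly. The sketch below computes the Pearson correlation between $x^9$ and $x^{10}$ columns on an evenly spaced grid (an illustrative grid, not the Filip data):

```python
import math

def pearson(u, v):
    # Sample Pearson correlation between two equal-length columns.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

xs = [1.0 + i / 50.0 for i in range(51)]  # grid on [1, 2]
c9 = [x ** 9 for x in xs]
c10 = [x ** 10 for x in xs]
print(pearson(c9, c10))  # extremely close to 1: near-collinear columns
```

Columns this close to collinear make $X^\top X$ nearly singular, which is exactly what a large condition number measures.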
The NIST StRD datasets on the Numerical Accuracy page show this effect. The simple regression dataset Norris achieves a coefficient LRE of 12.3, but the 10th-degree polynomial dataset Filip drops to 7.3. Filip's design matrix is severely ill-conditioned. Since MIDAS uses QR decomposition for coefficient estimation, $\kappa(X)$ is the relevant quantity; the estimate $15.9 - \log_{10} \kappa(X)$ predicts more digits of precision than the observed LRE of 7.3. The gap reflects factors not captured by the simple formula, such as error accumulation during back-substitution across the problem dimension (Filip has $n = 82$).
To reduce the condition number, replace the monomial basis with an orthogonal polynomial basis. When the design matrix consists only of the intercept and orthogonal polynomial columns, the columns are mutually orthogonal with equal norms, so the condition number is theoretically 1 (in practice it deviates slightly from 1 due to floating-point rounding). The Orthogonal Polynomials tab in MIDAS applies this transformation.
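One way to see why this works is to orthogonalize the monomial columns with modified Gram-Schmidt, which produces a discrete orthogonal basis over the sample points (a sketch of the general idea; MIDAS's orthogonal-polynomial construction may differ in its details):

```python
import math

def mgs_orthonormalize(cols):
    # Modified Gram-Schmidt: returns orthonormal columns spanning
    # the same space as the input columns. Orthonormal columns give
    # a design matrix with condition number 1 (up to rounding).
    q = []
    for col in cols:
        v = list(col)
        for u in q:
            proj = sum(a * b for a, b in zip(u, v))
            v = [a - proj * b for a, b in zip(v, u)]
        norm = math.sqrt(sum(a * a for a in v))
        q.append([a / norm for a in v])
    return q

xs = [i / 10.0 for i in range(11)]
cols = [[x ** d for x in xs] for d in range(4)]  # 1, x, x^2, x^3
q = mgs_orthonormalize(cols)
# Off-diagonal inner products are ~0: the columns are now orthogonal.
print(sum(a * b for a, b in zip(q[0], q[3])))
```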
See also
- Numerical Accuracy - MIDAS accuracy verification using NIST datasets
- OLS Fundamentals - Mathematical foundations of linear regression and design matrix formulation
- Glossary - Statistical term definitions
Footnotes
[^1]: Normalization adjusts the exponent so that the integer part of the significand is 1, making the representation unique. For example, $0.75$ is represented as $1.1_2 \times 2^{-1}$. However, the exponent has a minimum value. For numbers close enough to zero that this minimum is reached, the integer part can no longer be kept at 1, producing subnormal numbers, for which the implicit leading 1 does not apply. Subnormal numbers have reduced precision, but values in typical statistical input data are not normally in this range.
[^2]: Welford, B. P. (1962). Note on a method for calculating corrected sums of squares and products. Technometrics, 4(3), 419–420.