Survival Analysis Fundamentals
This page covers the statistical theory behind the Survival Analysis tabs. See that page for usage instructions.
Time-to-Event Data and Censoring
Survival analysis is a set of methods for analyzing time until an event occurs. Despite the name "survival," the event need not be death — it can be machine failure, customer churn, time to recidivism, or any event of interest.
The defining feature of survival data is censoring. Subjects who did not experience the event during the observation period (e.g., patients still alive at the end of a clinical trial, or patients lost to follow-up) carry only incomplete information: "the event had not occurred by at least this time."
Simply excluding censored observations biases the analysis toward subjects who experienced the event sooner, underestimating survival times. Treating censored observations as "no event" overstates survival times since the true event time is unknown. Survival analysis methods are designed to handle censoring properly, provided that censoring is independent of event occurrence (non-informative censoring). When censoring is related to the likelihood of the event — for example, when patients drop out due to worsening side effects — the KM estimator and Cox model estimates become biased. MIDAS handles right censoring only (censoring due to end of observation or loss to follow-up). Left censoring and interval censoring are not supported.
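A right-censored dataset is commonly encoded as pairs of a follow-up time and an event indicator; a minimal sketch (the encoding is an assumption for illustration, not MIDAS's internal format):

```python
# Right-censored survival data as (time, event) pairs:
# event=1 -> the event was observed at `time`
# event=0 -> the subject was censored (event-free when last observed)
subjects = [
    (5.0, 1),   # event observed at t=5
    (8.0, 0),   # censored at t=8 (lost to follow-up)
    (12.0, 1),  # event observed at t=12
    (12.0, 0),  # censored at t=12 (end of study)
]

n_events = sum(e for _, e in subjects)
n_censored = sum(1 - e for _, e in subjects)
print(n_events, n_censored)  # 2 2
```

A censored pair contributes only the inequality "the event time exceeds `time`", which is exactly the information the methods below are built to use.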
Why Ordinary Regression Fails
Without censoring, survival times could be analyzed as a response variable in ordinary regression. But censored data provide only inequality information — "the true value is at least as large as the observed value" — and the usual residual $y_i - \hat{y}_i$ cannot be defined. Survival analysis incorporates this inequality into the likelihood function, correctly accounting for censoring.
Survival Function and Hazard Function
The distribution of survival time is characterized by two functions.
The survival function $S(t) = P(T > t)$ is the probability of not having experienced the event by time $t$. It starts at $S(0) = 1$ and decreases monotonically over time.
The hazard function is the instantaneous rate of event occurrence at time $t$, given survival up to that point:

$$h(t) = \lim_{\Delta t \to 0} \frac{P(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}$$
The hazard is a rate (per unit time), not a probability, so it can exceed 1. The survival and hazard functions are related by $S(t) = \exp\!\left(-\int_0^t h(u)\,du\right)$; knowing one determines the other.
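The relation $S(t) = \exp(-\int_0^t h(u)\,du)$ can be verified numerically for the simplest case, a constant hazard (the exponential distribution), where the closed form is $S(t) = e^{-\lambda t}$:

```python
import math

# Check S(t) = exp(-integral of h) for constant hazard h(u) = lam,
# where the closed form is S(t) = exp(-lam * t).
lam = 0.3
t = 2.0

# Riemann-sum approximation of the cumulative hazard over [0, t]
n = 100_000
cum_hazard = sum(lam * (t / n) for _ in range(n))  # = lam * t up to rounding

s_from_hazard = math.exp(-cum_hazard)
s_closed_form = math.exp(-lam * t)
print(abs(s_from_hazard - s_closed_form) < 1e-9)  # True
```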
Kaplan-Meier Estimator
The Kaplan-Meier estimator is a nonparametric estimator of the survival function. It makes no distributional assumptions, estimating $S(t)$ directly from observed event times.
Let the distinct event times be $t_1 < t_2 < \cdots < t_k$, with $n_j$ subjects at risk and $d_j$ events at each time $t_j$:

$$\hat{S}(t) = \prod_{j:\, t_j \le t} \left(1 - \frac{d_j}{n_j}\right)$$
This cumulatively multiplies the "survival fraction" at each event time. Censoring is reflected through changes in the risk set: when a subject is censored, they leave the risk set but are not counted as an event.
Under non-informative censoring, the KM estimator is a consistent estimator of $S(t)$. The variance is estimated using Greenwood's formula, derived via the delta method:

$$\widehat{\mathrm{Var}}[\hat{S}(t)] = \hat{S}(t)^2 \sum_{j:\, t_j \le t} \frac{d_j}{n_j (n_j - d_j)}$$
Confidence intervals are constructed from this variance. MIDAS uses the log transformation method by default, computing a symmetric interval for $\log \hat{S}(t)$ and back-transforming: $\hat{S}(t)\exp\!\left(\pm z_{1-\alpha/2}\,\widehat{\mathrm{se}}[\log \hat{S}(t)]\right)$. The log transformation prevents the lower limit from falling below 0; an upper limit exceeding 1 is truncated to 1.
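The estimator, Greenwood variance, and log-transformed interval above can be sketched in a few lines (an illustration, not MIDAS's implementation; it assumes untied censoring handling as described and that some subjects remain at risk after each event time):

```python
import math

def kaplan_meier(data, z=1.96):
    """Kaplan-Meier estimate with Greenwood variance and log-transformed CI.

    `data` is a list of (time, event) pairs, event=1 for an observed event
    and event=0 for right censoring. Returns (t_j, S, lower, upper) rows.
    """
    event_times = sorted({t for t, e in data if e == 1})
    s, gw_sum, out = 1.0, 0.0, []
    for tj in event_times:
        n_j = sum(1 for t, _ in data if t >= tj)             # at risk
        d_j = sum(1 for t, e in data if t == tj and e == 1)  # events
        s *= 1 - d_j / n_j                    # survival fraction at t_j
        gw_sum += d_j / (n_j * (n_j - d_j))   # Greenwood term (needs d_j < n_j)
        se_log_s = math.sqrt(gw_sum)          # delta method: se of log S(t)
        lo = s * math.exp(-z * se_log_s)
        hi = min(1.0, s * math.exp(z * se_log_s))
        out.append((tj, s, lo, hi))
    return out
```

Censored subjects never trigger a row but do shrink `n_j` at later event times, which is how censoring enters the estimate.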
Log-rank Test
The log-rank test is a nonparametric test for whether two or more groups have equal hazard functions.
At each event time $t_j$, the observed event count $d_{gj}$ and expected event count $e_{gj} = d_j \cdot n_{gj}/n_j$ are computed for group $g$, where $n_{gj}$ is the size of group $g$'s risk set at time $t_j$. Under the null hypothesis, events at each time point are expected to be distributed according to each group's share of the risk set. This allocation follows a hypergeometric distribution, from which the variance $v_{1j}$ is also derived. For two groups, since $d_{1j} + d_{2j} = d_j$ and $e_{1j} + e_{2j} = d_j$, it follows that $d_{1j} - e_{1j} = -(d_{2j} - e_{2j})$, so only one group's information is needed. The test statistic is:

$$\chi^2 = \frac{(O_1 - E_1)^2}{V}$$
where $O_1 = \sum_j d_{1j}$ (total observed events in group 1), $E_1 = \sum_j e_{1j}$ (total expected events), and $V = \sum_j v_{1j}$ (variance), with $v_{1j} = \frac{n_{1j}\, n_{2j}\, d_j (n_j - d_j)}{n_j^2 (n_j - 1)}$. The denominator is the variance derived from the hypergeometric distribution, not the expected value $E_1$. For three or more groups, this extends to a quadratic form using the variance-covariance matrix. Under the null hypothesis, the statistic approximately follows a chi-squared distribution with $G - 1$ degrees of freedom ($G$ = number of groups).
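The two-group statistic can be sketched directly from these formulas (an illustration of the textbook computation, not MIDAS's code):

```python
def logrank(group1, group2):
    """Two-sample log-rank statistic.

    Each group is a list of (time, event) pairs, event=1 for an observed
    event and event=0 for right censoring.
    """
    pooled = [(t, e, 1) for t, e in group1] + [(t, e, 2) for t, e in group2]
    event_times = sorted({t for t, e, _ in pooled if e == 1})
    O1 = E1 = V = 0.0
    for tj in event_times:
        n1 = sum(1 for t, _, g in pooled if t >= tj and g == 1)
        n2 = sum(1 for t, _, g in pooled if t >= tj and g == 2)
        n = n1 + n2
        d = sum(1 for t, e, _ in pooled if t == tj and e == 1)
        d1 = sum(1 for t, e, g in pooled if t == tj and e == 1 and g == 1)
        O1 += d1
        E1 += d * n1 / n                 # group 1's share of the risk set
        if n > 1:                        # hypergeometric variance term
            V += n1 * n2 * d * (n - d) / (n**2 * (n - 1))
    return (O1 - E1) ** 2 / V  # ~ chi-squared with 1 df under H0
```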
The log-rank test is a member of the weighted log-rank test family. It has maximum power compared to other weighted variants (such as Wilcoxon-type tests) when the hazard ratio is constant over time (under the proportional hazards assumption). Power decreases when survival curves cross.
Cox Proportional Hazards Model
Model Formulation
The Cox (1972) proportional hazards model is a semiparametric model that estimates the effect of covariates on the hazard:

$$h(t \mid x) = h_0(t) \exp(\beta_1 x_1 + \cdots + \beta_p x_p)$$
$h_0(t)$ is the baseline hazard (the hazard when all covariates are zero), and $\exp(\beta_k)$ is the hazard ratio for a one-unit increase in covariate $x_k$.
It is called "semiparametric" because $\beta$ is estimated parametrically, but no functional form is specified for $h_0(t)$. This removes the need to assume a distribution for the baseline hazard. After estimating $\beta$, the baseline hazard can be estimated nonparametrically using the Breslow estimator, which in turn allows computing the survival function for specific covariate values. MIDAS outputs the baseline cumulative hazard and the adjusted survival curve at user-specified covariate values.
The covariates in this model are fixed values for each subject throughout the observation period. Handling covariates that change over time (time-varying covariates) requires extensions that MIDAS does not currently support.
Proportional Hazards Assumption
The core assumption is that covariate effects are constant over time. That is, the hazard ratio $\exp(\beta_k)$ does not depend on $t$.
When this assumption is violated (e.g., a treatment effect that fades over time), the estimated $\hat{\beta}_k$ represents a weighted average of the time-varying effect, with weights that depend on the risk set composition and baseline hazard, making interpretation difficult (Struthers & Kalbfleisch, 1986).
Schoenfeld Residuals
Schoenfeld residuals are used to assess the proportional hazards assumption. They are defined at each event time $t_j$ for each covariate $k$:

$$r_{jk} = x_{(j)k} - \bar{x}_k(t_j)$$
$x_{(j)k}$ is the value of covariate $k$ for the subject who experienced the event at $t_j$. $\bar{x}_k(t_j)$ is the weighted mean of covariate $k$ over the risk set, defined as:

$$\bar{x}_k(t_j) = \frac{\sum_{i \in R(t_j)} x_{ik} \exp(x_i^\top \hat{\beta})}{\sum_{i \in R(t_j)} \exp(x_i^\top \hat{\beta})}$$
Here $i$ ranges over subjects in the risk set $R(t_j)$, $x_i$ is the covariate vector for subject $i$, and $x_i^\top \hat{\beta}$ is the linear predictor. The weight $\exp(x_i^\top \hat{\beta})$ is subject-specific — subjects with higher hazard contribute more — and is the same for all covariates $k$.
The sum of Schoenfeld residuals equals the score function (the gradient of the log partial likelihood with respect to $\beta$) (Schoenfeld, 1982). At the MLE the score function is theoretically zero, so the sum of residuals is also zero. In practice, Newton-Raphson terminates after finitely many iterations, so the sum is zero only up to convergence tolerance.
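This zero-sum property can be checked numerically. The sketch below (not MIDAS's code) fits a one-covariate Cox model by Newton-Raphson on made-up, untied data, then computes the Schoenfeld residuals at the MLE:

```python
import math

# Numerical check that Schoenfeld residuals sum to ~0 at the MLE.
# One covariate, no tied event times; (time, event, x) triples are
# illustrative made-up data, not from any real analysis.
data = [(1, 1, 1.0), (2, 1, 0.0), (3, 1, 1.0), (4, 1, 0.0), (5, 0, 1.0)]
events = [(t, x) for t, e, x in data if e == 1]

def risk_weighted_moments(beta, tj):
    """exp(beta*x)-weighted mean and second moment of x over the risk set."""
    risk = [x for t, _, x in data if t >= tj]
    w = [math.exp(beta * x) for x in risk]
    m1 = sum(wi * xi for wi, xi in zip(w, risk)) / sum(w)
    m2 = sum(wi * xi * xi for wi, xi in zip(w, risk)) / sum(w)
    return m1, m2

# Newton-Raphson for the MLE; the score is exactly the residual sum.
beta = 0.0
for _ in range(30):
    score = info = 0.0
    for tj, xj in events:
        m1, m2 = risk_weighted_moments(beta, tj)
        score += xj - m1              # gradient of log partial likelihood
        info += m2 - m1 ** 2          # observed information
    beta += score / info

# Schoenfeld residual at each event time: observed covariate minus the
# weighted risk-set mean, evaluated at the fitted beta.
residuals = [xj - risk_weighted_moments(beta, tj)[0] for tj, xj in events]
print(abs(sum(residuals)) < 1e-8)  # True: zero up to convergence tolerance
```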
Scaled Schoenfeld residuals adjust the raw residuals by the variance-covariance matrix:

$$r_j^* = d \, \hat{V}(\hat{\beta}) \, r_j$$

where $d$ is the total number of events, $\hat{V}(\hat{\beta})$ is the estimated variance-covariance matrix of $\hat{\beta}$, and $r_j$ and $r_j^*$ are the raw and scaled residual vectors at event time $t_j$. The $k$-th component of $\hat{\beta}_k + r_{jk}^*$ can be interpreted as an estimate of the time-varying coefficient $\beta_k(t_j)$. Under proportional hazards, $r_{jk}^*$ shows no systematic trend over time (Grambsch & Therneau, 1994).
MIDAS displays the following diagnostics:
- Grambsch-Therneau test: Tests the association between scaled Schoenfeld residuals and time. Reports per-covariate tests and a global test
- Scaled Schoenfeld residual plots: Plots $r_{jk}^*$ against time with a LOESS smooth
- log(-log(S(t))) plot: Plots Kaplan-Meier estimates as $\log(-\log \hat{S}(t))$ versus $\log t$ by group. Under proportional hazards, the curves should be approximately parallel
Partial Likelihood
Cox model parameters are estimated using partial likelihood. For each subject $i$ who experienced an event at time $t_i$ (indicated by $\delta_i = 1$), consider the conditional probability that subject $i$ — among all subjects still at risk at that time — is the one who experiences the event:

$$L(\beta) = \prod_{i:\, \delta_i = 1} \frac{\exp(x_i^\top \beta)}{\sum_{l \in R(t_i)} \exp(x_l^\top \beta)}$$
Each factor corresponds to the conditional probability within the risk set at time $t_i$. Substituting $h(t \mid x) = h_0(t) \exp(x^\top \beta)$, the $h_0(t_i)$ terms cancel between numerator and denominator, so estimating $\beta$ does not require knowing $h_0(t)$. Although the partial likelihood is not a full likelihood, it has been shown to yield estimators with the same asymptotic properties as maximum likelihood — consistency and asymptotic normality (Cox, 1975).
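The estimation described above can be sketched as a small Newton-Raphson loop. This is an illustration for a single covariate with no tied event times, not MIDAS's actual estimation code; `data` is a list of made-up `(time, event, x)` triples:

```python
import math

def cox_fit(data, iters=25):
    """Maximize the Cox partial likelihood by Newton-Raphson.

    One covariate, no tied event times assumed. `data` holds
    (time, event, x) triples with event=1 for observed events.
    """
    beta = 0.0
    events = [(t, x) for t, e, x in data if e == 1]
    for _ in range(iters):
        score, info = 0.0, 0.0
        for tj, xj in events:
            risk = [x for t, _, x in data if t >= tj]
            w = [math.exp(beta * x) for x in risk]
            xbar = sum(wi * xi for wi, xi in zip(w, risk)) / sum(w)
            x2bar = sum(wi * xi * xi for wi, xi in zip(w, risk)) / sum(w)
            score += xj - xbar         # gradient of log partial likelihood
            info += x2bar - xbar ** 2  # weighted variance of x in the risk set
        beta += score / info           # Newton-Raphson step
    return beta
```

Note that $h_0(t)$ never appears: only the covariate values of the risk set at each event time enter the computation, mirroring the cancellation in the partial likelihood.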
Interpreting Hazard Ratios
$\exp(\beta_k)$ is interpreted as the hazard ratio (HR):
- HR > 1: A one-unit increase in $x_k$ multiplies the hazard by HR, i.e., increases it by $(\mathrm{HR} - 1) \times 100\%$
- HR < 1: The hazard decreases by $(1 - \mathrm{HR}) \times 100\%$
- HR = 1: $x_k$ has no effect on the hazard
When the confidence interval for the hazard ratio does not include 1, the data are inconsistent with the null hypothesis of no effect. The width of the confidence interval reflects estimation precision: a narrow interval indicates a more precise estimate, while a wide interval indicates limited information from the data. Hazard ratios directly convey the direction and magnitude of the effect, making them more informative than the p-value alone.
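As a short worked example of this interpretation (the coefficient and standard error are illustrative numbers, not from any real analysis):

```python
import math

# Suppose a hypothetical treatment covariate has coefficient beta = -0.357
# with standard error 0.12 (made-up values for illustration).
beta = -0.357
se = 0.12

hr = math.exp(beta)
print(round(hr, 3))            # ~0.70: treated subjects have ~30% lower hazard
print(round((1 - hr) * 100))   # ~30 (% reduction in hazard)

# 95% CI on the HR scale: exponentiate beta ± 1.96 * se(beta)
lo, hi = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
print(round(lo, 2), round(hi, 2))  # interval excluding 1 -> inconsistent with HR = 1
```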
Test Statistics
Three test statistics are reported for the null hypothesis that all $\beta_k = 0$. Each is asymptotically $\chi^2$ with $p$ (number of covariates) degrees of freedom, and they converge to the same value in large samples.
Likelihood Ratio Test
Based on the difference in log partial likelihood between the null model ($\beta = 0$) and the fitted model: $\mathrm{LR} = 2[\ell(\hat{\beta}) - \ell(0)]$. It reflects the global shape of the likelihood surface rather than relying on the local approximations used by the Wald and Score tests.
Wald Test
Based on the MLE $\hat{\beta}$ and the estimated information matrix $I(\hat{\beta})$: $W = \hat{\beta}^\top I(\hat{\beta})\, \hat{\beta}$. This is a local approximation using the curvature of the log-likelihood at the MLE. The per-covariate p-values in the coefficients table are the univariate version of this test: $z = \hat{\beta}_k / \widehat{\mathrm{se}}(\hat{\beta}_k)$. Comparing the overall Wald statistic with individual $z$ values can reveal inconsistencies — for example, individually significant covariates that are not jointly significant — which may indicate multicollinearity or estimation instability.
The weakness of the Wald test is that it fails to capture asymmetry of the likelihood surface when $|\hat{\beta}|$ is large.
Score Test
The Score test statistic is $U(0)^\top I(0)^{-1}\, U(0)$, where $U(0)$ is the score function (gradient of the log partial likelihood) evaluated at $\beta = 0$ and $I(0)$ is the information matrix at that point. Since it does not require $\hat{\beta}$, it can be computed even when Newton-Raphson fails to converge.
The typical non-convergence scenario is when a covariate nearly perfectly predicts events (monotone likelihood). In a small clinical trial where events concentrate in the treatment group, the partial likelihood increases monotonically in the coefficient and the iterations diverge. Because the model parameterizes the hazard ratio as $\exp(\beta)$, an extremely large effect causes the coefficient to diverge. The likelihood ratio and Wald tests cannot be computed in this case, but the Score test can still provide evidence that covariates are associated with survival. However, the effect size cannot be estimated. Firth correction and exact methods can address this, but MIDAS does not currently support them.
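This scenario can be illustrated directly: the sketch below (made-up data, one covariate, no ties) uses a separated dataset in which all events fall in the $x = 1$ group, so the Cox MLE diverges, yet $U(0)$ and $I(0)$ remain finite and the Score statistic is computable:

```python
# Score test at beta = 0 for a separated dataset: all observed events
# occur in the x=1 group, so the partial likelihood is monotone in beta
# and Newton-Raphson would diverge. Illustrative data, one covariate.
data = [(1, 1, 1.0), (2, 1, 1.0), (3, 1, 1.0), (4, 0, 0.0), (5, 0, 0.0)]
events = [(t, x) for t, e, x in data if e == 1]

U = I = 0.0
for tj, xj in events:
    risk = [x for t, _, x in data if t >= tj]  # at beta=0, all weights are 1
    xbar = sum(risk) / len(risk)
    U += xj - xbar                                        # score contribution
    I += sum(x * x for x in risk) / len(risk) - xbar ** 2  # information
chi2 = U * U / I  # ~ chi-squared with 1 df under the null hypothesis
print(round(chi2, 3))
```

The statistic stays finite because it is evaluated at $\beta = 0$, never at the (infinite) MLE.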
See also
- Survival Analysis - Usage instructions and interpreting results
- Tutorial: Kaplan-Meier Analysis - A practical example with sample data
- Glossary - Statistical term definitions
References
- Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society: Series B, 34(2), 187-220. https://www.jstor.org/stable/2985181
- Kaplan, E. L., & Meier, P. (1958). Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53(282), 457-481. https://www.jstor.org/stable/2281868
- Cox, D. R. (1975). Partial likelihood. Biometrika, 62(2), 269-276. https://www.jstor.org/stable/2335362
- Struthers, C. A., & Kalbfleisch, J. D. (1986). Misspecified proportional hazard models. Biometrika, 73(2), 363-369. https://www.jstor.org/stable/2336212
- Schoenfeld, D. (1982). Partial residuals for the proportional hazards regression model. Biometrika, 69(1), 239-241. https://www.jstor.org/stable/2335876
- Grambsch, P. M., & Therneau, T. M. (1994). Proportional hazards tests and diagnostics based on weighted residuals. Biometrika, 81(3), 515-526. https://www.jstor.org/stable/2337123