A comparison of self-selectivity corrections in economic evaluations and outcomes research

Jinook Jeong, Edmund R. Becker, Patrick D. Mauldin, William S. Weintraub

Research output: Contribution to journal › Article › peer-review


Objective: Two alternative selectivity correction methods have been widely applied in the health economics literature: the sample selection model (SSM) and the multipart model (MPM). The difference between these two approaches results from their initial assumptions about the distribution of the error terms. Because the distributional assumptions cannot be verified theoretically, the usefulness of the methods can be evaluated only by real-world comparison. This article reviews and empirically tests the two alternative selectivity correction methods to give a reality-based evaluation.

Methods: Using a randomized sample of patients as the "gold standard," the SSM and MPM are applied to a non-randomized sample of patients with an identical set of dependent and independent variables. By comparing the actual estimates of the two methods, we evaluate the robustness of the two approaches.

Results: Neither method is empirically robust in replicating the results of the randomized trial. There is no consistent pattern in the coefficients from either selectivity-correction method for replicating the coefficients in the randomized sample.

Conclusions: Researchers should be cautious in applying these correction methods, and any conclusions based on these approaches may need to be qualified.
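For readers unfamiliar with the SSM, the best-known variant is the Heckman two-step estimator: a probit selection equation is fit first, its inverse Mills ratio is computed, and that ratio is added as a regressor in the outcome equation to absorb the selection-induced correlation between the error terms. The sketch below illustrates this mechanic on simulated data; it is a minimal illustration under assumed parameter values (not the specification or data used in the article), and all variable names (`z` as the exclusion restriction, `x` as the shared covariate) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)  # covariate in both equations
z = rng.normal(size=n)  # exclusion restriction: affects selection only
# Correlated errors (rho = 0.6) are what create the selection bias
u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
selected = (0.5 + 1.0 * z + 0.5 * x + u[:, 0]) > 0  # selection equation
y = 1.0 + 2.0 * x + u[:, 1]                          # outcome (true slope = 2)

# Step 1: probit for the selection equation, fit by maximum likelihood
W = np.column_stack([np.ones(n), z, x])

def neg_loglik(gamma):
    p = np.clip(norm.cdf(W @ gamma), 1e-10, 1 - 1e-10)
    return -(selected * np.log(p) + (~selected) * np.log(1 - p)).sum()

gamma_hat = minimize(neg_loglik, np.zeros(3)).x

# Step 2: inverse Mills ratio added to the outcome OLS on the selected sample
imr = norm.pdf(W @ gamma_hat) / norm.cdf(W @ gamma_hat)
X = np.column_stack([np.ones(n), x, imr])[selected]
beta, *_ = np.linalg.lstsq(X, y[selected], rcond=None)
# beta[1] estimates the outcome slope; beta[2] estimates rho * sigma
```

The MPM, by contrast, drops the joint-normality assumption and simply estimates the participation and outcome equations as separate parts, which is exactly why the two methods can diverge when the distributional assumptions fail, as the article's empirical comparison finds.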

Original language: English
Pages (from-to): 656-666
Number of pages: 11
Journal: Value in Health
Issue number: 6
Publication status: Published - 2005 Nov

Bibliographical note

Funding Information:
Source of financial support: This work was supported in part by Yonsei University Research Fund of 2002.

All Science Journal Classification (ASJC) codes

  • Health Policy
  • Public Health, Environmental and Occupational Health


