J Biomed Inform. 2015 Apr;54:114-20. doi: 10.1016/j.jbi.2015.02.003. Epub 2015 Feb 17.

Efficient and sparse feature selection for biomedical text classification via the elastic net: Application to ICU risk stratification from nursing notes.

Author information

  • 1Philip R. Lee Institute for Health Policy Studies, School of Medicine, University of California, San Francisco, United States; Center for Healthcare Value, University of California, San Francisco, United States. Electronic address: ben.marafino@ucsf.edu.
  • 2Department of Epidemiology and Biostatistics, University of California, San Francisco, United States; Department of Medicine, University of California, San Francisco, United States.
  • 3Philip R. Lee Institute for Health Policy Studies, School of Medicine, University of California, San Francisco, United States; Center for Healthcare Value, University of California, San Francisco, United States; Department of Epidemiology and Biostatistics, University of California, San Francisco, United States; Department of Medicine, University of California, San Francisco, United States.

Abstract

BACKGROUND AND SIGNIFICANCE:

Sparsity is often a desirable property of statistical models, and various feature selection methods exist to yield sparser, more interpretable models. However, their application to biomedical text classification, particularly to mortality risk stratification among intensive care unit (ICU) patients, has not been thoroughly studied.

OBJECTIVE:

To develop and characterize sparse classifiers based on the free text of nursing notes in order to predict ICU mortality risk and to discover text features most strongly associated with mortality.

METHODS:

We selected nursing notes from the first 24 h of ICU admission for 25,826 adult ICU patients in the MIMIC-II database. We then developed a pair of stochastic gradient descent (SGD)-based classifiers with elastic-net regularization, one trained under the log loss and one under the hinge loss. We also studied the performance-sparsity tradeoffs of both classifiers as their regularization parameters were varied.
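
As a rough, non-authoritative illustration of this setup, the sketch below trains an SGD-based linear classifier with elastic-net regularization over bag-of-words text features, assuming scikit-learn (the abstract does not name the software used). The note texts and mortality labels are synthetic placeholders, not MIMIC-II data, and the hyperparameter values are arbitrary.

```python
# Minimal sketch of an elastic-net-regularized, SGD-trained text classifier,
# assuming scikit-learn. Notes and labels below are synthetic placeholders
# standing in for the MIMIC-II nursing notes and mortality outcomes.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

notes = (["pt alert and oriented, tolerating po, vitals stable"] * 30
         + ["intubated and sedated, on pressors, worsening overnight"] * 30)
died = np.array([0] * 30 + [1] * 30)

# penalty="elasticnet" mixes L1 and L2 penalties; l1_ratio=0 is pure L2
# (dense), l1_ratio=1 is pure L1 (sparse). loss="log_loss" gives a logistic
# model; loss="hinge" gives the SVM-style variant. (On scikit-learn < 1.1
# the logistic loss is spelled "log".)
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    SGDClassifier(loss="log_loss", penalty="elasticnet",
                  alpha=1e-4, l1_ratio=0.5, max_iter=1000, random_state=0),
)

# 10-fold cross-validated AUC, as in the paper's evaluation.
aucs = cross_val_score(clf, notes, died, cv=10, scoring="roc_auc")
print(f"mean AUC on toy data: {aucs.mean():.3f}")
```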

RESULTS:

The best-performing classifier achieved a 10-fold cross-validated AUC of 0.897 under the log loss with full L2 regularization, while full L1 regularization used just 0.00025% of candidate input features and achieved an AUC of 0.889. The log loss (AUC range 0.889-0.897) outperformed the hinge loss (0.850-0.876), although the hinge loss yielded even sparser models.
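
One way to characterize this kind of performance-sparsity tradeoff is to sweep the elastic-net mixing parameter from full L2 to full L1 while recording cross-validated AUC and the fraction of nonzero weights. The sketch below does so on synthetic data, again assuming scikit-learn; the feature matrix, labels, and hyperparameter values are illustrative assumptions, not the paper's.

```python
# Illustrative performance-sparsity sweep: vary the elastic-net mixing
# parameter and record cross-validated AUC plus the fraction of nonzero
# weights. X and y are synthetic stand-ins, not MIMIC-II features or labels.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.poisson(0.05, size=(200, 500)).astype(float)  # sparse count-like matrix
y = rng.integers(0, 2, size=200)                      # random binary labels

for l1_ratio in (0.0, 0.25, 0.5, 0.75, 1.0):  # 0 = full L2, 1 = full L1
    clf = SGDClassifier(loss="log_loss", penalty="elasticnet",
                        alpha=1e-4, l1_ratio=l1_ratio,
                        max_iter=1000, random_state=0)
    auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
    frac_nonzero = np.count_nonzero(clf.fit(X, y).coef_) / X.shape[1]
    print(f"l1_ratio={l1_ratio:.2f}  AUC={auc:.3f}  nonzero={frac_nonzero:.3f}")
```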

DISCUSSION:

Most features selected by both classifiers appear clinically relevant and correspond to predictors already present in existing ICU mortality models. The sparser classifiers were also able to discover a number of informative, albeit nonclinical, features.

CONCLUSION:

The elastic-net-regularized classifiers perform reasonably well and can reduce the number of required features by more than a thousandfold, with only a modest impact on performance.

KEYWORDS:

Elastic net; Feature selection; ICU; Machine learning; Risk stratification; Text mining

PMID: 25700665
DOI: 10.1016/j.jbi.2015.02.003
[PubMed - indexed for MEDLINE]