A Partial Least Squares based algorithm for parsimonious variable selection


Algorithms for Molecular Biology 6:27

Received: 28 September 2011 | Accepted: 05 December 2011 | First Online: 05 December 2011


Background: In genomics, a commonly encountered problem is to extract a subset of variables from a large set of explanatory variables associated with one or several quantitative or qualitative response variables. An example is identifying associations between codon usage and phylogeny-based definitions of taxonomic groups at different taxonomic levels. For such problems, the issues to be addressed are maximum interpretability with the smallest number of selected variables, consistency of the selected variables, and variation in model performance on test data.

Results: We present an algorithm balancing the parsimony and the predictive performance of a model. The algorithm is based on variable selection using reduced-rank Partial Least Squares with a regularized elimination. Allowing a marginal decrease in model performance results in a substantial decrease in the number of selected variables, which significantly improves the interpretability of the model. Within this approach we have tested and compared three criteria commonly used for variable selection in the Partial Least Squares modeling paradigm: loading weights, regression coefficients, and variable importance on projections. The algorithm is applied to the problem of identifying codon variations that discriminate different bacterial taxa, which is of particular interest for classifying metagenomic samples. The results are compared with a classical forward selection algorithm, the widely used Lasso algorithm, and Soft-threshold Partial Least Squares variable selection.

Conclusions: A regularized elimination algorithm based on Partial Least Squares increases interpretability and consistency and reduces the classification error on test data compared to standard approaches.

Electronic supplementary material: The online version of this article (doi:10.1186/1748-7188-6-27) contains supplementary material, which is available to authorized users.


Authors: Tahir Mehmood, Harald Martens, Solve Sæbø, Jonas Warringer, Lars Snipen

Source: https://link.springer.com/
