Comparison of feature selection and classification for MALDI-MS data




BMC Genomics, 10(Suppl 1):S3

First Online: 07 July 2009

Abstract

Introduction: In the classification of mass spectrometry (MS) proteomics data, peak detection, feature selection, and learning classifiers are critical to classification accuracy. To better understand which methods are more accurate for this task, several publicly available peak detection algorithms for matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) data were recently compared; however, how different feature selection methods and classification models relate to classification performance has not been addressed. With the application of intelligent computing, much progress has been made in the development of feature selection methods and learning classifiers for the analysis of high-throughput biological data. The main objective of this paper is to compare feature selection methods and learning classifiers when applied to MALDI-MS data, and thereby to provide a reference for the analysis of MS proteomics data.
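
To make the three-stage pipeline concrete, the sketch below chains peak detection, feature selection, and a classifier on purely synthetic spectra. The particular choices here (SciPy's find_peaks, a univariate ANOVA filter, and a linear SVM from scikit-learn) are illustrative assumptions, not the specific algorithms evaluated in the paper.

```python
# Minimal sketch of a peak detection -> feature selection -> classification
# pipeline on synthetic data; all parameters are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_spectra, n_mz = 60, 2000
spectra = rng.gamma(2.0, 1.0, size=(n_spectra, n_mz))  # stand-in intensity matrix
labels = rng.integers(0, 2, size=n_spectra)            # two synthetic classes

# 1) Peak detection: local maxima of the mean spectrum define candidate peaks.
mean_spectrum = spectra.mean(axis=0)
peak_idx, _ = find_peaks(mean_spectrum)
peak_features = spectra[:, peak_idx]

# 2) + 3) Feature selection and classification, cross-validated together:
#    keep the 20 most discriminative peaks, then train a linear SVM.
pipe = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="linear"))
scores = cross_val_score(pipe, peak_features, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```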

Results: We compared a well-known feature selection method, Support Vector Machine Recursive Feature Elimination (SVMRFE), with a recently developed method, Gradient-based Leave-one-out Gene Selection (GLGS), which has performed effectively in microarray data analysis. We also compared several learning classifiers, including the K-Nearest Neighbor Classifier (KNNC), Naïve Bayes Classifier (NBC), Nearest Mean Scaled Classifier (NMSC), the uncorrelated normal-based quadratic Bayes Classifier (UDC), Support Vector Machines (SVMs), and a distance metric learning classifier for Large Margin Nearest Neighbor classification (LMNN) based on the Mahalanobis distance. For the comparison, we conducted a comprehensive experimental study using three types of MALDI-MS data.
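
As a rough illustration of this kind of comparison, the sketch below applies SVM-RFE via scikit-learn's RFE wrapped around a linear SVM and then cross-validates several stand-in classifiers on synthetic data. GLGS has no off-the-shelf scikit-learn implementation, NearestCentroid and QuadraticDiscriminantAnalysis are only loose analogues of NMSC and UDC, and LMNN (available, for example, in the metric-learn package) is omitted; all data and hyperparameters are assumptions for illustration, not those used in the study.

```python
# Synthetic illustration: SVM-RFE feature selection followed by a comparison
# of several stand-in classifiers; settings are arbitrary assumptions.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.svm import SVC

# High-dimensional, low-sample synthetic data resembling an MS feature matrix.
X, y = make_classification(n_samples=100, n_features=500, n_informative=15,
                           random_state=0)

# SVM-RFE: recursively discard the features with the smallest linear-SVM weights.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=25, step=0.1)
X_sel = rfe.fit_transform(X, y)

# Cross-validate a set of stand-in classifiers on the selected features.
classifiers = {
    "KNNC": KNeighborsClassifier(n_neighbors=3),
    "NBC": GaussianNB(),
    "NMSC (approx.)": NearestCentroid(),
    "UDC (approx.)": QuadraticDiscriminantAnalysis(),
    "SVM": SVC(kernel="linear"),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X_sel, y, cv=5).mean()
    print(f"{name:<15} mean CV accuracy: {acc:.3f}")
```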

Conclusion: Regarding feature selection, SVMRFE outperformed GLGS in classification. As for the learning classifiers, when the classification models derived from the best training were compared, SVMs performed best with respect to the expected testing accuracy. However, the distance metric learning classifier LMNN outperformed SVMs and the other classifiers when evaluated on the best testing accuracy. In such cases, the optimum classification model based on LMNN is worth investigating in future studies.

Authors: Qingzhong Liu, Andrew H Sung, Mengyu Qiao, Zhongxue Chen, Jack Y Yang, Mary Qu Yang, Xudong Huang, Youping Deng

Source: https://link.springer.com/article/10.1186/1471-2164-10-S1-S3


