Systematic Error Removal Using Random Forest for Normalizing Large-Scale Untargeted Lipidomics Data.

Published

Journal Article

Large-scale untargeted lipidomics experiments involve the measurement of hundreds to thousands of samples. Such data sets are usually acquired on one instrument over days or weeks of analysis time, and these extended acquisition runs introduce a variety of systematic errors, including batch differences, longitudinal drift, and instrument-to-instrument variation. This technical variance can obscure the true biological signal and hinder biological discoveries. To combat this issue, we present a novel normalization approach, systematic error removal using random forest (SERRF), which uses pooled quality control (QC) samples to eliminate unwanted systematic variation in large sample sets. We compared SERRF with 15 other commonly used normalization methods on six lipidomics data sets from three large cohort studies (832, 1162, and 2696 samples). SERRF reduced the average technical error in these data sets to 5% relative standard deviation. We conclude that SERRF outperforms the existing methods and can significantly reduce unwanted systematic variation, revealing the biological variance of interest.
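The QC-based idea described in the abstract can be sketched in a few lines: for each compound, train a random forest on the pooled QC injections to model that compound's systematic component from the other compounds' intensities, then rescale every measurement by that estimate. The snippet below is a simplified illustration on simulated data, not the authors' published implementation; the simulated drift, the `serrf_like_normalize` helper, and all parameter choices are assumptions made for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated run: n injections x p lipids, with a slow longitudinal drift.
n, p = 120, 8
drift = np.linspace(1.0, 1.6, n)[:, None]              # instrument drift factor
true = rng.lognormal(mean=5.0, sigma=0.3, size=(n, p))
qc = np.arange(0, n, 10)                               # every 10th injection is a QC pool
true[qc] = rng.lognormal(mean=5.0, sigma=0.3, size=p)  # identical pool aliquots
X = true * drift                                       # observed intensities

def serrf_like_normalize(X, qc_idx):
    """QC-based random-forest normalization (a sketch, not the published SERRF code).

    For each compound, a forest is trained on the QC injections only, predicting
    that compound's intensity from the other compounds' intensities, which share
    the systematic drift. The fitted forest then estimates the systematic
    component for every injection, and each intensity is rescaled by the ratio
    of the QC median to that estimate.
    """
    Xn = X.astype(float).copy()
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(others[qc_idx], X[qc_idx, j])
        systematic = np.clip(rf.predict(others), 1e-9, None)
        Xn[:, j] = X[:, j] * np.median(X[qc_idx, j]) / systematic
    return Xn

def mean_qc_rsd(M, qc_idx):
    """Average relative standard deviation across compounds, QC injections only."""
    cols = M[qc_idx]
    return float(np.mean(cols.std(axis=0) / cols.mean(axis=0)))

Xn = serrf_like_normalize(X, qc)
print(f"QC RSD before: {mean_qc_rsd(X, qc):.1%}")
print(f"QC RSD after:  {mean_qc_rsd(Xn, qc):.1%}")
```

Because the QC injections are aliquots of one pool, any variation among them is technical; the drop in their relative standard deviation after normalization is the same quality metric the abstract reports.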

Cited Authors

  • Fan, S; Kind, T; Cajka, T; Hazen, SL; Tang, WHW; Kaddurah-Daouk, R; Irvin, MR; Arnett, DK; Barupal, DK; Fiehn, O

Published Date

  • March 5, 2019

Published In

  • Analytical Chemistry

Volume / Issue

  • 91 / 5

Start / End Page

  • 3590 - 3596

PubMed ID

  • 30758187


Electronic International Standard Serial Number (EISSN)

  • 1520-6882

Digital Object Identifier (DOI)

  • 10.1021/acs.analchem.8b05592

Language

  • eng

Location

  • United States