Lifelong Machine Learning and root cause analysis for large-scale cancer patient data



Pal, Gautam ORCID: 0000-0002-2594-9699, Hong, Xianbin ORCID: 0000-0003-1678-0948, Wang, Zhuo, Wu, Hongyi, Li, Gangmin and Atkinson, Katie ORCID: 0000-0002-5683-4106
(2019) Lifelong Machine Learning and root cause analysis for large-scale cancer patient data. Journal of Big Data, 6 (1).

Pal2019_Article_LifelongMachineLearningAndRoot.pdf - Author Accepted Manuscript


Abstract

Introduction

This paper presents a lifelong learning framework that constantly adapts to changing data patterns over time through an incremental learning approach. In many big data systems, iteratively re-training on high-dimensional data from scratch is computationally infeasible, since constant data stream ingestion on top of a historical data pool causes training time to grow rapidly. The need therefore arises to retain past learning and update the model incrementally as new data arrive. Moreover, current machine learning approaches typically produce predictions without a comprehensive root cause analysis. To address these limitations, our framework builds on an ensemble of stream data and historical batch data for an incremental lifelong learning (LML) model.

Case description

A cancer patient's pathological tests, such as blood, DNA, urine, or tissue analysis, provide a unique signature based on DNA combinations. Our analysis enables personalized, targeted medication and achieves a therapeutic response. The model is evaluated on data from The National Cancer Institute's Genomic Data Commons unified data repository. The aim is to prescribe personalized medicine based on the thousands of genotype and phenotype parameters recorded for each patient.

Discussion and evaluation

The model uses a dimension reduction method to reduce training time in an online sliding-window setting. We identify the Gleason score as a determining factor for cancer possibility and substantiate this claim through the Lilliefors and Kolmogorov–Smirnov tests. We present clustering and Random Decision Forest results. The model's prediction accuracy is compared with standard machine learning algorithms for numeric and categorical fields.

Conclusion

We propose an ensemble framework of stream and batch data for incremental lifelong learning. The framework successively applies first a streaming clustering technique and then a Random Decision Forest regressor/classifier to isolate anomalous patient data, and it provides reasoning through root cause analysis based on feature correlations, with the aim of improving the overall survival rate. While the stream clustering technique creates groups of patient profiles, the RDF drills down into each group for comparison and reasoning, yielding actionable insights. The proposed MALA architecture retains past learned knowledge, transfers it to future learning, and iteratively becomes more knowledgeable over time.
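The normality check on the Gleason score mentioned in the evaluation could be sketched along the following lines. This is a minimal illustration on synthetic scores, not the paper's Genomic Data Commons data; it shows the plain one-sample Kolmogorov–Smirnov test with parameters estimated from the sample, whereas the Lilliefors variant additionally corrects the p-value for that estimation step.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for per-patient Gleason scores (the paper uses
# records from the Genomic Data Commons repository instead).
gleason = rng.normal(loc=7.0, scale=1.5, size=500)

# One-sample KS test against a normal distribution whose mean and
# standard deviation are estimated from the sample itself; Lilliefors'
# test adjusts the p-value to account for this estimation.
stat, p = stats.kstest(gleason, "norm",
                       args=(gleason.mean(), gleason.std(ddof=1)))
print(f"KS statistic = {stat:.4f}, p-value = {p:.4f}")
```

A small p-value would reject normality; the paper draws on such tests to substantiate the Gleason score as a determining factor.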
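The incremental pipeline the abstract describes — dimension reduction in a sliding-window setting, streaming clustering of patient profiles, then a Random Decision Forest drilling into the groups — can be sketched as below. This is a hedged, self-contained toy using scikit-learn's `IncrementalPCA` and `MiniBatchKMeans` (whose `partial_fit` updates avoid retraining from scratch) on synthetic feature vectors; the paper's actual MALA architecture, feature set, and root cause analysis are richer than this sketch.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.cluster import MiniBatchKMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Incremental dimension reduction: the projection is updated per
# mini-batch rather than refit on the whole historical pool.
ipca = IncrementalPCA(n_components=3)
kmeans = MiniBatchKMeans(n_clusters=4, n_init=3, random_state=0)

for _ in range(20):                      # simulated sliding-window batches
    batch = rng.normal(size=(50, 10))    # 10 synthetic patient features
    reduced = ipca.partial_fit(batch).transform(batch)
    kmeans.partial_fit(reduced)          # streaming k-means update

# Drill into the clustered patient profiles with a Random Decision
# Forest; feature importances give a crude root-cause-style ranking.
X = rng.normal(size=(400, 10))
labels = kmeans.predict(ipca.transform(X))
rdf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(rdf.feature_importances_.round(3))
```

In this sketch the clustering stage groups profiles and the forest stage explains the groups; the paper performs the analogous drill-down per cluster and correlates features to reason about anomalous patients.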

Item Type: Article
Uncontrolled Keywords: Lifelong learning, Real-time data processing, Lambda Architecture, Streaming k-means, Random Decision Forest, Dimension reduction
Depositing User: Symplectic Admin
Date Deposited: 02 Jan 2020 15:26
Last Modified: 19 Jan 2023 00:12
DOI: 10.1186/s40537-019-0261-9
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3067752