Experiments in non-personalized future blood glucose level prediction



Bevan, R. and Coenen, F. (ORCID: 0000-0003-1026-6649)
(2020) Experiments in non-personalized future blood glucose level prediction.

Full text: bglp_final_2020.pdf (Author Accepted Manuscript)

Abstract

In this study we investigate the need for training future blood glucose level prediction models at the individual level (i.e. per patient). Specifically, we train models of several classes (linear models, feed-forward neural networks, recurrent neural networks, and recurrent neural networks incorporating attention mechanisms) to predict future blood glucose levels using varying time-series history lengths and data sources. We also compare methods of handling missing time-series data during training. We found that relatively short history lengths provided the best results: a 30-minute history proved optimal in our experiments. We observed that long short-term memory (LSTM) networks performed better than linear models and feed-forward neural networks, and that including an attention mechanism in the LSTM model further improved performance, even when processing relatively short sequences. Models trained using all of the available data outperformed those trained at the individual level. Furthermore, models trained using all of the available data, except for the data contributed by a given patient, were as effective at predicting that patient's future blood glucose levels as models trained using all of the available data; these models also significantly outperformed models trained using the patient's data only. Finally, we found that including sequences with missing values during training produced models that were more robust to missing values.
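To make the modelling setup concrete, the following is a minimal NumPy sketch of the general idea of attention pooling over recurrent hidden states for a short glucose history. It is not the authors' implementation: the toy recurrent cell, the dot-product scoring vector, the 5-minute sampling assumption (a 30-minute history giving 6 readings), and all weight values are illustrative assumptions.

```python
import numpy as np


def encode(seq, Wx, Wh):
    """Toy recurrent encoder: h_t = tanh(Wx * x_t + Wh @ h_{t-1}).

    A stand-in for an LSTM; returns the hidden state at every time step
    so an attention layer can pool over the whole history.
    """
    h = np.zeros(Wh.shape[0])
    states = []
    for x in seq:
        h = np.tanh(Wx * x + Wh @ h)
        states.append(h)
    return np.stack(states)  # shape (T, d)


def attention_pool(states, v):
    """Score each hidden state, softmax the scores, return the
    attention-weighted sum (context vector) and the weights."""
    scores = states @ v
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ states, weights


rng = np.random.default_rng(0)
d = 8  # hidden size (illustrative)
Wx = rng.normal(size=d) * 0.1
Wh = rng.normal(size=(d, d)) * 0.1
v = rng.normal(size=d)        # attention scoring vector
w_out = rng.normal(size=d) * 0.1  # linear read-out head

# Assumed 5-minute CGM sampling: a 30-minute history = 6 readings (mg/dL).
history = np.array([110.0, 115.0, 121.0, 128.0, 134.0, 139.0])

states = encode(history / 200.0, Wx, Wh)  # crude input scaling
context, weights = attention_pool(states, v)
prediction = float(context @ w_out) * 200.0  # predicted future level, mg/dL
```

The attention weights show which parts of the 30-minute window the pooled representation relies on; in a trained model this is what lets the network emphasise, for example, the most recent readings when the trend is steep.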

Item Type: Conference or Workshop Item (Unspecified)
Depositing User: Symplectic Admin
Date Deposited: 11 Sep 2020 07:49
Last Modified: 18 Jan 2023 23:34
URI: https://livrepository.liverpool.ac.uk/id/eprint/3100649