Enhancing learning outcomes through multisensory integration: An fMRI study of audio-visual training in virtual reality.

Alwashmi, Kholoud ORCID: 0000-0001-6691-2641, Meyer, Georg, Rowe, Fiona ORCID: 0000-0001-9210-9131 and Ward, Ryan ORCID: 0000-0002-9850-5191
(2024) Enhancing learning outcomes through multisensory integration: An fMRI study of audio-visual training in virtual reality. NeuroImage, 285, 120483.

Access the full text of this item via the Open Access link.

Abstract

The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in real and virtual environments. Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations, provides opportunities for the development of new rehabilitative interventions. This study investigated how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent 30 minutes of daily virtual reality (VR) training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm that is commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as a reduction in mean reaction time (RT) in VR, improved significantly. In separate tests in a controlled laboratory environment, the performance gains from the VR training environment transferred to a significant mean RT reduction for the trained voluntary AV task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter showing faster response times supported by the presence of audio cues. The behavioural learning effect also transferred to two additional tasks: a visual search task and an involuntary visual task. Our fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe and the cerebellum. These functional changes were observed only for the trained multisensory task, not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated with behavioural performance improvements. This study demonstrates that incorporating spatial auditory cues into voluntary visual training in VR augments activation changes in multisensory integration regions, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
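
As an illustration of the brain-behaviour association reported above, the sketch below correlates a per-participant change in thalamic BOLD activation with the corresponding reaction-time improvement. This is a minimal sketch only, not the authors' analysis pipeline: the variable names and simulated values are hypothetical placeholders standing in for the contrast estimates produced by the study's actual fMRI analysis.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n = 20  # number of participants, as in the study

    # Hypothetical per-participant measures (placeholders for real estimates):
    # change in thalamic BOLD contrast estimate (post-training minus baseline)
    delta_bold = rng.normal(loc=0.3, scale=0.1, size=n)
    # reduction in mean reaction time, in seconds (baseline minus post-training)
    rt_reduction = 0.5 * delta_bold + rng.normal(scale=0.05, size=n)

    # Pearson correlation, mirroring the kind of brain-behaviour test reported
    r, p = pearsonr(delta_bold, rt_reduction)
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")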

Item Type: Article
Uncontrolled Keywords: Brain, Humans, Blindness, Magnetic Resonance Imaging, Learning, Auditory Perception, Visual Perception, Virtual Reality
Divisions: Faculty of Health and Life Sciences
Faculty of Science and Engineering > IDEAS
Faculty of Health and Life Sciences > Institute of Population Health
Depositing User: Symplectic Admin
Date Deposited: 02 Jan 2024 15:59
Last Modified: 19 Jan 2024 15:33
DOI: 10.1016/j.neuroimage.2023.120483
Open Access URL: https://www.sciencedirect.com/science/article/pii/...
URI: https://livrepository.liverpool.ac.uk/id/eprint/3177644