Learning from Demonstration in the Wild



Behbahani, Feryal, Shiarlis, Kyriacos, Chen, Xi, Kurin, Vitaly, Kasewa, Sudhanshu, Stirbu, Ciprian, Gomes, Joao, Paul, Supratik, Oliehoek, Frans A ORCID: 0000-0003-4372-5055, Messias, Joao, et al. (2019) Learning from Demonstration in the Wild. 2019 International Conference on Robotics and Automation (ICRA), pp. 775-781.

Full text: 1811.03516v1.pdf (Submitted version, 4MB)
Abstract

Learning from demonstration (LfD) is useful in settings where hand-coding behaviour or a reward function is impractical. It has succeeded in a wide range of problems but typically relies on manually generated demonstrations or specially deployed sensors; consequently, it has not generally been able to leverage the copious demonstrations available in the wild: those that capture behaviour that was occurring anyway, using sensors that were already deployed for another purpose, e.g., traffic camera footage capturing the natural behaviour of vehicles, cyclists, and pedestrians. We propose Video to Behaviour (ViBe), a new approach to learning models of behaviour from unlabelled raw video data of a traffic scene collected from a single, monocular, initially uncalibrated camera with ordinary resolution. Our approach calibrates the camera, detects relevant objects, tracks them through time, and uses the resulting trajectories to perform LfD, yielding models of naturalistic behaviour. We apply ViBe to raw videos of a traffic intersection and show that it can learn purely from video, without additional expert knowledge.
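
The abstract describes a four-stage pipeline: camera calibration, object detection, tracking, and LfD on the recovered trajectories. The calibration stage implies a mapping from image pixels to ground-plane coordinates, typically expressed as a planar homography. Below is a minimal, self-contained Python sketch of that projection step; the function name and homography values are illustrative assumptions, not code from the paper.

    # Hedged sketch: project pixel detections to ground-plane coordinates
    # via a planar homography, as the calibration stage of the pipeline
    # implies. The values in H and all names here are hypothetical.
    import numpy as np

    def pixel_to_ground(H, uv):
        """Map (N, 2) pixel coordinates uv to ground-plane (x, y) via H."""
        # Homogeneous coordinates: append a column of ones.
        uv1 = np.hstack([uv, np.ones((uv.shape[0], 1))])
        xyw = uv1 @ H.T
        # Divide out the homogeneous scale w to get Euclidean coordinates.
        return xyw[:, :2] / xyw[:, 2:3]

    # Stand-in homography that the calibration stage would estimate from
    # the initially uncalibrated monocular camera (values made up).
    H = np.array([[0.05, 0.0,   -10.0],
                  [0.0,  0.08,  -15.0],
                  [0.0,  0.001,   1.0]])
    pixels = np.array([[320.0, 240.0], [100.0, 400.0]])  # two detections
    print(pixel_to_ground(H, pixels))  # ground-plane (x, y) per detection

Each per-object track of such ground-plane points would then serve as one demonstration trajectory for the LfD stage.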

Item Type: Article
Additional Information: Accepted to the IEEE International Conference on Robotics and Automation (ICRA) 2019; extended version with appendix
Uncontrolled Keywords: cs.LG, stat.ML
Depositing User: Symplectic Admin
Date Deposited: 28 Feb 2020 11:18
Last Modified: 19 Jan 2023 01:12
DOI: 10.1109/icra.2019.8794412
URI: https://livrepository.liverpool.ac.uk/id/eprint/3028894