FindVehicle and VehicleFinder: a NER dataset for natural language-based vehicle retrieval and a keyword-based cross-modal vehicle retrieval system



Guan, Runwei, Man, Ka Lok, Chen, Feifan, Yao, Shanliang, Hu, Rongsheng, Zhu, Xiaohui, Smith, Jeremy ORCID: 0000-0002-0212-2365, Lim, Eng Gee and Yue, Yutao ORCID: 0000-0003-4532-0924
(2023) FindVehicle and VehicleFinder: a NER dataset for natural language-based vehicle retrieval and a keyword-based cross-modal vehicle retrieval system. Multimedia Tools and Applications, 83 (8). pp. 24841-24874.


Abstract

Natural language (NL) based vehicle retrieval is the task of retrieving, from among all candidate vehicles, the vehicle most consistent with a given NL query. Because NL queries are easy to obtain, the task has promising prospects for building interactive intelligent traffic systems (ITS). Current solutions mainly extract both text and image features and map them to the same latent space to compare their similarity. However, existing methods usually rely on dependency analysis or semantic role labelling to find keywords related to vehicle attributes. These techniques may require substantial pre-processing and post-processing, and can extract the wrong keywords when the NL query is complex. To tackle these problems and simplify the pipeline, we borrow the idea of named entity recognition (NER) and construct FindVehicle, a NER dataset in the traffic domain. It has 42.3k labelled NL descriptions of vehicle tracks, containing information such as the location, orientation, type and colour of the vehicle. FindVehicle also adopts both overlapping entities and fine-grained entities to meet further requirements. To verify its effectiveness, we propose a baseline NL-based vehicle retrieval model called VehicleFinder. Our experiments show that, using text encoders pre-trained on FindVehicle, VehicleFinder achieves 87.7% precision and 89.4% recall when retrieving a target vehicle by text command on our homemade dataset based on UA-DETRAC [1]. From loading the command into VehicleFinder to deciding whether the target vehicle is consistent with the command, the time cost is 279.35 ms on one ARM v8.2 CPU and 93.72 ms on one RTX A4000 GPU, which is much faster than Transformer-based systems. The dataset is open-source at https://github.com/GuanRunwei/FindVehicle, and the implementation can be found at https://github.com/GuanRunwei/VehicleFinder-CTIM.
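As a rough illustration of the NER idea described in the abstract, the sketch below tags a toy NL vehicle query with BIO-style vehicle-attribute entities in Python. The tag names (VEHICLE_COLOR, VEHICLE_TYPE, VEHICLE_ORIENTATION) and the keyword lexicons are assumptions made for demonstration only; the actual FindVehicle label set and annotation scheme are defined in the linked repository, and VehicleFinder itself uses a trained text encoder rather than keyword lookup.

# Illustrative sketch only: the tag names and lexicons below are assumptions,
# not the actual FindVehicle schema (see https://github.com/GuanRunwei/FindVehicle).
from typing import List, Tuple

def tag_query(tokens: List[str]) -> List[Tuple[str, str]]:
    """Assign BIO-style entity tags to the tokens of a toy NL query."""
    colors = {"red", "white", "black"}            # hypothetical colour lexicon
    types = {"sedan", "suv", "truck", "bus"}      # hypothetical type lexicon
    orientations = {"towards", "away", "left", "right"}
    tagged = []
    for tok in tokens:
        low = tok.lower()
        if low in colors:
            tagged.append((tok, "B-VEHICLE_COLOR"))
        elif low in types:
            tagged.append((tok, "B-VEHICLE_TYPE"))
        elif low in orientations:
            tagged.append((tok, "B-VEHICLE_ORIENTATION"))
        else:
            tagged.append((tok, "O"))             # token outside any entity
    return tagged

if __name__ == "__main__":
    query = "find the red sedan driving towards the camera".split()
    for token, tag in tag_query(query):
        print(f"{token}\t{tag}")

Treating attribute extraction as sequence labelling in this way avoids the per-query dependency parsing and post-processing that the abstract identifies as a weakness of earlier approaches: a single pass of a tagger yields the attribute keywords directly.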

Item Type: Article
Uncontrolled Keywords: Cross-modal learning, Named entity recognition, Intelligent traffic system, Vehicle retrieval, Human-computer interaction, Object detection
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Faculty of Science and Engineering > School of Physical Sciences
Depositing User: Symplectic Admin
Date Deposited: 05 Sep 2023 13:45
Last Modified: 26 Feb 2024 16:53
DOI: 10.1007/s11042-023-16373-y
Open Access URL: https://link.springer.com/article/10.1007/s11042-023-16373-y
URI: https://livrepository.liverpool.ac.uk/id/eprint/3172549