EnnCore: End-to-End Conceptual Guarding of Neural Architectures



Manino, E., Carvalho, D., Dong, Y., Rozanova, J., Song, X., Mustafa, M. A., Freitas, A., Brown, G., Luján, M., Huang, X. (ORCID: 0000-0001-6267-0366) et al. (2022) EnnCore: End-to-End Conceptual Guarding of Neural Architectures.

SafeAI_2022_paper_9.pdf - Author Accepted Manuscript (407kB)

Abstract

The EnnCore project addresses the fundamental security problem of guaranteeing safety, transparency, and robustness in neural-based architectures. Specifically, EnnCore aims to enable system designers to specify essential conceptual/behavioral properties of neural-based systems, verify them, and thus safeguard the system against unpredictable behavior and attacks. In this respect, EnnCore will pioneer the dialogue between contemporary explainable neural models and full-stack neural software verification. This paper describes the limitations of existing studies, our research objectives, current achievements, and future directions towards this goal. In particular, we describe the development and evaluation of new methods, algorithms, and tools to achieve fully verifiable intelligent systems that are explainable, whose correct behavior is guaranteed, and that are robust against attacks. We also describe how EnnCore will be validated on two diverse and high-impact application scenarios: securing an AI system for (i) cancer diagnosis and (ii) energy demand response.

Item Type: Conference or Workshop Item (Unspecified)
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 07 Mar 2022 15:00
Last Modified: 18 Jan 2023 21:11
URI: https://livrepository.liverpool.ac.uk/id/eprint/3150295