Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding (Extended Abstract)



Wang, Z, Huang, C (ORCID: 0000-0002-9300-1787) and Zhu, Q
(2023) Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding (Extended Abstract). In: Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23), 19-25 August 2023.

PDF: IJCAI_2023_Global_Robustness__Extended_Abstract_.pdf - Author Accepted Manuscript (525kB)

Abstract

The robustness of deep neural networks in safety-critical systems, i.e., how sensitive the model output is to input perturbations, has received significant interest recently. While most previous works focused on the local robustness property, studies of the global robustness property, i.e., robustness over the entire input space, are still lacking. In this work, we formulate the global robustness certification problem for ReLU neural networks and present an efficient approach to address it. Our approach includes a novel interleaving twin-network encoding scheme and an over-approximation algorithm leveraging relaxation and refinement techniques. Its timing efficiency and effectiveness are evaluated against other state-of-the-art global robustness certification methods and demonstrated via case studies on practical applications.
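For context, a global robustness property of this kind is commonly stated as a Lipschitz-style bound over the whole input domain; the formulation below is an illustrative sketch based on standard definitions (the input bound \epsilon, output bound \delta, and choice of the \ell_\infty norm are assumptions, not necessarily the exact definition used in the paper): for a ReLU network f with input domain X,

\forall x, x' \in X:\quad \|x' - x\|_\infty \le \epsilon \;\Longrightarrow\; \|f(x') - f(x)\|_\infty \le \delta.

Certifying this property requires reasoning about all pairs of nearby inputs in X simultaneously, which is what motivates encoding two copies of the network (a "twin-network" encoding) rather than analysing a neighbourhood of a single fixed input as in local robustness verification.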

Item Type: Conference or Workshop Item (Unspecified)
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 16 Oct 2023 09:09
Last Modified: 21 Mar 2024 16:50
DOI: 10.24963/ijcai.2023/727
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3173753