Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation



Yang, Pengfei, Li, Jianlin, Liu, Jiangchao, Huang, Cheng-Chao, Li, Renjue, Chen, Liqian, Huang, Xiaowei ORCID: 0000-0001-6267-0366 and Zhang, Lijun
(2021) Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation. Formal Aspects of Computing, 33 (3). pp. 407-435.

FAOC21_dnn.pdf (Submitted version)

Abstract

Deep neural networks (DNNs) have been shown to lack robustness: they are vulnerable to small perturbations of their inputs. This has led to safety concerns about applying DNNs in safety-critical domains. Several verification approaches based on constraint solving have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from a scalability problem: only small DNNs can be handled. To deal with this, abstraction-based approaches have been proposed, but they unfortunately face a precision problem: the obtained bounds are often loose. In this paper, we focus on a variety of local robustness properties and a (δ, ε)-global robustness property of DNNs, and investigate novel strategies to combine the constraint-solving and abstraction-based approaches to work with these properties:

- We propose a method to verify local robustness which improves a recent proposal for analyzing DNNs through the classic abstract interpretation technique with a novel symbolic propagation technique. Specifically, the values of neurons are represented symbolically and propagated from the input layer to the output layer, on top of the underlying abstract domains. This achieves significantly higher precision and thus can prove more properties (a minimal sketch of the idea follows the abstract).

- We propose a Lipschitz constant based verification framework. By utilising Lipschitz constants solved by semidefinite programming, we can prove global robustness of DNNs. We show how the Lipschitz constant can be tightened if it is restricted to small regions; a tightened Lipschitz constant can be helpful in proving local robustness properties. Furthermore, a global Lipschitz constant can be used to accelerate batch local robustness verification, and thus support the verification of global robustness (see the second sketch below).

- We show how the proposed abstract interpretation and Lipschitz constant based approaches can benefit from each other to obtain more precise results. Moreover, they can also be exploited and combined to improve the constraint-based approach.

We implement our methods in the tool PRODeep and report detailed experimental results on several benchmarks.
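To make the first bullet concrete, below is a minimal illustrative sketch of symbolic propagation over the interval abstract domain. It is not PRODeep's implementation; the function names and the affine-plus-error representation are expository choices made here. Each neuron is kept as an affine expression over the input variables plus an interval error term; a ReLU that is provably active or inactive preserves the symbolic form, and only an unstable ReLU falls back to a sound concrete interval.

    import numpy as np

    def concretize(C, d, elo, ehi, lo, hi):
        # Tightest bounds of C @ x + d + [elo, ehi] over the input box [lo, hi].
        pos, neg = np.maximum(C, 0.0), np.minimum(C, 0.0)
        return pos @ lo + neg @ hi + d + elo, pos @ hi + neg @ lo + d + ehi

    def symbolic_propagate(layers, lo, hi):
        """Propagate symbolic affine forms (plus interval error terms) through
        a ReLU network given as a list of (W, b) pairs; returns output bounds."""
        n = lo.size
        C, d = np.eye(n), np.zeros(n)    # neuron i = C[i] @ x + d[i] + [elo[i], ehi[i]]
        elo, ehi = np.zeros(n), np.zeros(n)
        for k, (W, b) in enumerate(layers):
            C, d = W @ C, W @ d + b      # affine layers are exact on symbolic forms
            Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
            elo, ehi = Wp @ elo + Wn @ ehi, Wp @ ehi + Wn @ elo
            if k == len(layers) - 1:     # no ReLU after the output layer
                break
            l, u = concretize(C, d, elo, ehi, lo, hi)
            for i in range(len(d)):
                if u[i] <= 0:            # provably inactive: neuron is constant 0
                    C[i], d[i], elo[i], ehi[i] = 0.0, 0.0, 0.0, 0.0
                elif l[i] < 0:           # unstable: drop the symbolic form and
                    C[i], d[i] = 0.0, 0.0      # keep the sound interval [0, u[i]]
                    elo[i], ehi[i] = 0.0, u[i]
                # provably active (l[i] >= 0): the symbolic form passes through
        return concretize(C, d, elo, ehi, lo, hi)

    # Usage on a toy 2-2-1 ReLU network over the input box [-1, 1]^2:
    layers = [(np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 1.0])),
              (np.array([[1.0, 1.0]]), np.array([0.0]))]
    print(symbolic_propagate(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))

The gain over plain interval propagation comes from the stable neurons: their symbolic forms allow cancellations between later-layer weights that intervals cannot express, which is why the abstract reports significantly higher precision.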
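The second sketch illustrates the Lipschitz-based cheap/expensive split from the second bullet, under a simple sufficient condition: if every logit is L-Lipschitz in the norm used for the perturbation bound δ, a perturbation of size δ moves each logit by at most L·δ, so the prediction cannot change while the runner-up margin exceeds 2·L·δ. The names model, batch_verify, and expensive_verifier are placeholders; computing L by semidefinite programming, as the paper does, is outside the scope of this sketch.

    import numpy as np

    def lipschitz_certified(logits, delta, L):
        """Sufficient check for local robustness at one input: with L-Lipschitz
        logits, the top-1 class is stable on the delta-ball whenever the
        runner-up margin exceeds 2 * L * delta."""
        top2 = np.sort(logits)[-2:]
        margin = top2[1] - top2[0]
        return margin > 2.0 * L * delta

    def batch_verify(model, inputs, delta, L_global, expensive_verifier):
        """Batch local robustness: the cheap global-Lipschitz test certifies the
        easy inputs; only the rest go to a precise but costly verifier (which may
        itself use a Lipschitz constant tightened to the small input region)."""
        verdicts = []
        for x in inputs:
            if lipschitz_certified(model(x), delta, L_global):
                verdicts.append(True)    # certified by the cheap test alone
            else:
                verdicts.append(expensive_verifier(x, delta))
        return verdicts
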

Item Type: Article
Uncontrolled Keywords: Deep neural network, Verification, Robustness, Abstract interpretation, Symbolic propagation, Lipschitz constant
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 04 Feb 2022 08:55
Last Modified: 15 Mar 2024 13:59
DOI: 10.1007/s00165-021-00548-1
URI: https://livrepository.liverpool.ac.uk/id/eprint/3148181