Learn by Oneself: Exploiting Weight-Sharing Potential in Knowledge Distillation Guided Ensemble Network



Zhao, Qi, Lyu, Shuchang, Chen, Lijiang, Liu, Binghao, Xu, Ting-Bing, Cheng, Guangliang ORCID: 0000-0001-8686-9513 and Feng, Wenquan
(2023) Learn by Oneself: Exploiting Weight-Sharing Potential in Knowledge Distillation Guided Ensemble Network. IEEE Transactions on Circuits and Systems for Video Technology, 33 (11), pp. 6661-6678.

PDF
Learn_by_Oneself_Exploiting_Weight-Sharing_Potential_in_Knowledge_Distillation_Guided_Ensemble_Network.pdf - Author Accepted Manuscript


Abstract

Recent convolutional neural networks (CNNs) have become increasingly compact, and careful architectural design has substantially improved their performance. Knowledge distillation has pushed this performance further. However, existing distillation-guided methods either depend on offline pretrained, high-quality large teacher models or impose a heavy online training burden. To address these problems, we propose a feature-sharing and weight-sharing ensemble network guided by knowledge distillation (EKD-FWSNet), a training framework that strengthens the representation ability of baseline models while requiring less training computation and memory. Specifically, to remove the dependence on an offline pretrained teacher, we design an end-to-end online training scheme to optimize EKD-FWSNet. To reduce the online training burden, we introduce only one auxiliary classmate branch, from which multiple forward branches are constructed and then integrated into an ensemble teacher that guides the baseline model. Compared with previous online ensemble training frameworks, EKD-FWSNet provides diverse output predictions without adding more auxiliary classmate branches. To maximize the optimization power of EKD-FWSNet, we exploit the representation potential of weight-sharing blocks and design an efficient knowledge distillation mechanism within the framework. Extensive comparison experiments and visualization analyses on benchmark datasets (CIFAR-10/100, tiny-ImageNet, CUB-200, and ImageNet) show that the self-learned EKD-FWSNet boosts baseline models by a large margin and clearly outperforms previous related methods. Further analysis also supports the interpretability of EKD-FWSNet. Our code is available at https://github.com/cv516Buaa/EKD-FWSNet.
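The abstract describes an online, teacher-free distillation scheme in which a baseline branch and a single auxiliary classmate branch share weights and their combined predictions act as an ensemble teacher. Below is a minimal PyTorch-style sketch of that general idea, assuming a shared backbone, one classmate head, and a softened-average ensemble teacher; names such as TwoBranchKDNet and ekd_style_loss are hypothetical illustrations and do not reproduce the authors' released implementation (see the linked GitHub repository for the actual code).

```python
# Hedged sketch: online knowledge distillation with a weight-shared classmate
# branch and an ensemble teacher. Module names and hyperparameters are
# assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchKDNet(nn.Module):
    def __init__(self, shared_backbone, baseline_head, classmate_head):
        super().__init__()
        self.shared_backbone = shared_backbone   # weight-shared feature extractor
        self.baseline_head = baseline_head       # original baseline classifier
        self.classmate_head = classmate_head     # single auxiliary classmate branch

    def forward(self, x):
        feats = self.shared_backbone(x)          # features shared by both branches
        return self.baseline_head(feats), self.classmate_head(feats)

def ekd_style_loss(logits_base, logits_mate, targets, T=3.0, alpha=0.5):
    """Cross-entropy on both branches plus distillation from the ensemble teacher."""
    ce = F.cross_entropy(logits_base, targets) + F.cross_entropy(logits_mate, targets)
    # Ensemble teacher: average of the branches' temperature-softened predictions.
    teacher = (F.softmax(logits_base / T, dim=1) + F.softmax(logits_mate / T, dim=1)) / 2
    # Distill the baseline branch toward the (detached) ensemble teacher.
    kd = F.kl_div(F.log_softmax(logits_base / T, dim=1), teacher.detach(),
                  reduction="batchmean") * (T * T)
    return ce + alpha * kd
```

In this sketch the auxiliary branch is trained jointly with the baseline in a single end-to-end pass, so no pretrained teacher is needed and only one extra head is added on top of the shared backbone.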

Item Type: Article
Uncontrolled Keywords: Behavioral and Social Science, Basic Behavioral and Social Science
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 24 Apr 2024 11:20
Last Modified: 24 Apr 2024 11:20
DOI: 10.1109/tcsvt.2023.3267115
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3180562