Meng, Yanda (ORCID: 0000-0001-7344-2174), Wei, Meng, Gao, Dongxu (ORCID: 0000-0001-7008-0737), Zhao, Yitian, Yang, Xiaoyun, Huang, Xiaowei (ORCID: 0000-0001-6267-0366) and Zheng, Yalin (ORCID: 0000-0002-7873-0922)
(2020)
CNN-GCN Aggregation Enabled Boundary Regression for Biomedical Image Segmentation.
In: MICCAI, Lima, Peru.
Text: MICCAI_2020 (1).pdf - Author Accepted Manuscript (1MB)
Abstract
Accurate segmentation of anatomical structures is an essential task in biomedical image analysis. Segmentation methods based on regressing object contours have recently attracted increasing attention from researchers, offering a new starting point for segmentation tasks as an alternative to the commonly used dense pixel classification methods. However, because CNN-based networks lack spatial information while contour regression itself demands more of it, these methods require extra processing to retain spatial features, which may lead to longer inference time or a tedious design and inference pipeline. To address this issue, this paper proposes a simple, intuitive deep-learning-based contour regression model. We develop a novel multi-level, multi-stage aggregated network that regresses the coordinates of instance contours directly, in an end-to-end manner. The proposed network seamlessly links a convolutional neural network (CNN) with an Attention Refinement (AR) module and a Graph Convolutional Network (GCN). By hierarchically and iteratively combining features across different layers of the CNN, the proposed model obtains sufficient low-level features and high-level semantic information from the input image. In addition, our model pays distinct attention to object contours with the help of the AR module and the GCN. In particular, thanks to the proposed aggregated GCN and vertex sampling method, our model benefits from direct feature learning of contour locations, from sparse to dense, and from spatial information propagation across the whole input image. Experiments on segmentation of the fetal head (FH) in ultrasound images and of the optic disc (OD) and optic cup (OC) in color fundus images demonstrate that our method outperforms state-of-the-art methods in terms of both effectiveness and efficiency.
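The core idea in the abstract, regressing contour vertex coordinates with a GCN over a closed contour graph, can be sketched minimally. This is not the authors' implementation: the layer sizes, the single-hop cycle adjacency, and the two-layer architecture below are illustrative assumptions standing in for the paper's aggregated GCN and CNN feature extractor.

```python
import numpy as np

def cycle_adjacency(n):
    """Row-normalized adjacency of a closed contour graph:
    each vertex links to itself and its two neighbours."""
    A = np.eye(n)
    for i in range(n):
        A[i, (i - 1) % n] = 1.0
        A[i, (i + 1) % n] = 1.0
    return A / A.sum(axis=1, keepdims=True)

def gcn_layer(X, A_hat, W):
    """One graph convolution: propagate vertex features along the
    contour, then apply a learned linear map with ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

def regress_contour(features, W1, W_out):
    """Map per-vertex features (assumed sampled from a CNN feature
    map) to (x, y) contour coordinates via two GCN layers."""
    A_hat = cycle_adjacency(features.shape[0])
    H = gcn_layer(features, A_hat, W1)
    return A_hat @ H @ W_out  # final linear layer -> (n, 2) coordinates

# Toy usage: 8 contour vertices with 16-dim features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))
W1 = rng.standard_normal((16, 32)) * 0.1
W_out = rng.standard_normal((32, 2)) * 0.1
coords = regress_contour(feats, W1, W_out)
print(coords.shape)  # (8, 2)
```

Because the adjacency wraps around, spatial information propagates along the closed contour at every layer, which is the property the abstract attributes to the GCN component; the paper's sparse-to-dense vertex sampling and AR module are omitted here.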
| Item Type: | Conference Item (Unspecified) |
|---|---|
| Uncontrolled Keywords: | 46 Information and Computing Sciences, 4611 Machine Learning, Machine Learning and Artificial Intelligence, Biomedical Imaging, Networking and Information Technology R&D (NITRD), Neurosciences, Bioengineering |
| Depositing User: | Symplectic Admin |
| Date Deposited: | 08 Jun 2020 08:17 |
| Last Modified: | 02 Jan 2026 07:48 |
| DOI: | 10.1007/978-3-030-59719-1_35 |
| Related Websites: | |
| URI: | https://livrepository.liverpool.ac.uk/id/eprint/3089656 |
| Disclaimer: | The University of Liverpool is not responsible for content contained on other websites from links within repository metadata. Please contact us if you notice anything that appears incorrect or inappropriate. |