Existing learning-based methods to automatically trace axons in 3D brain imagery often rely on manually annotated segmentation labels. Labeling is a labor-intensive process and is not scalable to whole-brain analysis, which is needed for improved understanding of brain function. We propose a self-supervised auxiliary task that utilizes the tube-like structure of axons to build a feature extractor from unlabeled data. The proposed auxiliary task constrains a 3D convolutional neural network (CNN) to predict the order of permuted slices in an input 3D volume. By solving this task, the 3D CNN is able to learn features without ground-truth labels that are useful for downstream segmentation with the 3D U-Net model. To the best of our knowledge, our model is the first to perform automated segmentation of axons imaged at subcellular resolution with the SHIELD technique. We demonstrate improved segmentation performance over the 3D U-Net model on both the SHIELD PVGPe dataset and the BigNeuron Project, single neuron Janelia dataset.
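The abstract's pretext task, predicting the order of permuted slices in an input volume, can be illustrated with a short sketch. This is a hypothetical illustration, not the authors' implementation: the function name, the chunking along the z-axis, and the choice of 4 slice groups are all assumptions made for clarity. Each possible permutation is treated as one class, so a network trained on these samples solves a classification problem without any manual labels.

```python
import itertools
import numpy as np

def make_permutation_sample(volume, num_slices=4, rng=None):
    """Illustrative pretext-task sampler (hypothetical, not the paper's code).

    Splits a 3D volume into `num_slices` chunks along the z-axis, shuffles
    them, and returns (shuffled_volume, permutation_class). A 3D CNN would
    be trained to predict `permutation_class` from `shuffled_volume`.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Enumerate all num_slices! orderings; each one is a class label.
    perms = list(itertools.permutations(range(num_slices)))
    label = int(rng.integers(len(perms)))
    order = perms[label]
    # Split along z (axis 0) and reassemble in the permuted order.
    chunks = np.array_split(volume, num_slices, axis=0)
    shuffled = np.concatenate([chunks[i] for i in order], axis=0)
    return shuffled, label
```

With 4 slice groups there are 4! = 24 classes; larger `num_slices` gives a harder task at the cost of a factorially growing label space, which is why such tasks typically use a small number of chunks or a fixed subset of permutations.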
Title of host publication: Proceedings - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2020
Publisher: IEEE Computer Society
Published: 2020 Jun
Event: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2020 - Virtual, Online, United States
Duration: 2020 Jun 14 → 2020 Jun 19
Publication series: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Bibliographical note: Publisher Copyright © 2020 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition
- Electrical and Electronic Engineering