Learning-based local-to-global landmark annotation for automatic 3D cephalometry

Hye Sun Yun, Tae Jun Jang, Sung Min Lee, Sang Hwy Lee, Jin Keun Seo

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

The annotation of three-dimensional (3D) cephalometric landmarks in 3D computed tomography (CT) has become an essential part of cephalometric analysis, which is used for diagnosis, surgical planning, and treatment evaluation. Automating 3D landmarking with high precision remains challenging due to the limited availability of training data and the high computational burden. This paper addresses these challenges by proposing a hierarchical deep-learning method consisting of four stages: 1) a basic landmark annotator for 3D skull pose normalization, 2) a deep-learning-based coarse-to-fine landmark annotator on the midsagittal plane, 3) a low-dimensional representation of the full set of landmarks using a variational autoencoder (VAE), and 4) a local-to-global landmark annotator. The VAE enables 3D morphological feature learning from two-dimensional images as well as similarity/dissimilarity representation learning over the concatenated vectors of cephalometric landmarks. The proposed method achieves an average 3D point-to-point error of 3.63 mm for 93 cephalometric landmarks using a small number of training CT datasets. Notably, the VAE captures variations of craniofacial structural characteristics.
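
To make stage 3 concrete, the following is a minimal sketch, not the authors' published architecture: a VAE that compresses a concatenated landmark vector (93 landmarks × 3 coordinates = 279 values) into a low-dimensional latent code. PyTorch, the hidden width, and the 16-dimensional latent size are all illustrative assumptions.

# Minimal sketch (assumed PyTorch; layer sizes and latent dimension are illustrative,
# not the paper's actual choices): a VAE over concatenated 3D landmark vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_LANDMARKS = 93              # number of cephalometric landmarks (from the paper)
INPUT_DIM = NUM_LANDMARKS * 3   # x, y, z per landmark, concatenated
LATENT_DIM = 16                 # assumed latent size; the paper's choice may differ

class LandmarkVAE(nn.Module):
    def __init__(self, input_dim=INPUT_DIM, hidden_dim=128, latent_dim=LATENT_DIM):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def encode(self, x):
        h = self.enc(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error on landmark coordinates plus KL divergence to N(0, I).
    recon_term = F.mse_loss(recon, x, reduction="sum")
    kl_term = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl_term

if __name__ == "__main__":
    model = LandmarkVAE()
    x = torch.randn(4, INPUT_DIM)   # dummy batch of normalized landmark vectors
    recon, mu, logvar = model(x)
    print(vae_loss(recon, x, mu, logvar).item())

In such a setup, distances between latent codes can serve as the similarity/dissimilarity representation the abstract refers to; the actual encoder inputs and training details in the paper may differ from this sketch.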

Original language: English
Article number: 085018
Journal: Physics in Medicine and Biology
Volume: 65
Issue number: 8
DOIs
Publication status: Published - 2020 Apr 21

Bibliographical note

Publisher Copyright:
© 2020 Institute of Physics and Engineering in Medicine.

All Science Journal Classification (ASJC) codes

  • Radiological and Ultrasound Technology
  • Radiology, Nuclear Medicine and Imaging
