Abstract
The estimation of antenatal amniotic fluid (AF) volume (AFV) is important as it offers crucial information about fetal development, fetal well-being, and perinatal prognosis. However, AFV measurement is cumbersome and patient specific. Moreover, it is heavily sonographer-dependent, with measurement accuracy varying greatly with the sonographer's experience. Therefore, the development of accurate, robust, and easily adoptable methods to evaluate AFV is highly desirable. In this regard, automation is expected to reduce user-based variability and the workload of sonographers. However, automating AFV measurement is very challenging, because accurate detection of AF pockets is hindered by various confounding factors, such as reverberation artifacts, AF-mimicking regions, and floating matter. Furthermore, AF pockets exhibit a wide variety of shapes and sizes, and ultrasound images often show missing or incomplete structural boundaries. To overcome these difficulties, we develop a hierarchical deep-learning-based method that reflects the anatomical-knowledge-based approach of clinicians. The key step is the segmentation of the AF pocket using our proposed deep learning network, AF-net. AF-net is a variation of U-net combined with three complementary concepts: atrous convolution, a multi-scale side-input layer, and a side-output layer. The experimental results demonstrate that the proposed method provides measurements of the amniotic fluid index (AFI) that are as robust and precise as those of clinicians. The proposed method achieved a Dice similarity of 0.877±0.086 for AF segmentation, a mean absolute error of 2.666±2.986, and a mean relative error of 0.018±0.023 for the AFI value. To the best of our knowledge, our method provides, for the first time, an automated measurement of the AFI.
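The abstract gives no implementation details, so the following is only a minimal, hypothetical PyTorch sketch: an atrous (dilated) convolution block with a side output of the kind AF-net is described as combining, together with the evaluation quantities reported above (Dice similarity, and mean absolute/relative error of the AFI). All class and function names, signatures, and the single-block structure are illustrative assumptions, not the authors' code.

```python
# Hypothetical illustration only (not the authors' AF-net implementation):
# an atrous (dilated) convolution block with a side output, plus the metrics
# reported in the abstract (Dice similarity, MAE/MRE of the AFI).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AtrousSideOutBlock(nn.Module):
    """A 3x3 conv followed by a dilated 3x3 conv; a 1x1 conv produces a coarse
    side-output segmentation map that can be supervised at this scale."""
    def __init__(self, in_ch, out_ch, dilation=2, n_classes=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3,
                               padding=dilation, dilation=dilation)
        self.side_out = nn.Conv2d(out_ch, n_classes, kernel_size=1)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return x, self.side_out(x)

def dice_similarity(pred_mask, gt_mask, eps=1e-6):
    """Dice = 2|A ∩ B| / (|A| + |B|) computed on binarized masks."""
    pred = (pred_mask > 0.5).float()
    gt = (gt_mask > 0.5).float()
    inter = (pred * gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def afi_errors(afi_pred, afi_true):
    """Mean absolute error and mean relative error of AFI estimates."""
    afi_pred = torch.as_tensor(afi_pred, dtype=torch.float32)
    afi_true = torch.as_tensor(afi_true, dtype=torch.float32)
    mae = (afi_pred - afi_true).abs().mean()
    mre = ((afi_pred - afi_true).abs() / afi_true).mean()
    return mae, mre

# Example usage on dummy data:
block = AtrousSideOutBlock(in_ch=1, out_ch=16)
features, side_map = block(torch.randn(1, 1, 128, 128))
print(side_map.shape)  # torch.Size([1, 1, 128, 128])
```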
| Original language | English |
| --- | --- |
| Article number | 101951 |
| Journal | Medical Image Analysis |
| Volume | 69 |
| DOIs | |
| Publication status | Published - 2021 Apr |
Bibliographical note
Funding Information: This work was supported by Samsung Medison and the Samsung Science & Technology Foundation (No. SSTF-BA1402-01). C.H.C. and J.K.S. were supported in part by National Research Foundation of Korea (NRF) Grants 2015R1A5A1009350 and 2017R1A2B20005661. We would like to express our deepest gratitude to the two sonographers who made the ground-truth data, Hye Mi Jeon and Hye Ri Kim. We also thank Haeeun Han for helping to draw Fig. 1.
Publisher Copyright:
© 2021 The Author(s)
All Science Journal Classification (ASJC) codes
- Radiological and Ultrasound Technology
- Radiology, Nuclear Medicine and Imaging
- Computer Vision and Pattern Recognition
- Health Informatics
- Computer Graphics and Computer-Aided Design