Chiara Di Vece1,2, Brian Dromey3,4, Francisco Vasconcelos3,5, Anna L David4,6, Donald Peebles4,6, Danail Stoyanov3,5.
Abstract
PURPOSE: In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from two-dimensional (2D) US images represents a major challenge in skill acquisition. We aim to build a US plane localisation system for 3D visualisation, training, and guidance without integrating additional sensors.
Keywords: Deep learning; Fetal ultrasound; Pose regression
Year: 2022 PMID: 35489005 PMCID: PMC9110476 DOI: 10.1007/s11548-022-02609-z
Source DB: PubMed Journal: Int J Comput Assist Radiol Surg ISSN: 1861-6410 Impact factor: 3.421
Fig. 1 Three main standard US planes used to evaluate the development of brain, abdominal, and femoral structures. Their acquisition is subject to intra- and inter-operator variability
Fig. 2 Examples of automatically generated supervised data used to train our network, obtained from pre-acquired 3D US volumes of both phantom and real fetuses.
Fig. 3 Diagram of the proposed pose regression network. During training, it receives US images sliced from the volume together with their 6D poses relative to the centre of the fetal brain. It outputs a pose prediction, from which a rotation matrix and a translation vector are extracted for the loss function; the predicted pose is also used for visualisation in Unity. K and S refer to the kernel size and the stride
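The caption states that the network outputs a 6D pose from which a rotation matrix and a translation vector are extracted, but the record does not specify the rotation parameterisation. A common choice for regression networks is the continuous 6-D rotation representation, which maps two predicted 3-vectors to a valid rotation matrix by Gram–Schmidt orthogonalisation. A minimal sketch under that assumption (the function name `rotmat_from_6d` is illustrative, not from the paper):

```python
import numpy as np

def rotmat_from_6d(r6: np.ndarray) -> np.ndarray:
    """Map a 6-D rotation representation (two 3-vectors) to a proper
    rotation matrix via Gram-Schmidt orthogonalisation."""
    a1, a2 = r6[:3], r6[3:]
    b1 = a1 / np.linalg.norm(a1)           # first basis vector
    a2 = a2 - np.dot(b1, a2) * b1          # remove component along b1
    b2 = a2 / np.linalg.norm(a2)           # second basis vector
    b3 = np.cross(b1, b2)                  # completes a right-handed frame
    return np.stack([b1, b2, b3], axis=1)  # columns are the basis vectors

# Example: an arbitrary 6-D regression output
R = rotmat_from_6d(np.array([1.0, 0.2, 0.0, 0.0, 1.0, 0.3]))
```

Any 6-vector (with non-degenerate halves) yields an orthonormal matrix with determinant +1, which makes this representation well suited to unconstrained network outputs.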
Translation and rotation errors of our method for test planes acquired at random coordinates (Test RP) and around the TV SP (Test SP). Norm: Euclidean distance; GE: geodesic error. Phantom and real volumes are indexed by the fetus considered (i); for tests on real data, the gestational age (GA) is also indicated
| Initial weights | Training data | Testing data | Interval | Translation Median [mm] | Translation Min [mm] | Translation Max [mm] | Rotation Median [deg] | Rotation Min [deg] | Rotation Max [deg] |
|---|---|---|---|---|---|---|---|---|---|
| ImageNet | Phantom ( | Phantom ( | Test RP | 0.90 | 0.01 | 53.47 | 1.17 | 0.04 | 20.85 |
| | | | Test SP | 0.44 | 0.02 | 10.43 | 1.21 | 0.13 | 137.78 |
| Phantom | Real ( | Real ( | Test RP | 9.94 | 0.26 | 37.58 | 30.58 | 0.54 | 155.3 |
| | | Real ( | Test RP | 10.74 | 0.29 | 43.03 | 30.81 | 0.91 | 146.1 |
| | | Real ( | Test RP | 10.39 | 0.32 | 39.08 | 21.94 | 0.43 | 137.22 |
| | | Real ( | Test RP | 17.76 | 0.24 | 58.44 | 37.93 | 1.39 | 131.3 |
| | | Real ( | Test RP | 18.80 | 0.25 | 50.59 | 42.23 | 1.82 | 108.2 |
| | | Real ( | Test RP | 17.15 | 1.31 | 61.39 | 34.43 | 0.71 | 159.64 |
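The table reports translation error as a Euclidean norm in millimetres and rotation error as a geodesic error in degrees. The record does not give the formulas, but the standard definitions are the distance between the two translation vectors and the angle of the relative rotation on SO(3). A sketch under those standard definitions:

```python
import numpy as np

def translation_norm(t_pred: np.ndarray, t_gt: np.ndarray) -> float:
    """Euclidean distance between predicted and ground-truth translations."""
    return float(np.linalg.norm(t_pred - t_gt))

def geodesic_error_deg(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    """Geodesic distance on SO(3): the angle of the relative rotation
    R_gt^T R_pred, recovered from its trace."""
    cos_theta = (np.trace(R_gt.T @ R_pred) - 1.0) / 2.0
    cos_theta = np.clip(cos_theta, -1.0, 1.0)  # guard against numerical drift
    return float(np.degrees(np.arccos(cos_theta)))

# Example: a 30-degree rotation about z versus the identity
theta = np.radians(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(geodesic_error_deg(Rz, np.eye(3)))  # → 30.0 (up to floating point)
```

The clipping step matters in practice: for near-identical rotations, floating-point error can push the cosine marginally outside [-1, 1], making `arccos` return NaN.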
Fig. 4 Left: translation and rotation error distributions on phantom (1.1) and real US data (1.2). Test RP refers to test planes acquired at random coordinates, whereas Test SP refers to test planes acquired around the annotated TV SP. Right: (a) shows the central slice of the fetal brain volumes used for training (blue labels) and testing (yellow labels) in Experiment 1.2; (b) compares RP and SP for a GA of 23 weeks, i.e. the best-aligned volume
Fig. 5 TV SP prediction performed on phantom (2.1) and real (2.2) US data. The green and orange boxes indicate the ground truth and the prediction, respectively. The ground-truth pose of the TV SP was manually annotated by an experienced sonographer within the Unity environment. In 2.3, (a) is the TV SP acquired on the phantom with a 2D probe, (b) is where the predicted plane intersects the phantom 3D US volume (scanned with a 3D US probe), and (c) is the SP annotation in the 3D US volume, which closely matches the prediction