Mahmood Alzubaidi1, Marco Agus1, Khalid Alyafei2,3, Khaled A Althelaya1, Uzair Shah1, Alaa Abd-Alrazaq1,2, Mohammed Anbar4, Michel Makhlouf3, Mowafa Househ1.
Abstract
Several reviews have examined artificial intelligence (AI) techniques for improving pregnancy outcomes, but few have focused on ultrasound images. This survey explores how AI can assist fetal growth monitoring via ultrasound imaging. We reported our findings following the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases. Of 1,269 studies, 107 were included. We found that 2D ultrasound images were more widely used (88 studies) than 3D and 4D ultrasound images (19). Classification was the most common method (42 studies), followed by segmentation (31), classification combined with segmentation (16), and miscellaneous methods such as object detection, regression, and reinforcement learning (18). The areas that gained the most traction within the pregnancy domain were the fetal head (43 studies), fetal body (31), fetal heart (13), fetal abdomen (10), and fetal face (10). This survey will promote the development of improved AI models for fetal clinical applications.
Keywords: Artificial intelligence; Diagnostic technique in health technology; Health informatics; Medical imaging
Year: 2022 PMID: 35856024 PMCID: PMC9287600 DOI: 10.1016/j.isci.2022.104713
Source DB: PubMed Journal: iScience ISSN: 2589-0042
Figure 1. Literature map showing the selected studies (red dots) and recommended studies (black dots)
Definition of the excluded terminologies
| Terminologies | Definition |
|---|---|
| Irrelevant | Publication that is not related to the scope of this survey |
| Wrong intervention | Publication that targets fetal health but does not use AI technology and ultrasound images |
| Wrong population | Publication that uses AI technology and ultrasound images but does not target fetal health |
| Wrong publication type | Publication that is a conference abstract, review, magazine, or newspaper article |
| Unavailable | Publication that is not accessible or cannot be found |
| Foreign language | Publication that is not written in English |
Figure 2. PRISMA diagram showing our literature search and inclusion process
Figure 3. Summary of AI methods implemented on fetal ultrasound images
Characteristics of the included studies (n = 107)
| Characteristics | Number of studies (n, %) |
|---|---|
| Journal article | |
| Conference proceedings | |
| Book chapter | |
| 2021 | |
| 2020 | |
| 2019 | |
| 2018 | |
| 2017 | |
| 2016 | |
| 2015 | |
| 2014 | |
| 2013 | |
| 2012 | |
| 2011 | |
| 2010 | |
| Fetus body | |
| Fetal part structures | (13, 12.14) |
| Anatomical structures | (8, 7.47) |
| Growth disease | (6, 5.60) |
| Gestational age | (3, 2.80) |
| Gender | (1, 0.93) |
| Fetus head | |
| Skull localization and measurement | (25, 23.36) |
| Brain standard plane | (13, 12.14) |
| Brain disease | (5, 4.67) |
| Fetus face | |
| Fetal facial standard planes | (5, 4.67) |
| Face anatomical landmarks | (3, 2.80) |
| Facial expressions | (2, 1.86) |
| Fetus heart | |
| Heart disease | (7, 6.54) |
| Heart chambers view | (6, 5.60) |
| Fetus abdomen | |
| Abdominal anatomical landmarks | (10, 9.34) |
| China | |
| Shenzhen University | (13, 12.14) |
| Beihang University | (5, 4.67) |
| Chinese University of Hong Kong | (4, 3.73) |
| Hebei University of Technology | (2, 1.86) |
| Fudan University | (2, 1.86) |
| Shanghai Jiao Tong University | (2, 1.86) |
| South China University of Technology | (2, 1.86) |
| Other institutes | (11, 10.28) |
| UK | |
| University of Oxford | (20, 18.69) |
| Imperial College London | (4, 3.73) |
| King’s College, London | (1, 0.93) |
| India | |
| Japan | |
| Indonesia | |
| USA | |
| South Korea | |
| Iran | |
| Australia | |
| Canada | |
| Mexico | |
| France | |
| Italy | |
| Tunisia | |
| Spain | |
| Iraq | |
| Brazil | |
| Malaysia |
Articles published using AI to improve fetus body monitoring: objective, backbone methods, optimization, fetal age, and AI tasks
| Study | Objective | Backbone Methods/Framework | Optimization/Extractor methods | Fetal age | AI tasks |
|---|---|---|---|---|---|
| ( | To identify the fetal skull, heart and abdomen from ultrasound images | SVM as the classifier | Gaussian Mixture Model (GMM) | 26th week | classification |
| ( | To segment the seven key structures of the neonatal hip joint | Neonatal Hip Bone Segmentation Network (NHBSNet) | Feature Extraction Module | 16 - 25 weeks. | segmentation |
| ( | To segment organs head, femur, and humerus in ultrasound images using multilayer super pixel images features | Simple Linear Iterative Clustering (SLIC) | Unary pixel shape feature | N/A | segmentation |
| ( | To automate kidney segmentation using fully convolutional neural networks. | FCNN: U-Net & UNET++ | N/A | 20 to 40 weeks | segmentation |
| ( | To evaluate the maturity of current Deep Learning classification techniques for their application in a real maternal-fetal clinical environment | CNN DenseNet-169 | N/A | 18 to 40 weeks | classification |
| ( | To use the learnt visual attention maps to guide standard plane detection on all three standard biometry planes: ACP, HCP and FLP. | Temporal SonoEyeNet (TSEN) | CNN feature extractor: VGG-16 | N/A | classification |
| ( | To support first trimester fetal assessment of multiple fetal anatomies including both visualization and the measurements from a single 3D ultrasound scan | Multi-Task Fully Convolutional Network (FCN) | N/A | 11 to 14 weeks | Segmentation Classification |
| ( | To automatically classify 14 different fetal structures in 2D fetal ultrasound images by fusing information from both cropped regions of fetal structures and the whole image | support vector machine (SVM)+ Decision fusion | Fine-tuning AlexNet CNN | 18 to 20 weeks | Classification |
| ( | To automatic identification of different standard planes from US images | T-RNN framework: | Features extracted using J-CNN classifier | 18 to 40 weeks | Classification |
| ( | To classify abdominal fetal ultrasound video frames into standard AC planes or background | M-SEN architecture | Generator CNN | N/A | Classification |
| ( | To detect multiple fetal structures in free-hand ultrasound | CNN | Class Activation Mapping (CAM) | 28 to 40 weeks | classification |
| ( | To extract features from regions inside the images where meaningful structures exist. | Guided Random Forests | Probabilistic Boosting Tree (PBT) | 18 to 22 weeks | Classification |
| ( | To detect standard planes from US videos | T-RNN | Spatio-Temporal Features | 18 - 40 weeks | Classification |
| ( | To propose the first and fully automatic framework in the field to simultaneously segment fetus, gestational sac and placenta, | 3D FCN + RNN hierarchical deep supervision mechanism (HiDS) | BiLSTM module denoted as FB-nHiDS | 10 - 14 weeks | Segmentation |
| ( | To segment the placenta, amniotic fluid, and fetus. | FCNN | N/A | 11 - 19 weeks | Segmentation |
| ( | To segment the amniotic fluid and fetal tissues in fetal US images | The encoder-decoder network based on VGG16 | N/A | 22nd week | Segmentation |
| ( | To localize the fetus and extract the best fetal biometry planes for the head and abdomen from first trimester 3D fetal US images | CNN | Structured Random Forests | 11 - 13 weeks | Classification |
| ( | To detect and localize fetal anatomical regions in 2D US images | ResNet18 | Soft Proposal Layer (SP) | 22 - 32 weeks | Classification |
| ( | To reliably estimate abdominal circumference | CNN + Gradient Boosting Machine (GBM) | Histogram of Oriented Gradient (HoG) | 15 - 40 weeks | Classification |
| ( | To detect and recognize the fetal NT based on 2D ultrasound images by using artificial neural network techniques. | Artificial Neural Network (ANN) | Multilayer Perceptron (MLP) Network | N/A | Classification |
| ( | To detect NT region | U-Net NT Segmentation | VGG16 NT Region Detection | 4 - 12 weeks | Segmentation |
| ( | To propose biometric measurement and classification of IUGR, using OpenGL concepts to extract feature values and an ANN model designed for diagnosis and classification | ANN | OpenGL | 12 - 40 weeks | Classification |
| ( | To find the region of interest (ROI) of the fetal biometric and organ region in the US image | DCNN AlexNet | N/A | 16 - 27 weeks | Classification |
| ( | To detect fetal abnormality in 2D US images | ANN + Multilayered perceptron neural networks (MLPNN) | Gradient vector flow (GVF) | 14 - 40 weeks | Classification segmentation |
| ( | To develop a computer-aided diagnosis and classification tool for extracting ultrasound sonographic features and classifying IUGR fetuses | ANN | Two-Step Splitting Method (TSSM) for Reaction-Diffusion (RD) | 12 - 40 weeks | Classification segmentation |
| ( | To develop an automatic classification algorithm on the US examination result using Convolutional Neural Network in Blighted Ovum detection | CNN | N/A | N/A | Classification |
| ( | To propose an intelligent system based on combination of ConvNet and PSO for Down syndrome diagnosis. | CNN | Particle Swarm Optimization (PSO) | N/A | Classification |
| ( | To automatically detect and measure the | CNN FCN | N/A | 16 - 26 weeks | Classification segmentation |
| ( | To accurately estimate the gestational age from the fetal lung region of US images. | U-NET | N/A | 24 - 40 weeks | Classification segmentation |
| ( | To classify, segment, and measure several fetal structures for the purpose of GA estimation | U-NET | Residual UNET (RUNET) | 16th weeks | Classification segmentation |
| ( | To measure the accuracy of Learning Vector Quantization (LVQ) in classifying the gender of the fetus in the US image | ANN | Learning Vector Quantization (LVQ) | N/A | Classification |
Articles published using AI to improve fetus head monitoring: objective, backbone methods, optimization, fetal age, and AI tasks
| Study | Objective | Backbone Methods/Framework | Optimization/Extractor methods | Fetal age | AI tasks |
|---|---|---|---|---|---|
| ( | To localize the fetal head region in US imaging | Multi-scale mini-LinkNet network | N/A | 12 - 40 weeks | Segmentation |
| ( | To locate the fetal head from 3D ultrasound images using shape model | AdaBoost | Shape Model | 11 - 14 weeks | Classification |
| ( | To detect fetal head | Deep Belief Network (DBN) | Hough transform | 11 - 14 weeks | Classification |
| ( | To semantically segment fetal head from maternal and other fetal tissue | U-NET | Ellipse fitting | 12 - 20 weeks | Segmentation |
| ( | To automatically discover and localize anatomical landmarks; measure the HC, TV, and the TC | CNN | Saliency maps | 13 - 26 weeks | Miscellaneous |
| ( | To demonstrate the effectiveness of hybrid method to segment fetal head | DU-Net | Scattering Coefficients (SC) | 13 - 26 weeks | Segmentation |
| ( | To segment fetal head using Network Binarization | Depthwise Separable Convolutional Neural Networks (DSCNNs) | Network Binarization | 12 - 40 weeks | Segmentation |
| ( | To segment the fetal skull boundary and fetal skull for fetal HC measurement | U-NET | Squeeze and Excitation (SE) blocks | 12 - 40 weeks | Segmentation |
| ( | To automatically segment and estimate HC ellipse. | Multi-Task network based on Link-Net architecture (MTLN) | Ellipse Tuner | 12 - 40 weeks | Segmentation |
| ( | To capture more information with multiple-channel convolution from US images | Multiple-Channel and Atrous MA-Net | Encoder and Decoder Module | N/A | Segmentation |
| ( | To automatically segment fetal ultrasound image and HC biometry | Deeply Supervised Attention-Gated (DAG) V-Net | Attention-Gated Module | 12 - 40 weeks | Segmentation |
| ( | To compound a new US volume containing the whole brain anatomy | U-NET + Incidence Angle Maps (IAM) | CNN | 13 to 26 weeks | Segmentation |
| ( | To directly measure the head circumference, without having to resort to handcrafted features or manually labeled segmented images | CNN regressor (Reg-Resnet50) | N/A | 12 - 40 weeks | Segmentation |
| ( | To propose region-CNN for head localization and centering, and a regression CNN to accurately delineate the HC | CNN regressor (U-net) | Tiny-YOLOv2 | 12 - 40 weeks | Miscellaneous |
| ( | To present a novel end-to-end deep learning network to automatically measure the fetal HC, biparietal diameter (BPD), and occipitofrontal diameter (OFD) length from 2D US images | FCNN (SAPNet) | Regression network | 12 - 40 weeks | Miscellaneous |
| ( | To segment fetal head from US images | FCN | Faster R-CNN | 12 - 40 weeks | Miscellaneous |
| ( | To develop a fully automated system for detecting fetal head structures | Multi-Task network based on Link-Net architecture (MTLN) | Hadamard Transform (HT) | N/A | Miscellaneous |
| ( | To measure HC automatically | Random Forest Classifier | Haar-like features | 18 - 33 weeks | Classification |
| ( | To determine measurements of fetal HC and BPD | FCN | N/A | 18 - 22 weeks | Segmentation |
| ( | To segment the whole fetal head in US volumes | Hybrid attention scheme (HAS) | 3D U-NET + Encoder and Decoder architecture for dense labeling | 20 - 31 weeks | Segmentation |
| ( | To segment fetal head using a flexibly plug-and-play module called vector self-attention layer (VSAL) | CNN | Vector Self-Attention Layer (VSAL) | 12 - 40 weeks | Segmentation |
| ( | To provide automatic framework for skull segmentation in fetal 3D US | Two-Stage Cascade CNN (2S-CNN) U-NET | Incidence Angle Map | 20 - 36 weeks | Segmentation |
| ( | To segment 2D ultrasound images of fetal skulls based on a V-Net architecture | Fully Convolutional Neural Network Combination (VNet-c) | N/A | 12 - 40 weeks | Segmentation |
| ( | To segment the cranial pixels in an ultrasound image using a random forest classifier | Random Forest Classifier | Simple Linear Iterative Clustering (SLIC) | 25 - 34 weeks | Segmentation |
| ( | To automatically estimate fetal HC | U-Net | Monte-Carlo Dropout | 18 - 22 weeks | Segmentation |
| ( | To automatically recognize six standard planes of fetal brains. | CNN+ Transfer learning DCNN | N/A | 18 - 22 weeks | Classification |
| ( | To help the clinician or sonographer obtain these planes of interest by finding the fetal head alignment in 3D US | Random forest classifier | Shape model and template deformation algorithm | 19 - 24 weeks. | Classification segmentation |
| ( | To segment the fetal cerebellum from 2D US images | U-NET +ResNet (ResU-NET-C) | N/A | 18 - 20 weeks | Segmentation |
| ( | To detect multiple planes simultaneously in challenging 3D US datasets | Multi-Agent Reinforcement Learning (MARL) | RNN | 19 - 31 weeks | Miscellaneous |
| ( | To detect the standard plane and assess its quality | Multi-task learning Framework Faster Regional CNN (MF R-CNN) | N/A | 14 - 28 weeks | Miscellaneous |
| ( | To tackle the automated problem of fetal biometry measurement with a high degree of accuracy and reliability | U-Net, CNN | Bounding-box regression (object-detection) | N/A | Miscellaneous |
| ( | To determine the standard plane in US images | Faster R-CNN | Region Proposal Network (RPN) | 14 - 28 weeks | Miscellaneous |
| ( | To address the problem of 3D fetal brain localization, structural segmentation, and alignment to a referential coordinate system | Multi-Task FCN | Slice-Wise Classification | 18 - 34 weeks | Classification segmentation |
| ( | To simultaneously localize multiple brain structures in 3D fetal US | View-based Projection Networks (VP-Nets) | U-Net | 20 - 29 weeks | Classification segmentation |
| ( | To automatically identify six fetal brain standard planes (FBSPs) from the non-standard planes. | Differential-CNN | Modified feature map | 16 - 34 weeks | Classification |
| ( | To obtain the desired position of the gate and Middle Cerebral Artery (MCA) | MCANet | Dilated Residual Network (DRN) | 28 - 40 weeks | Segmentation |
| ( | To segment four important fetal brain structures in 3D US | Random Decision Forests (RDF) | Generalized Haar-features | 18 - 26 weeks | Segmentation |
| ( | To automatically localize fetal brain standard planes in 3D US | Dueling Deep Q Networks (DDQN) | RNN-based Active Termination (AT) (LSTM) | 19 - 31 weeks | Miscellaneous |
| ( | To evaluate the feasibility of CNN-based DL algorithms predicting the fetal lateral ventricular width from prenatal US images. | ResNet50 | Faster R-CNN | 22 - 26 weeks. | Miscellaneous |
| ( | To recognize and separate the studied US data into two categories: healthy (HL) and hydrocephalus (HD) subjects | CNN | N/A | 20 - 22 weeks. | Classification |
| ( | To automatically measure fetal lateral ventricles (LVs) in 2D US images | Mask R-CNN | Feature Pyramid Networks (FPN) | N/A | Miscellaneous |
| ( | To apply binary classification for central nervous system (CNS) malformations in standard fetal US brain images in axial planes | CNN | Split-view Segmentation | 18 - 32 weeks | Classification segmentation |
| ( | To develop computer-aided diagnosis algorithms for five common fetal brain abnormalities. | Deep convolutional neural networks (DCNNs) VGG-net | U-net | 18 - 32 weeks | Classification segmentation |
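Many of the head studies above (those pairing U-Net with ellipse fitting, or using an "Ellipse Tuner") report head circumference by fitting an ellipse to the segmented skull boundary and evaluating its circumference. A minimal sketch of the final step, assuming the semi-axes `a` and `b` (half BPD and half OFD) have already been estimated; Ramanujan's closed-form approximation is a common choice:

```python
import math

def ellipse_circumference(a: float, b: float) -> float:
    """Circumference of an ellipse with semi-axes a and b
    (Ramanujan's first approximation)."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

# Sanity check: a circle of radius r has circumference 2*pi*r.
print(round(ellipse_circumference(10, 10), 3))  # 62.832

# With semi-axes in mm the result is an HC in mm, e.g.:
print(ellipse_circumference(47.5, 60.0))
```

The approximation is exact for circles and accurate to well under 0.1% for the moderate eccentricities typical of fetal skulls.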
Articles published using AI to improve fetus face monitoring: objective, backbone methods, optimization, fetal age, and AI tasks
| Study | Objective | Backbone Methods/Framework | Optimization/Extractor methods | Fetal age | AI tasks |
|---|---|---|---|---|---|
| ( | To address the issue of recognition of standard planes (i.e., axial, coronal, and sagittal planes) in the fetal US image | SVM classifier | AdaBoost to detect the region of interest (ROI) | 20 - 36 weeks | Classification |
| ( | To automatically recognize the FFSP from US images | Deep convolutional networks (DCNN) | N/A | 20 - 36 weeks | Classification |
| ( | To automatically recognize FFSP via a deep convolutional neural network (DCNN) architecture | DCNN | t-Distributed Stochastic Neighbor Embedding (t-SNE) | 20 - 36 weeks | Classification |
| ( | To automatically recognize the fetal facial standard planes (FFSPs) | SVM classifier | Root scale invariant feature transform (RootSIFT) | 20 - 36 weeks | Classification |
| ( | To automatically recognize and classify FFSPs | SVM classifier | Local Binary Pattern (LBP) | 20 - 24 weeks | Classification |
| ( | To detect position and orientation of facial region and landmarks | SFFD-Net (Samsung Fetal Face Detection Network) multi-class segmented | N/A | 14 - 30 weeks | Miscellaneous |
| ( | To detect landmarks in 3D fetal facial US volumes | CNN Backbone Network | Region Proposal Network (RPN) | N/A | Miscellaneous |
| ( | To detect nasal bone for US of fetus | Back Propagation Neural Network (BPNN) | Discrete Cosine Transform (DCT) | 11 - 13 weeks | Miscellaneous |
| ( | To recognize facial expressions from 3D US | ANN | Histogram equalization | N/A | Classification |
| ( | To recognize fetal facial expressions that are considered as being related to the brain development of fetuses | CNN | N/A | 19 - 38 weeks | Classification |
Articles published using AI to improve fetus heart monitoring: Objective, backbone methods, optimization, fetal age, and AI tasks
| Study | Objective | Backbone Methods/Framework | Optimization/Extractor methods | Fetal age | AI tasks |
|---|---|---|---|---|---|
| ( | To perform multi-disease segmentation and multi-class semantic segmentation of the five key components | U-NET + DeepLabV3+ | N/A | N/A | Segmentation |
| ( | To recognize and judge fetal congenital heart disease (FHD) development | DGACNN Framework | CNN | 18–39 weeks | Miscellaneous |
| ( | To segment the ventricular septum in US | Cropping-Segmentation-Calibration (CSC) | YOLOv2 cropping module | 18-28 weeks | Miscellaneous |
| ( | To detect cardiac substructures and structural abnormalities in fetal US videos | Supervised Object detection with Normaldata Only (SONO) | CNN | 18-34 weeks | Miscellaneous |
| ( | To identify recommended cardiac views and distinguish between normal hearts and complex CHD and to calculate standard fetal cardiothoracic measurements | Ensemble of Neural Networks | ResNet and U-Net | 18-24 weeks | Classification segmentation |
| ( | To learn features of the Echogenic Intracardiac Focus (EIF) that can indicate Down Syndrome (DS) and, in the testing phase, classify the EIF as DS positive or DS negative | Multi-scale Quantized Convolution Neural Network (MSQCNN) | Cross-Correlation Technique (CCT) | 24 - 26 weeks | Classification |
| ( | To perform automated diagnosis of hypoplastic left heart syndrome (HLHS) | SonoNet (VGG16) | N/A | 18–22 weeks | Classification |
| ( | To perform automated segmentation of cardiac structures | CU-NET | Structural Similarity Index Measure (SSIM) | N/A | Segmentation |
| ( | To accurately segment seven important anatomical structures in the A4C view | DW-Net | Dilated Convolutional Chain (DCC) module | N/A | Segmentation |
| ( | To automatically quality control the fetal US cardiac four-chamber plane | Three CNN-based Framework | Basic-CNN, a variant of SqueezeNet | 14 - 28 weeks | Miscellaneous |
| ( | To localize the end-systolic (ES) and end-diastolic (ED) from ultrasound | Hybrid CNN based framework | YOLOv3 | 18 - 36 weeks | Miscellaneous |
| ( | To detect the fetal heart and classify each individual frame as belonging to one of the standard viewing planes | FCN | N/A | 20 - 35 weeks | Segmentation |
| ( | To jointly predict the visibility, view plane, and location of the fetal heart in US videos | Multi-Task CNN | Hierarchical Temporal Encoding (HTE) | 20 - 35 weeks | Classification |
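Several of the heart studies above rely on object detectors (YOLOv2/YOLOv3 modules, SONO) whose localization quality is commonly scored by the intersection over union (IoU) between predicted and reference bounding boxes. A minimal sketch, with boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the overlap rectangle (may be empty).
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 4x4 boxes overlapping by half: intersection 8, union 24.
print(iou((0, 0, 4, 4), (2, 0, 6, 4)))  # 0.333...
```

A detection is usually counted as correct when IoU exceeds a fixed threshold (0.5 is a common convention).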
Articles published using AI to improve fetus abdomen monitoring: objective, backbone methods, optimization, fetal age, and AI tasks
| Study | Objective | Backbone Methods/Framework | Optimization/Extractor methods | Fetal age | AI tasks |
|---|---|---|---|---|---|
| ( | To automatically detect two anatomical landmarks in an abdominal image plane: the stomach bubble (SB) and the umbilical vein (UV) | AdaBoost | Haar-like feature | 14 - 19 weeks | Classification |
| ( | To localize fetal abdominal standard plane (FASP) from US including SB, UV, and spine (SP) | Random Forests Classifier+ SVM | Haar-like feature | 18 - 40 weeks | Classification |
| ( | To classify ultrasound images (SB, amniotic fluid (AF), and UV) and to obtain an initial estimate of the AC | Initial Estimation CNN + U-Net | Hough transform | N/A | Classification segmentation |
| ( | To classify ultrasound images (SB, AF, and UV) and measure AC | CNN | Hough transform | 20 - 34 weeks | Classification segmentation |
| ( | To find the region of interest (ROI) of the fetal abdominal region in the US image | Fetal US Image Quality Assessment (FUIQA) | L-CNN to localize the fetal abdominal ROI + AlexNet | 16 - 40 weeks | Classification |
| ( | To localize the fetal abdominal standard plane from ultrasound | Random forest classifier+ SVM classifier | Radial Component-based Model (RCM) | 18 - 40 weeks | Classification |
| ( | To diagnose prenatal US images by designing and implementing a novel framework | Defending Against Child Death (DACD) | CNN | N/A | Classification segmentation |
| ( | To detect important landmarks employed in manual scoring of ultrasound images | AdaBoost | Haar-like feature | 18 - 37 weeks | Classification |
| ( | To automatically select the standard plane from the fetal US volume for the application of fetal biometry measurement. | AdaBoost | One Combined Trained Classifier (1CTC) | 20 - 28 weeks | Classification |
| ( | To localize the FASP from US images | DCNN | Fine-Tuning with Knowledge Transfer | 18 - 40 weeks | Classification |
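Several abdomen studies above use AdaBoost over Haar-like features to detect landmarks such as the stomach bubble and umbilical vein. These features are differences of sums over adjacent rectangles, evaluated in constant time from an integral image; a minimal sketch (function names are illustrative):

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Summed-area table, padded with a zero top row/column for easy lookup."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii: np.ndarray, r: int, c: int, h: int, w: int) -> int:
    """Sum of pixels in the h-by-w box with top-left corner (r, c),
    in O(1) via four table lookups."""
    return int(ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c])

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle (left minus right) Haar-like feature of size h x 2w."""
    return box_sum(ii, r, c, h, w) - box_sum(ii, r, c + w, h, w)

img = np.zeros((6, 6), dtype=np.int64)
img[:, :3] = 1                        # bright left half, dark right half
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 6, 3))  # 18: strong vertical-edge response
```

AdaBoost then selects and weights thousands of such features to form the landmark detector.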
Comparison of studies that utilized the HC18 dataset
| Study | DSC | HD | DF | ADF |
|---|---|---|---|---|
| ( | 0.926 | 3.53 | 0.94 | 2.39 |
| ( | N/A | N/A | 14.9% | N/A |
| ( | 0.973 | 1.58 | N/A | N/A |
| ( | 0.968 | N/A | N/A | N/A |
| ( | 0.973 | N/A | N/A | 2.69 |
| ( | 0.968 | 1.72 | 1.13 | 2.12 |
| ( | N/A | 1.27 | N/A | N/A |
| ( | 0.977 | 1.32 | 0.21 | 1.90 |
| ( | 0.977 | N/A | N/A | 2.03 |
| ( | 0.977 | 1.39 | 1.49 | 2.33 |
| ( | 0.971 | 3.23 | N/A | N/A |
| ( | N/A | N/A | N/A | N/A |
DSC, Dice similarity coefficient; ACC, Accuracy; Pre, Precision; HD, Hausdorff distance; DF, Difference; ADF, Absolute Difference; IoU, Intersection over Union; mPA, mean Pixel Accuracy.
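The DSC and HD values compared above are computed from binary segmentation masks; a minimal sketch with NumPy (helper names are illustrative; distances are in pixels, whereas HC18 papers typically report HD in mm after scaling by pixel size):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance (HD) between the foreground
    pixel sets of two binary masks (brute-force, for small masks)."""
    a = np.argwhere(pred)
    b = np.argwhere(gt)
    # Pairwise Euclidean distances between the two point sets.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1
gt = np.zeros((8, 8), dtype=int);   gt[2:6, 3:7] = 1
print(round(dice(pred, gt), 2))  # 0.75
print(hausdorff(pred, gt))       # 1.0
```

For full-size masks, `scipy.spatial.distance.directed_hausdorff` avoids materializing the full distance matrix.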
Figure 4. AI techniques used across all studies in this survey, ordered by US image type and further categorized by fetal organ. Totals equal the number of included papers.
Best DL study in each category based on the achieved result
| Main Organ | Subsection | Best study |
|---|---|---|
| Fetal Body | Fetal part structures | ( |
| Anatomical structures | ( | |
| Growth disease | ( | |
| Gestational age | ( | |
| Head | Skull localization | ( |
| Brain standard plane | ( | |
| Brain disease | ( | |
| Face | Fetal facial standard | ( |
| Face anatomical | ( | |
| Facial expression | ( | |
| Heart | Heart disease | ( |
| Heart chamber view | ( | |
| Abdomen | Abdominal anatomical | ( |
| REAGENT or RESOURCE | SOURCE | IDENTIFIER |
|---|---|---|
| Studies’ methodologies | Contained in the article | N/A |
| Publicly available dataset | Grand Challenge Fetal dataset | Automated measurement of fetal head circumference using 2D ultrasound images (Zenodo) |
| Publicly available dataset | Fetal Planes Dataset | FETAL_PLANES_DB: Common maternal-fetal ultrasound images (Zenodo) |
A literature retrieval strategy for AI for fetal monitoring
| Databases | Search terms |
|---|---|
| PubMed | ((“Ultrasound” OR “sonographic” OR “neurosonogram” OR “Sonography” OR “Obstetric” AND (y_10[Filter])) AND (“artificial intelligence” OR “machine learning” OR “neural network” OR “Deep learning” AND (y_10[Filter]))) AND (“Fetus” OR “fetal” OR “embryo” OR “baby” AND (y_10[Filter])) Default full text. |
| Embase | ((“Ultrasound” OR “sonographic” OR “neurosonogram” OR “Sonography” OR “Obstetric” AND (y_10[Filter])) AND (“artificial intelligence” OR “machine learning” OR “neural network” OR “Deep learning” AND (y_10[Filter]))) AND (“Fetus” OR “fetal” OR “embryo” OR “baby” AND (y_10[Filter])) Default full text. |
| PsycINFO | ((“Ultrasound” OR “sonographic” OR “neurosonogram” OR “Sonography” OR “Obstetric” AND (y_10[Filter])) AND (“artificial intelligence” OR “machine learning” OR “neural network” OR “Deep learning” AND (y_10[Filter]))) AND (“Fetus” OR “fetal” OR “embryo” OR “baby” AND (y_10[Filter])) full text. |
| ScienceDirect | (“Fetus” OR “fetal”) AND (“artificial intelligence” OR “neural network”) AND (“Ultrasound” OR “sonography”) |
| IEEE | (“All Metadata”: “Fetus” OR “All Metadata”: “fetal” OR “All Metadata”: “embryo” OR “All Metadata”: “baby”) AND (“All Metadata”: “artificial intelligence” OR “All Metadata”: “machine learning” OR “All Metadata”: “neural network” OR “All Metadata”: “Deep learning”) AND (“All Metadata”: “Ultrasound” OR “All Metadata”: “sonographic” OR “All Metadata”: “neurosonogram” OR “All Metadata”: “Sonography” OR “All Metadata”: “Obstetric”) Filters Applied: 2010 - 2021. |
| ACM Digital library | [[All: “fetus”] OR [All: “fetal”] OR [All: “embryo”] OR [All: “baby”]] AND [[All: “artificial intelligence”] OR [All: “machine learning”] OR [All: “neural network”] OR [All: “deep learning”]] AND [[All: “ultrasound”] OR [All: “sonographic”] OR [All: “neurosonogram”] OR [All: “sonography”] OR [All: “obstetric”]] AND [Publication Date: (01/01/2010 TO 06/30/2021)] |
| Google Scholar | (“Fetus” OR “fetal” OR “embryo” OR “baby”) AND (“artificial intelligence” OR “machine learning” OR “neural network” OR “Deep learning”) AND (“Ultrasound” OR “sonographic” OR “neurosonogram” OR “Sonography” OR “Obstetric”) |
| Web of science | ((ALL=(“Fetus” OR “fetal” OR “embryo” OR “baby”)) AND ALL=(“artificial intelligence” OR “machine learning” OR “neural network” OR “Deep learning”)) AND ALL=(“Ultrasound” OR “sonographic” OR “neurosonograms” OR “Sonography” OR “Obstetric”). |
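The per-database strategies above combine the same three OR-groups (imaging modality, AI technique, population) with AND. A small helper like the following (hypothetical, shown without database-specific filters such as `y_10[Filter]` or `All Metadata:` prefixes) makes that shared structure explicit:

```python
# The three concept groups used across the search strategies above.
MODALITY = ["Ultrasound", "sonographic", "neurosonogram",
            "Sonography", "Obstetric"]
INTERVENTION = ["artificial intelligence", "machine learning",
                "neural network", "Deep learning"]
POPULATION = ["Fetus", "fetal", "embryo", "baby"]

def or_group(terms):
    """Quote each term and join with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query(*groups):
    """AND together one parenthesized OR-group per concept."""
    return " AND ".join(or_group(g) for g in groups)

query = build_query(MODALITY, INTERVENTION, POPULATION)
print(query)
```

Generating the string programmatically keeps the eight database-specific variants consistent; only the field prefixes and date filters then differ per database.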