
Artificial intelligence versus expert: a comparison of rapid visual inferior vena cava collapsibility assessment between POCUS experts and a deep learning algorithm.

Michael Blaivas, Srikar Adhikari, Eric A Savitsky, Laura N Blaivas, Yiju T Liu.

Abstract

OBJECTIVES: We sought to create a deep learning algorithm to determine the degree of inferior vena cava (IVC) collapsibility in critically ill patients, in order to assist novice point-of-care ultrasound (POCUS) providers.
METHODS: We used a publicly available long short-term memory (LSTM) deep learning architecture, which can track temporal changes and relationships in real-time video, to create an algorithm for ultrasound video analysis. The algorithm was trained on public domain IVC ultrasound videos to improve its ability to recognize changes in varied ultrasound video. A total of 220 IVC videos were used; 10% of the data were randomly held out for cross validation during training. Data were augmented through video rotation and manipulation to multiply the effective quantity of training data. After training, the algorithm was tested on 50 new IVC ultrasound videos obtained from public domain sources that were not part of the data set used in training or cross validation. Fleiss' κ was calculated to compare the level of agreement among the 3 POCUS experts and between the deep learning algorithm and the POCUS experts.
RESULTS: There was substantial agreement among the 3 POCUS experts, with κ = 0.65 (95% CI = 0.49-0.81). Agreement between the experts and the algorithm was moderate, with κ = 0.45 (95% CI = 0.33-0.56).
CONCLUSIONS: Our algorithm showed good agreement with POCUS experts in visually estimating the degree of IVC collapsibility, which previously published studies have shown can differentiate fluid-responsive from fluid-unresponsive septic shock patients. Such an algorithm could be adapted to run in real time on any ultrasound machine with a video output, easing the burden on novice POCUS users by limiting their task to obtaining and maintaining a sagittal proximal IVC view and allowing the artificial intelligence to make real-time determinations.
© 2020 The Authors. JACEP Open published by Wiley Periodicals LLC on behalf of the American College of Emergency Physicians.


Keywords:  artificial intelligence; critical care; deep learning; fluid responsiveness; inferior vena cava; point‐of‐care ultrasound

Year:  2020        PMID: 33145532      PMCID: PMC7593461          DOI: 10.1002/emp2.12206

Source DB:  PubMed          Journal:  J Am Coll Emerg Physicians Open        ISSN: 2688-1152


INTRODUCTION

Point‐of‐care ultrasound (POCUS) assessment of the inferior vena cava (IVC) has evolved over time in response to various studies either supporting or questioning its use as a non‐invasive corollary for patient volume status, central venous pressure, and right atrial pressure. Although initially heralded as an accurate assessment of patient volume status dating back to the 1990s, IVC ultrasound use has been appropriately challenged due to the variability of study results and inter‐rater reliability challenges. Practical clinical experience and a careful analysis of conflicting literature indicate that IVC collapsibility is most useful in its extremes: either relatively flat with significant collapse, or a plethoric state with little to no diameter variation throughout the respiratory cycle in the spontaneously breathing patient. Recent literature has supported use of the IVC collapsibility index ([IVC expiratory diameter − IVC inspiratory diameter]/IVC expiratory diameter) to predict fluid responsiveness in critically ill patients in shock states. Two recent studies have indicated that a 25% collapsibility index may be adequately sensitive to differentiate fluid-responsive shock patients from fluid-unresponsive ones, yielding an area under the receiver operating characteristic curve of 0.82 for predicting fluid responsiveness. Some authors have reported a significant difference in area under the curve (AUC) results between novice sonologists performing manual measurements and calculating IVC collapsibility at the patient's bedside and the same measurements made by POCUS experts at a later time when reviewing DICOM video of these patients. Similarly, other studies have noted significant inter‐rater variability between novice and expert sonologists.
Although not surprising, this raises the question of reliability in clinical situations and also highlights the workload burden imposed on novices if they have to freeze images, move forward and backward 1 frame at a time through a cine loop to find the maximal and minimal diameters of an IVC, and then carefully measure at the same location. Automation holds the potential to improve inter‐rater reliability and even to automate the steps of performing and documenting the ultrasound examination when evaluating the IVC. Artificial intelligence is rapidly infiltrating modern medicine. Deep learning, a branch of artificial intelligence, is currently the most promising approach for medical image analysis and interpretation. Considerable work has been performed in consultative diagnostic imaging, including automatic analysis of computed tomography (CT), chest x‐ray (CXR), and magnetic resonance imaging (MRI). Deep learning applications can also be found in ultrasound, but have been largely limited to costly imaging platforms. However, POCUS, which is used directly by clinicians in patient management, including for immediate decision making about fluid resuscitation and vasopressor use, has seen relatively little deep learning development commercially and academically. Recently, we have seen the introduction of deep learning applications in POCUS devices, including automated left ventricular ejection fraction (EF) assessment, with more automation promised. We developed a deep learning algorithm with temporal tracking to assist in the real‐time video interpretation required for ultrasound assessment of IVC collapsibility and possible prediction of fluid responsiveness or fluid status in critically ill patients.
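The collapsibility index defined above reduces to a simple calculation once the maximal and minimal diameters are known. A minimal Python sketch (function names are illustrative, not from the study):

```python
def ivc_collapsibility_index(expiratory_diameter_cm: float,
                             inspiratory_diameter_cm: float) -> float:
    """Collapsibility index: (expiratory - inspiratory) / expiratory."""
    if expiratory_diameter_cm <= 0:
        raise ValueError("expiratory diameter must be positive")
    return (expiratory_diameter_cm - inspiratory_diameter_cm) / expiratory_diameter_cm

def is_fluid_responsive(civc: float, threshold: float = 0.25) -> bool:
    """Apply the 25% cutoff discussed in the text."""
    return civc >= threshold

# Example: IVC measuring 2.0 cm at expiration and 1.4 cm at inspiration
civc = ivc_collapsibility_index(2.0, 1.4)  # 0.3, i.e. 30% collapse
```

With these example diameters the index is 0.3, above the 25% cutoff, so the video would be classed as "collapsing."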

METHODS

Study design

This was a deep learning algorithm development study to automatically assess whether selected ultrasound videos showed IVC collapsibility of ≥25%. The study was Institutional Review Board‐exempt, with all data coming from public domain, open access sources with no patient identifiers; no identifiable patient data were used.

Data

Ultrasound video data of sagittal proximal IVC ultrasound examinations were obtained from public domain, open access sources with all patient identifying information removed. Ultrasound video sources included anonymized image bank repositories, internet-posted videos, stock videos, ultrasound vendor videos, and videos covering the IVC evaluation and cardiac evaluation categories. Internet search criteria included "video," "ultrasound," "IVC," "inferior vena cava," "volume status," "shock," "dehydration," and "resuscitation." Videos from the internet were downloaded using open source software called Youtube‐DL. Severity of illness of the subjects of the videos was gauged from the associated vignette or history provided. Researchers specifically avoided curating videos by appearance, to more closely mirror real‐life patient distributions in a clinical setting. Video data types included WMV, MP1, MP2, MOV, AVI, and MP4, and extracted single frames were all JPG. A total of 220 proximal IVC videos were imported into the training dataset. No patient identifiers were present on any of the image sources. Extracted videos included critically ill patients undergoing resuscitation as well as patients who were not critical. IVC status ranged from severely volume overloaded to severely volume depleted. No sample size or power calculations were made for this study. All reviewed and identified videos that were extracted were used for the project.

Data manipulation and labeling

All videos used for deep learning algorithm training were kept in their original aspect ratio and size and sorted into 1 of 2 categories. A POCUS expert with 28 years of research, education, and clinical use experience and >200 peer-reviewed research manuscript publications ranked all videos as either collapsing at least 25% or collapsing <25%. The assessments were made visually using the video viewing software MicroDicom (Sofia, Bulgaria), which allowed frame‐by‐frame advance and reverse viewing (Figure 1). Diameters were compared to the reference centimeter scales present on the ultrasound videos for greater accuracy.
FIGURE 1

Maximal (left) and minimum (right) inferior vena cava (IVC) diameter frames from a proximal IVC video used in deep learning algorithm training

All training videos were then augmented using FFMPEG open access software. This is a common technique in deep learning when limited data are available. The total size of the training video dataset was increased 6‐fold by adding copies of the original videos that were transformed by flipping them horizontally and vertically and rotating them 45° clockwise and counterclockwise. Videos were not otherwise adjusted, such as through manipulation of contrast, sharpness, or image quality. This approach also helps prepare the algorithm for videos of IVCs that are not perfectly horizontal on the screen and can significantly increase the robustness of an algorithm for interpreting novel imaging data.
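As a rough sketch of this augmentation step (the study does not give its exact FFMPEG invocations; the filter names below are standard FFMPEG filters, and the file names are placeholders):

```python
import subprocess  # commands are built below; run them with subprocess.run if desired

# Standard FFMPEG video filters for the four transforms described above.
TRANSFORMS = {
    "hflip": "hflip",            # mirror horizontally
    "vflip": "vflip",            # mirror vertically
    "rot_cw": "rotate=PI/4",     # 45 degrees clockwise
    "rot_ccw": "rotate=-PI/4",   # 45 degrees counterclockwise
}

def augmentation_commands(src: str) -> list:
    """Build one ffmpeg command (as an argument list) per transform."""
    cmds = []
    for name, vf in TRANSFORMS.items():
        out = src.rsplit(".", 1)[0] + f"_{name}.mp4"
        cmds.append(["ffmpeg", "-i", src, "-vf", vf, out])
    return cmds

cmds = augmentation_commands("ivc_001.mp4")
# To execute: for c in cmds: subprocess.run(c, check=True)
```

Each source video yields four transformed copies, which together with the original give the enlarged training set described above.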

Algorithm design

We used the Python programming language, version 3.7.2, with Anaconda to manage packages and assist in scripting, and a VGG‐16 convolutional neural network incorporated into a bidirectional long short-term memory (LSTM) network. Code for VGG‐16 is available from various public sources, including github.com. VGG‐16, which placed at the top of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2014, is an early convolutional neural network using 16 layers and has been shown to perform well for ultrasound deep learning applications in prior work. An LSTM is a recurrent structure that tracks temporal changes; combined with a convolutional neural network, it can follow changes across ultrasound image frames, and similar architectures are used in non‐medical settings to identify specific actions in sports video and even predict the outcomes of movements or actions. The bidirectional aspect means the video is passed both forward and backward through the LSTM network, which helps the network better understand the video context. We used this same basic architecture and adapted it to ultrasound video analysis.
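The CNN-plus-bidirectional-LSTM idea can be sketched in miniature: a per-frame feature extractor feeds a sequence model that scans the frames in both directions. The toy code below is pure Python and only illustrates the data flow; it is not the study's actual network, which would be built in a deep learning framework such as Keras or PyTorch:

```python
def frame_features(frame):
    """Stand-in for the VGG-16 feature extractor: one number per frame."""
    return sum(frame) / len(frame)

def scan(features, direction=1):
    """Stand-in for one LSTM pass: a running summary over the sequence."""
    state, states = 0.0, []
    for f in features[::direction]:
        state = 0.5 * state + 0.5 * f   # toy recurrence
        states.append(state)
    return states[::direction]          # re-align to original frame order

def bidirectional_summary(video):
    """Forward and backward passes, paired per frame."""
    feats = [frame_features(fr) for fr in video]
    fwd, bwd = scan(feats, 1), scan(feats, -1)
    return list(zip(fwd, bwd))

# Tiny "video": 3 frames of 2 pixels each
summary = bidirectional_summary([[0, 2], [2, 4], [4, 6]])
```

Each frame ends up with context from both earlier and later frames, which is what lets the real network judge collapse over a whole respiratory cycle rather than from single images.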

The Bottom Line

Inferior vena cava collapsibility (cIVC) of 25% or more predicts fluid responsiveness in critically ill patients. The authors of this study developed a deep learning algorithm to assist with real‐time cIVC assessment during POCUS using publicly available videos. Their deep learning algorithm showed good agreement with 3 POCUS experts in determining the degree of cIVC.

Algorithm training

We trained our LSTM algorithm using a PC with an 11 GB NVIDIA GeForce RTX 2080 Ti GPU and 64 GB of RAM. Researchers adjusted optimizers, learning rates, and batch sizes during training for optimal training times and accuracies, while avoiding exploding gradients that result in training failure. The number of epochs (1 epoch being one full round of training through all of the data) was adjusted for optimal results while avoiding overfitting. Best performance was eventually obtained with 80 epochs. A batch size of 15 videos ultimately resulted in the best algorithm training performance.
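The training workload implied by these settings can be checked with simple arithmetic (assuming all 220 originals plus their augmented copies, i.e. the 6-fold set, are batched; the 10% validation split is ignored here for simplicity):

```python
original_videos = 220
augmentation_factor = 6    # 6-fold increase reported in Methods
batch_size = 15
epochs = 80

training_videos = original_videos * augmentation_factor   # 1320 videos
batches_per_epoch = -(-training_videos // batch_size)     # ceiling division: 88
total_training_steps = batches_per_epoch * epochs         # 7040 gradient steps
```

Under these assumptions the network sees roughly 88 batches per epoch and about 7,000 gradient updates over the full 80-epoch run.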

Algorithm validation and testing

The LSTM algorithm automatically performs cross validation after every epoch. Cross validation accuracy and training and validation losses were used to guide algorithm training adjustments. After results were optimized and no further adjustments yielded performance improvements, the algorithm was tested on the 50 newly obtained IVC test videos, which were the same videos sent to the POCUS experts for blinded review. No video augmentation was performed on the 50-video test set. Three POCUS experts (16, 12, and 11 years of POCUS experience, respectively) with fellowship training and extensive research, academic, and clinical scanning experience were asked to review the 50 IVC test videos in blinded fashion. Each POCUS expert received a Dropbox link containing all 50 videos, randomly arranged and numbered. Experts were asked to quickly view each video and decide whether the IVC collapsed 25% or more, looking approximately 3 cm distal to the diaphragm/right atrium junction. The POCUS experts were asked to take a quick look (but were not timed) to better simulate real‐life decision making with POCUS, where prolonged image review is prohibitive. Experts recorded their findings in an Excel spreadsheet for each IVC video. POCUS expert ratings were combined into 1 database and matched for each video along with the deep learning algorithm predictions on the same videos. Additionally, the 3 POCUS experts were asked to rate the difficulty of determining whether each IVC video collapsed 25% or more on a 10‐point Likert scale. Figure 2 summarizes the steps involved in algorithm construction, training, and testing.
FIGURE 2

Diagram of a step‐by‐step approach for algorithm design, training, and testing

Diagram of a step‐by‐step approach for algorithm design, training, and testing

Statistical analysis

Fleiss' κ was calculated to compare the level of agreement among the 3 POCUS experts and between the deep learning algorithm and the POCUS experts. Fleiss' κ is a measure of inter‐rater agreement used to determine the level of agreement among 2 or more raters. The single POCUS expert who rated degree of collapse for the 220 training IVC videos did not rate the 50 test videos, and the 3 POCUS experts and the deep learning algorithm were not compared to any criterion standard. Because there is no actual disease presence or absence, and the deep learning prediction is compared only to the ratings of the 3 POCUS experts, sensitivity, specificity, and likelihood ratios were not calculated: the required components (true positives, true negatives, false positives, and false negatives) are not available. This is a common feature of deep learning imaging studies.
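Fleiss' κ has a closed form: observed per-subject agreement is averaged and corrected by the chance agreement implied by the overall category proportions. A small pure-Python implementation (a sketch for illustration, not the statistical software the authors used) takes, for each video, the count of raters choosing each category:

```python
def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning subject i to category j.
    Every subject must be rated by the same number of raters."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # Per-subject observed agreement.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_subjects
    # Chance agreement from overall category proportions.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_j = [t / (n_subjects * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# 4 hypothetical videos, 3 raters, 2 categories (collapsing / not collapsing)
kappa = fleiss_kappa([[3, 0], [3, 0], [0, 3], [2, 1]])  # ≈ 0.625
```

On the study's 50-video table, the rows of Table 1 would be converted to such counts before computing κ.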

RESULTS

All 3 POCUS experts completed rating each of the 50 test IVC videos. Decisions regarding degree of collapse by each POCUS expert and the deep learning algorithm for each IVC video are listed in Table 1. Summary results relating to video classification into collapse and non‐collapse groups are shown in Table 2. Asked to rate the difficulty of interpretation and the quality of each video, the experts reported no videos that could not be evaluated because of poor quality. Reported difficulty in IVC video interpretation by POCUS expert reviewer ranged from 1–7, 1–9, and 1–9, respectively. Mean difficulty scores for determining whether collapse was ≥25% or <25% for the test IVC videos, overall and for the collapse and non‐collapse subgroups, are reported in Table 2. The percentage of IVC videos rated as showing 25% or greater collapse by the deep learning algorithm and the 3 POCUS reviewers is also shown in Table 2.
TABLE 1

Three POCUS expert reviewer assessments as well as the DL prediction of the degree of IVC collapse for each test IVC video

IVC video number | POCUS 1 | POCUS 2 | POCUS 3 | DL algorithm
IVC 1 | Collapsing | Collapsing | Collapsing | Collapsing
IVC 2 | Collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 3 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 4 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 5 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 6 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 7 | Collapsing | Collapsing | Collapsing | Collapsing
IVC 8 | Collapsing | Collapsing | Not collapsing | Not collapsing
IVC 9 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 10 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 11 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 12 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 13 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 14 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 15 | Not collapsing | Collapsing | Not collapsing | Not collapsing
IVC 16 | Not collapsing | Collapsing | Collapsing | Collapsing
IVC 17 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 18 | Collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 19 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 20 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 21 | Collapsing | Not collapsing | Not collapsing | Collapsing
IVC 22 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 23 | Collapsing | Collapsing | Collapsing | Collapsing
IVC 24 | Collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 25 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 26 | Not collapsing | Not collapsing | Collapsing | Collapsing
IVC 27 | Collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 28 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 29 | Collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 30 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 31 | Collapsing | Not collapsing | Not collapsing | Collapsing
IVC 32 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 33 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 34 | Collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 35 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 36 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 37 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 38 | Collapsing | Collapsing | Collapsing | Collapsing
IVC 39 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 40 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 41 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 42 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 43 | Collapsing | Collapsing | Collapsing | Collapsing
IVC 44 | Collapsing | Not collapsing | Collapsing | Collapsing
IVC 45 | Collapsing | Collapsing | Collapsing | Not collapsing
IVC 46 | Collapsing | Collapsing | Collapsing | Collapsing
IVC 47 | Collapsing | Collapsing | Collapsing | Collapsing
IVC 48 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 49 | Not collapsing | Not collapsing | Not collapsing | Not collapsing
IVC 50 | Not collapsing | Not collapsing | Not collapsing | Not collapsing

DL, deep learning; IVC, inferior vena cava; POCUS, point‐of‐care ultrasound.

TABLE 2

DL prediction and 3 POCUS reviewer assessments of IVC collapsibility for the 50 test IVC videos

Measure | POCUS reviewer 1 | POCUS reviewer 2 | POCUS reviewer 3 | DL algorithm
Videos judged as collapsing ≥25% | 30 (60%) | 23 (46%) | 23 (46%) | 12 (24%)
Mean difficulty rating, all IVC videos | 2.7 (95% CI = 2.06–3.42) | 3.0 (95% CI = 2.42–3.5) | 3.8 (95% CI = 2.93–4.67) | —
Mean difficulty rating, videos with collapse ≥25% | 2.0 (95% CI = 1.34–2.58) | 2.8 (95% CI = 2.13–3.47) | 3.5 (95% CI = 2.39–4.65) | —
Mean difficulty rating, videos with collapse <25% | 3.4 (95% CI = 2.27–4.53) | 3.2 (95% CI = 2.23–4.17) | 4.1 (95% CI = 2.72–5.41) | —

DL, deep learning; IVC, inferior vena cava; POCUS, point‐of‐care ultrasound. The 10‐point Likert scale ratings of difficulty interpreting IVC test videos are listed for each of the POCUS reviewers.

The LSTM deep learning algorithm provided a prediction for each of the 50 videos. The algorithm took 9:24 minutes to train on the augmented training dataset and 30 seconds to review and make predictions on all 50 test IVC videos. Reviewers were not asked to time how long it took them to review and make a decision regarding collapse for the 50 test videos. There was substantial agreement among the 3 POCUS experts, with κ = 0.65 (95% CI = 0.49–0.81). Agreement between the experts and the algorithm was moderate, with κ = 0.45 (95% CI = 0.33–0.56).

DISCUSSION

Our results indicate that a deep learning neural network using LSTM and trained on public domain IVC ultrasound videos can achieve good agreement with POCUS experts in visually estimating whether a patient's IVC collapsed 25% or more. This cutoff is similar to other suggested cutoffs and is supported by recent studies as useful for determining whether septic shock patients will be fluid responsive. Only a small number of studies have been published to date on either deep learning or other automated analysis of the IVC. The earliest ones predate modern deep learning techniques, which are only several years old. Our purpose in this study was not to compare the deep learning algorithm and 3 POCUS experts to any criterion standard, such as the original POCUS reviewer of the 220 training videos. Instead, our goal was to assess the internal consistency of the 3 POCUS expert reviewers and the deep learning algorithm. IVC collapsibility has been explored in several published research studies on automatic interpretation and analysis. The first commercially available automated IVC analysis application from a POCUS ultrasound vendor was introduced nearly 4 years ago, but was not originally designed using artificial intelligence. The goal of any such automation is to enable less experienced health care providers, and those with less medical training, to assess volume status and potentially fluid responsiveness in sick patients who may be suffering from some type of shock. A group of researchers specifically focused on eliminating the time- and labor-intensive IVC measurement burden in clinical settings by creating and testing an automated process in 8 pigs. A total dataset of 48 IVC evaluations was generated from the animals, and the researchers were able to create a method to automatically identify and measure the IVC, offline and not in real time.
Although this is an important laboratory step forward, the authors describe a lengthy stepwise process required to manipulate the ultrasound cine loops with various filters and image size adjustments, limiting any real‐time clinical application potential. Despite the limited data, this automated method accurately identified 97.9% of pig IVCs. The majority of IVC measurements were within 15% of those made by 2 sonographers. A final limitation was that the methodology required DICOM data and took place offline, unless embedded on an ultrasound device, which was not tested. An important collaboration between clinicians and engineering experts resulted in an automated technique for IVC diameter change assessment using a complex pyramidal structure that achieved good agreement with manual measurements. The authors reported that when measurements between physicians and the algorithm differed, in 95% of cases the difference was <10%. The complex algorithm performed well when tested in 50 hemodialysis patients. However, it required multiple steps and significant engineering expertise to design and implement. Rather than undertake complex additional steps, which inherently increase the difficulty of implementing such an algorithm on a wide range of POCUS machines, we sought to use the ability of deep learning algorithms to find novel associations and make predictions based on regression (referring to the type of deep learning methodology used, as opposed to image segmentation and explicit measurement). Therefore, no anatomic localization, border identification, or diameter measurement is required to predict the outcome.
Further, rather than requiring software to extract optimal individual images or break ultrasound cine loops into individual images to analyze offline, we explored a real‐time application that could analyze a cine loop immediately on an ultrasound machine or even run in real time while a novice sonologist scans the patient's IVC. Enabling providers new to POCUS, such as residents, untrained faculty, nurses, emergency medical technicians, and others, to accurately assess the IVC through automation may significantly improve patient assessment, management, and access to care, and hasten interventions. Additionally, real‐time feedback will further help novice sonologists improve their skills. A published study of nurses measuring IVC diameter in the emergency department, following 3.5 hours of didactic and hands‐on training, examined the correlation with a sonographer in both longitudinal and transverse IVC diameter measurements: R = 0.68 and 0.59, respectively, in an all‐volunteer model. Although this study showed that nurses can measure the IVC, correlation with a sonographer was suboptimal, and the training time required might have been better spent focusing simply on attaining a mid‐sagittal IVC view and holding it steady. As with many medical and non‐medical tasks, deep learning‐based automation should significantly decrease the amount of training required for nurses for this task. Additionally, evidence shows that implemented artificial intelligence can significantly improve sonographer intra‐observer reliability during cardiac measurements, suggesting artificial intelligence should accomplish this for novice ultrasound users as well. Our efforts differ from prior designs and should offer easier implementation into clinical use compared with the challenging process of programming an application to extract individual frames and perform multiple measurements offline.
We modeled our approach on a POCUS expert's ability to rapidly view the proximal IVC and determine visually whether 25% collapse occurs. This will allow the resulting deep learning application to be activated once the imaging window is obtained, hopefully yielding accurate results. Despite a small dataset for deep learning purposes, we were able to reach good agreement with 3 POCUS experts, suggesting that an algorithm trained on a larger dataset correlating IVC behavior on ultrasound with outcomes of fluid responsiveness testing should be able to equal or surpass the results of POCUS experts. By embedding such a deep learning application into an ultrasound machine, or simply having the application run on a real‐time video feed from the system and display over the machine's screen, a novice would only have to obtain a sagittal view of the proximal IVC and let the deep learning application do the rest, providing a nearly instant prediction regarding fluid responsiveness. The type of automated IVC analysis we targeted would be of greater practical use in aiding novice POCUS providers than the current manual process. Rather than having to acquire a mid‐sagittal image of the proximal IVC, hold the transducer in the same anatomic position through several respiratory cycles, freeze the image and scroll through the cine loop to identify the best IVC maximum and minimum diameters, and finally make a total of 4 measurements to ensure measurement of the IVC diameter at the same anatomic point, the novice would instead be tasked only with obtaining the image and holding still over the anatomy. Our study had a number of limitations, including a very small dataset. Ideally, the dataset would have comprised thousands of videos, which would likely have resulted in much better correlation.
Additionally, if the videos had been in DICOM format and we had been able to obtain accurate distance and diameter measurements for training, the precision of the algorithm in differentiating 25% or greater collapse from lesser collapse would be expected to improve significantly. The criterion standard used, a single POCUS expert reviewing and assessing degree of IVC collapse, itself introduces some amount of error. All of the videos were obtained from public domain sources, and although from actual patients, they varied in quality, as did the equipment used to obtain them. It was impossible to obtain accurate patient states for all videos, and it is possible the 220 training and 50 test videos represented an atypical set of images. Data augmentation significantly increased the effective dataset size. However, although a very large dataset from only 1 type of ultrasound machine might have yielded optimal training results, this is not a realistic approach clinically, because many hospitals and even departments rely on a variety of ultrasound machines, necessitating a robust algorithm trained on data from different ultrasound devices. Although the algorithm was tested on a separate set of ultrasound videos not previously seen by it, we did not have data from different medical centers, nor were we able to test the algorithm prospectively in different settings. This would be an important future research topic, and this pilot study lays a foundation for that approach. In conclusion, a non‐commercial LSTM deep learning algorithm showed good agreement with 3 blinded POCUS expert reviewers in determining degree of IVC collapsibility in ultrasound videos of spontaneously breathing patients. This study indicates that an artificial intelligence algorithm has the potential to improve cIVC assessment.

AUTHOR CONTRIBUTIONS

MB and LNB conceived the study concept, gathered data, performed image classification, and wrote the article. MB and LNB created the algorithmic approach and performed algorithm training and testing. SA, EAS, and YTL assisted in project creation, acted as blinded ultrasound reviewers, and helped write the Introduction and Discussion. MB takes final responsibility for the article.

CONFLICTS OF INTEREST

MB consults for EchoNous Inc., Sonosim Inc., Ethos Medical, and 410Medical. None of the companies influenced or contributed to this study or article, or had knowledge of its performance.
References (19 in total)

1.  The interrater reliability of ultrasound imaging of the inferior vena cava performed by emergency residents.

Authors:  Arif Akkaya; Murat Yesilaras; Ersin Aksay; Mustafa Sever; Ozge Duman Atilla
Journal:  Am J Emerg Med       Date:  2013-09-05       Impact factor: 2.469

2.  Echography of the inferior vena cava for estimating fluid removal from patients undergoing hemodialysis.

Authors:  T Kusaba; K Yamaguchi; H Oda
Journal:  Nihon Jinzo Gakkai Shi       Date:  1996-03

3.  Point-of-Care Ultrasound Assessment of the Inferior Vena Cava in Mechanically Ventilated Critically Ill Children.

Authors:  Sonali Basu; Matthew Sharron; Nicole Herrera; Marisa Mize; Joanna Cohen
Journal:  J Ultrasound Med       Date:  2020-02-20       Impact factor: 2.153

4.  Continuous Inferior Vena Cava Diameter Tracking through an Iterative Kanade-Lucas-Tomasi-Based Algorithm.

Authors:  Barry Belmont; Ross Kessler; Nikhil Theyyunni; Christopher Fung; Robert Huang; Michael Cover; Kevin R Ward; Albert J Shih; Mohamad Tiba
Journal:  Ultrasound Med Biol       Date:  2018-09-11       Impact factor: 2.998

5.  Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study.

Authors:  Sasank Chilamkurthy; Rohit Ghosh; Swetha Tanamala; Mustafa Biviji; Norbert G Campeau; Vasantha Kumar Venugopal; Vidur Mahajan; Pooja Rao; Prashant Warier
Journal:  Lancet       Date:  2018-10-11       Impact factor: 79.321

6.  Are All Deep Learning Architectures Alike for Point-of-Care Ultrasound?: Evidence From a Cardiac Image Classification Model Suggests Otherwise.

Authors:  Michael Blaivas; Laura Blaivas
Journal:  J Ultrasound Med       Date:  2019-12-24       Impact factor: 2.153

7.  The interrater reliability of inferior vena cava ultrasound by bedside clinician sonographers in emergency department patients.

Authors:  J Matthew Fields; Paul A Lee; Katherine Y Jenq; Dustin G Mark; Nova L Panebianco; Anthony J Dean
Journal:  Acad Emerg Med       Date:  2011-01       Impact factor: 3.451

8.  Automated Echocardiographic Quantification of Left Ventricular Ejection Fraction Without Volume Measurements Using a Machine Learning Algorithm Mimicking a Human Expert.

Authors:  Federico M Asch; Nicolas Poilvert; Theodore Abraham; Madeline Jankowski; Jayne Cleve; Michael Adams; Nathanael Romano; Ha Hong; Victor Mor-Avi; Randolph P Martin; Roberto M Lang
Journal:  Circ Cardiovasc Imaging       Date:  2019-09-16       Impact factor: 7.792

9.  Performance of a 25% Inferior Vena Cava Collapsibility in Detecting Fluid Responsiveness When Assessed by Novice Versus Expert Physician Sonologists.

Authors:  Keith A Corl; Nader Azab; Mohammed Nayeemuddin; Alexandra Schick; Thomas Lopardo; Fatima Zeba; Gary Phillips; Grayson Baird; Roland C Merchant; Mitchell M Levy; Michael Blaivas; Adeel Abbasi
Journal:  J Intensive Care Med       Date:  2019-10-14       Impact factor: 3.510

10.  Fully Automated Echocardiogram Interpretation in Clinical Practice.

Authors:  Jeffrey Zhang; Sravani Gajjala; Pulkit Agrawal; Geoffrey H Tison; Laura A Hallock; Lauren Beussink-Nelson; Mats H Lassen; Eugene Fan; Mandar A Aras; ChaRandle Jordan; Kirsten E Fleischmann; Michelle Melisko; Atif Qasim; Sanjiv J Shah; Ruzena Bajcsy; Rahul C Deo
Journal:  Circulation       Date:  2018-10-16       Impact factor: 29.690

Citing reviews (1 in total)

Review 1.  The POCUS Consult: How Point of Care Ultrasound Helps Guide Medical Decision Making.

Authors:  Jake A Rice; Jonathan Brewer; Tyler Speaks; Christopher Choi; Peiman Lahsaei; Bryan T Romito
Journal:  Int J Gen Med       Date:  2021-12-15
