
Automated system for diagnosing endometrial cancer by adopting deep-learning technology in hysteroscopy.

Yu Takahashi1, Kenbun Sone1, Katsuhiko Noda2, Kaname Yoshida2, Yusuke Toyohara1, Kosuke Kato1, Futaba Inoue1, Asako Kukita1, Ayumi Taguchi1, Haruka Nishida1, Yuichiro Miyamoto1, Michihiro Tanikawa1, Tetsushi Tsuruga1, Takayuki Iriyama1, Kazunori Nagasaka3, Yoko Matsumoto1, Yasushi Hirota1, Osamu Hiraike-Wada1, Katsutoshi Oda4, Masanori Maruyama5, Yutaka Osuga1, Tomoyuki Fujii1.   

Abstract

Endometrial cancer is a common gynecological disease with increasing global incidence. Because no screening technique has been established to date, early diagnosis of endometrial cancer is critically important. This paper presents an artificial-intelligence-based system that automatically detects regions affected by endometrial cancer in hysteroscopic images. In this study, 177 patients (60 with normal endometrium, 21 with uterine myoma, 60 with endometrial polyp, 15 with atypical endometrial hyperplasia, and 21 with endometrial cancer) with a history of hysteroscopy were recruited. Machine-learning techniques based on three popular deep neural network models were employed, and a continuity-analysis method was developed to enhance the accuracy of cancer diagnosis. Finally, we investigated whether the accuracy could be improved by combining all the trained models. The results reveal that the diagnostic accuracy was approximately 80% (78.91-80.93%) with the standard method, increased to 89% (83.94-89.13%) with the proposed continuity analysis, and exceeded 90% (90.29%) when the three neural networks were combined. The corresponding sensitivity and specificity equaled 91.66% and 89.36%, respectively. These findings demonstrate that the proposed method is sufficiently accurate to facilitate timely diagnosis of endometrial cancer in the near future.

Year:  2021        PMID: 33788887      PMCID: PMC8011803          DOI: 10.1371/journal.pone.0248526

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Endometrial cancer is the most common gynecologic malignancy, and its incidence has increased significantly in recent years [1]. Patients who present with early symptoms or have low-risk endometrial cancer generally have a favorable prognosis. However, patients diagnosed at later stages have limited treatment options and a poor prognosis [2]. Additionally, patients with atypical endometrial hyperplasia (AEH), a precancerous condition of endometrial cancer, or with stage 1A endometrial cancer without myometrial invasion are eligible for progestin therapy, and may thus be able to preserve their fertility [3]. Therefore, early diagnosis of endometrial cancer is of paramount importance. Cervical cytology via Pap smear is a common screening method employed in cervical-cancer diagnosis [4]. However, endometrial cytology is not a reliable screening technique because the sample is collected blindly, which can produce a large number of false negatives. Although the standard diagnostic procedure for endometrial cancer involves endometrial biopsy performed via dilation and curettage, no clinically established screening method for endometrial cancer exists to date [5]. Hysteroscopy is generally considered the standard procedure for examining endometrial lesions by directly visualizing the uterine cavity, and recent studies have suggested that it can be an effective technique for accurate endometrial-cancer diagnosis [6,7]. We have previously reported the usefulness of biopsy through office hysteroscopy with regard to endometrial cancer [8]. Artificial intelligence (AI) enables computers to perform intellectual actions, such as language understanding, reasoning, and problem solving, on behalf of humans.
Machine learning is an approach for developing AI models based on the scientific study of algorithms and statistical models that computer systems use to perform tasks efficiently. An appropriate AI model enables computers to learn patterns in available datasets and make inferences from given data without explicit instructions [9]. Deep learning is a machine-learning method built on deep neural networks (DNNs), that is, neural networks with multiple layers [10-12]. From the machine-learning perspective, a neural network comprises a network or circuit of artificial neurons or nodes [13]. Deep learning has garnered much interest in the medical field because deep-learning techniques are particularly suitable for image analysis; they are used for classification, image-quality improvement, and segmentation of medical images, whereas shallow machine learning is less suitable for image recognition [14]. Recently, several systems developed for medical applications, such as image-based diagnosis and radiographic imaging of breast and lung cancers [15,16], have adopted AI models based on DNN technology. Numerous such systems employing endoscopic images in the diagnosis of gastric and colon cancer have been reported; however, no such system has been developed with a specific focus on endometrial cancer [17,18]. In general, a voluminous amount of data is required to train a highly accurate model, which is feasible only when a large number of participants is available. With sufficient samples, deep learning can be expected to achieve a high accuracy rate. However, when deep learning is applied in the medical field, some diseases must be analyzed with a small number of samples.
The challenge for medical AI research is therefore to develop analysis methods that improve accuracy with a small number of samples. Accordingly, this study aims to develop a DNN-based automated endometrial-cancer diagnosis system that can be applied to hysteroscopy. Hysteroscopy has not yet found widespread utilization in diagnostic applications for endometrial cancer, which further limits the availability of training data for DNNs. Thus, the objective of this study is to develop a method that facilitates high-accuracy endometrial-cancer diagnosis despite the limited number of cases available in the training dataset, and to establish a system that can be scaled to large-scale research in the future. Because no standard method has been established for such scenarios to date, this study focuses on determining an optimal method. Using deep learning, we achieved high accuracy in the diagnosis of endometrial cancer by hysteroscopy despite the small sample.

Materials and methods

Dataset overview

The data utilized in this study were extracted from videos of the uterine lumen captured using a hysteroscope. The breakdown of the extracted data is presented in Table 1 and Fig 1. The shortest video lasted 10.5 s, whereas the longest lasted 395.3 s; the corresponding mean and median durations were 77.5 s and 63.5 s, respectively. Because the videos were captured using different hysteroscopic systems with inconsistent resolutions and image positions, only parts of the captured frames were extracted, with the resolution reduced to 256 × 256 px for Xception [19] and 224 × 224 px for MobileNetV2 [20] and EfficientNetB0 [21]. Representative hysteroscopic images for each condition are depicted in Fig 1. The hysteroscopic data were collected from the 177 patients recruited in this study. These patients had a history of hysteroscopy and were categorized into five groups: normal endometrium (60), uterine myoma (21), endometrial polyp (60), AEH (15), and endometrial cancer (21) (S1 Table). Data collection was performed at the University of Tokyo Hospital between 2011 and 2019 after obtaining prior patient consent and approval from the Research Ethics Committee at the University of Tokyo (approval no. 3084-(3) and 2019127NI-(1)).
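As a rough illustration of this preprocessing step (the paper does not describe the exact extraction pipeline, so everything except the frame shapes and target sizes is assumed), cropping a square region from a frame and downsampling it with nearest-neighbor indexing might look like the following sketch; a production pipeline would more likely use a library routine such as OpenCV's resize:

```python
import numpy as np

def crop_and_resize(frame: np.ndarray, size: int) -> np.ndarray:
    """Center-crop a frame to a square, then nearest-neighbor resize to (size, size).

    `frame` is assumed to be an (H, W, 3) uint8 array taken from a
    hysteroscopy video; the crop position and resampling method are
    illustrative assumptions, not the authors' documented procedure.
    """
    h, w, _ = frame.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    square = frame[top:top + side, left:left + side]
    # Nearest-neighbor resampling via integer index lookup.
    idx = np.arange(size) * side // size
    return square[idx][:, idx]

frame = np.zeros((480, 720, 3), dtype=np.uint8)       # dummy video frame
print(crop_and_resize(frame, 256).shape)  # (256, 256, 3) for Xception
print(crop_and_resize(frame, 224).shape)  # (224, 224, 3) for MobileNetV2 / EfficientNetB0
```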
Table 1

Images extracted from hysteroscopy videos for each disease category.

Clinical diagnosis                | Still images, n (%) | Videos, n (%)
----------------------------------|---------------------|---------------
Total                             | 411,800             | 177
Normal                            | 113,357 (27.5%)     | 60 (33.8%)
Polyp                             | 143,449 (34.8%)     | 60 (33.8%)
Myoma                             | 45,037 (11.0%)      | 21 (11.8%)
Atypical endometrial hyperplasia  | 42,146 (10.2%)      | 15 (8.4%)
Endometrial cancer                | 67,811 (16.4%)      | 21 (11.8%)
Fig 1

Representative images of detected lesions for conditions of (A) normal endometrium; (B) endometrial polyp; (C) myoma; (D) AEH, and (E) endometrial cancer.

Consent was obtained on an opt-out basis. Patients were those who visited the outpatient department with symptoms such as abnormal bleeding or menorrhagia and underwent hysteroscopy for the diagnosis of an intrauterine lesion. The pathological diagnosis of AEH and endometrial cancer was obtained by biopsy or surgery. Normal endometrium, uterine myoma, and endometrial polyp were diagnosed based on endometrial cytology, histology, hysteroscopic findings by a gynecologist, imaging findings such as MRI and ultrasound, and the clinical course.

Training and evaluation data

The prepared videos were divided into four groups at random: three groups were used for training, and the remaining group was used for evaluation. The four groups were denoted pair-A, pair-B, pair-C, and pair-D and used for cross-validation. S2 Table presents the number of training and evaluation videos for each pair. The accuracy of the trained model was evaluated in image and video units. Owing to the limited number of cases available for this study, we defined two classes, "Malignant" and "Others", for training and prediction. The "Malignant" class included AEH and cancer, whereas the "Others" class included uterine myoma, endometrial polyps, and normal endometrium. As listed in S3 Table, the "Malignant" class comprised 36 videos and 109,957 images, whereas the "Others" class comprised 141 videos and 301,843 images. The overall architecture of the model developed in this project is depicted in Fig 2.
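The paper does not describe the mechanics of the random split, so the following is only a minimal sketch of such a four-fold grouping; the pair names and the 177-video count come from the text, while the shuffling and round-robin fold assignment are assumptions:

```python
import random

def make_cv_pairs(video_ids, seed=0):
    """Randomly split videos into four folds; each pair uses one fold for
    evaluation and the remaining three folds for training (pair-A..pair-D)."""
    ids = list(video_ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::4] for i in range(4)]  # round-robin split into 4 groups
    pairs = {}
    for name, k in zip("ABCD", range(4)):
        pairs[f"pair-{name}"] = {
            "eval": folds[k],
            "train": [v for j, fold in enumerate(folds) if j != k for v in fold],
        }
    return pairs

pairs = make_cv_pairs(range(177))  # 177 videos, as in this study
print(len(pairs["pair-A"]["eval"]), len(pairs["pair-A"]["train"]))  # 45 132
```

Each video appears in exactly one evaluation set, so no patient's data leaks from training into its own evaluation fold.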
Fig 2

Overall architecture of the model developed in this project.

Training data

The training data pertaining to the malignant class were distributed into the following two sets (Fig 3A).
Fig 3

(A) Schematic of the training method: The training data pertaining to the malignant class were separated into two sets, Set X and Set Y. (B) Schematic of the evaluation method: image by image. (C) Schematic of the evaluation method: video unit. During image-by-image evaluation, 100 images that clearly included the lesion site were extracted from the hysteroscopic video of each patient diagnosed with a malignant tumor (Continuity analysis).

Set X comprised all frames included in the video stream. Set Y comprised the images from Set X, excluding those captured outside the uterine cavity, such as cervical and extrauterine images. The number of frames within each set is listed in S3 Table.

Evaluation methods

In this study, the accuracy of the trained model was evaluated in two ways: image-by-image evaluation and video-unit evaluation. During image-by-image evaluation, 100 images that clearly included the lesion site were extracted from the hysteroscopic video of each patient diagnosed with a malignant tumor (Fig 3B). For patients with benign lesions or a normal endometrium, all frames were used during evaluation. In contrast, during video-unit evaluation, the judgment was made depending on the number of consecutive frames classified as "Malignant" in a given video stream (Fig 3C) (continuity analysis). A threshold of 50 consecutive frames was set in accordance with the results of a pre-study we performed, as described in Fig 4A. The threshold was taken from the point where the malignant-accuracy curve intersects the benign-accuracy curve, rather than the point maximizing the average of the two, because the threshold should be set lower to reduce overlooked malignant cases in actual clinical devices (Fig 4A).
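The video-unit decision rule described above can be sketched in a few lines; the 50-frame threshold and the "Malignant"/"Others" labels come from the text, while the function names are illustrative:

```python
def longest_malignant_run(predictions):
    """Length of the longest run of consecutive frames predicted 'Malignant'.

    `predictions` is a sequence of booleans, one per video frame, where
    True means the frame-level classifier output was 'Malignant'.
    """
    best = run = 0
    for is_malignant in predictions:
        run = run + 1 if is_malignant else 0
        best = max(best, run)
    return best

def classify_video(predictions, threshold=50):
    """Video-unit decision of the continuity analysis: the video is labeled
    'Malignant' when at least `threshold` consecutive frames are."""
    return "Malignant" if longest_malignant_run(predictions) >= threshold else "Others"

# 60 consecutive malignant frames amid benign ones -> video is malignant
frames = [False] * 100 + [True] * 60 + [False] * 40
print(classify_video(frames))  # Malignant
```

Requiring a *consecutive* run, rather than a total count, suppresses isolated false-positive frames scattered through an otherwise benign video.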
Fig 4

(A) Trend depicting accuracy displacement of malignant and benign diagnoses in accordance with threshold value for continuity analysis. (B) Comparison between learning times required by the three neural networks. The physical time depends on the computer specifications and image size; however, the ratio of the learning time required by each network is independent of such conditions. (C) Average accuracy values obtained via image-by-image-based predictions grouped in terms of dataset and network type. (D) Average accuracy values obtained via video-unit-based predictions grouped in terms of dataset and network type.


Neural network types

As already stated, three neural networks (Xception, MobileNetV2, and EfficientNetB0) were adopted in this study to classify the images extracted from the video stream. These networks can achieve relatively high accuracy with smaller datasets and lower training costs. We built the models using Keras on TensorFlow and trained them on an Intel Core i7-9700 CPU with an Nvidia GTX 1080 Ti GPU. The number of parameters in each network is shown in S4 Table, and the time required to learn 3,000,000 images is shown in Fig 4B. The network structure of Xception is shown in S5 Table. The distinguishing feature of Xception is that it replaces the Inception modules of conventional architectures with "depthwise separable convolutions," which split a standard convolution into two stages: a depthwise convolution followed by a pointwise (1 × 1) convolution [19]. The network structure of MobileNetV2 is shown in S6 Table. The distinguishing feature of MobileNetV2 is its extensive use of "inverted residual" layers throughout the network to reduce the total number of parameters [20]. The network structure of EfficientNetB0 is shown in S7 Table. The distinguishing feature of EfficientNet is its compound scaling coefficient, derived from how the depth, width, and resolution of a convolutional network affect model performance [21].
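The parameter savings that make these architectures inexpensive can be seen with standard textbook arithmetic; the formulas below are the usual ones for convolution weight counts (bias terms omitted), and the layer dimensions are purely illustrative:

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution mapping c_in -> c_out channels."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel) followed by
    a 1 x 1 pointwise convolution, as used in Xception and MobileNetV2."""
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 128 -> 256 channels
print(standard_conv_params(3, 128, 256))        # 294912
print(depthwise_separable_params(3, 128, 256))  # 33920
```

For this example the separable form needs roughly one-ninth of the weights, which is why such networks train quickly even on a single consumer GPU like the one used here.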

Model generation—execution of training

Owing to the stochastic nature of neural-network training, even when the same type of neural network is trained on the same dataset, each run yields a model with a different accuracy. Therefore, in this study, we trained the three DNN types six times each on the two datasets (Set X and Set Y), which were grouped into four training and evaluation pairs (A, B, C, and D). Thus, 144 (3 × 6 × 2 × 4) trained models were acquired.
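The enumeration of the 144 training configurations described above can be written directly as a Cartesian product; the factor names come from the text, and the dictionary layout is an illustrative choice:

```python
from itertools import product

networks = ["Xception", "MobileNetV2", "EfficientNetB0"]
datasets = ["Set X", "Set Y"]
pairs = ["pair-A", "pair-B", "pair-C", "pair-D"]
runs = range(1, 7)  # six training repetitions per configuration

# One entry per trained model: 3 networks x 2 datasets x 4 pairs x 6 runs = 144
models = [
    {"network": n, "dataset": d, "pair": p, "run": r}
    for n, d, p, r in product(networks, datasets, pairs, runs)
]
print(len(models))  # 144
```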

Results

Results of image by image evaluation

In this study, we first evaluated the prediction accuracy of each of the above-mentioned 144 models on individual images. Subsequently, we calculated average accuracy values by grouping the results according to data class and neural network type. Comparisons between the average prediction accuracies obtained for each dataset and network type are presented in Figs 4C and S1A and S8 Table. The difference between the average accuracy values obtained for datasets X and Y (0.7891 and 0.8093, respectively) equaled 0.0201, whereas that between the network types equaled 0.0047 (minimum 0.7969, maximum 0.8016) (S8 Table). MobileNetV2 demonstrated the shortest learning time, whereas Xception required the longest, approximately three times that of MobileNetV2, as described in Fig 4B.

Results of video-unit-based evaluation: Continuity analysis

As already stated, a continuity-analysis method was developed in this study to increase the diagnostic accuracy of video-unit-based evaluations in hysteroscopy applications. As mentioned in the Materials and methods section, a hysteroscopy video was considered representative of a malignant tumor when 50 or more consecutive image frames extracted from it were classified as "Malignant." Comparisons between the average prediction accuracies obtained for each dataset and network type are presented in Figs 4D and S1B and S9 Table. The difference between the average accuracy values obtained for datasets X and Y (0.8394 and 0.8913, respectively) equaled 0.0519, whereas that between the network types equaled 0.0052 (minimum 0.8622, maximum 0.8675) (Figs 4D and S1B and S9 Table).

Evaluation of accuracy improvements realized by combining multiple models

Finally, we evaluated the improvement in diagnostic accuracy realizable by combining multiple DNN models. The evaluation was performed using the 72 models (6 iterations × 4 data pairs × 3 model types) trained on Set Y. The video-unit-based continuity-analysis method was used owing to its superior performance compared with the image-by-image technique. The results of this evaluation (Fig 5 and Table 2) revealed that the combination of 72 models classified cancer and AEH as malignant with accuracies of 0.8571 and 1.000, respectively. Likewise, the diagnostic accuracies for myoma, endometrial polyp, and normal endometrium equaled 0.8571, 0.8500, and 0.9500, respectively. The overall average accuracy equaled 0.9029, with corresponding sensitivity and specificity values of 91.66% (95% confidence interval (CI) = 77.53–98.24%) and 89.36% (95% CI = 83.06–93.92%), respectively (Table 2). In addition, the F-score was 0.757. These results confirm that combining the prediction models yields superior diagnostic accuracy compared with their standalone use.
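The text does not spell out the rule used to combine the 72 models' decisions, so the following is only one plausible reading, a majority vote over the per-model video-unit labels; the function name and vote counts are illustrative assumptions:

```python
from collections import Counter

def ensemble_decision(per_model_labels):
    """Combine the video-unit decisions of several trained models by
    majority vote -- an assumed combination rule; the paper does not
    specify the exact mechanism used."""
    counts = Counter(per_model_labels)
    return counts.most_common(1)[0][0]

# Hypothetical vote split across 72 model decisions for one video
votes = ["Malignant"] * 40 + ["Others"] * 32
print(ensemble_decision(votes))  # Malignant
```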
Fig 5

Average diagnostic accuracies for different conditions obtained using combination of 72 trained deep neural network models.

Table 2

Diagnosis results obtained using combination of 72 trained deep neural network models.

Truth (class)       | Predicted Malignant | Predicted Others | Total | Correct | Accuracy
--------------------|---------------------|------------------|-------|---------|---------
Cancer (Malignant)  | 18                  | 3                | 21    | 18      | 0.8571
AEH (Malignant)     | 15                  | 0                | 15    | 15      | 1.0000
Myoma (Others)      | 3                   | 18               | 21    | 18      | 0.8571
Polyp (Others)      | 9                   | 51               | 60    | 51      | 0.8500
Normal (Others)     | 3                   | 57               | 60    | 57      | 0.9500
Total               | 48                  | 129              | 177   | 159     | 0.8983
Correct             | 33                  | 126              |       |         |
Precision           | 0.6875              | 0.9767           |       |         |

Sensitivity: 0.9167; Specificity: 0.894; F-score: 0.7857; Average of per-class accuracies: 0.9029.

AEH: Atypical endometrial hyperplasia.

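The summary metrics follow directly from the confusion counts in Table 2 (36 malignant videos, 141 others; 33 and 126 classified correctly) using the standard definitions:

```python
# Confusion counts taken from Table 2
tp, fn = 33, 3     # malignant videos classified correctly / incorrectly
tn, fp = 126, 15   # other videos classified correctly / incorrectly

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
f_score = 2 * precision * sensitivity / (precision + sensitivity)
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(round(sensitivity, 4), round(specificity, 4))  # 0.9167 0.8936
print(round(precision, 4), round(f_score, 4))        # 0.6875 0.7857
print(round(accuracy, 4))                            # 0.8983
```

These reproduce the table's totals: sensitivity 91.67%, specificity 89.36%, and overall accuracy 0.8983 across the 177 videos.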

Discussion

In this study, we aimed to develop a DNN-based automated system to detect the presence of endometrial tumors in hysteroscopic images. An average diagnostic accuracy exceeding 90% was realized when using the combination of 72 trained DNN models. Overall, we achieved a relatively high diagnostic accuracy despite the limited number of endometrial cancer and AEH cases. As described in the Introduction, several deep-learning models for image-recognition applications have been developed in recent years, and their utilization in medical applications has been thoroughly investigated. For example, Esteva et al. [22] developed a deep-learning algorithm trained on a dataset comprising more than 129,000 images covering over 2,000 skin diseases and evaluated whether their classification system could distinguish skin-cancer cases from benign skin diseases. They observed that their system demonstrated diagnostic performance on par with that of a group of clinical specialists [22]. Automated systems that diagnose diseases by applying deep-learning models to endoscopic images, such as those captured by gastrointestinal endoscopes and cystoscopes, have also been developed in recent years [17,18]. Although colorectal neoplastic polyps represent precancerous lesions of colorectal cancer, their presence can typically be diagnosed by an endoscopist with the naked eye. However, these polyps can remain undetected when they are very small or have shapes that make them difficult to identify. Yamada et al. [18] developed a convolutional neural network-based deep-learning model applied to endoscopic images from approximately 5,000 cases; their analysis yielded a detection rate of 98% for polyps and precancerous lesions.
In general, the application of deep-learning techniques to image-recognition problems requires a collection of 100,000–1,000,000 images to constitute a viable training dataset. However, as described earlier, in the medical field it can be difficult to obtain such a large number of samples, depending on the disease and circumstances. Because diagnosis of cancer by hysteroscopy is not yet a common method, it is difficult to obtain a large number of samples from a single institution at present. Therefore, a major focus of recent medical AI research is achieving a high accuracy rate with a small sample size, and several reports address this. For example, Sakai et al. [23] extracted small regions from a small number of endoscopic images obtained during the early stages of gastric cancer and used data-augmentation technology to increase the number of images to approximately 360,000; applying a convolutional neural network to this image dataset yielded positive and negative predictive values of 93.4% and 83.6%, respectively [23]. A major limitation of our dataset is that the video streams contained a significant number of frames that did not capture the lesions to be identified. Therefore, we deleted all frames that did not capture lesions to create Set Y. However, even frames that do not depict lesions might include malignant-tumor-specific features, such as cloudy uterine luminal fluid. Moreover, even when the degree of cloudiness is too small to be recognized by the naked eye, it can be accurately recognized by computers. Therefore, we divided the learning data into two datasets, Set X and Set Y. As described in the Results section, Set Y yielded a higher diagnostic accuracy than Set X. This suggests that diagnostic accuracy can be improved by exclusively analyzing the lesion sites instead of all extracted images.
Moreover, given the limited use of hysteroscopy in medical practice and the large number of training cases normally needed to leverage existing deep-learning models for medical-image analysis, we developed a continuity-analysis method based on a combination of neural networks. The proposed method demonstrates high diagnostic accuracy despite the limited training dataset; notably, accuracies of 90% or more were obtained with this small sample size. The proposed system is our original idea and the most significant aspect of this research. Besides hysteroscopic images, the method can also be applied to other types of medical images with few samples. While gastrointestinal endoscopy is commonly used in the diagnosis of gastric and colorectal cancers, hysteroscopy is seldom used in the diagnosis of endometrial cancer. However, our previous study [8] demonstrated the usefulness of hysteroscopy in this setting. Therefore, if a hysteroscopy-based automated system employing deep-learning models is established for the clinical diagnosis of endometrial cancer, an increase in the use of hysteroscopes can be expected as well. As already mentioned, early diagnosis of endometrial cancer can help patients retain their fertility, and it may even eliminate the need for postoperative therapy involving anticancer drugs and radiation [1,3,24]. The diagnostic system presented in this paper demonstrates the potential to be an effective system for accurate diagnosis of endometrial cancer. A large-scale study will be conducted in the future using the algorithm established here; the current study is thus a pilot to determine whether large-scale research is feasible. Notably, full implementation of the proposed system is necessary to raise the positive and negative predictive values toward 100%.
To facilitate high-accuracy diagnosis, it is necessary to (1) use a large number of images and annotate all existing and new images and (2) develop a high-accuracy engine. Another limitation of this study is that although the combinational model facilitated high diagnostic accuracy, its capacity is large from the perspective of medical-device development. Thus, a more compact system must be pursued to accommodate a large number of cases. However, as mentioned before, it is difficult to significantly increase the number of hysteroscopic images at a single facility; in a future study, we therefore aim to increase the number of samples by using this system in multi-facility joint research. To the best of our knowledge, this study represents the first attempt toward the diagnosis of endometrial cancer using a combination of deep learning and hysteroscopy. Although two studies [25,26] concerning hysteroscopy and deep learning have been reported previously, they exclusively concern uterine myomas and in vitro fertilization, respectively, and have not been applied to endometrial-cancer diagnosis. As described in the Materials and methods section, three neural networks (Xception [19], MobileNetV2 [20], and EfficientNetB0 [21]) were used in this study to classify frame images extracted from video samples. These networks were selected because they are computationally inexpensive and demonstrate high accuracy, thereby facilitating real-time diagnosis while incurring low manufacturing costs. Therefore, it is important to clarify the relationship between execution speed and neural network accuracy. From the viewpoint of future development of deep-learning-based medical devices, it is also necessary to compare real-time and post-hysteroscopy analyses. Additionally, we examined the images for which the deep-learning algorithms could not produce an accurate diagnosis.
Two features were identified: (1) flatness of the tumor and (2) difficulty in tumor identification due to excessive bleeding. These issues may be resolved by increasing the number of images in the training dataset; tumor size, by contrast, was not a cause of error. In the future, when a larger number of cases is considered, subgroup analyses according to patient age, disease stage, histology, and other factors will be necessary, as will a comparison with hysteroscopic specialists.

Conclusion

The challenge in medical AI research is to develop analysis methods that improve accuracy with a small number of samples. Notably, high diagnostic accuracy for endometrial cancer was obtained with a small sample in this study, and we believe that the capability of the basic system has been established. The accuracy of conventional diagnostic techniques, such as pathological diagnosis by curettage and cytology, is low, and screening for endometrial cancer has not been established. In the future, multi-institutional joint research should be conducted to develop this system further. If properly developed, the system can be utilized for the screening of endometrial cancer.

Supporting information

S1 Fig. (A) Diagnostic accuracy realized when applying the neural networks to individual datasets; image-classification accuracy was compared for each dataset–neural-network combination. (B) Diagnostic accuracy realized when employing the proposed continuity analysis, for each dataset–neural-network combination. (TIF)

S1 Table. Stages and histological types of endometrial cancer identified in patients recruited in this study. (DOCX)

S2 Table. Training and evaluation data in this study. (DOCX)

S3 Table. Datasets used in this study. (DOCX)

S4 Table. Number of parameters of each network. (DOCX)

S7 Table. Network structure of EfficientNetB0. (DOCX)

S6 Table. Network structure of MobileNetV2. (DOCX)

S5 Table. Network structure of Xception. (DOCX)

S8 Table. Average accuracies obtained through image-by-image-based predictions, grouped in terms of dataset and network type. (DOCX)

Average accuracies obtained through video-unit-based predictions grouped in terms of dataset and network types.

(DOCX)

7 Dec 2020    PONE-D-20-34280
Automated system for diagnosing endometrial cancer by adopting deep-learning technology in hysteroscopy
PLOS ONE

Dear Dr. Sone,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 21 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Tao Song
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Competing Interests section: "Kenbun Sone has a joint research agreement with Predicthy LLC. The other authors have no competing interests to disclose." We note that one or more of the authors are employed by a commercial company: Predicthy LLC.

2.1. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement: "The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the 'author contributions' section." If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

2.2. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc. Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors: http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these.

Please note that we cannot proceed with consideration of your article until this information has been declared. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf. Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person.
Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

3. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to 'Update my Information' (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ

Reviewers' comments:

Reviewer's Responses to Questions

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Partly

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.
Reviewer #1: Yes
Reviewer #2: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes

5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: This work aims to establish deep-learning models for classifying the presence of endometrial tumors in hysteroscopic images, and an average diagnostic accuracy exceeding 90% was realized when using the combination of 72 trained DNN models. However, I have the following concerns:
1) I am a bit curious why they use this deep-learning architecture for endometrial tumor detection, rather than shallow machine-learning models.
2) There are several errors in this manuscript, such as "The corresponding sensitivity and specificity equaled 91.66% and 89.36, respectively". Is it 89.36%? The authors should double-check the manuscript.
3) The manuscript should give the overall model architecture.
4) The evaluation metrics for the model are too simple; the authors should add more metrics. Please refer to several works in the literature, such as:
- Pang S, Ding T, Qiao S, Meng F, Wang S, Li P, Wang X. A novel YOLOv3-arch model for identifying cholelithiasis and classifying gallstones on CT images. PLoS One. 2019;14(6):e0217647.
- Wang S, Dong L, Wang X, Wang X. Classification of pathological types of lung cancer from CT images by deep residual neural networks with transfer learning strategy. Open Medicine. 2020;15(1):190-197.
- Pang S, Zhang Y, Ding M, Wang X, Xie X. A deep model for lung cancer type identification by densely connected convolutional networks and adaptive boosting. IEEE Access. 2020;8:4799-4805.
- Pang S, Meng F, Wang X, et al. VGG16-T: a novel deep convolutional neural network with boosting to identify pathological type of lung cancer in early stage by CT images. International Journal of Computational Intelligence Systems. 2020;13(1):771-780.

Reviewer #2: In the paper, the authors present an artificial-intelligence-based system to detect the regions affected by endometrial cancer automatically from hysteroscopic images, and the diagnostic accuracy is increased. However, there are some details that can be improved:
- The models used in the paper are not presented well.
- The threshold value is set to 50; perhaps you can explain some details about that.
- The writing of the paper needs care, for example the text size in the tables and the text transform on the subtitle of page 7.

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

12 Jan 2021 (Response to Reviewers)

Reviewer #1: This work aims to establish deep-learning models for classifying the presence of endometrial tumors in hysteroscopic images, and an average diagnostic accuracy exceeding 90% was realized when using the combination of 72 trained DNN models. However, I have the following concerns:

Comment 1: I am a bit curious why they use this deep-learning architecture for endometrial tumor detection, rather than shallow machine-learning models.

Response 1: We appreciate your critical comments and useful suggestions. Deep learning is highly anticipated in the medical field because deep-learning techniques are particularly suitable for image analysis: they can be used for classification, image-quality improvement, and segmentation of medical images. In contrast, shallow machine learning is not suitable for image recognition. We have added this information to the revised manuscript considering your comment (Lines 66-69).
Comment 2: There are several errors in this manuscript, such as "The corresponding sensitivity and specificity equaled 91.66% and 89.36, respectively". Is it 89.36%? The authors should double-check the manuscript.

Response 2: We appreciate your critical comments and useful suggestions. It is 89.36% (Line 36). We have corrected the oversight.

Comment 3: The manuscript should give the overall model architecture.

Response 3: We appreciate your critical comments and useful suggestions. We have added the overall architecture of the model (Figure 2) in accordance with your suggestion.

Comment 4: The evaluation metrics for the model are too simple; the authors should add more metrics, with reference to the four works cited in the original comment above.

Response 4: We appreciate your critical comments and useful suggestions. We have added the metrics in accordance with your comments: F-score and precision have been added to Table 2. In addition, the description and structure of each network are also given (Tables S4, S5, S6, S7; Lines 164-179).
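For readers who want the metrics mentioned in Response 4 spelled out, a minimal self-contained sketch follows. This is not the authors' code, and the confusion-matrix counts below are invented placeholders, not study data; the formulas themselves are the standard definitions of sensitivity, specificity, precision, accuracy, and F-score.

```python
# Standard binary-classification metrics computed from confusion-matrix
# counts: tp/fp/tn/fn = true/false positives/negatives.

def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)              # recall on malignant cases
    specificity = tn / (tn + fp)              # recall on non-malignant cases
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f_score": f_score}

# Placeholder counts for illustration only:
m = metrics(tp=90, fp=10, tn=80, fn=20)
print({k: round(v, 4) for k, v in m.items()})
```

Reporting precision and F-score alongside sensitivity and specificity, as the revision does, matters whenever the classes are imbalanced, since accuracy alone can look high while malignant cases are missed.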
Reviewer #2: In the paper, the authors present an artificial-intelligence-based system to detect the regions affected by endometrial cancer automatically from hysteroscopic images, and the diagnostic accuracy is increased. However, there are some details that can be improved.

Comment 1: The models used in the paper are not presented well.

Response 1: We appreciate your critical comments and useful suggestions. We have added the overall architecture of the model (Figure 2) to provide further details of the model used. In addition, the description and structure of each network are also given (Tables S4, S5, S6, S7; Lines 164-179).

Comment 2: The threshold value is set to 50; perhaps you can explain some details about that.

Response 2: We appreciate your critical comments and useful suggestions. The threshold was taken from the point where the malignant score intersects with the other scores, rather than the point where the average of the two scores was best, because the threshold should be set lower to reduce overlooked cases in actual clinical devices. We have added this information to the revised manuscript considering your comment (Lines 152-154).

Comment 3: The writing of the paper needs care, for example the text size in the tables and the text transform on the subtitle of page 7.

Response 3: We appreciate your critical comments and useful suggestions. We have revised the manuscript in accordance with your suggestion and PLOS ONE's style requirements.

Submitted filename: Response to Reviewers.docx

1 Mar 2021

Automated system for diagnosing endometrial cancer by adopting deep-learning technology in hysteroscopy
PONE-D-20-34280R1

Dear Dr. Sone,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments.
When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Tao Song
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #1: Yes
Reviewer #2: Yes

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available?
Reviewer #1: Yes
Reviewer #2: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #1: Yes
Reviewer #2: Yes

6. Review Comments to the Author

Reviewer #1: Thanks for your efforts; all comments have been addressed by the authors, so I recommend accepting the manuscript.

Reviewer #2: In the paper, the authors present an artificial-intelligence-based system to detect the regions affected by endometrial cancer automatically from hysteroscopic images, and the diagnostic accuracy is increased. The authors replied well to the suggestions I proposed. It can be accepted.

7. Do you want your identity to be public for this peer review?
Reviewer #1: No
Reviewer #2: No

5 Mar 2021    PONE-D-20-34280R1
Automated system for diagnosing endometrial cancer by adopting deep-learning technology in hysteroscopy

Dear Dr. Sone:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Tao Song
Academic Editor
PLOS ONE
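The pipeline discussed in the correspondence above (several trained networks whose outputs are combined, a continuity analysis over consecutive frames, and the threshold of 50 from the authors' response to reviewer #2) can be pictured with a small sketch. This is an assumed reconstruction for illustration only, not the authors' implementation: the 0-100 score scale, the 5-frame window, and the simple mean combination across models are all assumptions.

```python
# Illustrative sketch of the overall decision pipeline:
# 1) each trained network scores every frame for malignancy,
# 2) per-frame scores are combined across networks by averaging,
# 3) a continuity analysis averages the combined score over runs of
#    consecutive frames, so isolated noisy frames cannot flip the call,
# 4) the video is flagged malignant if any smoothed run reaches the
#    threshold (50, on an assumed 0-100 scale).

THRESHOLD = 50
WINDOW = 5  # number of consecutive frames; an assumed value

def combine_models(per_model_scores):
    """Average per-frame scores across models.

    per_model_scores: one list of frame scores per model, equal lengths."""
    n_models = len(per_model_scores)
    return [sum(frame) / n_models for frame in zip(*per_model_scores)]

def continuity(scores, window=WINDOW):
    """Moving average over `window` consecutive frames."""
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

def diagnose(per_model_scores, threshold=THRESHOLD):
    combined = combine_models(per_model_scores)
    return any(s >= threshold for s in continuity(combined))

# Three hypothetical networks scoring a seven-frame clip: one noisy
# high-scoring frame does not trigger a malignant call...
benign = [[10, 12, 95, 11, 9, 13, 10],
          [8, 15, 90, 10, 12, 9, 11],
          [12, 10, 88, 14, 8, 12, 9]]
print(diagnose(benign))     # False
# ...but a sustained run of high combined scores does.
malignant = [[20, 70, 80, 75, 90, 85, 30],
             [25, 65, 85, 70, 88, 80, 35],
             [18, 72, 78, 80, 92, 83, 28]]
print(diagnose(malignant))  # True
```

Setting the threshold at the score-intersection point rather than the accuracy-optimal point, as the authors explain, deliberately trades some specificity for fewer missed malignancies, which is the conservative choice for a clinical screening device.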
Cited by: 2 in total

1.  Deformation Analysis and Research of Building Envelope by Deep Learning Technology under the Reinforcement of the Diaphragm Wall.

Authors:  Lijuan Wang; Qihua Zhao
Journal:  Comput Intell Neurosci       Date:  2022-09-14

2.  Preoperative prediction by artificial intelligence for mastoid extension in pars flaccida cholesteatoma using temporal bone high-resolution computed tomography: A retrospective study.

Authors:  Masahiro Takahashi; Katsuhiko Noda; Kaname Yoshida; Keisuke Tsuchida; Ryosuke Yui; Takara Nakazawa; Sho Kurihara; Akira Baba; Masaomi Motegi; Kazuhisa Yamamoto; Yutaka Yamamoto; Hiroya Ojiri; Hiromi Kojima
Journal:  PLoS One       Date:  2022-10-03       Impact factor: 3.752
