Literature DB >> 33974652

COVID-19 diagnosis from CT scans and chest X-ray images using low-cost Raspberry Pi.

Khalid M Hosny1, Mohamed M Darwish2, Kenli Li3, Ahmad Salah1,3.   

Abstract

The diagnosis of COVID-19 is in vital demand. Several studies have been conducted to decide whether the chest X-ray and computed tomography (CT) scans of patients indicate COVID-19. While these efforts resulted in successful classification systems, the design of a portable and cost-effective COVID-19 diagnosis system has not been addressed yet. The memory requirements of the current state-of-the-art COVID-19 diagnosis systems are not suitable for embedded systems due to their large memory footprints (e.g., hundreds of megabytes). Thus, the current work is motivated to design a similar system with minimal memory requirements. In this paper, we propose a diagnosis system that runs on a Raspberry Pi Linux embedded system. First, local features are extracted using the local binary pattern (LBP) algorithm. Second, global features are extracted from the chest X-ray or CT scans using multi-channel fractional-order Legendre-Fourier moments (MFrLFMs). Finally, the most significant features (local and global) are selected. The proposed system's steps are integrated to fit the low computational and memory capacities of the embedded system. Among existing state-of-the-art deep learning (DL)-based methods, the proposed method has the smallest computational and memory requirements, lower by two to three orders of magnitude.


Year:  2021        PMID: 33974652      PMCID: PMC8112662          DOI: 10.1371/journal.pone.0250688

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


1 Introduction

The COVID-19 pandemic has affected the lifestyle of the entire world and raised new challenges for applying existing knowledge to fight the disease. One of these challenges is COVID-19 diagnosis using chest X-ray images [1]. COVID-19 chest radiographs show bilateral air-space consolidation, as described in the disease characteristics [2]. As DL-based methods have been successfully utilized to solve different problems [3, 4], there have been several attempts to use chest X-ray and CT scan images to detect COVID-19 cases [5, 6]. For instance, Apostolopoulos and Mpesiana [7] utilized a DL model to classify the X-ray images of patients into one of three classes: bacterial pneumonia, COVID-19, and normal. They used the deep transfer learning approach with four architectures, namely, VGG-19 [8], MobileNet [9], Inception [10], and Xception [11]; their method achieved its highest accuracy (98.75%) using the VGG-19 model. Generally, DL-based classification methods achieve the highest reported accuracy rates. Despite their high classification accuracy, running deep learning models requires very expensive computational resources with high specifications. Such high-cost processing might be affordable for large hospitals in first-world countries, but hospitals in developing countries and rural areas do not have such expensive computational resources. To reduce the computational cost, Howard et al. [9] built a deep learning model that consumes fewer resources while sacrificing some accuracy. Despite this effort, successful DL-based classification models still require extremely expensive, highly configured computational machines. Recently, orthogonal moments have been utilized to extract features from color images and successfully used in various applications, such as the recognition of bacterial species [12].
Since medical images contain fine details, extracting their features requires highly accurate descriptors. The recent fractional-order descriptors [13] enable the proposed system to extract highly accurate global features from the input CT scan or X-ray images. These fractional-order descriptors have the following characteristics: their orthogonality enables the representation of medical images without information redundancy; they are invariant to rotated, scaled, and translated images, which improves the classification rates; they are significantly robust against common noise, such as speckle; and the MFrLFM descriptors have much faster computation times than other moments. The computational challenges of DL-based methods and the success of orthogonal moments in classification problems motivate the authors to develop a cost-effective COVID-19 diagnosis system (i.e., costing less than 100 USD) that classifies an input chest X-ray or CT image as COVID-19 or another lung disease with high classification accuracy. The main contributions of this work are as follows. To the best of the authors' knowledge, the proposed work is the first system to utilize a Linux embedded system to diagnose COVID-19 cases from CT scan or X-ray images. Besides, the proposed system can run on any embedded system that supports running Python code. The proposed system consists of two separate classifiers: one for classifying chest X-ray images and another for classifying CT scan images. The proposed system is designed to be a memory-efficient classification model, as state-of-the-art methods are DL-based methods with huge memory requirements. Thus, the proposed classifier model's main impact is that it becomes possible to obtain high accuracy rates under limited memory conditions when predicting COVID-19 cases from chest CT and X-ray images.
The proposed system is the first to utilize the MFrLFMs for global feature extraction from chest CT scans or X-ray images. Besides, the local binary pattern (LBP) is utilized for extracting the local features of the input images. The remainder of the paper is organized as follows. Section 2 presents the required background on the used techniques and platform. Section 3 discusses the related work. Section 4 describes the proposed system. Section 5 evaluates the proposed system's performance. Finally, the work is concluded in Section 6.

2 Preliminaries

2.1 Local binary patterns

The LBP algorithm has two advantages: robustness to monotonic grayscale changes and low computational cost [14]. The effective performance of the LBP operator is thoroughly discussed in [15]. LBPs have been utilized in various application domains, including medical image classification, texture classification, and facial micro-expression recognition [16]. The LBP algorithm's basic idea is to assign a certain value, called a code, to each pixel. This pixel's code encodes the local features of the 3 × 3 neighborhood window of the eight neighboring cells, as explained in [15]. In a 3 × 3 window, the value of the central pixel is considered the threshold. If the value of any neighbor pixel is less than the threshold, then this neighbor pixel is set to zero; otherwise, it is set to one. For example, the threshold of the 3 × 3 window in Fig 1(a), which is a portion of the image, is 131. In Fig 1(b), each neighbor pixel is set to zero or one depending on the threshold value (i.e., 131). Then, the weight of each pixel is multiplied by the pixel value (i.e., zero or one). The LBP code, which is assigned to the window's central pixel, is the summation of these weighted values over all eight neighbors.
Fig 1

An example of LBP code calculation of a single window.

((a)) 3 × 3 Sample window, ((b)) The calculation of the LBP code of the input windows.

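This calculation can be sketched in a few lines of Python. The window values below are hypothetical (only the center threshold of 131 follows the Fig 1 example), and the clockwise weight order is one common convention; Fig 1's exact weight layout may differ:

```python
def lbp_code(window):
    """Compute the LBP code of a 3x3 window: each neighbor is thresholded
    against the center pixel and weighted by a power of two."""
    center = window[1][1]
    # Neighbors listed clockwise starting at the top-left corner.
    neighbors = [window[0][0], window[0][1], window[0][2],
                 window[1][2], window[2][2], window[2][1],
                 window[2][0], window[1][0]]
    code = 0
    for weight_exp, pixel in enumerate(neighbors):
        if pixel >= center:          # >= threshold -> bit set to one
            code += 1 << weight_exp  # weights 1, 2, 4, ..., 128
    return code

sample = [[120, 200, 110],
          [140, 131,  90],
          [131,  75, 160]]
print(lbp_code(sample))  # -> 210
```

Here the bits set are those of the neighbors 200, 160, 131, and 140 (all at or above the threshold 131), giving 2 + 16 + 64 + 128 = 210.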

2.2 Multichannel fractional-order Legendre-Fourier moments

The RGB color image defined by the intensity function f(r, θ) is represented in three primary channels as f(r, θ) = (f_R(r, θ), f_G(r, θ), f_B(r, θ)) [17]. The MFrLFMs are defined accordingly, where C denotes each primary channel (R, G, or B); p and q are the moment order and repetition, respectively; and |p|, |q| = 0, 1, 2, 3, …, ∞. The basis function combines the angular Fourier kernel with the fractional-order Legendre polynomials L_p(α, r). Because direct computation using Eq 3 is time-consuming, a three-term recurrence relation is utilized as an alternative. Eq 4 shows that rotation does not affect the magnitude values of the MFrLFMs. The scale-invariant forms of the MFrLFMs are built from the coefficients C and d given in [18]. A highly accurate kernel-based computational framework [19] is used to compute the MFrLFMs as follows: the interpolated function is calculated from the intensity functions of the original image using cubic interpolation, as explained in [20]. Based on Eq 8, the computation separates into radial and polar kernels; Eq 9 shows that the angular kernel J_q(θ) is evaluated exactly, and an accurate numerical integration approach is used for the radial integration in Eq 11.
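As a hedged sketch of the general form (the exact radial weight and the role of the fractional parameter α follow the definition in [13]; the normalization below matches the constant (2p + 1)/(2π) applied in Algorithm 1), the per-channel moments take the polar form:

```latex
M^{C}_{pq} \;=\; \frac{2p+1}{2\pi}
  \int_{0}^{2\pi}\!\int_{0}^{1}
  f_{C}(r,\theta)\; L_{p}(\alpha,r)\; e^{-\mathrm{i}\,q\theta}\;
  r\,\mathrm{d}r\,\mathrm{d}\theta,
  \qquad C \in \{R, G, B\}.
```

Rotating the image by an angle φ replaces f_C(r, θ) by f_C(r, θ − φ) and thus only multiplies M^C_pq by the unit factor e^{-iqφ}, which is why the magnitudes |M^C_pq| are rotation-invariant.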

2.3 Raspberry Pi: A Linux embedded system

Raspberry Pi is a single-board computer, or Linux embedded system, with an open-source ecosystem. It is a cost-effective, lightweight, and portable computer. Raspberry Pi has been utilized in several machine learning applications, such as computer vision and image classification [21]. Because the Raspberry Pi hardware supports a Linux OS, we can benefit from the Python programming language and its powerful packages, especially the Scikit-learn package [22]. Thus, the Raspberry Pi hardware can run many machine learning tasks. Another advantage of the Raspberry Pi is that one of its versions has a multi-core CPU, which enables the acceleration of running programs through parallel implementations of the utilized algorithms. In [23], the authors discussed a methodology for task division on Raspberry Pi hardware using OpenMP [24] and MPI [25]; they then utilized this parallel implementation over a cluster of Raspberry Pi devices to address the problem of edge detection. Another example of parallel implementation on Raspberry Pi devices is reported in [26], where the authors proposed using a cluster of Raspberry Pi 2 boards to accelerate the 3D wavelet transform and make it portable. The overall performance of the Raspberry Pi 4 Model B is comparable to that of an entry-level x86 PC; its hardware design is shown in Fig 2. The Raspberry Pi 4 Model B utilizes a 64-bit CPU with four cores and comes with three main memory (i.e., RAM) options, namely, 1 GB, 2 GB, and 4 GB. For display, the Raspberry Pi 4 Model B supports a dual-display option (i.e., two micro-HDMI ports) with a display quality as high as 4K video resolution.
Fig 2

Raspberry Pi 4 model B hardware design.

In addition, the Raspberry Pi 4 Model B supports different connectivity methods: wireless connection via a dual-band 2.4/5.0 GHz wireless LAN port, Gigabit Ethernet, and Bluetooth 5.0. These features allow one to connect the Raspberry Pi to any other device, which supports IoT applications. Besides USB 3.0 support, the Raspberry Pi model provides a port for the power connection and ports for attaching different peripherals (e.g., a mouse and keyboard).

3 Related work

A considerable body of research has addressed COVID-19 diagnosis from chest CT scans or X-ray images with machine learning techniques. These efforts can be classified based on which deep architecture was utilized. In the following, we discuss representative research works based on the utilized deep architectures. The DL-based models of COVID-19 diagnosis from chest CT scans or X-ray images are considered the mainstream. Several classification models have been proposed, the main difference being the utilized deep architecture (e.g., Residual Network (ResNet), VGG, Dense Convolutional Network (DenseNet), etc.). Convolutional Neural Networks (CNNs) [27] are considered the most used deep architecture for image classification. In [28], the authors proposed a CNN-based model for detecting COVID-19 cases from chest X-ray images. They proposed two models: the first is a binary classifier with two possible outcomes, COVID-19 and non-COVID-19; the second is a multi-class classifier with three possible outcomes, pneumonia, COVID-19, and non-COVID-19. The classification accuracy rates of these models are 98% and 87%, respectively. In [29], Abd Elaziz et al. utilized two classifier models and two different chest X-ray image datasets to detect COVID-19 cases, and they implemented several CNN-based methods for the purpose of comparison. The proposed model's accuracy rates were 96% and 98% for the first and second datasets, respectively. In addition, the authors in [30] compared several CNN-based COVID-19 detection models. The ResNet architecture [31] performs significantly well in image classification on several image datasets. In [32], the authors utilized the deep transfer learning approach to train a ResNet architecture for automatic COVID-19 detection from chest X-ray images. The authors utilized a dataset of 350 normal, 350 pneumonia, and 210 COVID-19 chest X-ray images; the classification accuracy rate is 94.28%.
The DenseNet architecture was proposed in [33]. In DenseNet, a layer receives inputs from all previous layers; meanwhile, the same layer passes its feature maps on to all of the following layers. As CT scan images play a vital role in the automatic detection of COVID-19 cases, several works utilized CT scan images [34-36]. The authors in [37] proposed using deep transfer learning on the DenseNet-201 architecture to classify a suspected case as COVID-19 or normal using the patient's CT scan image. They trained the proposed classifier model using a dataset consisting of 2,492 CT scans; the achieved classification accuracy rate is 96%. In [38], the authors proposed a portable on-device system to automatically detect COVID-19 patients based on chest X-ray images. The proposed system can follow up on the case progression as well. The authors utilized the DenseNet-121 architecture with the help of deep transfer learning to build the classifier model; the highest classification accuracy reported by the proposed system is 88%. In [39], the authors proposed a 3D deep CNN-based model to automatically recognize COVID-19 cases using CT volumes. The authors proposed generating 3D lung masks using the pre-trained UNet [40], and these generated masks are then classified; the obtained classification accuracy is 90%. In the same context, several research works proposed different tasks on COVID-19 CT scan images. For instance, the authors in [41, 42] proposed two segmentation methods for removing the noisy data from the input image as a pre-processing step for the classification task. These segmentation methods eased the classification task and improved the classification accuracy rates. The VGG deep architecture achieves high classification accuracy rates despite its huge memory requirements. In [43], the authors utilized a dataset of 592 CT scan images with two classes, COVID-19 and normal, and proposed the CTnet-10 model, a binary classifier.
This model's classification accuracy rate is 82.1%, while utilizing the pre-trained VGG-19 model for the classification task yields an accuracy of 94.5%. Another VGG-based model is proposed in [44]. The authors proposed a multi-class classifier to classify a chest X-ray image as COVID-19, pneumonia, or normal; the utilized dataset consists of 360 images. They proposed creating feature maps from the X-ray images, and the vectorized version of these feature maps is then classified using the VGG-16 architecture. They utilized the deep transfer learning approach by using the VGG-16 weights as trained on the ImageNet dataset and added an output layer for the three possible classification outcomes. The classification accuracy rate is 91%.

4 Proposed system

The proposed system consists of four main phases, as shown in Fig 3. The first phase extracts the local features using the LBP algorithm. The second phase extracts the global features using the MFrLFMs. In the third phase, the local and global features are combined, and then a feature selection method is applied to select the most significant features. Finally, the fourth phase is a binary classifier that takes the selected local and global features as input to classify the input image as COVID-19 or another disease.
Fig 3

Flowchart of x-ray image and CT scan classification models.

4.1 Local features using the LBPs

The LBP feature vector is computed as a 1 × N vector, where N is the number of extracted local features. The LBP algorithm partitions the input image into non-overlapping windows. A wider window size corresponds to lower computational complexity but fewer details in the collected local features. In the proposed system, the number of neighbors P is set to 8. Thus, the total number of extracted local features is N = (P × (P − 1)) + 3 = (8 × 7) + 3 = 59 features.
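The 59-feature count can be checked by brute force, assuming it follows the common accounting for uniform LBP histograms: P × (P − 1) = 56 circular 8-bit patterns with exactly two 0/1 transitions, two constant patterns, and one extra bin collecting all non-uniform patterns. A small pure-Python sketch:

```python
# Count the bins of a uniform LBP histogram for P = 8 neighbors.
P = 8

def transitions(pattern):
    """Number of 0/1 transitions in a circular P-bit pattern."""
    bits = [(pattern >> k) & 1 for k in range(P)]
    return sum(bits[k] != bits[(k + 1) % P] for k in range(P))

two_transition = sum(1 for pat in range(2 ** P) if transitions(pat) == 2)
constant = sum(1 for pat in range(2 ** P) if transitions(pat) == 0)
n_features = two_transition + constant + 1   # +1: single non-uniform bin
print(two_transition, constant, n_features)  # -> 56 2 59
```

This matches the formula above: (8 × 7) + (2 + 1) = 59.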

4.2 Global feature extraction using MFrLFMs

The sequential computation of the MFrLFMs consists of four nested loops, which is inconvenient for a multicore CPU without loop fusion. Thus, we applied the loop fusion technique to the outermost two loops to parallelize the sequential computation, so that the iterations of the fused loop can be computed independently. Since the Raspberry Pi has at most four cores, fusing the outermost two loops provides a sufficient number of independent iterations. The iteration number of the fused loop is mapped to the original loop iteration numbers as shown in Eq 12, where i is the iteration number of the fused loop and i ∈ [0, (pmax + 1) × (qmax + 1)] for a two-loop fusion. Algorithm 1 lists the parallel implementation of the MFrLFM computations. In Algorithm 1 line 1, the algorithm divides the iterations of the outer loop over the p parallel resources, i.e., the Raspberry Pi CPU cores. This task can easily be accomplished using the OpenMP directive #pragma omp parallel for num_threads(p), which evenly divides the (pmax + 1) × (qmax + 1) iterations over the available p threads/cores. In Algorithm 1 line 2, the for loop represents the two fused loops over the kernel orders p and q. The iterator i goes through (pmax + 1) × (qmax + 1) iterations, each representing a unique (p, q) pair; thus, the variable i is mapped to the corresponding p and q values, as listed in lines 4 and 5, using Eq 12. Line 6 resets the accumulation variables of each moment. The for loop in line 7 goes through all M rings of the image, and the for loop in line 8 goes through each sector sec of a ring. Line 9 computes the kernel value as the product of two terms: the radial kernel, accessed using the p and ring values, and the repeating (angular) kernel, accessed using the ring, sec, and q values. Lines 10-12 accumulate the moment of the three channels, i.e., red, green, and blue.
At each of these three lines, the image pixel is accessed via the term r_image[ring][sec] (respectively g_image and b_image) using the ring and sec values and multiplied by the kernel value computed in line 9. Finally, each moment is computed by multiplying the accumulated value by a constant. Algorithm 1 consists of three nested loops. The time complexity of the first loop is O(pmax × qmax). The second and third loops together iterate over each pixel of the N × N input image, so their time complexity is O(N²). Thus, the time complexity of Algorithm 1 is the product of these complexities; using p parallel resources, the time complexity of Algorithm 1 (i.e., the MFrLFMs) is O((pmax × qmax × N²)/p). The time complexity of the LBP algorithm is O(N²): there are N² pixels in the input image, and for each pixel a binary pattern of size eight is generated, where each neighbor contributes one bit, so computing the N² LBP codes takes O(8 × N²) = O(N²) time. As the LBP codes can be calculated independently, the LBP algorithm can easily be parallelized by dividing the N² code computations over the p parallel resources. Thus, the time complexity of the local feature extraction phase is O(N²/p). The proposed system's overall time complexity is the sum of the local and global feature extraction complexities, i.e., O((N² + pmax × qmax × N²)/p) = O((pmax × qmax × N²)/p). The space complexity of the MFrLFMs algorithm is O(pmax × qmax), as the algorithm stores the computed moments in a 2D matrix of pmax rows and qmax columns regardless of the image size N. On the other hand, the space complexity of the local feature extraction phase (the LBP algorithm) is O(N²), because the LBP algorithm stores a code for each of the N² image pixels. Thus, the overall space complexity of the proposed method is O(N² + pmax × qmax) = O(N²), as pmax ≪ N and qmax ≪ N.
Algorithm 1 The parallel algorithm of the MFrLFM computations.
1:  Divide the following for iterations over the p cores
2:  for i = 0 : (pmax + 1) × (qmax + 1) do
4:      p = i / (qmax + 1)
5:      q = i mod (qmax + 1)
6:      r = g = b = 0
7:      for ring = 1 : M do
8:          for sec = 1 : S × (2 × i + 1) do
9:              kernel_val = I[p][ring] × Iq[ring][q][sec]
10:             r += r_image[ring][sec] × kernel_val
11:             g += g_image[ring][sec] × kernel_val
12:             b += b_image[ring][sec] × kernel_val
            end for
        end for
13:     Red_M[p][q] = r × ((2 × p) + 1) / (2 × PI)
14:     Green_M[p][q] = g × ((2 × p) + 1) / (2 × PI)
15:     Blue_M[p][q] = b × ((2 × p) + 1) / (2 × PI)
    end for
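The index mapping of Eq 12 (lines 4 and 5 of Algorithm 1) can be sketched as follows. This is a Python toy with small hypothetical bounds; the paper's implementation uses C++ with the OpenMP directive shown above:

```python
# Loop-fusion index mapping (small illustrative bounds; the paper uses
# pmax = qmax = 30): the two outer loops over moment order p and
# repetition q collapse into one loop over i, whose iterations are
# independent and can be split evenly across the CPU cores.
pmax, qmax = 4, 3

pairs = []
for i in range((pmax + 1) * (qmax + 1)):  # single fused loop
    p = i // (qmax + 1)                   # recover the moment order
    q = i % (qmax + 1)                    # recover the repetition
    pairs.append((p, q))

# Every (p, q) pair is produced exactly once.
print(len(set(pairs)))  # -> 20
```

Because each i yields a unique (p, q) pair, no two iterations write to the same moment entry, which is exactly what lets OpenMP divide the fused loop over the cores without synchronization.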

4.3 Feature selection and classification

The last step in preparing the data for the classification task is to select the most significant features. The number of extracted local features per input image is 59, as discussed in Section 4.1. The number of extracted global features per input image is (pmax + 1) × (qmax + 1). For example, if pmax = qmax = 30, then each image is represented by 961 global features. Thus, the total number of local and global features for an image when pmax = qmax = 30 is 1,020; in other words, each image is represented by 1,020 decimal values. We propose applying a feature selection technique to remove any irrelevant, redundant, and noisy features. Feature selection reduces both the classifier training time and the classifier prediction time, since extracting fewer features reduces the time of the feature extraction phase. To achieve this goal, we propose using the Sequential Feature Selector (SFS) greedy search technique. This greedy approach has k iterations; at each iteration, the SFS method adds the single most significant feature, so that finally the SFS algorithm selects the k most significant features. If the SFS algorithm finds a subset with fewer than k features, this feature subset is reported; thus, the number of selected features can be smaller than or equal to k. The SFS is executed only once, so it affects neither the proposed method's run time nor its time complexity. Once the most significant k features have been selected for all input images using the SFS technique, the dataset is ready to train the classifier. The proposed method is applicable to any binary classifier (COVID-19 or non-COVID-19 input image), where the input image can be a chest X-ray or CT scan image. We propose two separate classifier models, one for each of the two image types.
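A minimal sketch of this selection step, using scikit-learn's SequentialFeatureSelector on synthetic data (the estimator, data, and k = 5 here are illustrative stand-ins, not the paper's actual setup):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the 1,020 combined local + global features.
X, y = make_classification(n_samples=100, n_features=20, random_state=0)

# Forward SFS: greedily add the single most significant feature per
# iteration until k features have been selected.
sfs = SequentialFeatureSelector(KNeighborsClassifier(),
                                n_features_to_select=5,
                                direction="forward")
sfs.fit(X, y)
print(sfs.get_support().sum())  # -> 5
```

The fitted selector's transform then maps each 20-feature vector down to the chosen 5 features, mirroring how the paper reduces 1,020 features to 41 before training.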

5 Experimental results

5.1 Dataset

The utilized chest X-ray dataset covers eight lung diseases [45]: 1) atelectasis; 2) cardiomegaly; 3) effusion; 4) infiltration; 5) mass; 6) nodule; 7) pneumonia; 8) pneumothorax. Each lung disease has 212 images. The images of the eight lung diseases are collected into one image class, called the non-COVID-19 class. The second class consists of 212 chest X-ray images of COVID-19 patients [46]. This data collection approach results in unbalanced classes, as the first class has 212 × 8 = 1,696 images and the second class has 212 images. In addition, we used a dataset of CT scans of COVID-19 patients and other lung diseases. This second dataset consists of 2,482 images in two classes: a COVID-19 class with 1,252 images and a non-COVID-19 class with 1,230 images [47]. Fig 4 shows samples of these two datasets.
Fig 4

Sample of the two datasets: (a) chest X-ray images [45]; (b) CT scans [47].

The utilized datasets are split into 80% and 15% as training and test sets, respectively. These two sets are used to train and test the proposed models and the comparison methods. Besides, the cross-validation technique was utilized with the number of folds set to five; in other words, the dataset was divided in five different ways. Finally, the hyperparameters of the comparison methods were set to their default values during the training phase.
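This evaluation protocol can be sketched as follows (a hedged illustration on synthetic 41-feature data, with a random forest as a placeholder since the paper's exact classifier is not named in this section):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for the selected 41-feature vectors.
X, y = make_classification(n_samples=200, n_features=41, random_state=0)

# Hold out a test split, then run five-fold CV on the training split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X_train, y_train, cv=5)
print(len(scores))  # -> 5
```

Each of the five returned scores is the accuracy on one held-out fold; reporting their mean and standard deviation yields figures of the form shown in Table 1.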

5.2 Setup

Experiments are performed on a Raspberry Pi 4 Model B with a four-core CPU running a 64-bit Linux OS. The implementations were written in the C++ and Python programming languages: C++ for the local- and global-feature extraction algorithms, and Python for the image classifier. We used the standard OpenMP threading library [24] for the CPU multi-core implementation. The reported results are the average of running each experiment three times. Fig 5 shows the result of the proposed system on the Raspberry Pi 4 Model B, where the input image is classified as a chest X-ray of a COVID-19 patient.
Fig 5

Proposed system on Raspberry Pi 4 Model B.

The MFrLFM radial and repeating kernel order is set to 31. Thus, the number of global features is 961. The number of local features is 59. The total number of utilized features is 1,020. After the SFS feature selection method has been applied, the chest X-ray image classifier and CT scan classifier are trained on 26 global plus 15 local features (i.e., 41 features).

5.3 Results

Table 1 lists three metrics used to evaluate the proposed methods: the accuracy, AUC, and F1-score. The proposed method achieves results comparable to other deep learning methods in terms of these metrics. Table 2 lists the required memory and prediction time of the proposed trained classifiers. Table 3 lists the memory requirements of the proposed method and the state-of-the-art models. As listed in Table 3, the proposed system has the smallest memory requirements and prediction time in comparison with the other deep-learning-based methods.
Table 1

Accuracy of the proposed models.

Data          Accuracy       F1-score       AUC
Chest X-ray   99.3 ± 0.2%    93.1 ± 0.2%    94.9 ± 0.1%
CT scans      93.2 ± 0.3%    92.1 ± 0.3%    93.2 ± 0.3%
Table 2

Required memory and the prediction time in seconds of the two proposed models on Raspberry Pi.

Data          Memory    Prediction time (s)
Chest X-ray   3 MB      10
CT scans      3 MB      10
Table 3

The required memory of the proposed model and state-of-the-art models on Raspberry Pi.

Method                 Memory
The proposed system    1 MB
VGG-19 [8]             1,406 MB
MobileNet v2 [9]       13 MB
Inception-v3 [10]      232 MB
ResNet-50 [48]         101 MB
To evaluate the ROC AUC values, Fig 6 depicts the receiver operating characteristic (ROC) curve of the X-ray classifier, and Fig 7 depicts the ROC curve of the CT scan classifier. Figs 6 and 7 show the efficiency of the proposed system in classifying the input chest X-ray or CT scan as COVID-19 or another disease.
Fig 6

ROC curve of the proposed X-ray image classifier model.

Fig 7

ROC curve of the proposed CT scan classifier model.

Finally, the precision, recall, and confusion matrices are examined for the two proposed models. Figs 8 and 9 show the precision-recall scores of the proposed two models. Besides, the confusion matrices are depicted in Figs 10 and 11 for the two proposed classifiers.
Fig 8

Precision-recall score curve of the proposed X-ray image classifier model.

Fig 9

Precision-recall score curve of the proposed CT scan classifier model.

Fig 10

Confusion matrix of the proposed X-ray classifier model.

Fig 11

Confusion matrix of the proposed CT scan images classifier model.

The results above outline the memory requirements gap between the proposed classifiers and the state-of-the-art methods; the proposed models require two to three orders of magnitude less memory than the existing methods, as listed in Table 3. Thus, the main goal of this research is achieved: a small (i.e., 3 MB) model for COVID-19 diagnosis can fit on a low-memory embedded system. In addition, the proposed models maintain the accuracy rates of state-of-the-art methods.

6 Conclusion

In this work, we proposed two low-cost image classifier models that can operate on a Linux embedded system, i.e., the Raspberry Pi, to automatically detect COVID-19 cases in two types of imagery data, namely, chest X-ray and CT scan images. To our knowledge, this is the first work to achieve this task. The proposed system consists of several steps. First, the proposed methods extract the local features using the LBP and the global features using the MFrLFMs from the input image. Second, the combined local and global features represent the final features of the input chest X-ray or CT scan image. Finally, a classifier is trained to distinguish COVID-19 cases from the chest X-ray or CT scan images of other lung diseases. The proposed classification models require the smallest amount of memory (approximately 3 MB), which makes them suitable for computationally limited hardware. The two proposed models are evaluated on a chest X-ray dataset of 1,926 images and a CT scan dataset of 2,482 images; each dataset has two classes (i.e., COVID-19 and other lung diseases). The proposed system achieves scores comparable with state-of-the-art methods on the evaluation metrics, while its computational and memory requirements are less than those of state-of-the-art DL-based methods by two to three orders of magnitude. As future work, the proposed system can be extended to classify more lung diseases by building a multi-class classifier and utilizing a proper dataset.
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: No ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: No Reviewer #2: Yes Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: As far as one can see, the experimental work seems to have been carried out properly and the results are carefully presented, both in tables and in graphs, to compare this and earlier methods. However, the paper has the following concerns: 1. I don't understand exactly why the authors are focusing on a Linux-embedded system, i.e., Raspberry Pi, to visually detect COVID-19 in chest X-ray and CT scans. Irrespective of the platform, ultimately we can think of prediction accuracy in a platform-independent manner for a better perspective. 2. Can the authors show the impacts of various proposed features to establish the proposal? 3. The training conditions when comparing with other methods should be discussed more specifically. The paper lacks this part. 4. Do the authors take different combinations between training and testing samples in the experiments to show the merit of the proposed method? Proper justification should be added for better clarity. 5.
The title of the paper is very specific. I'll suggest modifying it to make it more generic using the term "directional codes". 6. The title of the paper is very specific. I'll suggest modifying it to make it more generic using the term "directional codes". 7. Whilst the English is generally quite good, there are quite a few minor grammatical errors, and a careful read through is needed to eliminate these. 8. More experimental analysis needs to be done for better clarity for the end reader. 9. The objective and motivation need to be addressed properly in the abstract section. The current form of the abstract will not be accepted by the research community. 10. The major contribution of the paper should be highlighted immediately after the literature review. Hope the above points will be beneficial to the authors for improving the paper. Reviewer #2: This paper designs a Raspberry Pi Linux embedded system for COVID-19 diagnosis from CT scans and chest X-ray images. The cost of this system is the smallest among the deep learning-based models. My major concerns are listed below: - The introduction section should discuss some recent state-of-the-art models so that the readers can know more about the new techniques in this field. There is some literature that may be useful to further improve the quality of this section.
[ref1] Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid-19, IEEE Reviews in Biomedical Engineering, 2020; [ref2] Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images, IEEE Transactions on Medical Imaging, 2020; [ref3] Joint prediction and time estimation of COVID-19 developing severe symptoms using chest CT scan, MIA, 2021; [ref4] Severity assessment of COVID-19 using CT image features and laboratory indices, Physics in Medicine & Biology, 2020; [ref5] Adaptive feature selection guided deep forest for covid-19 classification with chest ct, JBHI, 2020; [ref6] JCS: An Explainable COVID-19 Diagnosis System by Joint Classification and Segmentation, TIP, 2021; - As seen in Fig. 3, the proposed system belongs to the multi-modality systems; thus, the authors should discuss these works more. Reviewer #3: This work presents a novel approach for rapid, on-device COVID-19 detection using Raspberry Pi. Despite the plethora of works on this topic, this one clearly stands out due to the low computational requirements and the Raspberry Pi deployment. The experimental section is a bit short, but the results are convincing. I recommend acceptance, although a few things should be addressed: Main points: - Is this the first scientific report of using Raspberry Pi to diagnose COVID from medical images? If yes, please state so and if not, please cite relevant work. I quickly searched but couldn’t find anything very similar. I also recommend discussing the work by [2] since it also deals with on-device inference. - Can you please comment on the overall time and space complexity? The elaboration about complexity in MFrLFMs is great (although there, at least a comment on space complexity would also be useful). It would be great to see a similar elaboration on the LBP method.
I think if the entire pipeline (Fig 3) can be expressed in terms of computational complexity in O-notation, this could be a major finding and contribution that might even be worth mentioning in the abstract. Also, the time complexity could be briefly compared with a standard MLP/CNN to solidify how important this contribution is. - You performed a binary classification. Although it’s questionable how practically relevant this is, it’s beyond the scope of this work. But could you please briefly comment on whether/how this could be extended to a multi-class classification. - Table 1: I understand that these results are based on three repeated runs. Could you please at least show the performance with one more digit of precision and also indicate the standard deviation. Ideally, a cross validation should be performed to strengthen this finding. - Please have a look at what “data availability” for this journal means. I understand the imaging data is public, but the requirements state, e.g.: “For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available.” I therefore had to click “No” at one of the questions. Line 7 - 32: Correct me if I’m wrong, but this seems to be a random excerpt of the myriad of publications about COVID-19 detection with DL. I think these lines can be removed and replaced by a few generic sentences about the efforts in the field. There were hundreds of publications on this in 2020. Have a look at one of the many review papers, e.g. here is one with a meta-analysis [1]. Line 44: What’s the application? Please fix the end of the sentence. Line 77: That’s a self-referencing definition. Please try to explain on a bit higher level (i.e., what’s the purpose of the code?) The explanation below is good, but this line is confusing. L78: typo: windows. L187: bad cross-ref. L188: space missing after equation. L210: 212 what? apples?
:D Fig 4: Do you have permission from the copyright holders to print these images? Please double-check and cite. L214: I have trouble understanding why this is imbalanced. The CXR dataset has 212 COVID-19 samples and 212 non-COVID samples (from the 8 diseases), no? Please clarify. [1] Born, J, et al. "On the Role of Artificial Intelligence in Medical Imaging of COVID-19." medRxiv (2020). [2] Li, Xin, Chengyin Li, and Dongxiao Zhu. "COVID-MobileXpert: On-device COVID-19 patient triage and follow-up using chest X-rays." 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2020. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No Reviewer #3: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org.
Please note that Supporting Information files do not need this step.

30 Mar 2021
PLOS ONE REPLY TO COMMENTS
Ref. No.: PONE-D-21-04444
Paper Title: COVID-19 Diagnosis from CT Scans and Chest X-ray Images using Low-cost Raspberry Pi.

Dear Editors and Reviewers, Thank you for your valuable comments and feedback on our paper, which helped us improve its presentation and quality. We have carefully addressed all of your comments in the revised manuscript. We hope that you will be satisfied with our responses. Sincerely, The authors. Reviewer #1 Comments Comment: As far as one can see, the experimental work seems to have been carried out properly and the results are carefully presented, both in tables and in graphs, to compare this and earlier methods. However, the paper has the following concerns: Response: Thank you for this valuable comment. Comment 1: I don't understand exactly why the authors are focusing on a Linux-embedded system, i.e., Raspberry Pi, to visually detect COVID-19 in chest X-ray and CT scans. Irrespective of the platform, ultimately we can think of prediction accuracy in a platform-independent manner for a better perspective. Response 1: The authors are thankful to the reviewer for pointing out this issue. The proposed system is suitable for any computing device with the required environments (i.e., C++ and Python). The authors focused on validating their method on a Raspberry Pi embedded system module because Raspberry Pi modules are cheap and portable embedded systems; thus, developing countries can utilize the proposed work in remote areas to detect COVID-19 on a low budget. Also, the prediction accuracy is the same across different platforms. In other words, when running the proposed machine learning model on any computing device or embedded system, the obtained accuracy will be the same. The main challenge was to propose a very lightweight (i.e., the proposed classification model is 3 MB) and efficient prediction model to fit embedded systems.
We outlined this issue on page 2, lines 45-48. Comment 2: Can the authors show the impacts of various proposed features to establish the proposal? Response 2: The authors are thankful to the reviewer for pointing out this issue. This paper's main proposed feature is to extract the features from an X-ray or CT scan image and then classify this image under a limited memory space condition. Unlike deep learning methods, the proposed system is deployed using only 3 MB, which is far less than the state-of-the-art methods requiring hundreds of MB. Thus, the proposed work's main impact is that it becomes possible to predict COVID-19 cases from CT and X-ray images on an embedded system with minimal memory. In the revised version of the manuscript, we added the following sentence on page 2, lines 51-53: "The main impact of the proposed classifier model is that it becomes possible to obtain high accuracy rates under a limited memory condition when predicting COVID-19 cases from CT and X-ray images." Comment 3: The training conditions when comparing with other methods should be discussed more specifically. The paper lacks this part. Response 3: The authors apologize for this inconvenience. In the revised manuscript, we discussed this issue in Section 5.1, on page 11, lines 297-302. Comment 4: Do the authors take different combinations between training and testing samples in the experiments to show the merit of the proposed method? Proper justification should be added for better clarity. Response 4: The authors are thankful to the reviewer for pointing out this issue. Yes, the authors utilized the cross-validation technique, where the proposed method is trained on five different combinations of datasets. This issue is discussed in Section 5.1 in the revised manuscript, on page 11, lines 297-302. Comment 5: The title of the paper is very specific. I'll suggest modifying it to make it more generic using the term "directional codes".
Response 5: The authors are thankful to the reviewer for pointing out this issue. We prefer to use the same title. The authors emphasize using the Raspberry Pi model due to its widespread usage and low cost. Thus, we believe mentioning the term "Raspberry Pi" in the title should attract readership and ease the task of reproducing the proposed experiments. Comment 6: The title of the paper is very specific. I'll suggest modifying it to make it more generic using the term "directional codes". Response 6: The authors are thankful to the reviewer for pointing out this issue. We prefer to use the same title. The authors emphasize using the Raspberry Pi model due to its widespread usage and low cost. Thus, we believe mentioning the term "Raspberry Pi" in the title should attract readership and ease the task of reproducing the proposed experiments. Comment 7: Whilst the English is generally quite good, there are quite a few minor grammatical errors, and a careful read through is needed to eliminate these. Response 7: The authors are thankful to the reviewer for pointing out this issue. The paper is thoroughly revised, and all the typos and grammatical errors are addressed in the revised manuscript. We attached a certificate of reviewing the manuscript from the professional editing service www.aje.com. Comment 8: More experimental analysis needs to be done for better clarity for the end reader. Response 8: The authors are thankful to the reviewer for pointing out this issue. In the revised manuscript, the authors extended the results (e.g., Figs. 10 and 11 were added to depict the proposed classifiers' confusion matrices). The obtained results are discussed in more detail; please refer to Section 5.3, on page 12, lines 330-339 and pages 15 and 16. Comment 9: The objective and motivation need to be addressed properly in the abstract section. The current form of the abstract will not be accepted by the research community. Response 9: The authors are thankful to the reviewer for pointing out this issue.
The abstract section is rewritten as suggested. Comment 10: The major contribution of the paper should be highlighted immediately after the literature review. Response 10: The authors are thankful to the reviewer for pointing out this issue. As suggested, the revised manuscript includes the major contributions; please refer to page 2, lines 43-57. Comment: Hope the above points will be beneficial to the authors for improving the paper. Response: The authors are thankful to the reviewer for pointing out these issues. All of the points were very helpful and helped the authors to enhance the paper's quality. Reviewer #2 Comments Comment: This paper designs a Raspberry Pi Linux embedded system for COVID-19 diagnosis from CT scans and chest X-ray images. The cost of this system is the smallest among the deep learning-based models. My major concerns are listed below: Response: The authors are thankful to reviewer #2. Comment 1: The introduction section should discuss some recent state-of-the-art models so that the readers can know more about the new techniques in this field. There is some literature that may be useful to further improve the quality of this section.
[ref1] Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid-19, IEEE Reviews in Biomedical Engineering, 2020; [ref2] Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images, IEEE Transactions on Medical Imaging, 2020; [ref3] Joint prediction and time estimation of COVID-19 developing severe symptoms using chest CT scan, MIA, 2021; [ref4] Severity assessment of COVID-19 using CT image features and laboratory indices, Physics in Medicine & Biology, 2020; [ref5] Adaptive feature selection guided deep forest for covid-19 classification with chest ct, JBHI, 2020; [ref6] JCS: An Explainable COVID-19 Diagnosis System by Joint Classification and Segmentation, TIP, 2021; Response 1: The authors are thankful to the reviewer for pointing out these remarkable articles. As suggested, we added a new section (Section 3) for the literature review, where we discussed the suggested related articles on pages 6 and 7. Comment 2: As seen in Fig. 3, the proposed system belongs to the multi-modality systems; thus, the authors should discuss these works more. Response 2: The authors are thankful to the reviewer for pointing out this issue. We apologize for not discussing this issue clearly. The proposed system consists of two separate classifier models. Thus, there is no multi-modality in the proposed system. The user will input a chest X-ray image to the X-ray classifier or a CT scan image to the CT image classifier. Both models are designed the same way but separately. We have modified Fig. 3 to illustrate this idea. Besides, we mentioned that the two models are separate in the list of contributions in the Introduction section, on page 2, lines 46-48, and on page 10, lines 283-284. Reviewer #3 Comments Comment: This work presents a novel approach for rapid, on-device COVID-19 detection using Raspberry Pi.
Despite the plethora of works on this topic, this one clearly stands out due to the low computational requirements and the Raspberry Pi deployment. The experimental section is a bit short, but the results are convincing. I recommend acceptance, although a few things should be addressed: Response: The authors are thankful to reviewer #3. Comment 1: Is this the first scientific report of using Raspberry Pi to diagnose COVID from medical images? If yes, please state so and if not, please cite relevant work. I quickly searched but couldn't find anything very similar. I also recommend discussing the work by [2] since it also deals with on-device inference. Response 1: The authors are thankful to the reviewer for pointing out this issue. Yes, the proposed work is the first work to use Raspberry Pi to diagnose COVID-19. As suggested, we stated that in both the Introduction and Conclusion sections. Besides, we discussed ref [2] in the revised manuscript on page 2, lines 43-48, and on page 12, line 343, respectively. Besides, we discussed Ref [2] on page 6, lines 165-170. Comment 2: Can you please comment on the overall time and space complexity? The elaboration about complexity in MFrLFMs is great (although there, at least a comment on space complexity would also be useful). It would be great to see a similar elaboration on the LBP method. I think if the entire pipeline (Fig 3) can be expressed in terms of computational complexity in O-notation, this could be a major finding and contribution that might even be worth mentioning in the abstract. Also, the time complexity could be briefly compared with a standard MLP/CNN to solidify how important this contribution is. Response 2: The authors are thankful to the reviewer for pointing out this issue. As suggested, the time and space complexities of the proposed work are discussed in detail at the end of Section 4.2, on pages 9-10, lines 238-260. Fig.
3 has two main compute-intensive tasks, which are local and global feature extraction. Thus, the time complexity and space complexity of Fig. 3 can be reduced to the sum of these two tasks. Regarding the time complexity of the MLP/CNN, several factors control this process, including the number of layers and the number of neurons per layer. Besides, the values of the model hyperparameters, such as early stopping, can dramatically change the time complexity of the MLP/CNN model. Thus, it would be challenging to compute the existing MLP/CNN models' exact time complexity. For space complexity, Table 3 in the revised manuscript shows each model's space requirements, which reflects the space complexity of the proposed method and the state-of-the-art methods. Comment 3: You performed a binary classification. Although it's questionable how practically relevant this is, it's beyond the scope of this work. But could you please briefly comment on whether/how this could be extended to a multi-class classification. Response 3: The authors are thankful to the reviewer for pointing out this issue. In the revised manuscript, we discussed this issue in the Conclusion section as future work, on page 16, lines 356-358. Comment 4: Table 1: I understand that these results are based on three repeated runs. Could you please at least show the performance with one more digit of precision and also indicate the standard deviation. Ideally, a cross validation should be performed to strengthen this finding. Response 4: The authors are thankful to the reviewer for pointing out this issue. As suggested, the required standard deviation is included in Table 1. Besides, the authors already performed cross-validation. While this was not mentioned in the initial submission, it is discussed in the revised manuscript. Comment 5: Please have a look at what "data availability" for this journal means.
I understand the imaging data is public, but the requirements state, e.g.: "For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available." I therefore had to click "No" at one of the questions. Response 5: The authors are thankful to the reviewer for pointing out this issue. Comment 6: Line 7 - 32: Correct me if I'm wrong, but this seems to be a random excerpt of the myriad of publications about COVID-19 detection with DL. I think these lines can be removed and replaced by a few generic sentences about the efforts in the field. There were hundreds of publications on this in 2020. Have a look at one of the many review papers, e.g. here is one with a meta-analysis [1]. Response 6: The authors are thankful to the reviewer for pointing out this issue. As suggested, the authors summarized the literature work in a few generic sentences. A new section (Section 3) for the literature review is added on pages 6 and 7. Comment 7: Line 44: What's the application? Please fix the end of the sentence. Line 77: That's a self-referencing definition. Please try to explain on a bit higher level (i.e., what's the purpose of the code?) The explanation below is good, but this line is confusing. L78: typo: windows. L187: bad cross-ref. L188: space missing after equation. L210: 212 what? apples? :D Response 7: The authors are thankful to the reviewer for pointing out all of these typos. We apologize for these typos. All of the typos are corrected. Comment 8: Fig 4: Do you have permission from the copyright holders to print these images? Please double-check and cite. Response 8: The authors are thankful to the reviewer for pointing out this issue. We made sure that the dataset is public and cited the dataset in the figure caption. Comment 9: L214: I have trouble understanding why this is imbalanced. The CXR dataset has 212 COVID-19 samples and 212 non-COVID samples (from the 8 diseases), no? Please clarify.
Response 9: We apologize for not writing this part clearly. The non-COVID-19 class contains 212 * 8 = 1,696 images. On the other hand, the COVID-19 class has only 212 images. Thus, the two classes are unbalanced. We rewrote the description of the unbalanced dataset (L214) in a better manner on pages 10-11, lines 287-293. [1] Born, J, et al. "On the Role of Artificial Intelligence in Medical Imaging of COVID-19." medRxiv (2020). [2] Li, Xin, Chengyin Li, and Dongxiao Zhu. "COVID-MobileXpert: On-device COVID-19 patient triage and follow-up using chest X-rays." 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2020. Submitted filename: response letter.pdf Click here for additional data file.

13 Apr 2021
COVID-19 Diagnosis from CT Scans and Chest X-ray Images using Low-cost Raspberry Pi
PONE-D-21-04444R1

Dear Dr. Hosny, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance.
Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Gulistan Raja Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #2: All comments have been addressed Reviewer #3: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2: Yes Reviewer #3: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #2: Yes Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. 
participant privacy or use of data from a third party—those must be specified. Reviewer #2: No Reviewer #3: No ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #2: Yes Reviewer #3: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #2: The authors have adequately addressed my concerns, and I recommend accepting the new version of the manuscript. Reviewer #3: Thanks for addressing the concerns sufficiently. Congratulations on the interesting and valuable project. Here are my last thoughts: - Figs 6-11 are really big and quite similar. I strongly recommend aggregating all of them into a single figure with subfigures. - More of a recommendation: Since this is the first report of a Raspberry Pi system for COVID detection from CXR, I would emphasize this in the abstract also. - The data underlying the findings have to be made available (see guidelines). ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #2: No Reviewer #3: Yes: Jannis Born 30 Apr 2021 PONE-D-21-04444R1 COVID-19 Diagnosis from CT Scans and Chest X-ray Images using Low-cost Raspberry Pi Dear Dr. Hosny: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Gulistan Raja Academic Editor PLOS ONE
References: 18 in total (first 10 shown)

1. (Review)  Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19.

Authors:  Feng Shi; Jun Wang; Jun Shi; Ziyan Wu; Qian Wang; Zhenyu Tang; Kelei He; Yinghuan Shi; Dinggang Shen
Journal:  IEEE Rev Biomed Eng       Date:  2021-01-22

2.  Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images.

Authors:  Deng-Ping Fan; Tao Zhou; Ge-Peng Ji; Yi Zhou; Geng Chen; Huazhu Fu; Jianbing Shen; Ling Shao
Journal:  IEEE Trans Med Imaging       Date:  2020-08       Impact factor: 10.048

3.  Adaptive Feature Selection Guided Deep Forest for COVID-19 Classification With Chest CT.

Authors:  Liang Sun; Zhanhao Mo; Fuhua Yan; Liming Xia; Fei Shan; Zhongxiang Ding; Bin Song; Wanchun Gao; Wei Shao; Feng Shi; Huan Yuan; Huiting Jiang; Dijia Wu; Ying Wei; Yaozong Gao; He Sui; Daoqiang Zhang; Dinggang Shen
Journal:  IEEE J Biomed Health Inform       Date:  2020-08-26       Impact factor: 5.772

4.  Automated detection of COVID-19 cases using deep neural networks with X-ray images.

Authors:  Tulin Ozturk; Muhammed Talo; Eylul Azra Yildirim; Ulas Baran Baloglu; Ozal Yildirim; U Rajendra Acharya
Journal:  Comput Biol Med       Date:  2020-04-28       Impact factor: 4.589

5.  Clinical Characteristics of Coronavirus Disease 2019 in China.

Authors:  Wei-Jie Guan; Zheng-Yi Ni; Yu Hu; Wen-Hua Liang; Chun-Quan Ou; Jian-Xing He; Lei Liu; Hong Shan; Chun-Liang Lei; David S C Hui; Bin Du; Lan-Juan Li; Guang Zeng; Kwok-Yung Yuen; Ru-Chong Chen; Chun-Li Tang; Tao Wang; Ping-Yan Chen; Jie Xiang; Shi-Yue Li; Jin-Lin Wang; Zi-Jing Liang; Yi-Xiang Peng; Li Wei; Yong Liu; Ya-Hua Hu; Peng Peng; Jian-Ming Wang; Ji-Yang Liu; Zhong Chen; Gang Li; Zhi-Jian Zheng; Shao-Qin Qiu; Jie Luo; Chang-Jiang Ye; Shao-Yong Zhu; Nan-Shan Zhong
Journal:  N Engl J Med       Date:  2020-02-28       Impact factor: 91.245

6.  Early diagnosis of COVID-19-affected patients based on X-ray and computed tomography images using deep learning algorithm.

Authors:  Debabrata Dansana; Raghvendra Kumar; Aishik Bhattacharjee; D Jude Hemanth; Deepak Gupta; Ashish Khanna; Oscar Castillo
Journal:  Soft comput       Date:  2020-08-28       Impact factor: 3.732

7.  COV19-CNNet and COV19-ResNet: Diagnostic Inference Engines for Early Detection of COVID-19.

Authors:  Ayturk Keles; Mustafa Berk Keles; Ali Keles
Journal:  Cognit Comput       Date:  2021-01-06       Impact factor: 4.890

8.  Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks.

Authors:  Ioannis D Apostolopoulos; Tzani A Mpesiana
Journal:  Phys Eng Sci Med       Date:  2020-04-03

9.  Whether the weather will help us weather the COVID-19 pandemic: Using machine learning to measure twitter users' perceptions.

Authors:  Marichi Gupta; Aditya Bansal; Bhav Jain; Jillian Rochelle; Atharv Oak; Mohammad S Jalali
Journal:  Int J Med Inform       Date:  2020-11-10       Impact factor: 4.046

10.  Joint prediction and time estimation of COVID-19 developing severe symptoms using chest CT scan.

Authors:  Xiaofeng Zhu; Bin Song; Feng Shi; Yanbo Chen; Rongyao Hu; Jiangzhang Gan; Wenhai Zhang; Man Li; Liye Wang; Yaozong Gao; Fei Shan; Dinggang Shen
Journal:  Med Image Anal       Date:  2020-10-10       Impact factor: 8.545

Cited by: 4 in total

1.  CXGNet: A tri-phase chest X-ray image classification for COVID-19 diagnosis using deep CNN with enhanced grey-wolf optimizer.

Authors:  Anandbabu Gopatoti; P Vijayalakshmi
Journal:  Biomed Signal Process Control       Date:  2022-06-06       Impact factor: 5.076

2.  A deep and handcrafted features-based framework for diagnosis of COVID-19 from chest x-ray images.

Authors:  Ferhat Bozkurt
Journal:  Concurr Comput       Date:  2021-11-19       Impact factor: 1.831

3.  Marine Data Prediction: An Evaluation of Machine Learning, Deep Learning, and Statistical Predictive Models.

Authors:  Ahmed Ali; Ahmed Fathalla; Ahmad Salah; Mahmoud Bekhit; Esraa Eldesouky
Journal:  Comput Intell Neurosci       Date:  2021-11-27

4. (Review)  Role of Artificial Intelligence in COVID-19 Detection.

Authors:  Anjan Gudigar; U Raghavendra; Sneha Nayak; Chui Ping Ooi; Wai Yee Chan; Mokshagna Rohit Gangavarapu; Chinmay Dharmik; Jyothi Samanth; Nahrizul Adib Kadri; Khairunnisa Hasikin; Prabal Datta Barua; Subrata Chakraborty; Edward J Ciaccio; U Rajendra Acharya
Journal:  Sensors (Basel)       Date:  2021-12-01       Impact factor: 3.576

