
An Improved Machine Learning Model for Diagnostic Cancer Recognition Using Artificial Intelligence.

N Arivazhagan1, J Venkatesh2, K Somasundaram3, K Vijayalakshmi4, S Sathiya Priya5, M Suresh Thangakrishnan6, K Senthamilselvan7, B Lakshmi Dhevi8, D Vijendra Babu9, S Chandragandhi10, Fekadu Ashine Chamato11.   

Abstract

In the medical field, specialized applications are currently being used to treat various ailments, and these activities are carried out with particular care for cancer patients. Physicians are seeking the help of technology to diagnose cancer, determine the required treatment dosage, assess its current status, classify the cancer, and select appropriate treatment. A machine learning method based on artificial intelligence is proposed here to assist doctors effectively in this regard. Its design accepts highly complex cancerous image inputs and clearly describes the tumor type and the required dosage, and it also recommends the likely effects of the cancer and appropriate medical procedures to the doctors, saving a substantial amount of physicians' time. At the saturation point, the proposed model achieved 93.31% image recognition, 6.69% image rejection, 94.22% accuracy, 92.42% precision, 93.94% recall, a 92.6% F1-score, and a recognition duration of 2178 ms. This shows that the proposed model performs well when compared with the existing methods.
Copyright © 2022 N. Arivazhagan et al.


Year:  2022        PMID: 35845582      PMCID: PMC9283038          DOI: 10.1155/2022/1078056

Source DB:  PubMed          Journal:  Evid Based Complement Alternat Med        ISSN: 1741-427X            Impact factor:   2.650


1. Introduction

There are many complex and unsolved problems in the medical world today, and the treatment of certain diseases is delayed because of low accuracy from diagnosis through to computation. Cancerous tumors are currently the most important of these diseases. Statistics warn that about 800,000 (8 lakh) people are newly diagnosed with cancer every year in India alone [1]. If a small tumor appears on the body, the suspicion that it is cancer haunts the mind. Many factors, such as a changing lifestyle, a Western diet, smoking, alcohol consumption, obesity, pesticide exposure, and heredity, cause high blood pressure, diabetes, heart attack, and cancer, of which cancer is the most significant. Cancer is a condition in which cells in the body grow out of control. It initially develops invisibly and can grow abnormally over time, endangering life [2]. All cancers other than leukemia commonly develop into tumors. Cancerous tumors grow in the mouth, nose, throat, stomach, esophagus, intestines, liver, lungs, cervix, testicles, brain, and blood, and skin cancer is no exception. A cancerous tumor affects not only the organ in which it arises but also other organs, impairing the overall function of the body. Cancer does not kill in its first few days; it grows over the years, manifests in many warning symptoms, and only then becomes dangerous. By then, we can still escape its grip if we stay alert. The main cause of cancer is smoking [3]. Toxic substances in tobacco, such as polycyclic aromatic hydrocarbons, tar, nicotine, carbon monoxide, ammonia, and phenol, bind to body cells and cause genetic modification [4]. The cells then undergo excessive growth, which leads to cancer. If any foreign substance persists in the body for years, it will affect the part of the body in which it resides [5]. Toxins in tobacco can cause cancer of the mouth, tongue, jaw, throat, and esophagus, and alcohol can cause cancer of the liver, stomach, intestines, and rectum.
People who eat low-fiber foods are more likely to get colon cancer [2]. Synthetic dyes, fragrances, and sweeteners are added to restaurant dishes to attract the eye and enhance the taste [6-8]. The chemicals aniline, oxime, and amide in them alter the properties of our genes and promote the formation of cancer [9]. Excessive exposure to ultraviolet rays from sunlight can cause skin cancer, and X-rays and radiation can cause leukemia and skin cancer [10]. Chemicals used in agriculture can also lead to cancer. Workers handling metals such as nickel, lead, brass, iron, and aluminum, materials such as acid, paint, dye, and rubber, and chemicals such as benzene, arsenic, cadmium, and chromium can develop cancer of the skin, lungs, and larynx [11]. Different types of cancer exhibit different behaviors; for example, lung cancer and skin cancer are two very different diseases [12]. They develop at different rates and respond to different treatments, which is why people with cancer need treatment that targets their particular type of cancer. A tumor is an abnormal accumulation or volume of cells [13]. However, not all tumors are cancerous. Noncancerous tumors are called benign. Benign tumors can still cause problems: they can grow very large and compress healthy organs and tissues [14], but they cannot invade other tissues or spread to other parts of the body, and they are rarely life-threatening [15]. Sascan's multispectral camera helps to screen for and detect cancer cells in the mouth. It is a real-time solution that does not require a tissue puncture. The camera captures the inside of the mouth under different wavelengths of light and then uses a machine learning algorithm to study the abnormal condition and predict the stage of the cancer [16]. The device also guides specialists to the right tissue for a biopsy.
Screening of this kind reduces the risk of misdiagnosis and ensures early detection, so the onset of the disease can be predicted [17]. This battery-powered portable device can be used by primary health care centers or nonprofit organizations that run screening camps. Sascan conducts clinical studies in various areas to gather the vast amount of data needed to further refine its algorithm, and the technology can also be used to screen for other types of cancer. The biggest challenge in diagnosing cancer is that the results of a study are not always final and conclusive [18]. Phase II and III consultations are therefore also required before an accurate diagnosis can be made and treatment initiated. For a cancer that spreads rapidly, even a two-week delay can make treatment expensive. ExoCan's technology-based testing can help diagnose the disease by examining a patient's blood, saliva, or urine. This method of analysis costs less than conventional methods, and results are available within a couple of days. The test, which is currently being refined, will soon be deployed on a large scale; its diagnostic ability and speed are better than those of conventional tests. ExoCan currently collects and analyzes samples from 500 patients a day. The use of exosomes in fluid-based biopsies, in which no punctures are required to diagnose cancer, is new but growing [19]. Fewer than five companies worldwide work on it, but the technology has become more and more popular. Exosome Diagnostics, which operates in this segment, was acquired by Bio-Techne Corporation for $250 million. ExoCan relies on government subsidies and on revenue from selling a portion of its technology to R&D customers, and it is working to introduce its research at large scale and to raise investment to grow to the next level. ExoCan's test does not require a large-scale setup, complicated instrumentation, or a medical expert.
It can therefore be used easily in small laboratories in remote areas, which simplifies the process of diagnosing cancer and makes it cheaper. Theranosis depends on a type of liquid biopsy that detects live cancer cells in the bloodstream. Its innovative technology captures circulating tumor cells with an innovatively designed chip whose structure mimics real blood flow, so abnormal cells that differ from normal blood cells can be easily separated and examined. The data captured by an ultrasonic microscope camera are then analyzed with an artificial intelligence-based algorithm. This allows physicians to identify patients who are suitable for specific treatments and immunotherapy. Immunotherapy is new in the treatment of cancer; the drug has recently been approved by the US FDA and is also available in India. Theranosis has completed in-house experimental studies with its prototype, which will support its plans for large-scale clinical validation; the next step is to bring its solutions to major cancer hospitals within a year. Researchers have also been given information on how to retrieve data across the health sector while performing several diagnostic approaches [20-22], as well as how to ensure total retrievability. The visual image frames are segmented into different modules within a visual element, and these elements are analyzed using edge-based boundary detection. The boundary modules help to identify the location and size of the tumor [23]. The remaining sections are organized as earlier study, proposed method, results and discussion, and conclusion.

2. Related Works

Khan et al. [1] discussed various machine learning techniques for identifying cancerous tumors. Over the past few years, techniques built on artificial intelligence have led to many advances in the medical field; machine learning methods have a wide range of applications, from diagnosis to classification, and help address complex problems such as cancer. Prabukumar et al. [2] proposed a parallel, improved algorithm designed in a modern way; its basis was the accuracy of the images in its entry-level sequence, used to define the boundaries of a lung tumor, and the algorithm achieved 96.5% accuracy. He et al. [3] introduced computer-assisted detection (CADe) and computer-assisted diagnosis (CADx) algorithms to find cancerous tumors. To diagnose cancerous tumors through medical methods, the imaging methods must be explained well, so they used computer-aided detection techniques; the various result steps generated by such applications make it easier for clinicians to make the right decisions. Ayadi et al. [4] introduced a computer-aided design method based on a convolutional neural network that assesses brain function. The model used an 18-layer CNN, and its classification reached an accuracy of 83.06%. Yaqub et al. [5] discussed a state-of-the-art CNN optimizer for brain tumor images. Many groups have since improved machine learning systems for studying brain function; such improvements could lead to clearer conclusions in brain tumor classification, and the authors continually explored the ongoing technological advances. Prabukumar et al. [2] also developed a method for classifying lung tumors based on a carefully developed hybrid algorithm, in which the different blocks of a complex tumor were analyzed and its types were separated.
For this, they used the fuzzy C-means (FCM) measurement method. The geometric structure of the tumor and the complex properties of its location were thus accurately calculated, and the accuracy was 98.5%. Mzoughi et al. [6] examined samples of brain tumors using a deeply designed artificial intelligence system. This method used 3D MRI images and volumetric operations, determining the size and type of brain tumor from the images obtained; the determination method achieved 85.48% classification accuracy. Kong et al. [7] further simplified the computation of tumors. Evolving technologies are increasingly making it easier to compute and classify tumors, and the rise of IoT-based achievements has created a major industrial revolution in this modern age, making health care structures even more capable. Narmatha et al. [8] developed a hybrid fuzzy brain-storm optimization algorithm in which MRI scan images were classified by brain tumor. The improved methods accurately calculated the location and shape of tumors based on brain function and its measurements, with a tumor accuracy of 94.21%.

3. Proposed Methodology

The proposed machine learning-based cancer detection (MLCD) method provides better results. Its characteristics have been enhanced so that the accuracy of the proposed efficient image analysis method is much higher. The first step is the basic classification of data for the convolution modules: the captured image, which is initially a large volume, is divided into square groups before its enhancement processes. The convolution functions are illustrated in Figure 1. The convolution image module is designed to enhance image characteristics based on the inputs given first, and the categorical analysis methods are integrated with the kernel image module; that is, the convolution and kernel modules are ready to receive each new image module that arrives with the input. The convolution kernel module then works to fix some of the pixel errors in the resulting image blocks. The classification analysis results of this process are illustrated in Figure 2, and the proposed model is shown in Figure 3.
Figure 1

Modified convolution layer.

Figure 2

Complication tumor-level identification.

Figure 3

Building the proposed model.
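The division of the captured image into square groups before the convolution step can be sketched as follows. This is only an illustrative NumPy fragment under assumed block and kernel sizes, not the authors' implementation:

```python
import numpy as np

def split_into_blocks(image, block_size):
    """Split a 2-D image into non-overlapping square blocks (edge remainders dropped)."""
    h, w = image.shape
    h -= h % block_size
    w -= w % block_size
    blocks = (image[:h, :w]
              .reshape(h // block_size, block_size, w // block_size, block_size)
              .swapaxes(1, 2))
    return blocks  # shape: (rows, cols, block_size, block_size)

def convolve_block(block, kernel):
    """'Valid' 2-D convolution of one block with a small kernel."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]  # convolution flips the kernel
    out = np.zeros((block.shape[0] - kh + 1, block.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(block[i:i + kh, j:j + kw] * flipped)
    return out

# Toy 8x8 "image" and an assumed example kernel
image = np.arange(64, dtype=float).reshape(8, 8)
edge_kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
blocks = split_into_blocks(image, 4)
responses = [convolve_block(b, edge_kernel) for row in blocks for b in row]
```

Each square block can then be enhanced or error-corrected independently, in the spirit of the convolution kernel module described above.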

The proposed algorithm designed for automation consists of the following three modules; its primary description and design modules are shown in Figure 3. The clustering objective is

I = Σ (a = 1 to x) Σ (b = 1 to y) || c_a^(b) − d_a ||²,  (1)

where I is the point utility, x is the number of image clusters, y is the number of image blocks, c_a^(b) is the bth case of the ath image cluster, and d_a is the centroid of the ath image cluster. The various cluster head connections are connected to the various image blocks. The distance used in the K-means clustering calculation is the Euclidean distance

d(x, y) = || x − y || = √( Σ_i (x_i − y_i)² ),  (2)

where x and y are the vector values being compared. Based on this, the proposed algorithm classifies according to

f(a | p) ∝ f(a) f(p | a),  (3)

where f(a) and f(p) are the prior probabilities of the group and the forecaster, respectively. The training module and its flow graph are presented in Figure 4, and the validation and testing phase is shown in Figure 5.
Figure 4

Machine learning-based training module.

Figure 5

Image validation and reconstruction.
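Equations (1) and (2) describe the standard K-means objective and the Euclidean distance it uses. A minimal NumPy sketch of that clustering step follows; the data layout and cluster count are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Assign each image-block vector to its nearest centroid (Euclidean
    distance, equation (2)) and update centroids to reduce the summed
    squared-distance objective of equation (1)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # distance from every point to every centroid
        dist = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for a in range(k):
            if np.any(labels == a):
                centroids[a] = points[labels == a].mean(axis=0)
    inertia = sum(np.sum((points[labels == a] - centroids[a]) ** 2)
                  for a in range(k))
    return labels, centroids, inertia
```

Here `inertia` corresponds to the point utility I of equation (1): the total squared distance of every block to the centroid of its cluster.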

The primary design obtains modules and forms based on a variety of formats with further modules. Before this, computational methods such as preparation and input image-based design are performed. In this method, the input modules are first separated into distinct rectangular groups, as shown in Figures 4 and 5. Each format has its own pixel blocks that hold data as separate classifications for the creation and upgrade operations.

4. Results and Discussion

The proposed machine learning-based cancer detection (MLCD) method was compared with the existing computer-assisted detection (CADe) algorithm, computer-assisted diagnosis (CADx) algorithm, computer-aided image (CAIS) algorithm, and CNN optimizer algorithm (CNNOA). The following parameters are used to evaluate cancer image detection: image accuracy, input image recognition, input image rejection, image precision, image recall, and image F1-score. Before assessing the quality of these parameters, the following quantities must be defined:

True positive (TP): samples correctly predicted as positive, at or above the calibration level
True negative (TN): samples correctly predicted as negative, below the calibration level
False positive (FP): samples whose exact values are below the calibration level but that are predicted as positive
False negative (FN): samples whose exact values are at the calibration level but that are predicted as negative
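These four counts can be computed directly from paired ground-truth and predicted labels. A minimal plain-Python sketch, where the convention that 1 marks a positive (tumor) sample is an assumption of this example:

```python
def confusion_counts(actual, predicted):
    """Count TP, TN, FP, FN for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return tp, tn, fp, fn

# Illustrative labels only, not data from the study
counts = confusion_counts([1, 1, 1, 0, 0, 0], [1, 0, 1, 1, 0, 0])
```

All of the metrics in Sections 4.3 through 4.6 are derived from these four counts.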

4.1. Measurement of Input Image Recognition

In general, input recognition is the process of effectively managing the excess information in a database. Because of its efficient use, only the segmented data present in the image database are used, and unnecessary segmented data are not allowed to enter [16]. The blocking storage of unsegmented data is thus restricted, and most storage space is handled efficiently when unwanted data are not stored. The recognition rate of a system is the percentage of inputs accepted as valid segmented data,

Recognition = (accepted segmented inputs / I_j) × 100,

where I_j is the total number of input commands entered into the system. Table 1 presents the comparison of input image recognition between the existing CADe, CADx, CAIS, and CNNOA methods and the proposed MLCD.
Table 1

Measurement of input image recognition.

Input image recognition (%)
No. of samples | CADe | CADx | CAIS | CNNOA | MLCD
100 | 74.99 | 78.77 | 75.58 | 81.72 | 95.44
200 | 74.66 | 77.27 | 74.99 | 79.85 | 94.43
300 | 73.32 | 76.16 | 74.01 | 79.02 | 94.27
400 | 72.18 | 75.78 | 72.80 | 78.11 | 93.31
500 | 71.13 | 74.77 | 71.66 | 77.19 | 93.74
600 | 70.42 | 73.84 | 70.55 | 75.86 | 92.54
700 | 69.12 | 72.84 | 69.85 | 74.78 | 92.38

4.2. Measurement of Input Image Rejection

Input image rejection management is the efficient handling of excess data, that is, how quickly the artificial intelligence can act on information and implement it immediately. To the extent that this potential is realized, the results will be correct [17]. Data in excess of what can be handled at the specified time may not be processed at all, so the artificial intelligence management calculates how much data are left over. The efficiency of this method is reflected in how little data remain unexecuted at that particular time. Table 2 presents the comparison of input image rejection between the existing CADe, CADx, CAIS, and CNNOA methods and the proposed MLCD.
Table 2

Measurement of input image rejection.

Input image rejection (%)
No. of samples | CADe | CADx | CAIS | CNNOA | MLCD
100 | 25.01 | 21.23 | 24.42 | 18.28 | 4.56
200 | 25.34 | 22.73 | 25.01 | 20.15 | 5.57
300 | 26.68 | 23.84 | 25.99 | 20.98 | 5.73
400 | 27.82 | 24.22 | 27.20 | 21.89 | 6.69
500 | 28.87 | 25.23 | 28.34 | 22.81 | 6.26
600 | 29.58 | 26.16 | 29.45 | 24.14 | 7.46
700 | 30.88 | 27.16 | 30.15 | 25.22 | 7.62
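Each recognition value in Table 1 and the corresponding rejection value in Table 2 sum to 100%, so the two rates are complements. A small sketch of that relationship, where the raw sample counts are illustrative, not taken from the paper:

```python
def recognition_and_rejection(accepted, total):
    """Recognition rate: percentage of input images accepted as valid
    segmented data; rejection rate is its complement, as in Tables 1 and 2."""
    recognition = 100.0 * accepted / total
    rejection = 100.0 - recognition
    return recognition, rejection

# 9,331 of 10,000 hypothetical inputs accepted -> 93.31% recognition /
# 6.69% rejection, matching the saturation-point figures reported for MLCD
rec_rate, rej_rate = recognition_and_rejection(9331, 10000)
```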

4.3. Measurement of Image Accuracy

Image accuracy is the ratio of perfectly predicted input images to the total number of collected image samples, that is, accuracy = (TP + TN)/(TP + TN + FP + FN). When the image accuracy is high, the given output image sample has a high quality rate [18]. Table 3 presents the comparison of image accuracy between the existing CADe, CADx, CAIS, and CNNOA methods and the proposed MLCD.
Table 3

Measurement of accuracy.

Accuracy measurement (%)
No. of samples | CADe | CADx | CAIS | CNNOA | MLCD
100 | 77.29 | 81.07 | 72.18 | 78.98 | 96.35
200 | 76.96 | 79.57 | 71.59 | 77.11 | 95.31
300 | 75.62 | 78.46 | 70.61 | 76.28 | 95.18
400 | 74.48 | 78.08 | 69.40 | 75.37 | 94.22
500 | 73.43 | 77.07 | 68.26 | 74.45 | 94.65
600 | 72.72 | 76.14 | 67.15 | 73.12 | 93.41
700 | 71.42 | 75.14 | 66.45 | 72.25 | 93.30

4.4. Measurement of Image Precision

Image precision is the ratio of true positive samples to all samples predicted as positive, that is, precision = TP/(TP + FP). Table 4 presents the comparison of image precision between the existing CADe, CADx, CAIS, and CNNOA methods and the proposed MLCD.
Table 4

Measurement of precision.

Precision measurement (%)
No. of samples | CADe | CADx | CAIS | CNNOA | MLCD
100 | 76.03 | 88.81 | 79.74 | 87.42 | 95.61
200 | 74.40 | 87.07 | 78.16 | 86.00 | 94.32
300 | 73.92 | 84.73 | 75.96 | 84.74 | 93.31
400 | 72.63 | 83.92 | 74.33 | 82.75 | 92.42
500 | 70.52 | 81.63 | 73.19 | 80.28 | 92.05
600 | 69.03 | 79.70 | 70.99 | 78.84 | 91.01
700 | 67.22 | 77.97 | 69.84 | 77.12 | 90.24

4.5. Measurement of Image Recall

Image recall is the ratio of true positive samples to the sum of true positives and false negatives, that is, recall = TP/(TP + FN). Table 5 presents the comparison of image recall between the existing CADe, CADx, CAIS, and CNNOA methods and the proposed MLCD.
Table 5

Measurement of recall rate.

Recall rate (%)
No. of samples | CADe | CADx | CAIS | CNNOA | MLCD
100 | 85.92 | 84.71 | 79.58 | 86.41 | 95.61
200 | 84.43 | 82.74 | 77.16 | 84.21 | 95.62
300 | 83.63 | 81.61 | 76.75 | 83.41 | 94.42
400 | 81.30 | 80.42 | 75.15 | 82.74 | 93.94
500 | 80.29 | 80.03 | 72.83 | 81.31 | 92.51
600 | 79.65 | 78.51 | 71.58 | 80.22 | 91.35
700 | 78.99 | 78.27 | 68.85 | 79.74 | 90.58

4.6. Measurement of Image F1-Score

The F1-score is the harmonic mean of image precision and image recall, that is, F1 = 2 × precision × recall / (precision + recall) [19]. Table 6 presents the comparison of image F1-score between the existing CADe, CADx, CAIS, and CNNOA methods and the proposed MLCD.
Table 6

Measurement of F1-score.

F1-score (%)
No. of samples | CADe | CADx | CAIS | CNNOA | MLCD
100 | 77.41 | 88.34 | 82.09 | 90.61 | 95.45
200 | 77.52 | 88.32 | 82.26 | 90.88 | 95.95
300 | 77.54 | 87.44 | 81.53 | 90.58 | 95.83
400 | 74.44 | 84.61 | 78.19 | 87.07 | 92.60
500 | 73.24 | 83.29 | 77.46 | 85.75 | 92.22
600 | 72.63 | 82.46 | 76.57 | 85.21 | 91.65
700 | 72.22 | 82.06 | 76.49 | 84.91 | 91.95
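The four metrics reported in Tables 3 through 6 all follow the standard confusion-matrix formulas. As an illustrative sketch, where the counts below are invented for the example and do not come from the study:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)          # Section 4.3
    precision = tp / (tp + fp)                          # Section 4.4
    recall = tp / (tp + fn)                             # Section 4.5
    f1 = 2 * precision * recall / (precision + recall)  # Section 4.6
    return accuracy, precision, recall, f1

# Hypothetical counts for illustration only
acc, prec, rec, f1 = metrics(tp=90, tn=85, fp=10, fn=15)
```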

4.7. Measurement of Recognition Duration

Recognition duration is the time taken to compute the prediction for the given image samples. Table 7 presents the comparison of image recognition duration between the existing CADe, CADx, CAIS, and CNNOA methods and the proposed MLCD.
Table 7

Measurement of recognition duration.

Recognition duration (ms)
No. of samples | CADe | CADx | CAIS | CNNOA | MLCD
100 | 13360 | 8277 | 13260 | 14449 | 2676
200 | 12583 | 7720 | 12855 | 14065 | 2510
300 | 11806 | 7163 | 12450 | 13681 | 2344
400 | 11029 | 6606 | 12045 | 13297 | 2178
500 | 10252 | 6049 | 11640 | 12913 | 2012
600 | 9475 | 5492 | 11235 | 12529 | 1846
700 | 8698 | 4935 | 10830 | 12145 | 1680
At the saturation point, the proposed model achieved 93.31% image recognition, 6.69% image rejection, 94.22% accuracy, 92.42% precision, 93.94% recall rate, a 92.6% F1-score, and a recognition duration of 2178 ms. The segmentation process performed well, showing that the proposed model clearly identified the location and size of the tumor. Hence, the proposed model performs better than the existing models.

5. Conclusion

Above are the results of defining and analyzing image blocks based on the given prototype images. The various image blocks obtained from these classifications are further subdivided for the pixel enhancement functions. Based on this work, the blocks of the different groups are selected at the right turn and their results are selected for improvement. The analytical methods for these experiments are illustrated in the comparisons categorized above, and the classification results show that the proposed algorithm has the best accuracy. The proposed machine learning-based cancer detection (MLCD) method was compared with the existing computer-assisted detection (CADe) algorithm, computer-assisted diagnosis (CADx) algorithm, computer-aided image (CAIS) algorithm, and CNN optimizer algorithm (CNNOA). The data for input classification and the rejection of input images are also given above. It is thus clear that the performance of the proposed algorithm is superior to that of the other algorithms, and that the various improvements on which it is based are designed to advance the way it performs various jobs in the medical field.

Review 1.  A survey on deep learning in medical image analysis.

Authors:  Geert Litjens; Thijs Kooi; Babak Ehteshami Bejnordi; Arnaud Arindra Adiyoso Setio; Francesco Ciompi; Mohsen Ghafoorian; Jeroen A W M van der Laak; Bram van Ginneken; Clara I Sánchez
Journal:  Med Image Anal       Date:  2017-07-26       Impact factor: 8.545

2.  Deep Multi-Scale 3D Convolutional Neural Network (CNN) for MRI Gliomas Brain Tumor Classification.

Authors:  Hiba Mzoughi; Ines Njeh; Ali Wali; Mohamed Ben Slima; Ahmed BenHamida; Chokri Mhiri; Kharedine Ben Mahfoudhe
Journal:  J Digit Imaging       Date:  2020-08       Impact factor: 4.056

3.  The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository.

Authors:  Kenneth Clark; Bruce Vendt; Kirk Smith; John Freymann; Justin Kirby; Paul Koppel; Stephen Moore; Stanley Phillips; David Maffitt; Michael Pringle; Lawrence Tarbox; Fred Prior
Journal:  J Digit Imaging       Date:  2013-12       Impact factor: 4.056

4.  Brain tumor classification using deep CNN features via transfer learning.

Authors:  S Deepak; P M Ameer
Journal:  Comput Biol Med       Date:  2019-06-29       Impact factor: 4.589

5.  Detection of tumors on brain MRI images using the hybrid convolutional neural network architecture.

Authors:  Ahmet Çinar; Muhammed Yildirim
Journal:  Med Hypotheses       Date:  2020-03-24       Impact factor: 1.538

Review 6.  State of the Art: Machine Learning Applications in Glioma Imaging.

Authors:  Eyal Lotan; Rajan Jain; Narges Razavian; Girish M Fatterpekar; Yvonne W Lui
Journal:  AJR Am J Roentgenol       Date:  2018-10-17       Impact factor: 3.959

7.  State-of-the-Art CNN Optimizer for Brain Tumor Segmentation in Magnetic Resonance Images.

Authors:  Muhammad Yaqub; Feng Jinchao; M Sultan Zia; Kaleem Arshid; Kebin Jia; Zaka Ur Rehman; Atif Mehmood
Journal:  Brain Sci       Date:  2020-07-03

8.  Glioma Grading on Conventional MR Images: A Deep Learning Study With Transfer Learning.

Authors:  Yang Yang; Lin-Feng Yan; Xin Zhang; Yu Han; Hai-Yan Nan; Yu-Chuan Hu; Bo Hu; Song-Lin Yan; Jin Zhang; Dong-Liang Cheng; Xiang-Wei Ge; Guang-Bin Cui; Di Zhao; Wen Wang
Journal:  Front Neurosci       Date:  2018-11-15       Impact factor: 4.677

9.  A Deep Siamese Convolution Neural Network for Multi-Class Classification of Alzheimer Disease.

Authors:  Atif Mehmood; Muazzam Maqsood; Muzaffar Bashir; Yang Shuyuan
Journal:  Brain Sci       Date:  2020-02-05

10.  Machine learning with autophagy-related proteins for discriminating renal cell carcinoma subtypes.

Authors:  Zhaoyue He; He Liu; Holger Moch; Hans-Uwe Simon
Journal:  Sci Rep       Date:  2020-01-20       Impact factor: 4.379

