
Novel DLSNNC and SBS based framework for improving QoS in healthcare-IoT applications.

Parma Nand

Abstract

A health care system is intended to enhance one's health and, as a result, one's quality of life. To fulfil its social commitment, health care must focus on producing enough social profit to sustain itself. Moreover, the ever-increasing demand on the healthcare sector has caused a drastic rise in the amount of patient data that is produced and must be stored for long durations for clinical reference, and the burden of storing this data falls on the cloud. The risk of patient data being lost due to a data-centre failure can be minimized by adding a fog layer to the cloud computing architecture. To increase service quality, we introduce fog computing based on deep learning sigmoid-based neural network clustering (DLSNNC) and score-based scheduling (SBS). Sensors collect healthcare data and send it to the fog layer, where DLSNNC and SBS are used to determine the entropy of each fog node; the cloud computing tier stores the data and is responsible for monitoring the healthcare system. The exploratory findings show promising results in terms of end-to-end latency and network utilization, and the proposed system outperforms existing techniques in terms of average delay.
© The Author(s), under exclusive licence to Bharati Vidyapeeth's Institute of Computer Applications and Management 2022.


Keywords:  Clustering; Entropy; Fog computing; Neural network; Score-based scheduling

Year:  2022        PMID: 35463737      PMCID: PMC9020430          DOI: 10.1007/s41870-022-00922-z

Source DB:  PubMed          Journal:  Int J Inf Technol        ISSN: 2511-2104


Introduction

Internet of Things principles can improve patients' health and welfare by increasing the availability and quality of care, as well as significantly reducing treatment expenses and frequent travel [1]. The Internet of Medical Things (IoMT) is a digital healthcare system that connects patients to medical resources and services [2]. Wireless sensor networks are becoming a more pervasive and easy-to-use enabling technology for structural health monitoring than current wired systems [3]. Patients can use smart wearable devices with sensors, paired with smartphones, to gather data about their health status such as heart rate, glucose level, and blood pressure [4]. The analysis and processing of data are done by cloud servers; moreover, cloud computing is the most practical approach for connecting IoT with healthcare [5]. Patient data may be used not only to monitor a patient's present health, but also to forecast future medical concerns using cloud big-data storage and machine learning techniques [6]. However, a patient's physical condition changes over time, demanding quick action to monitor remote patients, and the cloud mechanism cannot handle real-time applications or meet quality-of-service (QoS) requirements. A system is needed that can continually and quickly monitor and report on the patient's condition [7]. Fog computing is introduced in healthcare applications to bridge the gap between IoT devices and analytics [8]; it is a distributed computing platform for managing applications and services at the network edge [9]. The probability of a mistake, and the delay, increase as the volume of data transmitted over the network grows: data packet loss and transmission latency are directly proportional to the amount of data transported by IoT devices to the cloud. The edge or fog paradigm overcomes problems like latency by placing small servers, known as edge servers, in close proximity to end-user devices [10].
A fog-based IoT system comprises three layers: device, fog, and cloud. It has been hailed as a promising paradigm for lowering networking infrastructure and processor energy consumption while offering cloud-like health monitoring services [11]. The number of fog-based applications is expanding and is thought to outweigh IoT apps in the near future [12]. IoT technology in healthcare can enhance the quality as well as the affordability of medical treatment by automating formerly manual activities [13]. Fog makes storage and processing capabilities more accessible to end-users, and can capture, analyse, and store massive amounts of data in real time [14]. Because medical sensors collect data on a frequent basis, real-time analysis performance might be enhanced, enabling intelligent data analysis and decision-making based on end-user rules and network resources [15, 16]. The main contribution of this work is as follows: fog computing uses deep learning sigmoid-based neural network clustering and score-based scheduling to calculate entropy values for each fog node and thus improve the quality of service in a fog-based architecture. The manuscript is organized as follows: Sect. 2 examines the existing literature on the proposed strategy, Sect. 3 provides a brief overview of the proposed system, Sect. 4 explores the exploratory findings, and Sect. 5 concludes the article.

Related works

The quality of service is determined by resource allocation and load balancing in cloud/fog computing. Fog-based architectures have been proposed by many researchers for a variety of applications. Table 1 presents an overview of existing Fog literature surveys relevant to our work.
Table 1

Summary of existing techniques in Fog computing

Hussein et al. [17]
  Quality attributes: Communication cost, response time
  Technique: Ant colony optimization (ACO) and particle swarm optimization (PSO)
  Features: A formal model for task offloading is provided; two nature-inspired metaheuristic optimization algorithms, ACO and PSO, are applied to the formal model; the two algorithms are compared
  Tool used: MATLAB
  Findings: Experimental results reveal that the proposed ACO task-offloading method improves IoT application response times and successfully balances workloads among fog nodes

Gavaber et al. [18]
  Quality attributes: Bandwidth, delay
  Technique: Bandwidth and delay efficient placement (BADEP), uncritical module placement (UMP)
  Features: Modules are divided into two types, critical and non-critical; the BADEP algorithm is applied to critical modules and UMP to non-critical modules
  Tool used: iFogSim
  Findings: Simulation results suggest that the proposed strategy reduces delay by 9% and network usage by 13%

Vedaei et al. [19]
  Quality attributes: Energy consumption, bandwidth
  Technique: Adaptive-network-based fuzzy inference system (ANFIS)
  Features: Real-time monitoring and notification of the patient's health using an ML algorithm; a fuzzy decision-making system on the fog server; memberships are trained and rules defined using ANFIS
  Tool used: Raspberry Pi Zero
  Findings: The system might aid users in keeping track of their daily activities and lowering their risk of contracting the coronavirus

Saidi et al. [20]
  Quality attributes: Latency, energy consumption
  Technique: Particle swarm optimization (PSO) algorithm
  Features: Tasks generated in fog and cloud computing are compared on performance metrics; workflow scheduling and comparison are done using the PSO algorithm
  Tool used: FogWorkflowSim toolkit
  Findings: An efficient fog-based framework is designed for a remote monitoring system for elderly people

Li et al. [21]
  Quality attributes: Resource utilization and user satisfaction
  Technique: Fuzzy c-means algorithm, particle swarm optimization (PSO)
  Features: Resource attributes are standardized and normalized; fog resources are clustered using fuzzy clustering with PSO; classified resources are matched with user requests using a weighted matching method
  Tool used: MATLAB
  Findings: The suggested approach can match user requests with relevant resource categories more quickly, resulting in higher user satisfaction

Aburukba et al. [22]
  Quality attributes: Latency
  Technique: Genetic algorithm
  Features: IoT requests are scheduled using a customized genetic algorithm; two important parameters, population size (POP) and the maximum number of iterations (MAX), directly impact solution quality
  Tool used: Discrete event simulator
  Findings: The suggested technique has a 21.9 to 46.6 per cent lower total latency than the other algorithms

Kishor et al. [23]
  Quality attributes: Latency
  Technique: Intelligent multimedia data segregation (IMDS) scheme using machine learning
  Features: Data is divided into k chunks using a k-fold random forest technique; k−1 chunks are used for training and the remaining chunk for testing the model
  Tool used: Python 3.7
  Findings: By reducing latency and network utilization, the suggested approach improves the quality of service in e-healthcare

Akintoye et al. [24]
  Quality attributes: Cost, latency, energy consumption
  Technique: Hungarian algorithm based binding policy (HABBP), genetic algorithm based virtual machine placement (GABVMP)
  Features: The linear-programming problem of task allocation is solved using the HABBP algorithm; load balancing is done using the GABVMP algorithm
  Tool used: CloudSim
  Findings: According to the simulation findings, GABVMP outperformed the two greedy heuristics

Shahid et al. [25]
  Quality attributes: Energy consumption, latency
  Technique: Popularity-based cache mechanism
  Features: Popular content is distributed randomly; a filtration method is used for caching popular content on active nodes; a load-balancing algorithm increases the efficiency of the overall system
  Tool used: Python
  Findings: In terms of energy usage and delay, the suggested approach's evaluation shows better results

Mahmud et al. [26]
  Quality attributes: Latency
  Technique: Module placement and module forwarding algorithm
  Features: Applications are prioritized for fog-node allocation using the module placement algorithm; resources are optimized using the module forwarding technique
  Tool used: iFogSim
  Findings: For applications with tight deadlines, the suggested management approach overcomes latency in service delivery
The support for real-time applications is a major reason for the emergence of the fog computing architecture.
Several QoS metrics must be considered for the successful development of a fog-based system, including latency, bandwidth, energy consumption reduction, and cost minimization.

Proposed methodology

The three tiers of computing are cloud computing, fog computing, and sensors, which all communicate with one another. The primary purpose of the proposed technique is to present a three-tier architecture for context- and latency-sensitive monitoring systems. In this paper, we propose that fog computing can be utilized to assist in the monitoring of patients' healthcare data, ensuring that data is gathered and evaluated efficiently. Sensors are first used to collect data from patients; both external and internal data are recorded by these sensors. The role of the sensors is to gather and transmit all data to the fog computing layer. Fog computing then uses deep learning sigmoid-based neural network clustering and score-based scheduling to obtain the entropy value for each fog node. This layer analyses the data and information collected by the edge devices and functions similarly to a server. In addition, the cloud computing tier constantly checks the health monitoring system, as shown in Fig. 1.
Fig. 1

Proposed methodology to improve quality of service in healthcare system

To resolve jobs more effectively, or to apply a range of strategies in order to reach a better result, the neural network must continually learn. When it receives new information from the system, it learns how to respond to a new circumstance. A deep neural network is a type of machine learning in which the system uses numerous layers of nodes to extract high-level features from the input data, converting numerical data into a more abstract representation. Convolution, sigmoid-based normalisation, pooling, and a fully connected layer are among the suggested DLSNN layers that address the problems of a CNN. Figure 2 depicts the deep learning sigmoid neural network clustering topology.
Fig. 2

Architecture of the DLSNN clustering


Deep learning sigmoid neural network clustering (DLSNNC)

A sigmoid function is a mathematical function with a distinctive "S"-shaped curve, sometimes known as a sigmoid curve. Equation (1) represents the sigmoid function,

f = 1 / (1 + e^(−sig))    (1)

where sig is the input and f is the output. The output of the sigmoid function is used in DLSNN normalisation. Entropy (E) is the measure of randomness used to describe the texture of the input fog-node data. The entropy of the ith data is calculated by condition (2),

E_i = −Σ_{x=1..N} Σ_{y=1..N} C(x, y) log C(x, y)    (2)

where x and y are the coordinates in the co-occurrence matrix of the enhanced node, C(x, y) is the component of the co-occurrence matrix at the coordinates (x, y), and N is the dimension of the co-occurrence matrix. The DLSNNC classifier uses the weights and biases of the preceding layers in the structure design to reach a conclusion. The model is then improved with conditions (3) and (4) for each layer independently,

W_l(t+1) = W_l(t) − α (∂C/∂W_l + (λ/n) W_l) + m ΔW_l(t)    (3)

b_l(t+1) = b_l(t) − α (∂C/∂b_l) + m Δb_l(t)    (4)

where W is the weight, b is the bias, l is the layer number, λ is the regularization parameter, α is the learning rate, n is the total number of sensor data sets, m is a momentum, t is the upgrading phase, and C is the cost function. The DLSNN cluster contains the following kinds of layers.

Step 1: Convolutional layer: this layer performs the convolution of the input data with the kernel using condition (5),

y(i) = Σ_{j=1..n} x(j) k(i − j + 1)    (5)

where x represents the reproduced segmented data, k represents the filter, n represents the number of components in x, and y is the output vector.

Step 2: Sigmoid-based normalization layer: normalisation is the technique of linearly modifying data to fit it within a given range. The Z-score normalisation method is used to standardise data by transforming it linearly, as shown in Eq. (6):

N = (f − μ) / σ    (6)

Here, N is the normalized output, f is the sigmoid function value, μ is the mean of the convolutional layer output data, and σ is the standard deviation of the values in the convolutional layer output.
The convolutional layer output is normalized with the sigmoid function using Eq. (6). The sigmoid-based normalised output from this layer is sent into the pooling layer, contributing value-based normalised data support to it.

Step 3: Pooling layer: this layer is also called the down-sampling layer. To save computing effort and minimise overfitting, the pooling stage reduces the number of output neurons from the convolution layer. The max-pooling algorithm selects only the highest value in each data map, resulting in fewer output neurons. Pooling layers are typically used after convolution layers to help simplify the information in the convolution layer's output.

Step 4: Fully connected layer: the activation function computes a probability distribution over the classes. Thus, the output layer uses the softmax function, Eq. (7), to find the preceding-layer outcome that best fits the clustered data,

P(c_j) = e^(z_j) / Σ_{k=1..K} e^(z_k)    (7)

where c_j represents the resultant cluster, z is the input to the output layer, and K is the number of clusters. Here, the DLSNNC is adapted with sigmoid-function-based normalization to control over-fitting in the layers and to deliver the important clustering of sensor data to the fog-cloud computing layers.
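As a rough illustration of how these layers compose, the following sketch chains the operations of Eqs. (1)-(7) over a toy window of sensor readings. All values, the kernel, and the helper names are illustrative assumptions for a minimal sketch, not taken from the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    """Eq. (1): S-shaped activation, f = 1 / (1 + e^(-sig))."""
    return 1.0 / (1.0 + np.exp(-x))

def entropy(cooc):
    """Eq. (2): entropy of a node's co-occurrence matrix (zero entries skipped)."""
    p = cooc / cooc.sum()
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

def zscore_normalize(f):
    """Eq. (6): Z-score normalization of the sigmoid-activated conv output."""
    return (f - f.mean()) / f.std()

def max_pool(x, size=2):
    """Step 3: 1-D max pooling over non-overlapping windows."""
    trimmed = x[: len(x) // size * size]
    return trimmed.reshape(-1, size).max(axis=1)

def softmax(z):
    """Eq. (7): probability distribution over the resultant clusters."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy forward pass over one window of sensor readings (illustrative values)
readings = np.array([0.2, 0.9, 0.4, 0.7, 0.1, 0.8])
kernel = np.array([0.5, -0.25, 0.5])
conv = np.convolve(readings, kernel, mode="valid")   # Eq. (5): convolution
norm = zscore_normalize(sigmoid(conv))               # Eqs. (1) and (6)
pooled = max_pool(norm)                              # Step 3: down-sampling
cluster_probs = softmax(pooled)                      # Eq. (7): cluster scores
```

The pipeline mirrors the layer order in Fig. 2: convolution, sigmoid-based normalization, pooling, then the softmax of the fully connected layer.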

Score based scheduling algorithm

Our major purpose is to schedule workflow tasks that contain patients' healthcare data. Initially, the task request is produced and separated into numerous task requests so that execution durations may be reduced at a reasonable cost while staying within the user-specified deadline. The score-based workflow task scheduling algorithm selects for scheduling only those task requests that match the minimal threshold of workflow tasks. An existing scheduling algorithm is described in [27]. The flow chart in Fig. 3 describes our proposed score-based workflow task scheduling system.
Fig. 3

Flowchart of the proposed SBS algorithm

The steps of the score-based scheduling algorithm are as follows:

Step 1: Submit the workflow task list, which includes patient healthcare information, T = {T1, T2, T3, …, Tn}.

Step 2: Contact the data centre to learn the available virtual resources, VM = {VM1, VM2, VM3, …, VMn}.

Step 3: Assign a user-defined deadline constraint D to the whole workflow application, in the form of sub-deadlines for the various task requests.

Step 4: Using the components' minimum sub-scores, determine each VM's score value, SV = (X − μ)/σ, where X is the observed value, μ is the mean of the sample task, and σ is the standard deviation of the task.

Step 5: While the task list still contains tasks to schedule, repeat steps 6, 7, and 8; otherwise, return the task mapping.

Step 6: Select the lowest-scoring VM from the VM list that meets the task's threshold. The task threshold (p) is determined by the length of the instructions.

Step 7: If the selected VM can finish the work within the specified deadline, assign the task to it; otherwise, send the task to the next lowest-scoring VM in the resource list.

Step 8: Choose the next task from the list. Once all tasks have been scheduled, their mapping to VMs is complete.
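The scheduling loop of steps 5-8 can be sketched as follows. The class layout, the MIPS-based finish-time estimate, and the threshold handling are assumptions introduced for illustration; the paper does not specify these details.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    length: float        # instruction length, which drives the threshold p
    deadline: float      # user-defined sub-deadline (step 3)

@dataclass
class VM:
    name: str
    mips: float          # assumed processing capacity of the virtual machine
    score: float         # score value SV, computed beforehand (step 4)
    assigned: list = field(default_factory=list)

def finish_time(vm, task):
    # Estimated completion time: queued work plus this task, at the VM's speed.
    queued = sum(t.length for t in vm.assigned)
    return (queued + task.length) / vm.mips

def schedule(tasks, vms, p=0.0):
    """Steps 5-8: map each task to the lowest-scoring VM that meets
    its threshold and can finish before the task's sub-deadline."""
    mapping = {}
    for task in tasks:
        # Step 6: candidate VMs meeting the threshold, lowest score first.
        for vm in sorted((v for v in vms if v.score >= p), key=lambda v: v.score):
            # Step 7: assign only if the deadline can be met.
            if finish_time(vm, task) <= task.deadline:
                vm.assigned.append(task)
                mapping[task.name] = vm.name
                break
    return mapping
```

For example, with two tasks and two VMs, a task whose deadline the lowest-scoring VM cannot meet falls through to the next lowest-scoring VM, matching step 7.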

Result and discussion

Our proposed DLSNN clustering and score-based scheduling for cloud IoT applications is implemented in Python using an online cloud healthcare dataset. Different performance metrics, such as latency and network usage, are estimated to explore the performance of the proposed work. Finally, the average delay is estimated and compared against the existing FCFS [28], SJF [28], and BMO [29] techniques to prove the relevance of the proposed approach.

Latency

There will be data flow between the various tiers in our fog computing solution in health informatics. In many circumstances the amount of information, and thus the amount of time required, will differ; as a result, the latency varies. As shown in Eq. (8), latency is the difference between the time of commencement and the time of completion of service,

L = (ST + PT + TQT) − IT    (8)

Here, L denotes latency, ST denotes the requested task's start time, PT denotes the requested task's processing time, TQT denotes the transmission and queuing time prior to the requested task, and IT is the desired job's initiation time. For the data ranges 500, 1000, 1500, 2000, 2500, and 3000, the latency comparison between the cloud-only and the combined cloud and fog computing layers is shown in Table 2. In addition, Fig. 4 depicts the corresponding latency comparison graph.
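A minimal sketch of the latency measure follows, assembled from the variables defined for Eq. (8); the exact combination of terms and the sample times (in ms) are assumptions for illustration.

```python
def latency(st, pt, tqt, it):
    """L = (ST + PT + TQT) - IT: elapsed time from the job's initiation
    (IT) to the completion of its service, covering queuing/transmission
    (TQT), start (ST), and processing (PT)."""
    return (st + pt + tqt) - it

# Example: a task initiated at t=0 ms, started at 20 ms, processed for
# 70 ms, with 20 ms of transmission and queuing beforehand.
print(latency(st=20, pt=70, tqt=20, it=0))  # -> 110
```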
Table 2

Latency comparison of cloud and fog and cloud

Data    Cloud    Fog and cloud
500     250      110
1000    581      138
1500    1280     134
2000    1678     295
2500    2210     482
3000    2791     479
Fig. 4

Latency of fog compared to Fog + cloud system


Network usage

The second evaluation constraint is network usage (NU). As the number of devices on the network grows, so does network usage, resulting in network congestion; because of the congestion, an application running on the cloud network performs poorly. By dispersing the load across intermediary fog devices, fog computing helps reduce network congestion. Network utilization is calculated using Eq. (9),

NU = Σ_{i=1..N} L_i × S_i    (9)

where N is the total number of tasks, L_i is the latency, and S_i is the network size of the ith task. Table 3 states the network usage (in GB) of the cloud-only and the combined cloud and fog computing layers for data usage 500–3000. Figure 5 shows the corresponding network usage comparison graph.
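The network-usage measure of Eq. (9) can be sketched as a simple sum over per-task (latency, network size) pairs; the sample values below are illustrative assumptions.

```python
def network_usage(tasks):
    """NU = sum over i of L_i * S_i, where L_i is the latency and
    S_i the network size of the i-th task."""
    return sum(lat * size for lat, size in tasks)

# Three tasks, each given as (latency in ms, network size in MB)
total = network_usage([(110, 0.5), (138, 0.2), (134, 0.1)])
print(total)
```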
Table 3

Network usage of cloud and fog and cloud

Data    Cloud    Fog and cloud
500     284      180
1000    377      290
1500    499      321
2000    537      438
2500    578      488
3000    687      521
Fig. 5

Network utilization in fog compared to fog + cloud system


Average delay

The average delay is the mean difference between the starting execution time ST and the ending execution time ET over the requested tasks, as noted in Eq. (10),

AD = (1/N) Σ_{i=1..N} (ET_i − ST_i)    (10)

Table 4 states the average delay of FCFS, SJF, and BMO compared with the proposed technique for data usage 500–3000. Figure 6 shows that as the average waiting time increases, the average delay also increases for FCFS, SJF, and BMO, while the proposed technique achieves a lower average delay.
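The average-delay measure of Eq. (10) reduces to a mean over per-task execution spans; the sample (start, end) times in ms below are illustrative assumptions.

```python
def average_delay(tasks):
    """AD = (1/N) * sum of (ET_i - ST_i) over the N requested tasks,
    where each task is given as a (start, end) execution-time pair."""
    return sum(et - st for st, et in tasks) / len(tasks)

# Three requested tasks with (start, end) execution times in ms
print(average_delay([(0, 110), (10, 150), (20, 180)]))  # roughly 136.67
```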
Table 4

Performance measure of average delay

Average waiting time (ms)    FCFS    SJF    BMO    Proposed
500                          176     129    221    110
1000                         254     192    321    143
1500                         296     218    378    154
2000                         429     278    398    187
2500                         578     398    467    231
3000                         595     486    521    243
Fig. 6

Average delay of proposed approach compared to existing techniques


Conclusion and future work

We propose a fog-cloud computing technique for health monitoring systems in this paper. The purpose of the study is to improve service quality. In this work, DLSNN clustering and score-based scheduling are used to improve prediction. According to the simulation results, the proposed solution improves quality of service in the cloud/fog computing environment in terms of latency and network consumption. Additionally, the proposed technique outperforms existing approaches in terms of average delay. Different encryption techniques can be incorporated into the implementation of the proposed architecture to improve the security of the system.