
A Closed-Loop Method for Multiperiod Intelligent Information Processing with Cost Constraints under the Fuzzy Environment.

Ming Fu, Lifang Wang, Xueneng Cao, Bingyun Zheng, Xianxian Zhou, Shishu Yin.

Abstract

From trivial matters in daily life to major scientific projects related to the fate of mankind, decision-making is everywhere. Whether high-quality decisions can be made often directly affects how events develop, especially when sudden disasters occur. As the basis of decision-making, data are crucial. The continuously probabilistic linguistic set, a data structure from fuzzy mathematics, is selected in this paper to collect the original data after careful comparison, because it can fully capture the hesitation of decision-makers and the fuzziness of complex problems. Although all alternatives are costly, the costs of different alternatives still vary greatly; obviously, a low-cost alternative is better than the others when the same predetermined goal can be achieved, and this is one of the research objectives and characteristics of this paper. Unlike other researchers, who only take the cost as one of the decision-making indicators, the algorithm proposed in this paper pays much more attention to cost reduction. When dealing with an emergency, it is often difficult to solve the problem by taking measures only once; usually, multiple rounds of measures are needed. Each round of decision-making has both connections to and differences from the others, so a multiround decision-making model is proposed and built in the paper. Unlike traditional linear structures, the model mainly adopts a closed-loop structure, which divides the whole process into multiple sub-decision-making points: the severities measured at the current time point are compared with the values estimated at the previous time point, the differences are fed back into the system, and the corresponding automatic adjustment modules are activated immediately according to those values. The accuracy of the system can thus be verified and adjusted in time by the closed-loop control module. Finally, several experiments are carried out, and the results show that the algorithm proposed in the paper is more effective and its cost is lower.
Copyright © 2022 Ming Fu et al.


Year:  2022        PMID: 36120700      PMCID: PMC9473872          DOI: 10.1155/2022/3871129

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

People are always faced with all kinds of decision-making problems; how to make an appropriate decision in time is a scientific problem and has become one of the research hotspots in academic circles. There are several different descriptions of the definition of decision-making. Simon believes that decision-making is essentially management [1]; Mikesell and Griffin, management professors, point out that decision-making is a process in which an appropriate alternative is selected from multiple alternatives [2]; the American scholars Ebers and Maurer believe that decision-making should also include all the activities that must be carried out before making the final decision [3]. Generally speaking, decision-making is regarded as the process in which individuals or groups make appropriate decisions for specific goals. Decision-making problems can be roughly divided into three categories from the perspective of the known conditions: (1) the deterministic decision-making problem, in which the alternatives and expected results are clear; (2) the risky decision-making problem, in which the predetermined goal is clear but there are many paths to it, every path has certain risks and uncertainties, and, fortunately, the probabilities can be roughly calculated; (3) the uncertain decision-making problem, which is similar to the risky decision-making problem except that the probabilities can only be estimated and, even worse, there may be certain deviations in the estimated values. The problem studied in this paper belongs to the third category, which has many uncertainties and is the most complex of the three. Information collection is a basic and key step of decision-making; however, most information provided by interviewees is uncertain, vague, and hard to denote mathematically, so how to scientifically record uncertain information is the first problem to be solved.
In 1965, Professor Zadeh put forward the concept of the fuzzy set, which provided a new idea for solving such problems [4]; the main contribution is the concept of the membership degree. Subsequently, the theory was widely recognized and developed rapidly, and various extended forms have been proposed, such as the interval-valued fuzzy set [5], the n-type fuzzy set [6], the intuitionistic fuzzy set [7], the interval intuitionistic fuzzy set [8], the hesitant fuzzy set [9], and the probabilistic linguistic set [10]. The main features of these fuzzy sets can be briefly summarized as follows: the membership degrees are described by interval values in the interval-valued fuzzy set; the membership degrees are represented by sets in the n-type fuzzy set; both the membership degree and the nonmembership degree are considered in the intuitionistic fuzzy set; beyond that, hesitation degrees, denoted by interval values, are included in the interval intuitionistic fuzzy set. The hesitation of decision-makers can be described in the hesitant fuzzy set; in addition, its structure is concise and efficient, and therefore the theory of the hesitant fuzzy set has become one of the research hotspots in recent years. The probabilistic linguistic set is developed on the basis of the hesitant fuzzy set; it adds occurrence probabilities to the membership degrees, so as to describe them further. Mathematics is recognized as one of the best analytical tools. In order to use mathematical tools to carry out research, scholars have put forward several basic mathematical concepts for fuzzy sets. Xia and Xu first gave the mathematical definition of the hesitant fuzzy set [11], and Liao and Xu defined some special hesitant fuzzy sets from the perspective of solving practical problems [12], such as the empty set O, the complete set E, and the meaningless set Θ.
Unfortunately, fuzzy sets cannot be added, subtracted, multiplied, or divided directly; for this reason, several basic operation methods for fuzzy sets have been proposed. Torra defined the complement, union, and intersection operations for hesitant fuzzy elements [13]. Xu and Xia conducted further research and proposed the addition, multiplication, number multiplication, and power operations for hesitant fuzzy elements [14]; on this basis, Liao and Xu proposed the definitions of subtraction and division [15]. In addition, fuzzy elements cannot be compared directly like real numbers. Therefore, Xia and Xu put forward the concept of the score value, which provides a method for comparing different fuzzy elements; however, when the score values are equal, a further judgment must be made with the help of the variance values [16], which were proposed by Liao et al. Unfortunately, the basic operation methods mentioned above can only meet simple aggregation requirements and cannot finish the calculation when a large number of fuzzy elements participate. Therefore, researchers have proposed several effective fuzzy aggregation operators. Xia and Xu proposed the hesitant fuzzy-weighted averaging (HFWA) operator and the hesitant fuzzy hybrid averaging (HFHA) operator in the paper listed as Reference [11] mentioned above, considering the importance of location and data simultaneously. Liao and Xu defined a series of new hesitant fuzzy mixed integration operators and studied their boundaries and relationships [17]. Zhu and Xu proposed the hesitant fuzzy Bonferroni average operator and the weighted hesitant fuzzy Bonferroni average operator from the perspective of logical relationships and studied their monotonicity, commutativity, and boundedness [18]. In particular, due to its outstanding structure, the theory of the probabilistic hesitant fuzzy set has been developing rapidly. Zhang et al. studied the preference relationships, ranking methods, basic operation rules, and aggregation operators [19]. Hao et al. studied the basic properties of probabilistic dual hesitant fuzzy sets and proposed entropy measurement methods, comparison methods, and aggregation operators [20], such as the weighted average operator and the geometric average operator. On this basis, Garg and Kaur studied the distance measurement methods of probabilistic dual hesitant fuzzy sets [21]. Ye proposed the correlation coefficients of probabilistic hesitant fuzzy sets in the discrete and continuous cases, respectively [22]. Li and Wang proposed the concept of the probabilistic hesitant fuzzy likelihood [23]. These theories have built a solid foundation for the probabilistic hesitant fuzzy theory. Scholars have also conducted in-depth discussions on decision-making methods. The main idea can be simply summarized as using operators to aggregate estimation data and then ranking the alternatives according to their score values. These methods can be roughly divided into two categories: (1) optimizing the aggregation operators and (2) innovating the decision-making methods. For the first category, Jiang and Ma proposed the probabilistic hesitant fuzzy Frank-weighted average operator and the probabilistic hesitant fuzzy Frank-weighted geometric operator and then discussed the relationships between them [24]. Zhao et al. considered the psychological preferences of decision-makers and proposed the probabilistic hesitant fuzzy Einstein aggregation operator [25]. Shao et al. proposed the probabilistic hesitant fuzzy priority integration operator after considering the internal correlations of indicators [26]. Li et al. proposed a new probabilistic hesitant fuzzy priority aggregation operator, which can make full use of the priority relationships among indicators [27].
For the second category, on the one hand, several commonly used methods in the decision-making field have been extended to the probabilistic hesitant fuzzy environment, such as the TOPSIS method, the QUALIFLEX method, and the LINMAP method; on the other hand, other theories or methods have been introduced into the probabilistic hesitant fuzzy environment, making the theory more diversified. Zhou and Xu introduced several financial concepts into fuzzy sets and then applied the hybrid algorithm to the practice of stock investment decision-making [28]. Tian et al. established a consensus process based on probabilistic hesitant fuzzy preference relationships and the prospect theory and then applied it to financial venture investment [29]. Wu et al. introduced the GM(1,1) model of the grey theory and applied it to coal mine safety production [30]. Guo et al. introduced time series analysis and established a time series prediction model based on hesitant probabilistic fuzzy sets [31]. In this article, we not only optimize the aggregation operators but also innovate the decision-making methods; by comparison, the main work of the paper is to innovate the decision-making methods, and especially, the closed-loop control model is combined with the fuzzy decision-making algorithm.

2. The Basic Theories

This section briefly introduces some important basic theories that will be used in the following sections; it should help other researchers better understand the algorithm proposed in this paper.

2.1. The Continuously Probabilistic Linguistic Set

The continuously probabilistic linguistic set is an extended form of the probabilistic linguistic set, which overcomes the disadvantage of the limited number of possible values in the probabilistic linguistic set. The definition of the continuously probabilistic linguistic set (CPLS) can be mathematically described by the following equation:

L = {γl|pl : γl ∈ [0,1], pl ∈ [0,1], l=1,2, ⋯, m, ∑pl = 1}.  (1)

In the above definition, the evaluation value is recorded by the symbol γ and its corresponding probability by the symbol p; the constraint condition γ ∈ [0,1] gives the range of the evaluation values, and the greater the value of γ, the higher the evaluation acquired from the experts; similarly, the constraint condition p ∈ [0,1] gives the range of the probability values, and the greater the value of p, the greater the occurrence probability of the corresponding evaluation value. The pair γ|p is called a continuously probabilistic linguistic element (CPLE); the constraint condition l=1,2, ⋯, m indicates the value range of l, and the symbol m indicates the total number of evaluation values in the CPLS; the constraint condition ∑pl=1 indicates that the sum of all the probability values in any CPLS must equal 1. Unlike real numbers, CPLSs cannot be compared with each other directly, and how to compare CPLSs is a difficult problem facing researchers. The score function, first proposed by Farhadinia, can handle this problem effectively [32]; its results are real numbers and are therefore easy to compare with each other. The score function can be mathematically described as equation (2):

S(L) = ∑ γl pl, l=1,2, ⋯, m.  (2)

Generally, the score value of the CPLS represents the final evaluation result.
It is also necessary to briefly introduce several other commonly used calculation formulas for CPLSs, which are listed as follows: We can find that only one CPLS is involved in the first and second calculation formulas, while two CPLSs are involved in the third and fourth; more calculation formulas can be obtained from these four basic formulas.

2.2. The Collaborative Decision-Making Problem

The definition of collaborative decision-making can be simply described as a process in which several experts try to find the most appropriate alternative from multiple alternatives according to the values of key indicators [33]. The experts can be denoted as E={E1, E2, ⋯, Em}, and the alternatives can be denoted as A={A1, A2, ⋯, An} mathematically. The emergency decision-making problem is an important branch of collaborative decision-making problems, and they have many similarities [34], while there are great differences in complexity between them. The main difference is that the emergency decision-making problem has strict restrictions on time and the information acquired by the experts is limited; even worse, it is always difficult for experts to evaluate alternatives with single values, and they often hesitate among multiple values. Fortunately, the introduction of the continuously probabilistic linguistic set can handle this problem efficiently [35]: all the possible evaluation information for an alternative given by the experts can be recorded, which avoids the loss of the original information. A simple example illustrates the above theory. Suppose dangerous chemicals suddenly leak on a highway; the emergency threatens the safety of the people around and damages the surrounding environment. Several experts are urgently summoned to find solutions for the incident, and they are asked to assess each solution within a limited time. It is assumed that there are three experts and four alternatives available to handle this incident, which can be denoted as A={A1, A2, A3, A4} and E={E1, E2, E3}, respectively. The CPLSs mentioned above can be used to record all the original evaluation information.
Suppose the evaluation information given by the third expert for the second alternative is denoted as L23={0.3|0.4, 0.36|0.42, 0.38|0.18}; the values in the set {0.3, 0.36, 0.38} are the evaluation values, and the values in the set {0.4, 0.42, 0.18} are the corresponding probabilities. The calculation process of the score value is S23=0.3 × 0.4 + 0.36 × 0.42 + 0.38 × 0.18 = 0.3396. The situation of an emergency always changes dynamically over time [36]; therefore, decisions need to be made according to the actual situation at different stages, and these problems will be discussed in detail in the next section of this paper.
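The CPLS representation and the score computation above can be sketched in a few lines of Python. This is a minimal illustration using plain (evaluation, probability) pairs; the helper names are illustrative, not from the paper.

```python
# A CPLS represented as a list of (gamma, p) pairs, with a validity check for
# the constraints gamma, p in [0,1] and sum(p) = 1, plus the score function
# S(L) = sum(gamma * p) from equation (2).

def validate_cpls(cpls):
    """Check the CPLS constraints described in Section 2.1."""
    assert all(0.0 <= g <= 1.0 and 0.0 <= p <= 1.0 for g, p in cpls)
    assert abs(sum(p for _, p in cpls) - 1.0) < 1e-9

def score(cpls):
    """Score value: the probability-weighted sum of the evaluation values."""
    return sum(g * p for g, p in cpls)

# The worked example above: expert E3's evaluation of alternative A2.
L23 = [(0.3, 0.4), (0.36, 0.42), (0.38, 0.18)]
validate_cpls(L23)
print(round(score(L23), 4))  # 0.3396
```

The score reduces a whole CPLS to a single real number, which is what makes alternatives comparable later in the paper.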

2.3. The Information Aggregation Operators

The scattered information given separately by the experts must be aggregated to obtain the final evaluation value for each alternative [37]. At present, there are several different aggregation methods [38], and the dynamic hesitant probability fuzzy weighted arithmetic (DHPFWA) operator is selected in this paper after comparison because of its simple and intuitive characteristics. Supposing a total of k experts have respectively given their evaluation information for the alternative A, which can be denoted mathematically as L={L1, L2, ⋯, Lk}, the weights of the experts can be denoted as ω=(ω1, ω2, ⋯, ωk), which can be obtained according to their past experience and authority in this field; the greater the value, the more important the evaluation information given by the expert [39]; and the weights satisfy the constraints ωi ∈ (0,1) and ∑ωi=1. Equation (3) gives the specific calculation method of the DHPFWA operator, where l1=1,2, ⋯, m1; l2=1,2, ⋯, m2; ⋯; lk=1,2, ⋯, mk, and we must point out that the values of m1, m2, ⋯, mk are not necessarily equal to each other, which means that the total numbers of elements in different CPLSs can be completely unequal. Let us give a simple example to illustrate the above theories: suppose the CPLSs L1={0.34|0.36, 0.38|0.35, 0.40|0.29}, L2={0.32|1}, and L3={0.35|0.7, 0.39|0.3} are the evaluation information for the alternative A given by three experts, respectively. We can find that the total numbers of elements in the three CPLSs are m1=3, m2=1, and m3=2, respectively, and they are totally different from each other.
Now further assume that the weights of the three experts are ω=(0.32, 0.27, 0.41); the aggregated value of the three CPLSs can be calculated according to equation (3), which is shown as follows: We can find that the aggregated value is also in the form of a CPLS and cannot be compared with other values directly [40]; the score value can be further calculated according to equation (2) mentioned in Section 2.1, which is shown as follows: The form of the score value is very simple: it is a real number, which is easy to compare with other values and to perform algebraic operations on.
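Since equation (3) is not reproduced in this excerpt, the sketch below assumes the common hesitant-fuzzy weighted-averaging form: enumerate every combination of one element per expert, combine the evaluation values as 1 − Π(1 − γi)^ωi, and multiply the probabilities. That choice, and the function names, are assumptions for illustration, not the paper's exact operator.

```python
# Hedged sketch of aggregating the three experts' CPLSs from the example
# above, assuming a weighted-averaging combination rule (not equation (3)
# itself, which is unavailable here).
from itertools import product

def aggregate(cplss, weights):
    out = []
    for combo in product(*cplss):              # one element per expert
        g, p = 1.0, 1.0
        for (gamma, prob), w in zip(combo, weights):
            g *= (1.0 - gamma) ** w            # weighted complement product
            p *= prob                          # joint probability
        out.append((1.0 - g, p))
    return out

def score(cpls):
    return sum(g * p for g, p in cpls)

# The three experts' evaluations of alternative A from Section 2.3.
L1 = [(0.34, 0.36), (0.38, 0.35), (0.40, 0.29)]
L2 = [(0.32, 1.0)]
L3 = [(0.35, 0.7), (0.39, 0.3)]
agg = aggregate([L1, L2, L3], [0.32, 0.27, 0.41])
print(len(agg))  # 6 combined elements (3 x 1 x 2)
```

Note that the aggregated set has m1 × m2 × m3 = 6 elements and its probabilities still sum to 1, matching the structural claims in the text.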

2.4. The Decision-Making Problem with Cost Constraints

Obviously, the cost is one of the most important constraints in the decision-making process and cannot be ignored [41]. Although every alternative for dealing with emergencies is costly, there are still wide gaps among different alternatives. The more rigorously an alternative is designed, the better the effect that can usually be acquired; however, the disadvantage is also obvious: such alternatives often have a great adverse impact on the local economy and increase the burden on the people and the government [42]. The costs include not only economic costs but also casualties, labour costs, environmental pollution, expected income loss, and so on; in particular, casualties are the most important cost and must be seriously considered in the decision-making process [43]. Through the above analysis, we believe that the most appropriate alternative is not necessarily the one that simply has the best effect; the cost and the effect must be considered comprehensively, which is more in line with the actual situation [44]. The main idea of dealing with the decision-making problem with cost constraints can be briefly described as follows: first, we reorder all the alternatives according to their costs, which can be denoted as A={A1, A2, ⋯, Ak}; the estimated costs of these alternatives can be denoted as Δη={Δη01, Δη12, ⋯, Δη(k−1)k}, in which the symbol Δη(i−1)i indicates the estimated cost from the time point t(i−1) to the time point ti; the estimated effects acquired by implementing these alternatives can be denoted as Δτ={Δτ01, Δτ12, ⋯, Δτ(k−1)k}, and similarly, the symbol Δτ(i−1)i indicates the estimated effect acquired from the time point t(i−1) to the time point ti. We give the definition of the effect per cost (EPC), which can be described as ψ={ψi | i=1,2, ⋯, k}, ψi=Δτ(i−1)i/Δη(i−1)i. The definition of the EPC, first proposed in this paper, can consider the cost and the effect comprehensively, and we believe that the most appropriate alternative at the current time point is the one that has the lowest EPC.

2.5. The Closed-Loop Control System

The closed-loop control system is a concept from the automatic control theory in engineering. Its principle can be briefly described as follows: part or all of the output signals are sent back to the input of the system, the differential signals between the original input signals and the feedback signals are calculated, and then they are input into the system to automatically adjust the relevant parameters [45], which helps keep the system from deviating from the predetermined goal. We find that there are always differences between the values estimated at the previous time point and the values measured currently; the closed-loop control system provides a way to solve this problem, and we try to construct a closed-loop control system in the decision-making field [46]. Specifically speaking, we calculate the differences between the values estimated at the previous time point and the values measured currently and then input the differences into the decision-making system; thus, the relevant parameters of the system will be automatically adjusted in time according to the differences, which helps improve the evaluation accuracy of the system [47]. This is also one of the important improvements of the algorithm proposed in this paper over other decision-making methods.

3. The Closed-Loop Method of Collaborative Decision-Making

In this section, we will introduce the algorithm proposed in this paper in detail and build the mathematical model.

3.1. Mathematicize the Decision-Making Problem

Usually, it is impossible to achieve the expected goal by taking measures only once when dealing with emergencies; we need to adjust the measures in time as the situation develops. First of all, we make the following assumptions: the initial time point is denoted as T0, the time point of achieving the expected goal is denoted as Tk, and all time points are recorded in the set T={T0, T1, ⋯, Tk}. All the time intervals are recorded in the set ΔT={ΔT01, ΔT12, ⋯, ΔT(k−1)k}, and they can also be called periods. Generally, they are equal to each other, while in some special cases, such as when a major unexpected event occurs suddenly, a new time point must be inserted immediately. The experts invited to deal with the emergency are denoted as E={E1, E2, ⋯, Em}, and their corresponding weights are denoted as ω={ω1, ω2, ⋯, ωm}; the alternatives proposed by the experts at the time point Ti are denoted as Ai={Ai1, Ai2, ⋯, Aini}; the values of the parameter i (i=0,1, ⋯, k) indicate the different time points; and the values of ni (i=1,2, ⋯, k) are not necessarily equal to each other. The experts will measure the current severity of the emergency according to the information acquired at each time point; these measurements are denoted as τ={τ0, τ1, ⋯, τk}, and each value τi in the set τ is in the form of a CPLS.

3.2. The Subtraction between Any Two CPLSs

In order to build the feedback network, first of all, we need to calculate the differences between the values estimated at the previous time point and the values measured at the current time point. Both data are in the form of CPLSs, and therefore, the subtraction between any two CPLSs is required [48]; however, this operation is rarely mentioned by other researchers, and for this reason, the paper proposes a subtraction method between any two continuously probabilistic linguistic sets, which is shown as equation (4). We suppose that L1 and L2 are two ordinary continuously probabilistic linguistic sets. We find that the calculation result obtained by equation (4) is also a set, which can be called a special continuously probabilistic linguistic set. The main difference is that the values in the subtraction set satisfy the constraint condition −1 ≤ γi − γj ≤ 1, while the values in any ordinary continuously probabilistic linguistic set satisfy the constraint condition 0 ≤ γ ≤ 1. It can be further illustrated by a simple example: supposing that there are two ordinary CPLSs, which are recorded as L1={0.4|0.2, 0.41|0.8} and L2={0.38|0.3, 0.41|0.1, 0.43|0.6}, respectively, the subtraction result can be calculated according to equation (4), and the result is as follows: We can find that some values are greater than zero, while other values are less than zero, which is different from the definition of the ordinary continuously probabilistic linguistic set. The sum of the probabilities is also equal to one, which is the same as in the ordinary continuously probabilistic linguistic set. However, the above result is still not intuitive enough to reflect the differences; therefore, the score value of the special continuously probabilistic linguistic set needs to be further calculated.
We must point out that the method mentioned in equation (2) is still applicable to the calculation of the special continuously probabilistic linguistic set, and the result is called the special score value. The only difference is that the value range is 0 ≤ S(L) ≤ 1 for any ordinary CPLS, while the value range is −1 ≤ S(L) ≤ 1 for the special CPLS. For example, the special score value of the above example can be calculated according to equation (2), and the result is as follows: When the score value is less than zero, it indicates that the value measured currently is better than the value estimated at the previous time point; when the score value is greater than zero, it indicates that the value measured currently is worse than the value estimated at the previous time point; and when the score value is equal to zero, it indicates that the value measured currently is exactly equal to the value estimated at the previous time point; however, this ideal situation is almost impossible in practice.
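The subtraction and the special score value can be sketched as follows, assuming equation (4) takes every pair of elements (one from each set), subtracts the evaluation values, and multiplies the probabilities; this pairwise form is an assumption consistent with the constraints described above, not a reproduction of the paper's equation.

```python
# Pairwise CPLS subtraction and the "special" score value, using the example
# from Section 3.2: L1 = {0.4|0.2, 0.41|0.8}, L2 = {0.38|0.3, 0.41|0.1, 0.43|0.6}.
from itertools import product

def subtract(L1, L2):
    """Pairwise differences with product probabilities; values may be negative."""
    return [(g1 - g2, p1 * p2) for (g1, p1), (g2, p2) in product(L1, L2)]

def score(cpls):
    return sum(g * p for g, p in cpls)

L1 = [(0.40, 0.2), (0.41, 0.8)]
L2 = [(0.38, 0.3), (0.41, 0.1), (0.43, 0.6)]
d = subtract(L1, L2)
print(round(sum(p for _, p in d), 9))  # probabilities still sum to 1.0
print(round(score(d), 9))              # -0.005
```

The negative special score here would mean, in the paper's terms, that the currently measured value is better than the earlier estimate.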

3.3. The Method of Obtaining the Most Appropriate Alternative

Let us illustrate the algorithm proposed in the paper in chronological order. At the initial time point T0, the current severity of the emergency measured by the experts is denoted as τ0, whose specific form is τ0={τ01, τ02, ⋯, τ0m}; each value τ0i, given by the corresponding expert Ei, is in the form of a continuously probabilistic linguistic set, and the specific form of τ0 is further described in Table 1. The alternatives proposed at the time point T0 are denoted as A0={A01, A02, ⋯, A0n0}, and the estimated severities for the next time point T1 when using the different alternatives are denoted as τ1′={τ1^1′, τ1^2′, ⋯, τ1^n0′}. Each value τ1^j′ in the set τ1′ is also a set, which can be denoted as τ1^j′={τ11^j′, τ12^j′, ⋯, τ1m^j′}; for example, the symbol τ1i^j′ indicates the severity that the expert Ei estimates for the time point T1 if the alternative A0j is used, and the specific form of τ1′ is further described in Table 2. We must point out that all the elements in Table 2 are also in the form of the continuously probabilistic linguistic set. Each value τ1^j′ (j=1,2, ⋯, n0) consists of the elements in the corresponding row of Table 2. For the sake of simplicity, the specific forms of the elements in Table 2 are not given; they are similar to the elements in Table 1.
Table 1

The current severity of the emergency at the initial time point.

Experts:          E1                E2                ⋯    Em
Measured values:  τ01={γl01|pl01}   τ02={γl02|pl02}   ⋯    τ0m={γl0m|pl0m}
Table 2

The estimated severities at the time point T1.

Estimated severities \ Experts:   E1         E2         ⋯    Em
τ1^1′:                            τ11^1′     τ12^1′     ⋯    τ1m^1′
τ1^2′:                            τ11^2′     τ12^2′     ⋯    τ1m^2′
⋯
τ1^n0′:                           τ11^n0′    τ12^n0′    ⋯    τ1m^n0′
All the scattered information provided by the experts can be aggregated by the DHPFWA operator, and then the score value can be further calculated; these theories have already been introduced in Section 2.3. Equations (5) and (6) are the specific expansion forms for this problem. The calculation result of DHPFWA(τ0) is in the form of the continuously probabilistic linguistic set, and the symbol m indicates the total number of elements in DHPFWA(τ0). Similarly, the score values of the estimated severities at the time point T1 can also be calculated, which can be denoted as S′(T1)={S(τ1^1′), S(τ1^2′), ⋯, S(τ1^n0′)}; then, all the estimated effects can be calculated according to equation (7). Each value in the set Δτ01 satisfies the constraint −1 ≤ Δτ01^j ≤ 1; when the value is negative, it indicates that the emergency would become worse after using the corresponding alternative A0j; when the value is positive, it indicates that the emergency would be alleviated after using the corresponding alternative A0j; and when the value is zero, it indicates that the emergency would not change after using the corresponding alternative A0j. The cost of each period is recorded in the set Δη={Δη01, Δη12, ⋯, Δη(k−1)k}, where the symbol Δη(i−1)i indicates the estimated cost from the time point T(i−1) to the time point Ti. Because the different alternatives A0={A01, A02, ⋯, A0n0} for dealing with the emergency will produce different costs, Δη01 is also a set, which can be denoted as Δη01={Δη01^1, Δη01^2, ⋯, Δη01^n0}. For the first period, the effect per cost ψ01 of using the different alternatives can be calculated according to equation (8); obviously, the result is a set. The most appropriate alternative at this time point is the one that has the lowest EPC, which is shown as follows: Similarly, the most appropriate alternative at other time points can be obtained by this method.

3.4. The Construction of the Closed-Loop System

The most appropriate alternative A0j found in the previous step will be implemented immediately. The current severity of the emergency will be measured again at the time point T1, which can be denoted as τ1. The set τ1 contains several values {τ11, τ12, ⋯, τ1m} given respectively by the different experts according to the information acquired at the time point T1, and the specific form of τ1 is further described in Table 3.
Table 3

The current severity of the emergency at the first time point.

Experts:          E1                E2                ⋯    Em
Measured values:  τ11={γl11|pl11}   τ12={γl12|pl12}   ⋯    τ1m={γl1m|pl1m}
The differences between the values estimated at the initial time point T0 and the values measured at the first time point T1 will be calculated; the calculation method is shown in equation (10), and its specific form is further described in Table 4.
Table 4

The differences between the estimated values and measured values.

Experts:      E1                 E2                 ⋯    Em
Differences:  d11=τ11^j′ − τ11   d12=τ12^j′ − τ12   ⋯    d1m=τ1m^j′ − τ1m
We must point out that all the τ1i^j′ (i=1,2, ⋯, m) and the τ1i (i=1,2, ⋯, m) are in the form of CPLS; therefore, each calculation d1i=τ1i^j′ − τ1i (i=1,2, ⋯, m) is a subtraction between CPLSs, and they must be calculated according to equation (4) mentioned in Section 3.2. All the differences d1i (i=1,2, ⋯, m) are also in the form of CPLS, and they will be aggregated according to equations (5) and (6) to obtain the total difference of the first period, which can be denoted as S(d1). The flow chart of the closed-loop submodule is shown in Figure 1. At this time, the system will enter the automatic adjustment stage. Four parameters denoted as λ1, λ2, ε, and ς will be set in advance, and the inequalities −1 ≤ λ1 ≤ −ε ≤ 0 ≤ ε ≤ λ2 ≤ 1 and 0 ≤ ς ≤ 1 hold. The smaller the value of ε, the higher the required accuracy of the system; the larger the value of λ1, the easier it is for the system to conduct a conservative evaluation; the larger the value of λ2, the easier it is for the system to conduct an optimistic evaluation; and the greater the value of ς, the easier the predetermined goal can be achieved. If the inequality |S(d1)| ≤ ε holds, the system works well and no adjustment is required; if the inequalities |S(d1)| > ε and λ1 ≤ S(d1) ≤ λ2 hold, only minor adjustments are needed and the automatic adjustment method will be activated immediately; if the inequality λ2 < S(d1) ≤ 1 holds, the system is too optimistic, the experts are not fully aware of the severity and development trend of the accident, and the system can be adjusted from two aspects: the first suggestion is that the experts propose more stringent alternatives, and the other is that the experts reduce the estimated values; if the inequality −1 ≤ S(d1) < λ1 holds, the system is too pessimistic and the alternative used has achieved better results than expected.
Similarly, the system can also be adjusted from two aspects: the first suggestion is that the experts can propose looser alternatives with lower costs, and the other suggestion is that the experts should appropriately raise the estimated values.
Figure 1

The flow chart of the closed-loop submodule.
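The four regimes described above can be summarized as a small decision routine. The sketch below is illustrative only; the function name is ours, and the default thresholds borrow the settings used later in Section 4.3 (λ1 = −0.001, λ2 = 0.001, ε = 0.0005):

```python
def classify_total_difference(s_d, eps=0.0005, lam1=-0.001, lam2=0.001):
    """Map the aggregated period difference S(d) to one of the four
    closed-loop regimes (illustrative sketch, not the paper's code)."""
    assert -1 <= lam1 <= -eps <= 0 <= eps <= lam2 <= 1
    if abs(s_d) <= eps:
        return "ok"                # system works well, no adjustment
    if lam1 <= s_d <= lam2:
        return "minor-adjust"      # activate the automatic adjustment module
    if s_d > lam2:
        return "too-optimistic"    # stricter alternatives / lower the estimates
    return "too-pessimistic"       # cheaper alternatives / raise the estimates
```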

3.5. The Automatic Adjustment Algorithm

The symbol ε mentioned above is called the acceptable threshold. In this section, we propose an automatic adjustment algorithm for the estimated values and its specific steps are listed as follows:

Step 1.

Appropriate values will be set for the system parameters λ1, λ2, and ε according to the actual situation of the emergency.

Step 2.

Calculate the total differences of the current periods S(d)={S(di) | i=1, 2, ⋯, k} by using the method mentioned in Section 3.4.

Step 3.

Let us take the first period as an example to illustrate the algorithm. Suppose the inequality λ1 ≤ S(d1) ≤ λ2 holds while the inequality |S(d1)| ≤ ε does not.

Step 4.

It can be divided into two categories according to the value of S(d1). When the inequality λ1 ≤ S(d1) < −ε holds, the maximum value must first be found among all the estimated values; supposing the symbol γ represents this maximum value, it is then increased by m × |S(d1)|, where m is the total number of experts. Conversely, when the inequality ε < S(d1) ≤ λ2 holds, the maximum value is decreased by m × S(d1). After the above analysis, the adjustment method can be unified for both categories, which can be shown as follows: γ′ = γ − m × S(d1).

Step 5.

Similarly, the total difference S′(d1) can be calculated again according to the updated estimated values, and Steps 3 and 4 will be repeated until the inequality |S′(d1)| ≤ ε holds.

Step 6.

The qualified estimated values will be obtained after several rounds of automatic adjustment. The automatic adjustment algorithm has two advantages: it is efficient and highly automated, and it modifies the original estimated information given by the experts less than other algorithms do. The flow chart of the automatic adjustment submodule is shown in Figure 2.
Figure 2

The flow chart of the automatic adjustment submodule.
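A toy version of Steps 2 to 5 can be written down if each expert's CPLS is collapsed to its scalar expectation, so that S(d) becomes a weighted mean of (estimated − measured); the paper's CPLS subtraction (equation (4)) and aggregation (equations (5) and (6)) are deliberately not reproduced here:

```python
def auto_adjust(estimates, measured, weights, eps=0.0005, max_iter=100):
    """Sketch of Steps 3-5: repeatedly shift the largest estimate by
    -m * S(d) until |S(d)| <= eps (scalar stand-ins for CPLSs)."""
    est = list(estimates)
    m = len(est)                   # m = number of experts
    s_d = 0.0
    for _ in range(max_iter):
        s_d = sum(w * (e - x) for w, e, x in zip(weights, est, measured))
        if abs(s_d) <= eps:        # qualified estimates: stop adjusting
            break
        i = max(range(m), key=lambda j: est[j])   # locate the maximum estimate
        est[i] -= m * s_d          # unified update: gamma' = gamma - m * S(d)
    return est, s_d
```

Each update shrinks S(d) by a factor of 1 − m·ωi (ωi being the adjusted expert's weight), so with weights near 1/m the loop converges in very few iterations.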

3.6. The Brief Summary of the Algorithm Proposed in the Paper

The overall flow chart of the algorithm proposed in the paper is shown in Figure 3. The whole algorithm is divided into multiple time points, denoted {T0, T1, ⋯, T}, and the span between any two adjacent time points is called a period, such as ΔT0=[T0, T1].
Figure 3

The overall flow chart of the algorithm proposed in the paper.

At the time point T0, the current severity of the emergency is measured by the experts; these data are called measured values for short. The algorithm then judges whether the predetermined goal has been achieved according to the measured values. If the goal has been achieved, the algorithm terminates immediately; if not, the experts estimate the severities at the next time point under each alternative, and the data obtained are called estimated values for short. The estimated effect of each alternative can be calculated from the measured values and the estimated values, and the cost of each alternative can be estimated from its specific measures. After this preparation, the effect per cost (EPC) of each alternative is calculated. Finally, the most appropriate alternative, namely the one with the highest EPC, is selected and implemented immediately. Similarly, at the time point T1, the experts measure the current severity of the emergency and again judge whether the predetermined goal has been achieved. If it has, the algorithm terminates; if not, the total differences between the values estimated at the previous time point and the values measured currently are calculated, and the corresponding automatic adjustment submodules are activated according to the differences. The subsequent processing is similar to the steps above, and the most appropriate alternative for this time point is found and implemented. From the time point T2 to the time point T, the algorithm repeats the above processes, and the severity of the emergency gradually decreases. The emergency is effectively controlled after several rounds of treatment.
At the time point T, the experts measure the current severity of the emergency and find that the inequality |1 − S(T)| ≤ ς holds, which indicates that the predetermined goal has been achieved, so the algorithm terminates immediately. The parameter ς is called the completion threshold. The emergency has thus been handled effectively at the lowest cost.
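The multiperiod flow above can be condensed into a schematic skeleton. In the sketch below every callable is a placeholder for the corresponding module in the paper (measurement, estimation, feedback adjustment), not a real API:

```python
def closed_loop_decision(measure, estimate, costs, goal_reached, adjust, periods=10):
    """Schematic skeleton of the overall flow (Figure 3).  Every callable is a
    placeholder for the corresponding module in the paper, not a real API."""
    chosen = []
    severity = None
    for t in range(periods):
        severity = measure(t)                 # measured values at time point T_t
        if goal_reached(severity):            # e.g. |1 - S(T)| <= completion threshold
            return chosen, severity
        estimates = estimate(t)               # {alternative: estimated severity}
        if chosen:                            # from T_1 on, apply the feedback adjustment
            estimates = adjust(estimates, severity)
        epc = {a: (s - severity) / costs[a] for a, s in estimates.items()}
        chosen.append(max(epc, key=epc.get))  # implement the highest-EPC alternative
    return chosen, severity
```

Plugging in the T0 numbers from Section 4.2 and a goal reached at T1 yields the choice of A2, consistent with the case study.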

4. A Case of the Closed-Loop Collaborative Decision-Making Algorithm

4.1. The General Description of the Emergency

The whole world is facing the severe challenge of COVID-19 (coronavirus disease 2019), and the latest predictions show that the epidemic will lead to a global economic recession and large-scale unemployment. It has caused a large number of infections; even worse, the various prevention and control methods are not mature enough to fundamentally eradicate the infectious disease. At present, COVID-19 has been basically controlled in China; however, the epidemic still breaks out occasionally in some areas and shows a trend of further expansion, adding great resistance to employment and economic development. The Chinese government has taken various measures to deal with the epidemic for years; however, the epidemic situation changes continuously over time. This problem therefore belongs to the class of dynamic decision-making problems; in addition, it can hardly be solved through a single round of measures, so the multiround decision-making algorithm discussed in the paper is suitable for it. The specific steps of the proposed algorithm are introduced in this section in chronological order.

4.2. The Processing Methods at the Time Point T0

Let us take one of the universities in the high-risk areas as an example to illustrate the algorithm; the university is facing the threat of the epidemic, and appropriate alternatives must be found at different time points to prevent and control it. Suppose a total of three experts are summoned to deal with this emergency, and at the initial time point T0 they put forward four response alternatives, denoted A0={A01, A02, A03, A04}. The predetermined goal is to minimize the adverse impact of COVID-19 on normal teaching and student activities. Table 5 lists the alternatives proposed by the experts at the initial time point (T0). The measures in the table become gradually more stringent from top to bottom, and we must admit that each later alternative is indeed better than the former at controlling the epidemic; the disadvantage is that its cost is higher. Once again, we point out that the most appropriate alternative is not necessarily the most stringent one.
Table 5

The alternatives proposed by experts at the initial time point.

AlternativesThe specific measures
A 0 1 We isolate all close contacts and provide disinfection equipment for the dormitories and classrooms visited by close contacts
A 0 2 In addition to the A01, we measure the temperature of all the students
A 0 3 In addition to the A02, we suspend the courses held by the college of close contacts
A 0 4 In addition to the A03, we suspend all the courses and cancel all unnecessary activities among students
The current severities of the emergency, measured separately by the experts according to the available information, are listed in Table 6, together with the experts' weights. Obviously, the predetermined goal has not been achieved. The scattered information can be aggregated according to equations (5) and (6). The score value obtained ranges from 0 to 1, where "0" indicates that the situation is extremely bad and "1" indicates that the situation is perfect. The current severity is 0.1554, and the specific calculation processes are shown as follows:
Table 6

The current severities measured at the initial time point.

Experts E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
Measured values(0.1|0.4, 0.2|0.6)(0.13|0.8, 0.17|0.2)(0.12|0.3, 0.17|0.2, 0.19|0.5)
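As an illustration of this aggregation step, the sketch below collapses each CPLS record from Table 6 to its probability-weighted expectation and then combines the experts by their weights. This is only a stand-in for the paper's equations (5) and (6), which are not reproduced in this excerpt, so the result (about 0.1549) differs marginally from the reported 0.1554:

```python
def cpls_score(records, weights):
    """Stand-in aggregation: each CPLS is a list of (value, probability) pairs;
    take its expectation, then average over experts by their weights."""
    expert_scores = [sum(v * p for v, p in rec) for rec in records]
    return sum(w * s for w, s in zip(weights, expert_scores))

# Measured severities at T0 (Table 6), one CPLS per expert
table6 = [
    [(0.10, 0.4), (0.20, 0.6)],                # E1, weight 0.3
    [(0.13, 0.8), (0.17, 0.2)],                # E2, weight 0.32
    [(0.12, 0.3), (0.17, 0.2), (0.19, 0.5)],   # E3, weight 0.38
]
score = cpls_score(table6, [0.3, 0.32, 0.38])  # close to the reported 0.1554
```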
The values of the estimated severities at the time point T1 when using different alternatives are listed in Table 7. Similarly, the score values are calculated and their specific calculation steps are shown as follows:
Table 7

The estimated severities at the time point T1 when using different alternatives.

Experts alternatives E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
A 1 1 (0.24|0.4, 0.26|0.6)(0.21|0.6, 0.23|0.3, 0.24|0.1)(0.21|0.6, 0.26|0.4)
A 1 2 (0.28|0.3, 0.30|0.5, 0.31|0.2)(0.29|0.5, 0.31|0.5)(0.29|0.3, 0.32|0.7)
A 1 3 (0.36|0.3, 0.38|0.7)(0.35|0.6, 0.37|0.3, 0.39|0.1)(0.33|0.5, 0.36|0.5)
A 1 4 (0.38|0.3, 0.39|0.2, 0.40|0.5)(0.36|0.3, 0.40|0.7)(0.38|0.6, 0.41|0.4)
The score values of the estimated severities at the time point T1 can be recorded as S′(T1)={0.2333, 0.3031, 0.3587, 0.3908}. Subsequently, the estimated effects over the period ΔT01 can be calculated according to equation (7), which are shown as follows: The costs over the period ΔT01 when using the different alternatives can be denoted as Δη01={Δη011, Δη012, Δη013, Δη014}. Supposing the cost of the alternative A1 is normalized and regarded as "1," the other values are standardized against it; the estimated costs of all alternatives are then Δη01=(1, 1.2, 1.7, 2). Obviously, the alternative A4 has the best effect; however, its cost is also the highest, so the most appropriate alternative cannot be determined directly, and the effects per cost of all alternatives need to be further calculated according to equation (8), which are shown as follows: The order of the alternatives is ψ012 > ψ013 > ψ014 > ψ011 according to the EPC values; therefore, the alternative A2 is the most appropriate alternative at the time point T0, and it will be implemented immediately.
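The EPC ranking at T0 can be reproduced from the numbers above, assuming equation (8) is simply the estimated effect (estimated score minus current score) divided by the normalized cost; the variable names below are ours:

```python
# Score values from Section 4.2
current = 0.1554                                   # measured severity score at T0
estimated = {"A1": 0.2333, "A2": 0.3031, "A3": 0.3587, "A4": 0.3908}
cost = {"A1": 1.0, "A2": 1.2, "A3": 1.7, "A4": 2.0}

# Effect per cost: estimated improvement over the period divided by cost
epc = {a: (estimated[a] - current) / cost[a] for a in estimated}
ranking = sorted(epc, key=epc.get, reverse=True)   # -> ["A2", "A3", "A4", "A1"]
```

The ranking matches the reported order ψ012 > ψ013 > ψ014 > ψ011: A4 has the largest effect but A2 the best effect per cost.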

4.3. The Processing Methods at the Time Point T1

Similarly, the experts will measure the severities again at the time point T1 and their values are listed in Table 8.
Table 8

The current severities measured at the time point T1.

Experts E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
Measured values(0.29|0.3, 0.31|0.7)(0.26|0.4, 0.3|0.6)(0.32|1)
Obviously, the predetermined goal has still not been achieved. In order to test and improve the accuracy of the system, the differences between the values estimated at T0 and the values measured at T1 will be calculated; their values are listed in Table 9.
Table 9

The differences at the first period.

Experts E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
Differences (−0.01|0.09, −0.03|0.21, 0.01|0.15, −0.01|0.35, 0.02|0.06, 0|0.14) (0.03|0.2, −0.01|0.3, 0.05|0.2, 0.01|0.3) (−0.03|0.3, 0|0.7)
The system parameters are set as λ1=−0.001, λ2=0.001, ε=0.0005, and ς=0.04. We can find that the inequality λ1 < S(d1) < λ2 holds; therefore, major adjustments are not required. However, the inequality −ε ≤ S(d1) ≤ ε does not hold, which indicates that minor adjustments are still required, and the automatic adjustment module will be activated immediately. According to the algorithm, the maximum estimated value of the alternative A02 in Table 7 is found; the value 0.32 is increased to 0.3216081827 according to equation (11), and the other values remain unchanged. The updated severities are shown in Table 10.
Table 10

The updated estimated severities of the implemented alternative at the time point T1.

Experts alternatives E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
A02 (0.28|0.3, 0.30|0.5, 0.31|0.2)(0.29|0.5, 0.31|0.5)(0.29|0.3, 0.3216081827|0.7)
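The reported adjustment can be checked for consistency, assuming the unified Step 4 update γ′ = γ − m × S(d1) with m = 3 experts (a back-of-the-envelope check, not the paper's code):

```python
m, eps = 3, 0.0005
old_max, new_max = 0.32, 0.3216081827
implied_sd = -(new_max - old_max) / m   # S(d1) implied by gamma' = gamma - m * S(d1)
assert abs(implied_sd) > eps            # a minor adjustment was indeed required
assert -0.001 <= implied_sd < -eps      # inside [lambda1, -eps): the minor regime
```

The implied S(d1) is about −0.000536, which lies between λ1 = −0.001 and −ε = −0.0005, exactly the "minor adjustment" regime described in Section 3.4.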
The total difference will be calculated again according to the data in Table 11, and the specific steps are shown as follows:
Table 11

The updated differences at the first period.

Experts E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
Differences (−0.01|0.09, −0.03|0.21, 0.01|0.15, −0.01|0.35, 0.02|0.06, 0|0.14) (0.03|0.2, −0.01|0.3, 0.05|0.2, 0.01|0.3) (−0.03|0.3, 0.0016081827|0.7)
We can find that the inequality −ε < S′(d1) < ε holds at this time, which indicates that the automatic adjustment module works well. The updated values in Table 10 can provide references for experts in the next estimation. Since the inequality λ1 < S(d1) < λ2 holds, the most appropriate alternative at this time point is the same as the one at the previous time point; therefore, the alternative A2 is still the most appropriate alternative at the time point T1 and it will be implemented immediately. Table 12 lists the estimated severities at the time point T2 when using different alternatives. Since all the alternatives proposed by experts have not changed, the costs remain unchanged.
Table 12

The estimated severities at the time point T2 when using different alternatives.

Experts alternatives E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
A 2 1 (0.42|1)(0.41|0.6, 0.44|0.4)(0.39|0.3, 0.43|0.7)
A 2 2 (0.46|0.2, 0.5|0.8)(0.47|0.5, 0.52|0.5)(0.49|0.7, 0.51|0.3)
A 2 3 (0.55|0.8, 0.57|0.2)(0.52|0.4, 0.56|0.6)(0.54|1)
A 2 4 (0.58|0.4, 0.62|0.6)(0.6|1)(0.58|0.7, 0.61|0.3)

4.4. The Processing Methods at the Time Point T2

In the same way, the experts will measure the severities again at the time point T2 and their values are listed in Table 13.
Table 13

The current severities measured at the time point T2.

Experts E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
Measured values(0.63|0.5, 0.65|0.3, 0.67|0.2)(0.67|0.3, 0.68|0.7)(0.66|1)
Obviously, the predetermined goal has not been achieved. The differences between the values estimated at the time point T1 and the values measured at the time point T2 will be calculated, which are shown in Table 14. The total difference will be aggregated according to the data in Table 14.
Table 14

The differences at the second period.

Experts E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
Differences (−0.17|0.10, −0.19|0.06, −0.21|0.04, −0.13|0.40, −0.15|0.24, −0.17|0.16) (−0.20|0.15, −0.21|0.35, −0.15|0.15, −0.16|0.35) (−0.17|0.7, −0.15|0.3)
We can find that the inequality S(d2) < λ1 holds, which indicates that the actual effects of the alternative are much better than the estimated effects, so major adjustments are required. The experts need to check the system carefully to find out whether any important decision-making information is missing. An alternative with a lower cost should be adopted; if the alternative adopted in the last round of decision-making is already the cheapest, the experts should propose a new and cheaper alternative. Since the inequality Δη231 < Δη232 holds in this case, an alternative with a lower cost already exists, so there is no need to propose a new one; the alternative A1 will be the most appropriate alternative at the time point T2, and it will be implemented immediately. Due to the good effect of the alternative, the experts will give more optimistic estimated values in the next round of estimation, which are shown in Table 15.
Table 15

The estimated severities at the time point T3 when using different alternatives.

Experts alternatives E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
A 3 1 (0.88|0.2, 0.92|0.2)(0.91|0.6, 0.93|0.4)(0.82|1)
A 3 2 (0.94|0.4, 0.96|0.6)(0.95|1)(0.93|0.7, 0.96|0.3)
A 3 3 (0.97|0.5, 0.98|0.5)(0.95|0.4, 0.97|0.6)(0.96|1)
A 3 4 (0.97|1)(0.98|0.4, 0.99|0.6)(0.94|0.7, 0.96|0.3)

4.5. Achieve the Predetermined Goal

The experts will measure the severities of the emergency again at the time point T3, and their values are listed in Table 16, and then, the score value will be calculated.
Table 16

The current severities measured at the time point T3.

Experts E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
Measured values(0.95|0.4, 0.98|0.6)(0.97|1)(0.96|0.8, 0.98|0.2)
We can find that the inequality |1 − S(T3)| ≤ ς holds, which indicates that the emergency has almost been eliminated; only routine inspections are required, and the algorithm will be terminated.

5. The Comparisons and Discussions

Many scholars have proposed outstanding algorithms in the field of decision-making from various perspectives, and these algorithms have their own characteristics and suitable application scopes [49]. Comparisons between the algorithm proposed in the paper and the others are made in this section, which helps identify its advantages and disadvantages.

5.1. The Hesitant Fuzzy Set and Its Processing Methods

The hesitant fuzzy set, a classic data structure, is one of the important definitions in fuzzy mathematics [50], and its information aggregation operators and comparison methods are quite mature; in particular, many complex data structures are developed from it. Unfortunately, the probability information of the evaluation values cannot be recorded in the hesitant fuzzy set. Table 17 lists the conversion values of Table 7 when the data are recorded in the form of hesitant fuzzy sets.
Table 17

The conversion values in the form of hesitant fuzzy sets.

Experts alternatives E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
A 1 1 (0.24, 0.26)(0.21, 0.23, 0.24)(0.21, 0.26)
A 1 2 (0.28, 0.30, 0.31)(0.29, 0.31)(0.29, 0.32)
A 1 3 (0.36, 0.38)(0.35, 0.37, 0.39)(0.33, 0.36)
A 1 4 (0.38, 0.39, 0.40)(0.36, 0.40)(0.38, 0.41)
We find that only the evaluation values can be recorded, and all the corresponding probability information is missing. From another point of view, it can be considered that all the probability values in any hesitant fuzzy set are equal to each other. Therefore, the hesitant fuzzy set is a special case of the continuously probabilistic linguistic set, and the continuously probabilistic linguistic set can record more detailed information, which fundamentally makes the algorithm more accurate.

5.2. The Probabilistic Linguistic Set and Its Processing Methods

The probabilistic linguistic set (PLS) is also an efficient data structure, widely used for dealing with fuzzy problems, especially the collection and storage of fuzzy data [51]. The total number of possible evaluation values in the PLS is limited [52], and all the possible evaluation values are contained in the additive linguistic term set, denoted S={sα | α=0, 1, ⋯, 2τ}, where the symbol τ indicates a positive integer. The definition of the probabilistic linguistic set can be described mathematically as follows: Obviously, the data structure CPLS proposed in the paper is developed from the probabilistic linguistic set; it not only inherits the advantages of the PLS but also overcomes its disadvantages, expanding the number of possible evaluation values from a limited set to infinitely many. For the case mentioned above, the additive linguistic term set can be set as S={sα | α=0, 1, 2, 3, 4}, where the symbol s0 indicates "terrible"; s1 indicates "bad"; s2 indicates "moderate"; s3 indicates "good"; and s4 indicates "perfect." Let us again take the data in Table 7 as an example to illustrate the data structure. The estimated values cannot be directly converted to the additive linguistic term sets; therefore, we first establish the transformation rules, which can be described as follows: a value is set to s0 if the inequality 0 ≤ τ1i′ < 0.2 holds; to s1 if 0.2 ≤ τ1i′ < 0.4 holds; to s2 if 0.4 ≤ τ1i′ < 0.6 holds; to s3 if 0.6 ≤ τ1i′ < 0.8 holds; and to s4 if 0.8 ≤ τ1i′ ≤ 1 holds. Table 18 lists the transformed values when the data are recorded in the form of probabilistic linguistic sets.
Table 18

The transformed values in the form of probabilistic linguistic sets.

Experts alternatives E 1(ω1=0.3) E 2(ω2=0.32) E 3(ω3=0.38)
A 1 1 (s1(1))(s1(1))(s1(1))
A 1 2 (s1(1))(s1(1))(s1(1))
A 1 3 (s1(1))(s1(1))(s1(1))
A 1 4 (s1(0.5), s2(0.5))(s1(0.3), s2(0.7))(s1(0.6), s2(0.4))
We find that the rows A11, A12, and A13 are identical, and all the evaluation values given by the different experts are s1 and s2; obviously, the discrimination ability of this method is poorer than that of the algorithm proposed in the paper.

5.3. The Decision-Making Algorithms without the Cost Limitation

The cost limitation in the decision-making process is one of the characteristics of the algorithm proposed in the paper. Although many other algorithms have considered costs, they only take the cost as one of the decision-making indicators and do not treat it separately [53]. In some cases, we found that an increase in cost does not improve the effect at all. For the case discussed in the paper, if only the effects are considered, the most appropriate alternatives will be A4 ~ A4 ~ A4, with a total cost of η=Δη014+Δη124+Δη234=6. The final result obtained by the algorithm proposed in the paper is A2 ~ A2 ~ A1, with a total cost of η′=Δη012+Δη122+Δη231=3.4. The same goal is achieved, but the cost is reduced by 43.3%, which verifies the superiority of the proposed algorithm from the perspective of cost.
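The 43.3% figure follows directly from the period costs; a quick arithmetic check (alternative labels as in the text):

```python
cost_effect_only = 2 + 2 + 2          # A4 chosen at every period, total cost 6
cost_proposed = 1.2 + 1.2 + 1.0       # A2, A2, A1 under the proposed algorithm
saving = (cost_effect_only - cost_proposed) / cost_effect_only
# saving is about 0.433, matching the 43.3% reported in the text
```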

5.4. The Open-Loop Decision-Making Algorithms

At present, most decision-making algorithms adopt the open-loop mode; in other words, they fail to establish a feedback mechanism [54]. We now demonstrate how a method without feedback mechanisms would solve the above case and point out the differences between that method and the algorithm proposed in the paper. The alternative A2 would still be the most appropriate alternative at the time point T0. However, the estimated values could not be compared with the measured values at the time point T1; therefore, the accuracy of the system could not be verified, the automatic adjustment module proposed in the paper could not be activated, the system could not be adjusted in time, and the error rate would grow over time. One notable difference occurs at the time point T2: if the feedback mechanism fails to work, A2 instead of A1 would be chosen as the most appropriate alternative, and the conclusion that only a lower-cost alternative is needed and that the estimated values must be raised in the next estimation could not be drawn, directly increasing costs and processing cycles. In short, the feedback mechanism is effective for verifying the correctness of the system in time, and it can save total cost and reduce time effectively [55], which verifies the superiority of the proposed algorithm from the perspective of accuracy.

6. Conclusions

When faced with emergencies, especially disasters, it is crucial to make timely and appropriate decisions; however, this is not easy because of the limited time available and the fuzziness of the information that can be acquired. The accuracy of the data directly affects the quality of the final decision, yet it is hard to record data accurately and scientifically, so improving the accuracy of the collected data is the first problem to be solved. After comparisons, the continuously probabilistic linguistic set is adopted as the data structure for saving the original data. This data structure allows multiple possible values to be stored together in one record, along with the probability information of each possible value; these characteristics help overcome the uncertainty and fuzziness in the process of data acquisition, improve the data quality to the greatest extent, and lay a solid foundation for the later decision-making. At present, most decision-making models adopt a linear structure and a single-round mode; although these models have been elaborately designed, they share an important defect: it is impossible to verify the accuracy of the estimated results given by the system in time. To solve this problem, a new structure is proposed in the paper. The whole decision-making process is divided into multiple sub-decision-making stages, and each estimated result can be verified at the next decision-making time point. The estimated values and the current measured values are two different types of signals used in the system, and the differences between the values estimated at the previous time point and the values measured currently are calculated by the fuzzy subtraction proposed in the paper. In general, there are certain differences between them, and the greater the difference, the lower the accuracy of the system.
Due to time constraints, it is almost impossible for the experts to reevaluate the alternatives; fortunately, the paper proposes an automatic repair algorithm that solves this problem. The repair algorithm contains several submodules corresponding to different situations. When the inequality |S(d)| ≤ ε holds, the system works well and needs no adjustment; when λ1 ≤ S(d) < −ε or ε < S(d) ≤ λ2 holds, the system needs minor adjustments and the automatic adjustment algorithm is activated immediately; when λ2 < S(d) ≤ 1 holds, the system is too optimistic and the actual situation is more serious than estimated; and when −1 ≤ S(d) < λ1 holds, the system is too prudent and the actual effect is much better than estimated. The closed-loop decision-making system is constructed through the establishment of these feedback mechanisms, and the accuracy of the whole model is thereby improved effectively. The cost is one of the most important factors in the decision-making process, and we must point out again that the cost mentioned in the paper refers to the generalized cost, not just the economic cost. The effectiveness of each alternative is evaluated separately in each round of decision-making. Generally, a rigorous alternative can achieve better results, but it may also cause considerable losses; thus, it is not necessarily the most appropriate alternative. Based on these considerations, the paper proposes the definition and calculation method of the effect per cost; when the predetermined goal can be achieved, we believe that the most appropriate alternative must be the one with the lowest cost. The establishment of the above theory is also one of the innovations of this paper.
We must also point out some limitations. As one of the initial conditions, the estimated cost is essentially a fuzzy value, which is difficult to describe accurately with a single number. Thus, the problem discussed in the paper is actually a doubly fuzzy problem, and more fuzzy variables need to be considered. Further research on this problem will be conducted by our team in the near future.