
Research on performance and dynamic competency evaluation of bid evaluation experts based on weight interval number.

Tie Li1, Guoliang Li1, Mi Zhang1, Yuan Qin1, Guolong Wei1.   

Abstract

PURPOSE/SIGNIFICANCE: Over the years, scholars have studied various aspects of bid evaluation experts, such as their behavior. However, previous research ignores the performance and competency of bid evaluation experts, so this paper aims to provide a theoretical basis for the incentive and constraint mechanism and the hierarchical or dynamic management of bid evaluation experts by evaluating their performance and dynamic competency. METHOD/PROCESS: Firstly, the evaluation index system for the performance and dynamic competency of bid evaluation experts is preliminarily constructed by referring to relevant literature, and the constructed indices are then revised and improved by consulting relevant stakeholders' experts. Secondly, considering the hesitation and consistency of expert weighting, the calculation method of the expert weight coefficient and the index score interval number is improved. Based on the theory of weight interval numbers, a mathematical optimization model is constructed to calculate index weights according to the purposes of performance judgment and dynamic competency clustering of bid evaluation experts. Finally, data on the performance and dynamic competency of bid evaluation experts are obtained by questionnaire survey, and an empirical analysis is carried out by simulating bid evaluation experts consistent with the actual situation. RESULTS/CONCLUSION: After improving the calculation method of the index score interval number, and then calculating the index weight interval number from it, the length of the index weight interval number can be decreased and its calculation accuracy increased. In addition, the index weights calculated by the constructed mathematical optimization model make the intra-class discrimination smaller and the inter-class discrimination larger. Finally, some suggestions are also provided for the management of bid evaluation experts.


Year:  2022        PMID: 35776766      PMCID: PMC9249197          DOI: 10.1371/journal.pone.0269467

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


1. Introduction

Engineering bidding is a widely used transaction method worldwide [1-3]. For project owners, the selection of contractors has a significant impact on project cost and quality [4]. In China, in order to select the bidder who best meets the bidding conditions, the relevant departments randomly select bid evaluation experts in the relevant fields to form a temporary bid evaluation committee (China stipulates that the bid evaluation committee has an odd number of members, no fewer than five), based on the provisions of relevant laws and regulations and the needs of the project; the committee then evaluates and selects the winning bidder according to the bidders' quotations, technical measures, etc. This arrangement leads to an inequality between the legal responsibility and the power of the bid evaluation subjects, and to a contradiction between the temporary nature of the bid evaluation committee and the long-term nature of the project; both aspects have become an important part of the research hotspot [1]. The evaluation and selection of contractors is a difficult and challenging task [5], and the decisions on bid evaluation and bid winning are often considered key links in auctions [6]. Therefore, assessing contractors and selecting the best bidders requires complex knowledge and experience to ensure that the selected contractors are able to implement projects as required by owners [7]. At the same time, the bid evaluation committee can decide by itself, but not in an arbitrary way [8]. The experts hold the dominant power in the evaluation work and have the most direct and fundamental impact on the evaluation results [9]. Therefore, bid evaluation experts must have high competency.
Although the relevant laws and regulations in China stipulate the qualifications required of bid evaluation experts when they enter the expert database, there are no reliable measures for implementing a periodic assessment system after the experts have entered it. Hence, the quality of bid evaluation experts is worrying [9]. Some provinces have put forward hierarchical or dynamic management of bid evaluation experts. In the long run, the ability of bid evaluation experts changes with the accumulation of knowledge and experience. Therefore, it is necessary to take the periodic assessment of the competency of bid evaluation experts as the theoretical basis of hierarchical management. In addition, the evaluation records of China's bid evaluation experts are generally used only for archiving and verification. Most provinces do not assess experts through their performance. Although a few provinces propose "scoring system" management according to the performance of bid evaluation experts, it still has large limitations, since it only considers whether the bid evaluation experts act illegally or not. At present, China is in the transition stage from offline bid evaluation (i.e., the traditional bid evaluation method) to online bid evaluation. The bid evaluation processes of the two methods are shown in Fig 1. Through comparison, the common points between the two and the advantages of online bid evaluation are found as follows:
Fig 1. Two bid evaluation mechanisms. a. Offline bid evaluation. b. Online bid evaluation.

(1) Common points between the two methods: no matter which method of bid evaluation is adopted, bid evaluation experts need to put forward bid evaluation suggestions, score the bids, and draw bid evaluation conclusions according to their professional knowledge and work experience. Meanwhile, they must comply with relevant laws and regulations. (2) Advantages of online bid evaluation: under either method, the performance of bid evaluation experts can in principle be evaluated. However, online bid evaluation can automatically evaluate the performance of bid evaluation experts according to the evaluation process and results, and conduct periodic evaluation. At the same time, the digital footprint of the experts' evaluation process can be collected through technical means. Previous studies have shown that digital footprints provide an effective way to reduce information asymmetry and moral hazard [10-13]. Therefore, the performance of bid evaluation experts can be evaluated through their digital footprint (for example, the seriousness of the experts' work detected through equipment, the time experts spend browsing bids, etc.). Online bid evaluation can also be carried out off-site: experts do not need to meet and do not affect each other, which realizes a static game with incomplete information and independent bid evaluation. Therefore, how to evaluate the performance of bid evaluation experts and periodically assess their competency under the background of online bid evaluation, so as to provide a theoretical basis for hierarchical or dynamic management and for the incentive and constraint mechanism, is a very meaningful research topic.

2. Related research review

2.1 Research on bid evaluation experts

The first section reviewed the current management situation of bid evaluation experts in China and expounded the importance of competency and performance evaluation of bid evaluation experts, as well as the limitations of current management under the background of information technology. Research on bid evaluation experts mainly covers two aspects: integrity, and bid evaluation behavior and results. As for integrity, References [14, 15] constructed evaluation index systems and evaluation models to evaluate the integrity of bid evaluation experts from different perspectives. As for bid evaluation behavior and results, existing research has focused on the behavior of the bid evaluation committee [16], the antagonistic or uncooperative behavior of bid evaluation expert groups (i.e., the technical group and the business group) [6, 17-20], the bid evaluation behavior [21] and collusive behavior [22] of bid evaluation experts, as well as abnormal scores of bid evaluation experts [23] and differences in scoring results [24]. In addition, References [25, 26] proposed incentivizing and constraining bid evaluation experts by analyzing the principal-agent relationship, and Reference [27] further designed the incentive and constraint mechanism. However, the consensus-building process of bid evaluation experts, the generation of the collective decision-making matrix, and rank-oriented decision-making methods concern the expert decision-making problem, which differs from the perspective of this paper and will not be further discussed. The bid evaluation process of experts can be regarded as an expert service process. Due to the information asymmetry in expert service, different kinds of hidden moral behavior exist in the expert service market, such as fraud, improper service, internalization of the entire objective functions of the clients, and so on [28].
Bid evaluation work also has characteristics of the "gig economy", such as its temporary and project-based nature. Based on network platforms, the "new gig economy" [29] of the Internet era has produced online labor platforms with algorithms as the underlying technical logic [30] by deeply integrating digital technology with the on-demand gig economy. Highly automated, data-driven methods replace the functions of managers in the labor execution management of platform workers through algorithmic management [31]; the incomplete and asymmetric information acquisition of both sides [32] and the principal-agent problem of incomplete labor contracts under asymmetric information are overcome through the exchange of large amounts of information [33]. As a result, the individual behavior of platform workers in the labor process is almost completely exposed to the continuous and rigorous monitoring environment of the algorithm. Therefore, they must show behavior consistent with organizational goals and platform specifications and complete the assigned tasks [34, 35]. Consequently, human resource management activities such as performance management differ significantly from the traditional model [36]. By reshaping the work mode, the digital economy has triggered a series of new problems in behavior, efficiency, and ethics in the workplace, so research on organizational behavior and human resource management at the micro level is urgently needed [37]. Big data and artificial intelligence simplify data acquisition and provide research data that were previously difficult to obtain and trace [38]. They cover all aspects of the production process, can penetrate into each production link, and offer insight into relevant factors including human emotions and preferences [39].
Therefore, it is feasible to evaluate the performance of online bid evaluation experts based on their digital footprint, which is also a topic worthy of in-depth discussion. Reviewing the relevant research on bid evaluation experts, we find a lack of research on their performance evaluation. In terms of China's relevant policy provisions, the practical needs of management, and theoretical research, it is urgent to study the performance evaluation of online bid evaluation experts, and to consider that changes in performance, integrity, knowledge, experience, and other factors within a period may lead to changes in their competency, so as to support management practice and expand relevant theories. According to the definition of performance, the performance of bid evaluation experts should comprehensively consider bid evaluation behavior and results. Referring to competency theory [40], competency should consider the performance and other related indicators within the period. Based on the dynamic view of competency theory [41] and the static and dynamic content characteristics of competency [42], this paper defines the periodic competency of bid evaluation experts as dynamic competency. In view of this, this paper comprehensively considers the relevant factors to evaluate the performance and dynamic competency of bid evaluation experts.

2.2 Research on subjective weighting method

Based on the above, this paper constructs an evaluation index system in order to realize the performance evaluation and dynamic competency evaluation of bid evaluation experts. The calculation of reasonable and effective index weights thus becomes the key issue of evaluation. The common practice is to find some stakeholders (i.e., all those with sufficient professional knowledge to carry out a reasonable evaluation) [43] and combine the importance of the indices with linguistic values to construct a judgment matrix through linguistic variables [44, 45]. In this way, the limitations of individual expert opinions can be avoided and the reliability of the evaluation results can be improved by integrating multiple expert opinions, as in the analytic hierarchy process (AHP) [46] and the order relationship analysis method (G1) [47, 48]. In this paper, the IAHP method of Reference [49] is used to construct the evaluation model. However, Reference [49] also has some shortcomings. Firstly, in judging the importance of evaluation indices, the main characteristics of expert judgment information (i.e., the hesitation caused by cognitive limitations and the consistency of different preferences) [50] reflect the credibility of expert evaluation and affect the final evaluation results, while Reference [49] only considers evaluation consistency in calculating the expert coefficient. Secondly, it is also unreasonable that the expert coefficient is eliminated in the calculation method of the index score interval number constructed in Reference [49], because the size of the expert coefficient represents the credibility of the judgment results. Therefore, this paper improves the method proposed in Reference [49]. At the same time, the purpose of dynamic competency evaluation in this paper is to cluster bid evaluation experts and provide a theoretical basis for hierarchical management.
Some scholars consider optimization under the condition of calculating weight interval numbers, taking the highest satisfaction [51], the minimum weight deviation [52], and the minimum total projection deviation [53] as optimization objectives. This paper draws on this idea and constructs a mathematical optimization model under the condition of weight interval numbers. The structure of this paper is as follows. Section 1 introduces the significance of performance and dynamic competency evaluation of bid evaluation experts. Section 2 reviews the relevant research on bid evaluation experts and the related theories of subjective weighting methods. Section 3 constructs the evaluation index system of performance and dynamic competency of bid evaluation experts. Section 4 improves the calculation method of the expert weight coefficient and the index score interval number based on the evaluation method proposed in Reference [49], and constructs a mathematical optimization model according to the purposes of performance and dynamic competency evaluation. Section 5 presents an empirical analysis. Section 6 expounds the research conclusions and puts forward suggestions for the management of bid evaluation experts based on the research and related theories.

3. Evaluation index system

3.1 Construction principles of evaluation index system

Purpose principle: realize the performance and dynamic competency evaluation of bid evaluation experts, and provide a theoretical basis for hierarchical management and for the incentive and constraint of bid evaluation experts. Scientific principle: fully follow the laws of bid evaluation activities; the selected indices, calculation methods, and standards should fit the characteristics of bid evaluation. Practical principle: conform to objective reality; the selected index data should be collectable and easy to operate. Systematic principle: comprehensively reflect the performance and dynamic competency of bid evaluation experts.

3.2 Construction process of evaluation index system

Based on an analysis of the management laws and regulations for bid evaluation experts in some provinces and the current situation of bid evaluation in China, this paper sorts out the relevant evaluation indices for existing bid evaluation experts [14, 15], refers to those for other project evaluation experts [54, 55], and follows the above principles to preliminarily construct the evaluation index system of bid evaluation experts' performance and dynamic competency, according to the common points of offline and online bid evaluation and the digital footprint of online bid evaluation. The preliminary performance evaluation index system includes 3 first-level indices, namely bid evaluation performance, bid evaluation quality, and code of conduct, with 10 corresponding second-level indices. The preliminary dynamic competency evaluation index system includes 3 first-level indices of interim performance, code of conduct, and database-entry competency, with 8 corresponding second-level indices. The expert consultation method is used to consult a total of 12 experts, including 5 owners, 3 from regulatory agencies, 2 from construction organizations, and 2 bid evaluation experts, to modify and improve the evaluation indices. The final performance evaluation index system includes 3 first-level indices (bid evaluation performance, bid evaluation quality, code of conduct) and 11 corresponding second-level indices, as shown in Table 1, and the final dynamic competency evaluation index system includes 2 first-level indices (interim comprehensive situation, competency improvement) and 6 corresponding second-level indices, as shown in Table 2. Finally, referring to the relevant references [54-56], the index calculation methods are determined according to the actual situation of bid evaluation, as shown in Tables 1 and 2.
Table 1

Performance evaluation index system of bid evaluation experts.

First-level index | Second-level index | Second-level index remark | Calculation method
Bid evaluation performance A1 | Study of bidding documents B1 | Find unreasonable places in the bidding documents and whether the suggestions are adopted | C1 = min{t1, 9}; t1 is the number of adopted suggestions. Scored according to the number of suggestions made by the expert and adopted; if a suggestion does not match the browsed page, it earns no points even if adopted
 | Formal review B2 | Find minor deviations in the bid documents and confirm whether there are omissions after approval and post-qualification review | C2: refer to the calculation method of C1
 | Responsiveness review B3 | Find significant deviations in the bid documents and whether they meet the relevant requirements of the bidding documents | C3: refer to the calculation method of C1
 | Detailed review B4 | Evaluate the bid documents and put forward reasonable suggestions or find unreasonable parts | C4: refer to the calculation method of C1
 | Review conclusion B5 | Carefully fill in the review comments and review report, with bonus points when a constructive proposal on the tender is adopted or collusive bidding is identified and confirmed, and draw the evaluation conclusion (i.e., the order of the recommended winning candidates) | C5 = 0.5·min{t51, 9} + 0.5·t52·9; t52 is the Spearman rank correlation coefficient [41]
Quality of bid evaluation A2 | Score abnormality B6 | Whether there is abnormal consistency in scoring (an expert's scores across different items of the same bid, an expert's scores across different bids, scores among different experts), scoring errors, abnormally high or low scores, wrongly assigned scores, or plagiarism | C6 = max{−t6, −9}; t6 is the number of unreasonable places
 | Scoring credibility B7 | Identify experts with significant bias effects on the evaluation data through the Tukey test; experts are given additional points according to their credibility (the greater the credibility, the more additional points) | C7: calculated referring to Reference [45]
Code of conduct A3 | Review seriousness B8 | Facial movement: using facial information to analyze a person's concentration level through facial expression recognition [57] | C8 ∈ {9, 5, 1}; 9 represents focused, 5 neutral, 1 unfocused
 | Timeliness B9 | Time of submission of the review report and of the review (i.e., the time spent browsing each page of the bid documents, fitted against the other experts) | C9 = ρ·max(9 − Σ_{i=1}^{m} σi/(t̄i·m), 0), with t̄i = (1/n)Σ_{j=1}^{n} tji and σi² = (1/n)Σ_{j=1}^{n}(tji − t̄i)². ρ = 0 or 1: 0 indicates that the review report was not submitted in time, 1 that it was. tji is expert j's evaluation time on the i-th page, t̄i the average review time on the i-th page among experts of the same profession, σi² the variance of the experts' review time on that page, n the number of professional experts, and m the number of pages of the bid documents
 | Discipline B10 | Whether there are circumstances stipulated by laws, regulations, and rules, such as not submitting the review report in time, imposture, disclosure of bid evaluation information, unauthorized departure from duty, use of communication tools, private contact with bidders, bribery, or confirming participation in bid evaluation but not evaluating without asking for leave. Detection criteria: logging in to other web pages, using other applications, taking screenshots, photographing, etc. | C10 ∈ {0, −1, −3, −5, −7, −9}, according to the impact on bid evaluation. If serious consequences are caused, bid evaluation qualification is suspended (i.e., no bid evaluation information is provided to the expert for a certain period) or cancelled; in the case of an impostor, the other indices are not scored
 | Strictness B11 | Whether the expert evaluates strictly in accordance with the bid evaluation standards and methods of the bidding documents, checks calculation errors in bid documents, and other situations stipulated by laws, regulations, and rules | C11 ∈ {0, −1, −3, −5, −7, −9}, according to the influence on bid evaluation; if serious consequences are caused, suspension (i.e., no bid evaluation information is pushed to the expert for a certain period) or disqualification from bid evaluation follows
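Several of the scoring formulas in Table 1 are simple capped counts; the following Python sketch illustrates C1, C5, and C6 (the function names are ours, and the Spearman coefficient t52 is assumed to be computed elsewhere):

```python
def c1_study_score(t1: int) -> float:
    """B1-B4: capped count of adopted suggestions, C = min(t, 9)."""
    return min(t1, 9)

def c5_review_conclusion(t51: int, t52: float) -> float:
    """B5: C5 = 0.5*min(t51, 9) + 0.5*t52*9, where t52 is the Spearman
    rank correlation coefficient of the expert's recommended ordering."""
    return 0.5 * min(t51, 9) + 0.5 * t52 * 9

def c6_score_abnormality(t6: int) -> float:
    """B6: penalty for unreasonable scores, capped: C6 = max(-t6, -9)."""
    return max(-t6, -9)
```

For instance, an expert with 12 adopted suggestions still scores `c1_study_score(12) == 9` because of the cap, and three unreasonable scores give `c6_score_abnormality(3) == -3`.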
Table 2

Evaluation index system of dynamic competency.

First-level index | Second-level index | Second-level index remark | Calculation method
Interim comprehensive situation D1 | Interim performance E1 | Set as the average value of the performance scores in Table 1 | G1 = Σ C_h / h1, where C_h is the performance score of the h-th bid evaluation and h1 is the total number of bid evaluations
 | Review status E2 | Complaint review after the end of the bid evaluation | G2 = 9·(1 − h2′/h2), where h2′ is the number of problems and errors found in review and h2 is the number of reviews
 | Participation E3 | Set by participation rate | G3 = 9·h3′/h3, where h3′ is the number of participations in bid evaluation and h3 is the number of bid invitations received
 | Assistance or cooperation with supervision and inspection E4 | Assist or cooperate with the supervision and inspection of the relevant administrative supervision departments, rated respectively as good, comparatively good, average, comparatively poor, and poor | G4 = Σ C_{h4} / h4, where C_{h4} is the score of the h4-th inspection, taking 9, 7, 5, 3, or 1, and h4 is the number of inspections
Competency improvement D2 | Professional technical capability E5 | Indicates the professional and technical ability of bid evaluation experts, including education background, scientific research ability, practical ability, etc. | G5 is set by virtual professional technical ability
 | Credit E6 | Indicates the credit of bid evaluation experts, including personal credit, bid evaluation integrity, institution credit, etc. | G6 is set by virtual credit
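The interim indices in Table 2 are averages and rates scaled to a 9-point scale; a short Python sketch (function names are ours, not from the paper):

```python
def g1_interim_performance(scores):
    """E1: average of the per-evaluation performance scores from Table 1."""
    return sum(scores) / len(scores)

def g2_review_status(problems: int, reviews: int) -> float:
    """E2: G2 = 9*(1 - h2'/h2); fewer confirmed problems -> higher score."""
    return 9 * (1 - problems / reviews)

def g3_participation(attended: int, invited: int) -> float:
    """E3: G3 = 9*h3'/h3, the participation rate scaled to [0, 9]."""
    return 9 * attended / invited

def g4_cooperation(ratings):
    """E4: average of per-inspection ratings taken from {9, 7, 5, 3, 1}."""
    return sum(ratings) / len(ratings)
```

For example, an expert who attends 8 of 10 invitations gets `g3_participation(8, 10) == 7.2`.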

4. Evaluation model

4.1 Related theoretical knowledge

Definition 1 [58]. Let X be a non-empty domain. An intuitionistic fuzzy set A on X is A = {⟨x, μA(x), vA(x)⟩ | x ∈ X}. In the formula, μA(x) and vA(x) are respectively the membership and non-membership degrees of the element x belonging to A, satisfying 0 ≤ μA(x) + vA(x) ≤ 1; πA(x) = 1 − μA(x) − vA(x) is the hesitation degree of the element x in A, indicating the degree of uncertainty that x belongs to A. All intuitionistic fuzzy sets on the non-empty domain X are denoted by IFS(X), and a = (μ, v, π) with π = 1 − μ − v is called an intuitionistic fuzzy number (IFN); this abbreviation is used in the remainder of the paper. Definition 2 [59, 60]. Let R denote the set of real numbers. If a⁻, a⁺ ∈ R and a⁻ ≤ a⁺, then a = [a⁻, a⁺] is called a binary interval number. If a is a positive interval number, then a = [a⁻, a⁺] = {x | 0 ≤ a⁻ ≤ x ≤ a⁺}.
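As an illustration of Definitions 1 and 2, a minimal Python sketch (the class names are ours, not from the paper) representing an IFN and a positive binary interval number:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFN:
    """Intuitionistic fuzzy number a = (mu, v, pi): mu is the membership
    degree, v the non-membership degree, and pi = 1 - mu - v the hesitation."""
    mu: float
    v: float

    def __post_init__(self):
        # Definition 1 requires mu, v >= 0 and mu + v <= 1.
        assert 0.0 <= self.mu and 0.0 <= self.v and self.mu + self.v <= 1.0

    @property
    def pi(self) -> float:
        return 1.0 - self.mu - self.v

@dataclass(frozen=True)
class Interval:
    """Positive binary interval number a = [lo, hi], with 0 <= lo <= hi."""
    lo: float
    hi: float

    def __post_init__(self):
        assert 0.0 <= self.lo <= self.hi
```

For example, `IFN(0.7, 0.2)` has hesitation degree `pi = 0.1`, matching the "very small" hesitation level used in Section 4.2.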

4.2 Semantic information and intuitionistic fuzzy number

In this paper, referring to Reference [61], hesitation is divided into three levels: "very small", "small", and "general". With semantic evaluation granularity r = 5, the values π = 0.1, 0.2, 0.3 respectively represent the three levels of hesitation, and the linguistic evaluation values are quantified by referring to References [49, 62-64], as shown in Table 3. During the evaluation, N experts independently evaluate the importance of each index of layer M (M ≥ 2) with respect to its associated index on the upper layer.
Table 3

Semantic information and IFNs.

Linguistic variable | Label | Intuitionistic fuzzy number (IFN) | Quantitative value
Important | I | (0.8 − 0.5π, 0.2 − 0.5π) | 0.9
Comparatively important | MI | (0.7 − 0.5π, 0.3 − 0.5π) | 0.7
Average | M | (0.55 − 0.5π, 0.45 − 0.5π) | 0.5
Comparatively unimportant | MUI | (0.4 − 0.5π, 0.6 − 0.5π) | 0.3
Unimportant | UI | (0.3 − 0.5π, 0.7 − 0.5π) | 0.1
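Table 3 can be read as a lookup from a linguistic label and a hesitation level to an IFN; the following sketch (function and dictionary names are illustrative, not from the paper) makes the construction explicit:

```python
# Base (mu0, v0) pairs and quantitative values per Table 3; half of the
# hesitation degree pi is subtracted from each component, so mu + v + pi = 1.
BASE = {
    "I":   (0.80, 0.20, 0.9),  # important
    "MI":  (0.70, 0.30, 0.7),  # comparatively important
    "M":   (0.55, 0.45, 0.5),  # average
    "MUI": (0.40, 0.60, 0.3),  # comparatively unimportant
    "UI":  (0.30, 0.70, 0.1),  # unimportant
}
PI = {"very small": 0.1, "small": 0.2, "general": 0.3}  # Section 4.2 levels

def to_ifn(label, hesitation):
    """Return (mu, v, pi) for a linguistic judgment under a hesitation level."""
    mu0, v0, _ = BASE[label]
    pi = PI[hesitation]
    return (mu0 - 0.5 * pi, v0 - 0.5 * pi, pi)

mu, v, pi = to_ifn("MI", "small")  # approximately (0.6, 0.2, 0.2)
assert abs(mu + v + pi - 1.0) < 1e-9
```

The subtraction of 0.5π from both components keeps the three degrees summing to 1, as Definition 1 requires.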

4.3 Expert weight coefficient and index score interval number

Reference [49] combined the basic theory of interval numbers with the analytic hierarchy method, and proposed and proved the "positive interval number" theorem and the "consistency of the interval number judgment matrix" theorem. In its process of calculating the index score interval number, the expert weight coefficient is calculated considering the consistency of expert evaluation, and the index score interval number is then calculated according to the expert weight coefficient; however, the expert weight coefficient cancels out in the calculation, so the actual index score interval number is independent of it. Moreover, the calculation of the expert weight coefficient only considers the consistency of expert evaluation and not the hesitation of expert evaluation, which is also incomplete. Therefore, this paper comprehensively considers the consistency and hesitation of expert evaluation to calculate the expert weight coefficient, improves the method for calculating the index score interval number, and proves that the improved calculation method satisfies the "positive interval number" theorem. The improved calculation steps are as follows. Step 1: Calculate the consistency-based expert weight coefficient [65] from the experts' evaluations of index importance. In the formula, the deviation coefficient [66] measures the deviation of an expert's judgment from the group: the larger the deviation, the smaller the expert weight coefficient, and the smaller the deviation, the larger the expert weight coefficient. According to Reference [66], the parameter ∂ is an adjustment coefficient, and it is generally appropriate to set ∂ = 10 in practical applications. According to Reference [65], ε is a moderator variable with a value greater than 0; ε = 0.2 is taken based on the characteristics of the index importance evaluation scale.
Step 2: Calculate the hesitation-based expert weight coefficient from the experts' importance evaluations and the corresponding IFNs. Due to differences in professional knowledge and work experience, experts show different degrees of hesitation when evaluating the importance of the same index: the greater the hesitation, the smaller the expert weight coefficient, and the smaller the hesitation, the greater the expert weight coefficient. Step 3: Calculate the overall expert weight coefficient [61] by comprehensively considering the consistency and hesitation of expert evaluation. In this formula, the parameters ϑ1, ϑ2 ∈ [0, 1] satisfy ϑ1 + ϑ2 = 1. When ϑ1 > 0.5, more attention is paid to the consistency of expert evaluation information; when ϑ2 > 0.5, more attention is paid to its determinacy. Since the evaluation experts are experts and scholars in this field who are very familiar with each index, their hesitation is low and the consistency information is more important, so ϑ1 = 0.8 and ϑ2 = 0.2 are taken. Step 4: Calculate the index score interval number. The calculation method given in Reference [49] cancels out the evaluation experts' weight coefficients in the process of calculation, which is unreasonable, and this is the reason for improving it. This paper therefore improves the calculation method; it is first proved that the improved method satisfies the "positive interval number" theorem, and its rationality is then explained. Proof: the improved interval number is a positive interval number, and the equality holds if and only if the scores of all evaluation experts are equal, in which case the interval number degenerates into a real number. This completes the proof. Rationality of the improved calculation method: the expert weight coefficient reflects the credibility of the evaluation results, and the greater the expert weight coefficient, the higher the credibility. The calculation method in Reference [49] cancels out the expert weight coefficient, so its results are independent of the expert weight coefficient; the improved formula in this paper avoids this situation.
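Since the formulas themselves are given in References [49, 61, 65, 66] and are not reproduced here, the following Python sketch only illustrates the fusion rule of Step 3 under stated assumptions: the consistency-based weights of Step 1 are assumed already computed, and the hesitation-based weight of Step 2 is assumed proportional to the certainty 1 − π:

```python
import numpy as np

def combined_expert_weights(consistency_w, pi, theta1=0.8, theta2=0.2):
    """Step 3 (sketch): fuse consistency-based weights (Step 1) with
    hesitation-based weights (Step 2) as theta1*w_c + theta2*w_h, where
    theta1 + theta2 = 1 and theta1 = 0.8 emphasizes consistency, as in
    the paper.  w_h is *assumed* proportional to the certainty 1 - pi."""
    w_c = np.asarray(consistency_w, float)
    w_c = w_c / w_c.sum()
    w_h = 1.0 - np.asarray(pi, float)  # assumption: less hesitation, more weight
    w_h = w_h / w_h.sum()
    lam = theta1 * w_c + theta2 * w_h
    return lam / lam.sum()
```

The resulting coefficients sum to 1 and, in line with the improvement argued above, are meant to be carried through into the index score interval number rather than cancelled out.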

4.4 Evaluation model

According to the provisions of the bidding law, the number of members of the bid evaluation committee is an odd number and more than 5 people. In practice, the number of members of the bid evaluation committee is generally 5, 7, 9, which is not a large base. The purpose of performance evaluation is to judge the bid evaluation results of bid evaluation expert, so it is not necessary to distinguish performance of bid evaluation experts. The normalized weight vector of the index adopts the method of reference [49]. The purpose of periodic evaluation is to judge the change of competency of bid evaluation experts and realize the classification management of bid evaluation experts, which requires low discrimination of intra-class bid evaluation experts and a high discrimination of inter-class intra-class bid evaluation experts. Therefore, based on the calculation of the weight interval number, this paper determines the calculation method of the normalized weight vector of the index interval number according to the needs of performance evaluation and dynamic competency evaluation. The specific calculation process is as follows: Step 1: Calculate the index score interval number by steps 1 to 4 in section 4.3, and then calculate the interval number judgment matrix [49]. In the formula: m is the index number of layer M associated with index a of layer M−1. p indicates the comparison result of the importance for a between any two a, a of layer M associated with index a of layer M−1, which is determined by formula (6). Step 2: Transform the interval number judgment matrix into ordinary judgment matrices P and P. In the formula, , the matrix P is the left matrix of the interval number judgment matrix P. In the formula, , the matrix P is the right matrix of the interval number judgment matrix P. Step 3: Calculate the transfer matrices A, A of P, P. Step 4: Calculate the optimal transfer matrices B, B of transfer matrices A, A. In the formula, . 
Step 5: Calculate the quasi-optimal matrices C^L and C^R of P^L and P^R [67].
Step 6: Calculate the normalized eigenvectors corresponding to the largest eigenvalues of C^L and C^R, and obtain the weight interval number matrix [68]; the coefficients α and β are determined by the accompanying formulas.
Step 7: Calculate the normalized weight vector of the performance evaluation indices according to the formulas in Reference [49], namely formulas (20) and (21). The weight vector of the dynamic competency evaluation indices is calculated with the goal of small intra-class discrimination and large inter-class discrimination. The smaller the standard deviation, the more concentrated the data, and the smaller the discrimination between the evaluation objects; therefore, intra-class discrimination is represented by the standard deviation and inter-class discrimination by the deviation between classes. The following mathematical optimization model, with its objective function and constraint conditions, is constructed to calculate the normalized weight vector of the index weights. Optimization method: the clustering is updated continuously, iterating the weight values within the weight interval numbers, to achieve the goal of minimizing intra-class discrimination and maximizing inter-class discrimination. In the formulas, the eigenvalues of the final-layer indices of evaluation object p serve as inputs; G_p and G_q represent the dynamic competency of evaluation objects p and q, with G_p > G_q; V_z indicates the standard deviation of the dynamic competency of the evaluation objects within class z; the deviation of dynamic competency between classes z and z+1 represents the inter-class discrimination; and the decision variable is the normalized weight vector of the index interval numbers of layer M associated with index j of layer M−1.
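As a concrete illustration, Steps 3–6 for a single (left or right) judgment matrix can be sketched with the optimal-transfer-matrix technique. The function name and the choice of base-10 logarithm are assumptions of this sketch, not the paper's exact formulas; applying it to P^L and P^R separately yields the two endpoints of the weight interval numbers.

```python
import numpy as np

def quasi_optimal_weights(P):
    """Derive a normalized weight vector from a (possibly inconsistent)
    pairwise judgment matrix P via the optimal transfer matrix:
      A = log10(P)                          (transfer matrix, Step 3)
      B[i,j] = mean_k(A[i,k] - A[j,k])      (optimal transfer matrix, Step 4)
      C = 10 ** B                           (quasi-optimal consistent matrix, Step 5)
    then take the principal eigenvector of C (Step 6).
    """
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = np.log10(P)
    # b_ij = (row-sum_i - row-sum_j) / n, which makes C fully consistent
    rs = A.sum(axis=1)
    B = (rs[:, None] - rs[None, :]) / n
    C = 10.0 ** B
    vals, vecs = np.linalg.eig(C)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()
```

For a perfectly consistent matrix (p_ij = w_i / w_j) the procedure recovers the underlying weights exactly; for an inconsistent matrix it returns the quasi-optimal approximation.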

5. Empirical analysis

5.1 Calculation of index weight

5.1.1 Performance of virtual bid evaluation experts

In view of the particularity of the bid evaluation expert group, it is difficult to obtain relevant data. To make the performance and dynamic competency of the virtual bid evaluation experts more realistic, this paper obtains some characteristics of the performance of bid evaluation experts through an expert survey of the relevant departments, as shown in Table 4, and simulates the performance and dynamic competency of the virtual bid evaluation experts according to the expert opinions.
Table 4

Characteristics of performance.

Category | Index | General situation
Performance | Abnormality of ratings | 9 times and above 7.64%, 7 or 8 times 11.08%, 5 or 6 times 22.41%, 3 or 4 times 19.46%, 2 times and below 39.41%
Performance | Score reliability | [0.9,1] 36.32%, [0.8,0.9) 28.93%, [0.7,0.8) 21.72%, [0.6,0.7) 7.76%, [0,0.6) 5.27%
Performance | Seriousness of review | Focus state 47.43%, neutral state 42.27%, non-focus state 10.3%
Performance | Sense of discipline | 2.88%
Performance | Stringency | 2.99%
Dynamic competency | Situation of check | 90% and above 3.95%, [80%,90%) 4.83%, [70%,80%) 19.91%, [60%,70%) 21.23%, below 60% 50.08%
Dynamic competency | Participation rate | 90% and above 41.47%, [80%,90%) 19.80%, [70%,80%) 15.88%, [60%,70%) 12.75%, less than 60% 10.1%
Dynamic competency | Assistance or cooperation in supervision, inspection | Very good 41.47%, good 18.96%, general 20.02%, poor 16.00%, very poor 3.55%

Note: Individual indices not surveyed in the evaluation are randomly assigned according to expert opinions. In dynamic competency, the interim performance is based on 10,000 kinds of virtual performance, and the competency indices are set according to the virtual values. The main dependency relationship among the performance evaluation indices is that the better an expert's code of conduct, the better the bid evaluation performance and quality.

(1) Performance of virtual bid evaluation experts. In this paper, 10,000 kinds of performance of bid evaluation experts are simulated as the basis for calculating the interim performance in the dynamic competency evaluation indices. In addition, 11 kinds of performance are randomly selected as the performance of the 11 bid evaluation experts of one bid evaluation committee in a single bid evaluation, which is used for the empirical analysis, as shown in Table 5. The specific method is as follows: first, analyze the dependency relationships among the indices noted in Table 4 and, among the mutually dependent indices, generate the index with the most dependency relationships; then generate the other dependent indices, check the cross-dependencies among indices, and correct any generated data that violate them; finally, randomly combine the dependent indices with the independent indices.
Table 5

Performance of 11 bid evaluation experts.

Index | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11
B1 | 4 | 1 | 2 | 9 | 6 | 6 | 3 | 0 | 5 | 0 | 2
B2 | 6 | 6 | 5 | 2 | 0 | 8 | 3 | 5 | 1 | 6 | 2
B3 | 7 | 2 | 0 | 8 | 7 | 3 | 4 | 1 | 9 | 1 | 8
B4 | 2 | 7 | 6 | 4 | 8 | 4 | 6 | 9 | 3 | 7 | 7
B5 | 2.7 | 7.6 | 7.05 | 4.05 | 5.15 | 8 | 9 | 4.1 | 3.15 | 8.05 | 6.05
B6 | -2 | -4 | 0 | -5 | -1 | -3 | 0 | -1 | -6 | -2 | 0
B7 | 8.73 | 8.46 | 5.49 | 8.28 | 7.92 | 7.38 | 7.65 | 8.82 | 6.75 | 8.20 | 7.83
B8 | 9 | 5 | 5 | 1 | 9 | 5 | 9 | 9 | 5 | 1 | 5
B9 | 5.92 | 7.45 | 8.69 | 3.93 | 7.98 | 6.80 | 8.09 | 3.58 | 4.43 | 7.16 | 6.23
B10 | 0 | 0 | -1 | 0 | 0 | 0 | -5 | 0 | 0 | 0 | -3
B11 | 0 | -1 | 0 | 0 | -1 | 0 | 0 | 0 | -3 | 0 | 0
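The generation procedure described above (generate a driver index first, derive the indices that depend on it, then mix in the independent indices) can be sketched as follows. The index names, value ranges, and the linear dependence with Gaussian noise are illustrative assumptions, not the paper's exact generator.

```python
import random

def simulate_expert(seed=None):
    """Simulate one virtual expert's performance record.

    A 'code of conduct' driver is drawn first; indices that the survey
    says depend on it (bid evaluation performance and quality) are
    generated around it with noise, then independent indices are drawn
    freely. All names, ranges, and coefficients here are illustrative
    assumptions for the sketch.
    """
    rng = random.Random(seed)
    conduct = rng.uniform(0, 10)          # driver index (Table 4 dependency note)
    noise = lambda: rng.gauss(0, 1)
    clamp = lambda v: min(10.0, max(0.0, v))
    return {
        "code_of_conduct": conduct,
        # dependent indices: better conduct -> better performance/quality
        "bid_eval_performance": clamp(0.8 * conduct + 1 + noise()),
        "bid_eval_quality":     clamp(0.7 * conduct + 2 + noise()),
        # independent index: drawn freely, then combined at random
        "participation_rate":   rng.uniform(0, 1),
    }
```

Repeating this 10,000 times and spot-checking the cross-dependencies would mirror the paper's data-generation loop.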
The performance of a bid evaluation expert is computed by the corresponding formula, in which ρ1 = 1 or 0 indicates timely or untimely submission of the bid evaluation report, ρ2 = 0 or 1 indicates the existence or absence of an impostor, and w denotes the index weights of the final layer. (2) Dynamic competency of virtual bid evaluation experts. The dynamic competency evaluation of bid evaluation experts is carried out on the basis of the performance evaluation. Taking Kunming city as an example, a survey shows that there are about 1,000 experts in the engineering field of the bid evaluation expert database in Kunming. Setting an evaluation cycle of 2 years, the number of experts drawn accounts for 95% of the total, an individual expert is drawn about 1–100 times, and being drawn 10–20 times is most likely. Therefore, x (x ∈ [1,100]) records are extracted from the 10,000 kinds of performance, in line with the actual situation, as the calculation basis of the interim performance in the dynamic competency, and x = 10–20 is set as the number of times most experts can be drawn in one cycle. In addition, considering that performance evaluation underpins the incentive and constraint mechanism of bid evaluation experts, it is assumed that the performance of bid evaluation experts within a cycle will not deteriorate under that mechanism, and the interim performance of bid evaluation experts in a cycle is virtualized accordingly. The dynamic competency of a total of 1,010 bid evaluation experts is virtualized: 1,000 are used to calculate the index weights and 10 are used for the empirical analysis. Owing to space limitations, only the dynamic competency of the 10 bid evaluation experts used in the empirical analysis is shown in Table 6.
Table 6

Dynamic competency of 10 bid evaluation experts.

Index | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10
E1 | 4.24 | 1.65 | 3.06 | 2.28 | 3.25 | 4.25 | 3.16 | 3.91 | 3.13 | 3.33
E2 | 7.52 | 5.73 | 7.79 | 8.61 | 3.60 | 7.20 | 5.40 | 5.40 | 3.60 | 2.70
E3 | 7.36 | 5.89 | 6.21 | 5.10 | 8.10 | 7.20 | 8.10 | 8.10 | 8.10 | 8.10
E4 | 7.00 | 7.00 | 9.00 | 7.00 | 3.00 | 1.00 | 5.00 | 1.00 | 5.00 | 9.00
E5 | 3.35 | 4.12 | 3.39 | 1.97 | 2.84 | 2.14 | 3.13 | 1.94 | 3.06 | 2.52
E6 | 2.70 | 0.72 | 2.21 | 1.57 | 1.40 | 1.05 | 0.01 | -1.18 | 1.06 | -0.19
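The performance aggregation described above — final-layer index scores combined with the binary factors ρ1 (timely submission of the bid evaluation report) and ρ2 (absence of an impostor) — might be read as a gated weighted sum. Since the extracted text does not reproduce the formula itself, the multiplicative gating below is only one hedged interpretation, not the published equation.

```python
def performance_score(scores, weights, timely_report, no_impostor):
    """Hedged sketch of the performance aggregation: a weighted sum of
    final-layer index scores, gated by the binary factors described in
    the text (rho1 = timely submission, rho2 = absence of impostor).
    The exact combination in the paper's formula may differ."""
    assert len(scores) == len(weights)
    rho1 = 1 if timely_report else 0
    rho2 = 1 if no_impostor else 0
    base = sum(w * s for w, s in zip(weights, scores))
    return rho1 * rho2 * base
```

Under this reading, an untimely report or a detected impostor zeroes the score; the paper may instead apply a milder penalty, which the flags make easy to swap in.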

5.1.2 Index weight calculation

Because experts from the relevant stakeholders have different preferences regarding the performance evaluation indices, a total of 18 experts, consisting of 4 owners, 3 from the regulatory agency, 3 from the construction organization, 3 from the bidding agency, and 5 bid evaluation experts (3 from universities and 2 from enterprises), judged the importance of the evaluation indices (due to space limitations, only some evaluation results are shown in Table 7).
Table 7

Findings on the importance of expert segment indices.

Expert | A1 | A2 | A3
1 | I1 | I2 | I1
2 | MI1 | MI1 | MI2
… | … | … | …
17 | MI3 | I1 | I1
18 | MI1 | I1 | M2

Note: 1 represents hesitation 'very small', 2 represents hesitation 'small', 3 represents hesitation 'general'.

The weight interval numbers of the performance and dynamic competency evaluation indices calculated by formulas (1)–(19) are shown in Tables 8 and 9.
Table 8

Weight interval number of performance indices by improved method.

Matrix | Weight interval numbers
(A1, A2, A3) | ([0.3333, 0.3478], [0.3333, 0.3478], [0.3026, 0.3333])
(B1, B2, B3, B4, B5) | ([0.1777, 0.2000], [0.2000, 0.2056], [0.2000, 0.2026], [0.2000, 0.2061], [0.2000, 0.2080])
(B6, B7) | ([0.4904, 0.5000], [0.5000, 0.5096])
(B8, B9, B10, B11) | ([0.2500, 0.2748], [0.2384, 0.2500], [0.2329, 0.2500], [0.2500, 0.2539])
The normalized weight vector of the performance evaluation indices is calculated according to formulas (20) and (21), as shown in Table 10.
Table 9

Weight interval number of dynamic competency indices by improved method.

Matrix | Weight interval numbers
(D1, D2) | ([0.5000, 0.5321], [0.4679, 0.5000])
(E1, E2, E3, E4) | ([0.2707, 0.3337], [0.2532, 0.2707], [0.1358, 0.1880], [0.2707, 0.2740])
(E5, E6) | ([0.4862, 0.5000], [0.5000, 0.5138])
Through the optimization of formula (22), the calculated normalized weight vector of the final-layer indices of dynamic competency is shown in Table 11.
Table 10

Normalized weight vector of performance evaluation index.

Matrix | Weight
(A1, A2, A3) | (0.3410, 0.3410, 0.3180)
(B1, B2, B3, B4, B5) | (0.1889, 0.2028, 0.2012, 0.2031, 0.2040)
(B6, B7) | (0.4952, 0.5048)
(B8, B9, B10, B11) | (0.2624, 0.2442, 0.2414, 0.2520)
From the above weight interval numbers and index weights, it can be found that bid evaluation performance A1 and bid evaluation quality A2 have the same weight interval number and the same index weight in the performance evaluation. The weight interval number of code of conduct A3 lies close to, but to the left of, those of A1 and A2, and its weight is also close to theirs, indicating that the experts from relevant stakeholders attach great importance to bid evaluation performance A1, bid evaluation quality A2, and code of conduct A3, but pay more attention to A1 and A2. In the dynamic competency evaluation, the interval of the interim comprehensive situation D1 lies to the right of that of competency improvement D2, indicating that the experts of relevant stakeholders pay more attention to the interim comprehensive situation D1, within it to the interim performance E1, and within competency improvement D2 to the credit E6.

5.2 Comparative analysis

5.2.1 Comparison of weight interval numbers

For comparison with Reference [49], the weight interval numbers of the performance and dynamic competency evaluation indices calculated by the method of Reference [49] are shown in Tables 12 and 13.
Table 11

Normalized weight vector of final layer of dynamic competency.

Index | E1 | E2 | E3 | E4 | E5 | E6
Weight | 0.1481 | 0.1353 | 0.0875 | 0.1399 | 0.2422 | 0.2470
Table 12

Weight interval number of performance evaluation indices calculated in reference [49].

Matrix | Weight interval numbers
(A1, A2, A3) | ([0.3333, 0.3544], [0.3333, 0.3544], [0.2913, 0.3333])
(B1, B2, B3, B4, B5) | ([0.1625, 0.2000], [0.2000, 0.2089], [0.2000, 0.2049], [0.2000, 0.2105], [0.2000, 0.2131])
(B6, B7) | ([0.4689, 0.5000], [0.5000, 0.5311])
(B8, B9, B10, B11) | ([0.2500, 0.2932], [0.2277, 0.2500], [0.2170, 0.2500], [0.2500, 0.2620])
Comparing the length len [69, 70] of each index weight interval number in Tables 8, 9, 12, and 13, it can be found that the lengths of all 22 interval numbers become smaller when the index weight interval numbers are calculated by the improved method. Therefore, the improved method increases the calculation accuracy of the weight interval numbers and further demonstrates the rationality of the improved calculation of the index score interval numbers.
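The len comparison can be reproduced directly for the (A1, A2, A3) row, taking the interval endpoints from Tables 8 and 12:

```python
def interval_len(iv):
    """Length of an interval number [a, b]: len = b - a."""
    a, b = iv
    return b - a

# (A1, A2, A3) weight intervals: improved method (Table 8)
# versus the method of Reference [49] (Table 12)
improved  = [(0.3333, 0.3478), (0.3333, 0.3478), (0.3026, 0.3333)]
reference = [(0.3333, 0.3544), (0.3333, 0.3544), (0.2913, 0.3333)]

# every improved interval is strictly shorter than its counterpart
shorter = [interval_len(i) < interval_len(r) for i, r in zip(improved, reference)]
```

The same check passes for the remaining 19 intervals in the four tables, which is the basis of the accuracy claim above.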

5.2.2 Comparison of clustering results

After using the improved method to calculate the score interval numbers of the dynamic competency evaluation indices, the index weight interval numbers are calculated, and the normalized ranking weight vector of the final-layer indices is then calculated according to steps (16)–(22) of Reference [49], as shown in Table 13.
Table 13

Weight interval number of dynamic competency evaluation indices in reference [49].

Matrix | Weight interval numbers
(D1, D2) | ([0.5000, 0.5616], [0.4384, 0.5000])
(E1, E2, E3, E4) | ([0.2700, 0.3552], [0.2352, 0.2700], [0.1333, 0.1901], [0.2700, 0.2763])
(E5, E6) | ([0.4808, 0.5000], [0.5000, 0.5192])
Because ratings are generally set to 5 levels, the number of clusters is set to 5. The normalized ranking weight vector of the final-layer indices above (Table 13) and the optimization method are respectively used to cluster the dynamic competency of the 10,000 virtual bid evaluation experts. The clustering intervals of dynamic competency and the number of experts in each class are obtained, and the length (len) of each clustering interval and the inter-class distances are calculated; the results are shown in Table 15. It can be found that the number of experts in each class is similar, while the optimized clustering interval lengths are smaller and the inter-class distances are larger. Therefore, the results obtained by optimizing within the weight interval numbers are reliable.
Table 15

Dynamic competency clustering in reference [49] and this paper.

Clustering | I | II | III | IV | V
Reference [49] | [4.0965, 9] | [3.5771, 4.0942] | [3.1622, 3.5767] | [2.6825, 3.1619] | [1.4515, 2.6814]
Number of experts | 1322 | 2532 | 2845 | 2204 | 1097
Length of interval | 4.9035 | 0.5171 | 0.4145 | 0.4794 | 1.2299
Inter-class distance | 0.0003 (I–II) | 0.0004 (II–III) | 0.0003 (III–IV) | 0.0011 (IV–V) |
This paper | [4.1457, 9] | [3.5086, 4.0148] | [3.1542, 3.4584] | [2.7131, 3.1529] | [1.5019, 2.5329]
Number of experts | 1345 | 2486 | 2820 | 2234 | 1115
Length of interval | 4.8543 | 0.5062 | 0.3042 | 0.4398 | 1.0310
Inter-class distance | 0.1309 (I–II) | 0.0502 (II–III) | 0.0013 (III–IV) | 0.1802 (IV–V) |
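The optimization compared above — searching weight values inside the weight interval numbers so that intra-class discrimination (standard deviation) shrinks and inter-class discrimination (deviation between adjacent classes) grows — can be sketched as a simple random search. The equal-size class cut, the exact form of the objective, and the renormalization step are assumptions of this sketch, not formula (22) itself.

```python
import random
import statistics

def competency(scores, weights):
    """Weighted sum of an expert's final-layer index scores."""
    return sum(w * s for w, s in zip(weights, scores))

def objective(values, k=5):
    """Hedged reading of the optimization goal: sort competency values,
    cut them into k equal-size classes (any remainder is ignored in this
    sketch), and score a candidate weight vector by
    (sum of inter-class gaps) - (sum of intra-class standard deviations)."""
    vals = sorted(values, reverse=True)
    size = len(vals) // k
    classes = [vals[i * size:(i + 1) * size] for i in range(k)]
    intra = sum(statistics.pstdev(c) for c in classes if len(c) > 1)
    inter = sum(classes[z][-1] - classes[z + 1][0] for z in range(k - 1))
    return inter - intra

def optimize_weights(data, intervals, iters=200, k=5, seed=0):
    """Random search over weight vectors drawn inside the weight interval
    numbers (renormalized to sum to 1), keeping the best objective."""
    rng = random.Random(seed)
    best_w, best_obj = None, float("-inf")
    for _ in range(iters):
        w = [rng.uniform(lo, hi) for lo, hi in intervals]
        total = sum(w)
        w = [x / total for x in w]
        obj = objective([competency(row, w) for row in data], k)
        if obj > best_obj:
            best_w, best_obj = w, obj
    return best_w, best_obj
```

The paper's iteration scheme is not spelled out in the extracted text; any optimizer that stays inside the intervals (grid search, coordinate descent) could replace the random draw.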
Using the normalized ranking weight vector of the final-layer indices of dynamic competency calculated by Reference [49] (Table 13) and the normalized ranking weight vector obtained in this paper (Table 11), the dynamic competency of the 10 bid evaluation experts is classified according to the clustering intervals of this paper. The results are shown in Table 16.
Table 16

Comparison of clustering results of bid evaluation experts’ dynamic competency.

Experts | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10
Dynamic competency (Reference [49]) | 4.7437 | 3.6717 | 4.6706 | 3.7896 | 3.1143 | 3.1511 | 3.3552 | 2.3359 | 3.3484 | 3.3860
Dynamic competency (this paper) | 4.7469 | 3.6898 | 4.6764 | 3.7931 | 3.1282 | 3.1520 | 3.3665 | 2.3386 | 3.3617 | 3.3899
Clustering (Reference [49]) | I | II | I | II | IV | IV | III | V | III | III
Clustering (this paper) | I | II | I | II | IV | IV | III | V | III | III
According to the results of dynamic competency and clustering of 10 bid evaluation experts (Table 16), the reliability of optimization within the weight interval number is further proved.

5.2.3 Comparison of clustering discrimination

The goal of optimization is to minimize the intra-class discrimination and maximize the inter-class discrimination. According to the clustering results in Table 16, the intra-class discrimination and inter-class discrimination are compared by referring to formula (22), and the calculation results are shown in Table 17.
Table 17

Comparison of the clustering discrimination of dynamic competency of 10 bid evaluation experts.

Intra-class discrimination | I | II | III | IV
Change vs Reference [49] | reduced 3.61% | reduced 12.40% | reduced 24.61% | reduced 35.48%
Inter-class discrimination | I–II | II–III | III–IV | IV–V
Change vs Reference [49] | increased 0.26% | increased 4.96% | increased 6.30% | increased 1.45%
From the data in Table 17, it can be found that the dynamic competency of bid evaluation experts calculated in this paper has smaller intra-class discrimination and larger inter-class discrimination, which is conducive to the hierarchical management of bid evaluation experts in the expert database and to the implementation of the incentive and constraint mechanism. Therefore, the evaluation results of this paper are more in line with actual needs.

6. Conclusions and suggestions

By constructing the evaluation index system and evaluation model of the performance and dynamic competency of bid evaluation experts, simulating bid evaluation experts that accord with the actual situation, calculating the weight vectors of the performance and dynamic competency evaluation indices on the basis of the weight interval numbers, and finally carrying out the empirical analysis, the following conclusions and suggestions are drawn. In the process of bid evaluation experts performing their duties, experts from relevant stakeholders attach great importance to the bid evaluation performance, bid evaluation quality, and code of conduct of bid evaluation experts, and pay particular attention to bid evaluation performance and quality. In the dynamic competency evaluation, they pay more attention to the interim comprehensive situation, within it to the interim performance, and within competency improvement to credit. The improved calculation method of the expert weight coefficient takes expert consistency and hesitation into account and is therefore more reasonable. The improved method first calculates the index score interval numbers and then the weight interval numbers, which improves the calculation accuracy of the weight interval numbers, and the proposed mathematical optimization model meets the needs of hierarchical management of bid evaluation experts. The proposed idea of optimizing within weight interval numbers has good generality and can also be used to set other optimization objectives or to evaluate other personnel.
The judgment results of the relevant stakeholders on the importance of the evaluation indices reveal which aspects of the quality of bid evaluation experts they pay more attention to, and also indicate where bid evaluation experts may have prominent problems, so the relevant management departments can strengthen management accordingly. After entering the expert database, bid evaluation experts participate in project reviews and are managed through a 'scoring system' based on the performance evaluation: high scores for good performance and low scores for poor performance, with a score recorded at each bid evaluation and the scores accumulated. After each cycle, the dynamic competency is re-evaluated and the experts are re-classified, and the cycle repeats, achieving the hierarchical and dynamic management of bid evaluation experts. The relevant management departments may pay labor fees according to the performance of bid evaluation experts, give priority to experts with high scores and high competency in project reviews, and remove experts with frequent poor performance from the expert database. The assumption that the performance of bid evaluation experts will not deteriorate under the incentive and constraint mechanism describes an ideal state. Judging from the performance curves of other staff under performance evaluation, the relationship between the performance of bid evaluation experts and the number of bid evaluations is complex: the performance curve may rise first and then stabilize, or it may be an inverted U-shaped curve. Future research can focus on the effect of the incentive and constraint mechanism on the performance curve of bid evaluation experts to improve the reliability of the virtual data.
PONE-D-21-28448
Research on performance evaluation of bid evaluation experts based on weight interval number theory
PLOS ONE Dear Dr. Li, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
 
Please take into account the reviewer's comments and suggestion to improve the paper.
Please submit your revised manuscript by Jan 06 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Miguel Angel Sánchez Granero Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match. When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section. 3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide. 4. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ 5. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 3 & 6 in your text; if accepted, production will need this reference to link the reader to the Table. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. 
Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: N/A ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: No Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. 
(Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: This article presents a performance evaluation index system for bid evaluation experts and a corresponding evaluation model based on the calculation of the number of index weight intervals. However, I think the quality of the paper is not enough to meet the academic requirements, and there is still a lot of room for improvement. My comments to you generally represent a class of problems that the author needs to make serious revisions to the full text. Representative questions are as follows: 1. The novelty of the work is not very clear. The introduction does not reflect the importance of the research problem. 2. The literature review of the paper lacks systematicness and logic, the selection of references is not focused enough, and many papers are not related to the research topic of the paper. 3. The paper can not explain the superiority of the proposed method and can not reflect the innovation of the research work. 4. “Offline bidding evaluation and online bidding evaluation are two common bidding evaluation methods”, this statement is not correct! 5. Important statements lack literature support. 6. There are some grammar errors and statements are still not clarity. e.g. the last sentence in the first section of the introduction. 7. There are some format errors in Reference. 8. In the Description section, the writing format is not standardized. 9. The authenticity and reliability about data in Empirical Analysis need to clarified furtherly. Reviewer #2: 1) The Abstract and Introduction sections should be improved. The information given by the manuscript is generally self-contained. However, there should be some improvements regarding its contents with necessary amplifications. There is lack of enough illustrations regarding the necessity of introducing performance evaluation of bid evaluation experts. 
I would suggest authors to provide a table to analyze the literature review. This will benefit the understanding the research gaps and your plans to fill them. The table will also help in comparing the existing models in the literature and highlighting its contributions in the literature. Please work on improving the clarity of your paper. 2) There are many existing publications in this research area. It is not clear the authors collected these papers based on which criteria. The review in the Introduction is too general. The historical review of the bid evaluation is weak. Authors are suggested to read some comprehensive and relevant publications on the existing counterparts to highlight the necessity of using weight interval number theory. For instance, some of them are proportional hesitant fuzzy linguistic term set and HFLTS possibility distribution, etc. 3) The level of English about this manuscript does not meet the journal's desired standard. Therefore, language should be greatly improved. There are too many grammatical mistakes and typos. Please carefully revise and improve it. The paper requires a thorough editing. 4) The author should state the source of the data and whether it is realistic data and how the expression of the presented measures for uncertainty being evaluated is useful for solving current real-life problems. More elaborations on these aspects are suggested. 5) The comparison analysis and in-depth discussions seem to be casual in this paper. Please enhance them to demonstrate the reliability of your advocated model. The related and recent work should be discussed and commented on. Some of them are: Bid evaluation in civil construction under uncertainty: A two-stage LSP-ELECTRE III-based approach and Bid Evaluation for Major Construction Projects Under Large-Scale Group Decision-Making Environment and Characterized Expertise Levels. 6) The managerial implications of this research should be enhanced. 
How decision or policymakers can benefit from this work with robust and reliable conclusions. What will change the main insights if different methods were introduced. 7) The conclusion should be improved to summarize clearly the main contributions of the paper and future research efforts. It will increase the impact of the paper if the authors try to indicate this explicitly in the manuscript. Critical limitations in the proposed framework should be offered. Extensions and applications of the proposal in other fields could be exemplified in the Conclusion section. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 
13 Mar 2022
We responded to the journal editor and the two reviewers in 'Response to Reviewers'; thanks!
Submitted filename: Response to Reviewers.docx
4 Apr 2022
PONE-D-21-28448R1
Research on performance and dynamic competency evaluation of bid evaluation experts based on weight interval number
PLOS ONE
Dear Dr. Li,
Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. I suggest the authors take the reviewers' comments into account.
Please submit your revised manuscript by May 19 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.
- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.
We look forward to receiving your revised manuscript.
Kind regards,
Miguel Angel Sánchez Granero
Academic Editor
PLOS ONE
Journal Requirements:
Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.
If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.
Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
**********
2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes
**********
3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: N/A
**********
4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g.
participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes
Reviewer #2: Yes
**********
5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes
**********
6. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)
Reviewer #1: Compared with the previous version, the paper has been greatly improved, but there is still much room for improvement in some areas. The literature review is not sufficiently relevant and focused given the core issues of the paper, and the managerial implications and conclusion recommendations are put together.
Reviewer #2: The authors responded well to my comments; I think the paper is greatly improved and is ready for publication in its current form.
**********
7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: No
18 May 2022
Dear editors and reviewers, we carefully considered your suggestions and made revisions. We uploaded the 'Response to Reviewers' file for detailed responses.
Submitted filename: Response to Reviewers.docx
23 May 2022
Research on performance and dynamic competency evaluation of bid evaluation experts based on weight interval number
PONE-D-21-28448R2
Dear Dr. Li,
We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.
To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.
If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.
Kind regards,
Miguel Angel Sánchez Granero
Academic Editor
PLOS ONE
Additional Editor Comments (optional):
Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.
Reviewer #1: (No Response)
**********
2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
**********
3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
**********
4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes
**********
5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
**********
6. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)
Reviewer #1: (No Response)
**********
7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
13 Jun 2022
PONE-D-21-28448R2
Research on performance and dynamic competency evaluation of bid evaluation experts based on weight interval number
Dear Dr. Li:
I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.
If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.
If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.
Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Miguel Angel Sánchez Granero
Academic Editor
PLOS ONE
Table 14

Normalized ranking weight vector of final layer of dynamic competency.

Index    E1      E2      E3      E4      E5      E6
Weight   0.1562  0.1354  0.0837  0.1408  0.2386  0.2453
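The normalization behind a ranking weight vector such as the one in Table 14 can be sketched in a few lines of Python. This is an illustrative reconstruction only: the raw index scores below are hypothetical, and the paper's actual weights are derived from interval numbers and a mathematical optimization model that is not reproduced here.

```python
# Illustrative sketch (hypothetical raw scores, not the authors' exact
# interval-number procedure): turning raw index scores into a normalized
# ranking weight vector of the kind shown in Table 14.
raw_scores = {"E1": 4.0, "E2": 3.5, "E3": 2.1, "E4": 3.6, "E5": 6.1, "E6": 6.3}

total = sum(raw_scores.values())
weights = {index: score / total for index, score in raw_scores.items()}

# A normalized weight vector must sum to 1, as the Table 14 row does
# (0.1562 + 0.1354 + 0.0837 + 0.1408 + 0.2386 + 0.2453 = 1.0000).
assert abs(sum(weights.values()) - 1.0) < 1e-9

for index, weight in weights.items():
    print(f"{index}: {weight:.4f}")
```

The sanity check that the weights sum to one is a quick way to verify a published weight table like this one before reusing it in further calculations.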

