
Uncover the reasons for performance differences between measurement functions (Provably).

Chao Wang1, Jianchuan Feng1, Linfang Liu1, Sihang Jiang1, Wei Wang1.   

Abstract

Recently, an exciting experimental conclusion about measures of uncertainty for knowledge bases in Li et al. (Knowl Inf Syst 62(2):611-637, [1]) has attracted great research interest from many scholars. However, these efforts lack solid theoretical interpretations of the experimental conclusion. The main limitation of that research is that the final experimental conclusions are derived from experiments on only three datasets, so it remains unknown whether the conclusion is universal. In our work, we first review the mathematical theories, definitions, and tools for measuring the uncertainty of knowledge bases. Then, we provide a series of rigorous theoretical proofs that reveal why using the knowledge amount of a knowledge structure to measure the uncertainty of knowledge bases is superior. Combined with experimental results, we verify that knowledge amount performs much better for measuring the uncertainty of knowledge bases. Hence, we prove, from a mathematical point of view, an empirical conclusion that was previously established only through experiments. In addition, we find that our conclusion still applies to knowledge bases whose instances cannot be classified by entity attributes, such as ProBase (a probabilistic taxonomy). Therefore, our conclusions have a certain degree of universality and interpretability, provide a theoretical basis for measuring the uncertainty of many different types of knowledge bases, and have a number of important implications for future practice.
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022.


Keywords:  Concept structure; Knowledge base; Knowledge structure; ProBase; Rough set theory; Uncertainty

Year:  2022        PMID: 35756085      PMCID: PMC9207183          DOI: 10.1007/s10489-022-03726-7

Source DB:  PubMed          Journal:  Appl Intell (Dordr)        ISSN: 0924-669X            Impact factor:   5.019


Introduction

Although knowledge constitutes our area of interest and the cognitive world, it does not have a unified and clear definition [2], which means that knowledge has uncertainty. Uncertainty, including randomness, vagueness, inconsistency, fuzziness, and incompleteness, exists in almost every system and model [3-5], and knowledge bases (KBs) are no exception. Uncertainty is a key ingredient in decisions and a fundamental part of modelling [6]; therefore, it is an important research topic in many real-world applications, such as decision making [7], recommender systems [8], Dempster-Shafer evidence theory [9], graph data [10], social networks [11, 12], multi-objective optimization problems [13], and risk analysis during the outbreak of COVID-19 [14-17]. In machine learning tasks, data is an indispensable resource for any model, yet any model is subject to uncertainty when predicting unobserved data. For KBs, when the existing knowledge in a KB is used for inference and decision-making tasks, the uncertainty of the KB affects the prediction results of downstream natural language understanding tasks. An important reason is the existence of soft concepts, which are imprecise: for instance, in the phrase "large area", the definition of "large" lacks a strict quantitative standard. Therefore, how to measure the uncertainty of a system plays a vital role in machine learning, data analysis, artificial intelligence applications, and cognitive science [6]. The current mainstream method is to use rough set theory (RST) [18] to measure the uncertainty of KBs [1, 19]. RST, as a powerful tool for effectively measuring the uncertainty of KBs, has attracted increasing attention from artificial intelligence practitioners, in areas such as decision making [20, 21], computer-aided diagnosis [22], attribute reduction [23], decision analysis [24, 25], and predicting COVID-19 cases [26].
Measuring the uncertainty of KBs based on RST has significant advantages. For instance, RST uses the existing knowledge in a KB to approximately characterize the unknown knowledge (i.e., the target concept) that needs to be explored; the upper and lower approximation concepts in RST describe the uncertainty of KBs well [18]; and RST can be combined with information theory to establish a connection between knowledge uncertainty and information entropy [27]. In addition, RST is closely related to fuzzy mathematics, which measures the uncertainty of knowledge by describing its fuzziness [7, 28].

Motivation

Based on RST, a series of measurement methods for the uncertainty of KBs has been proposed: for instance, measurement based on the combination of information entropy and rough sets [29]; rough entropy theory [30]; and measurement based on the combination of knowledge granulation and rough sets [31, 32]. In recent work especially, many scholars have focused on methods based on knowledge structure [33] to measure the uncertainty of knowledge bases [1, 19] and have obtained many exciting conclusions through extensive experiments. Although the use of RST to measure the uncertainty of KBs has made great progress, we find that many issues have not been completely solved. Conclusions are often based on verification over a limited number of datasets and lack a solid and comprehensive theoretical guarantee. For example, an exciting experimental conclusion in [1] about measures of uncertainty for KBs has recently attracted great research interest. In [1], the authors select three datasets and conduct numerical experiments on them to verify the superiority of using the knowledge amount to measure the uncertainty of KBs. However, these conclusions lack a rigorous mathematical formulation and interpretability. Moreover, the classification of the instances of a knowledge base depends heavily on its attributes: an important prerequisite for using RST to measure the uncertainty of a KB is that the KB can be divided by equivalence relations. Unfortunately, subject to certain real task scenarios, some KBs cannot meet this condition. Some special datasets, such as ProBase [34], do not contain a large number of instance attributes, so it is difficult to perform the above classification of instances in ProBase based on their attributes. This requires us to transfer the ideas of RST to ProBase for an analogous study.
To address the first issue, we employ RST as the theoretical basis to analyze the differences between the methods used to measure uncertainty in KBs. Specifically, (1) in terms of theoretical analysis, we compare and analyze in detail the mathematical principles of using the knowledge granulation, knowledge entropy, rough entropy, and knowledge amount of a knowledge structure (four measurement functions in total) to measure the uncertainty of KBs. We find that these four measurement functions can be unified into an elementary function λ(⋅) (i.e., (12)), with each measurement function corresponding to one of four different inputs of λ(⋅). Based on this, we theoretically prove that the conclusion in [1] is universal and interpretable, further improving the theory of measures of uncertainty for KBs. (2) In terms of experimental evaluation, we conduct experiments on 18 public datasets from different fields; the experimental results fully verify our theoretical conclusions. To address the second issue, we transfer the method of using RST to measure the uncertainty of KBs to the study of the uncertainty of ProBase. (1) In terms of theoretical analysis, we explore the theoretical feasibility of using RST to measure the uncertainty of ProBase. From the view of RST, equivalence relations determine partitions of the universe, thereby yielding equivalence classes under different equivalence relations. Inspired by this, we regard an equivalence relation in a KB as a hypernym (or concept) in ProBase, and use hypernyms (or concepts) to divide instances into equivalence classes. To this end, we provide a strategy for inducing datasets from ProBase such that the instances in the induced datasets can be divided by their concepts.
(2) In terms of experimental evaluation, to verify the above ideas, we induce three datasets from ProBase based on this strategy and perform experimental verification on them. The experimental results fully verify our theoretical conclusions.
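To illustrate the induction idea (hypernyms playing the role of equivalence relations), here is a minimal sketch. The `isa_pairs` data and the `induce_partition` helper are hypothetical, chosen only for illustration; the paper's actual strategy additionally filters and disambiguates instances before partitioning.

```python
from collections import defaultdict

# Hypothetical (instance, concept) isA pairs in the style of ProBase.
isa_pairs = [
    ("apple", "fruit"), ("banana", "fruit"), ("apple", "company"),
    ("google", "company"), ("rose", "flower"), ("banana", "food"),
]

def induce_partition(pairs, concepts):
    """Treat each hypernym (concept) as an equivalence relation: the
    instances it subsumes form one class. Note that without the paper's
    disambiguation step, classes may overlap (e.g., "apple")."""
    by_concept = defaultdict(set)
    for instance, concept in pairs:
        if concept in concepts:
            by_concept[concept].add(instance)
    return dict(by_concept)

classes = induce_partition(isa_pairs, {"fruit", "company"})
```

Because raw isA pairs are ambiguous, the resulting classes can overlap; resolving such overlaps is precisely what the induction strategy must handle before the rough-set machinery applies.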

Contribution

In brief, the contributions of this paper are summarized as follows. We rigorously explain why knowledge amount (KAM) performs much better for measuring the uncertainty of KBs; that is, we prove, from a mathematical point of view, an empirical conclusion previously established only through experiments. We prove that measurement methods based on knowledge granulation, knowledge entropy, rough entropy, and knowledge amount can be integrated into a unified measurement function for measuring the uncertainty of KBs, and we provide a formal representation of this unified measurement framework together with an exhaustive comparative analysis. We propose an efficient strategy that induces a new dataset from ProBase, whose instances can be rigorously partitioned based on their concepts; this expands the usage scenarios of the measurement function so that it remains valid for datasets that do not have enough attributes.

Paper organization

In Section 2, we briefly review previous studies related to this work. In Section 3, we review some definitions related to RST and KBs and summarize the notations used in our work. In Section 4, we summarize the calculation methods and properties of the four measurement functions used to measure the uncertainty of KBs. In Section 5, we review the dispersion analysis of the numerical experiments in [1]. In Section 6, we conduct a detailed theoretical analysis of the different measurement functions and provide our main conclusions (i.e., Theorems 1, 2, 3, and 4); specifically, we unify the four popular measurement functions into a new measurement function. In Section 7, we first provide the definition of the concept structure of ProBase (see Definition 13), and then we provide an effective strategy to induce KBs from ProBase, such that instances in the induced KBs can be classified by their concepts. In Section 8, we verify our theoretical analysis via extensive experiments; specifically, we conduct experiments on 18 public datasets and on three datasets induced from ProBase based on our proposed strategy. Section 11 summarizes our work.

Related work

In recent years, research on KBs has become one of the important topics in industry and academia. Many researchers have made exceptional contributions to this field, and a series of important theoretical results on KBs has been obtained. These conclusions have far-reaching significance for establishing a computable and measurable framework for KBs. In particular, the uncertainty measurement of KBs based on knowledge structure has received wide attention.

Knowledge structure

Qian et al. [35] describe the differences between various knowledge structures in KBs based on the concept of knowledge distance. Li et al. [33] propose the definitions of the lattice, mapping, soft characterizations, and group of knowledge structures. In the study of the relationships between different KBs, Li et al. [36] regard KBs as a special relational information system; by introducing homomorphisms, they prove that KBs are invariant under homomorphisms. Subsequently, based on the homomorphism relation between KBs, Qin et al. [37] propose the concept of communication between KBs and obtain a series of invariant characterizations under homomorphisms. It is worth noting that the above works all involve RST, which also provides a strong theoretical basis for our work. In addition, some scholars have described knowledge structure by other means, such as fuzzy skill maps [38] and knowledge space theory [39].

Measurement method

The uncertainty of KBs is usually calculated via entropy (e.g., information entropy) [40]. Some scholars have shown increased interest in combining entropy theory with rough set theory to measure the uncertainty of a system, and many classic mathematical tools have been proposed. For example, Düntsch and Gediga [29] study measuring the uncertainty of rough sets with information entropy; Beaubouef et al. [30] propose a new concept called rough entropy; Liang et al. [27] establish the relationships between rough entropy and information entropy. In the study of knowledge granulation, Wierman [31] focuses on using knowledge granulation to measure the uncertainty of rough sets; Yao [41] employs the concept of granularity measure when studying probabilistic approaches to rough sets; Shah et al. [32] propose several measures using soft rough covering set theory and apply it to multi-criteria decision making. Qin et al. [42] use rough set theory to analyze knowledge structures in a tolerance knowledge base. Kobren et al. [43] provide a framework that uses user feedback for the construction and maintenance of a knowledge base under identity uncertainty. Guo and Xu [7] provide a novel entropy-independent measurement function to capture the features of intuitionistic fuzzy sets.

Preliminaries

In this section, the key mathematical notations and their descriptions are listed in Table 1, and some basic definitions are reviewed.
Table 1

Key Notations and Descriptions

Notation            Description
∅                   the empty set
ℝ                   the set of real numbers
ℤ⁺                  the set of positive integers
W                   a non-empty finite set, named the universe
2^W                 the family of all subsets of W
w_i R w_j           the binary relation between w_i and w_j on W
R = {R_i}_{n_1}     the set of all binary relations R_i on the universe W
O = {O_i}_{n_2}     the set of all binary relations O_i on the universe W
P = {P_i}_{n_3}     the set of all binary relations P_i on the universe W
Q = {Q_i}_{n_4}     the set of all binary relations Q_i on the universe W
R[W]                the family of all equivalence relations on W
|W|                 the cardinality of W, e.g., |{a,b,c}| = 3
M ≜ N               M and N are equivalent, where M and N are two functions or sets
W = {w_i}_k         the simplified form of W = {w_1, w_2, ..., w_k}
[W, R]              the knowledge base
[T, H]              the knowledge base induced from ProBase
M(W)                the measure set on W

Definition 1 ([1] Binary relation R on W)

Let w_i R w_j denote the binary relation between w_i and w_j on W, where w_i is the predecessor of w_j, and w_j is the successor of w_i. If (w_i, w_j) ∈ R, then we have w_i R w_j. For any w_i, w_j ∈ W, the binary relation R can be represented by a 0-1 square matrix M_R = (m_ij), where m_ij = 1 if w_i R w_j, and m_ij = 0 otherwise.

Definition 2 ([1, 44] Equivalence relation on W)

If R satisfies the following three properties, then we call R an equivalence relation on W: reflexive means that wRw always holds for any w ∈ W; symmetric means that wRv implies vRw for any w, v ∈ W; transitive means that wRv and vRu imply wRu for any w, v, u ∈ W. Since W can be partitioned by an equivalence relation R, the following definition of the equivalence class is obtained.

Definition 3 ([44] Equivalence class on W)

Let R be an equivalence relation on W. We call [w]_R = {v ∈ W : wRv} the equivalence class containing w, and W/R = {[w]_R : w ∈ W} the family of all equivalence classes [w]_R.

Definition 4 ([18] Knowledge base)

[W, R] is called a KB if and only if W is a non-empty finite universe and R is a family of equivalence relations on W (i.e., R ⊆ R[W]).

Definition 5 ([44] Equivalence relationship between KBs)

Given two KBs [W, R] and [W, O], if the partitions induced by R and O are equivalent (i.e., they determine the same equivalence classes on W), then we write [W, R] ≜ [W, O].

Definition 6 ([1] Knowledge structure of R)

If the finite set W can be divided by the relations R = {R_i}_{n_1}, then we call the vector K(R) = (W/R_1, W/R_2, ..., W/R_{n_1}) the knowledge structure of R.

Definition 7 (Indiscernibility relation over B)

Let F be a finite set and let B be a set of attributes. Two entities f, f′ ∈ F satisfy the indiscernibility relation over B, denoted IND(B), if and only if f and f′ have the same value on all elements in B. For example, a red Porsche and a red Tesla satisfy the indiscernibility relation on the attribute color.

Example 1

Given a collection W = {w_1, ..., w_8} that contains 8 candies. Suppose these candies have different colors (e.g., red, blue, yellow), shapes (e.g., square, round, triangular), and tastes (e.g., lemony, sweet). Therefore, these candies can be divided according to color, shape, and taste. Statistical information about W is summarized in Table 2.
Table 2

Candies are divided according to color, shape and taste

Attribute    w1    w2    w3    w4    w5    w6    w7    w8
Red
Blue
Yellow
Square
Round
Triangular
Lemony
Sweet
As shown in Table 2, we can define three equivalence relations, namely, R1 (i.e., color), R2 (i.e., shape), and R3 (i.e., taste). Further, through these three equivalence relations, the three families of equivalence classes W/R1, W/R2, and W/R3 are obtained. Apparently, according to Definition 4, [W, {R1, R2, R3}] is a KB. And according to Definition 7, w1 and w3 satisfy the indiscernibility relation on the color red, and w1 and w4 satisfy the indiscernibility relation on the shape square.
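The partition mechanics of Example 1 can be reproduced in a few lines. The concrete color/shape/taste assignment below is hypothetical (chosen only for illustration, not taken from Table 2), as is the `partition_by` helper.

```python
from collections import defaultdict

# Hypothetical attribute table for 8 candies; the assignment is
# illustrative and not the one in the paper's Table 2.
candies = {
    "w1": {"color": "red",    "shape": "square",     "taste": "lemony"},
    "w2": {"color": "blue",   "shape": "round",      "taste": "sweet"},
    "w3": {"color": "red",    "shape": "round",      "taste": "sweet"},
    "w4": {"color": "yellow", "shape": "square",     "taste": "lemony"},
    "w5": {"color": "blue",   "shape": "triangular", "taste": "sweet"},
    "w6": {"color": "red",    "shape": "round",      "taste": "lemony"},
    "w7": {"color": "yellow", "shape": "triangular", "taste": "sweet"},
    "w8": {"color": "blue",   "shape": "square",     "taste": "lemony"},
}

def partition_by(universe, attribute):
    """Partition the universe into equivalence classes: two elements are
    equivalent iff they agree on `attribute` (Definitions 2 and 3)."""
    classes = defaultdict(set)
    for w, attrs in universe.items():
        classes[attrs[attribute]].add(w)
    return list(classes.values())

# W/R1 (color), W/R2 (shape), W/R3 (taste): the knowledge structure of the KB
knowledge_structure = [partition_by(candies, a) for a in ("color", "shape", "taste")]
```

With this toy data, w1 and w3 land in the same color class (both red), mirroring the indiscernibility example in the text.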

Four uncertainty measurement functions for KBs

In this section, we introduce the categories, the core ideas, and the formalization of the four measurement functions. It is worth noting that, for a finite set W, we can divide W based on its equivalence relations (guided by rough set theory) to obtain the knowledge base [W, R]. Then, according to Definition 6, we obtain the knowledge structure K(R) of [W, R]. Moreover, based on K(R), we can utilize the knowledge granulation, the knowledge entropy, the rough entropy, and the knowledge amount of K(R) to construct the measure set, respectively. Finally, based on the constructed measure set (the principles of measure set construction and an example are provided in Section 8), the coefficient of variation of the set (denoted Cv in (11), a common objective statistical indicator of the dispersion of a dataset) is calculated to measure the uncertainty of the KB [W, R].
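As a concrete sketch of this pipeline: build one measure value per equivalence relation, collect these values into a measure set, and take the set's coefficient of variation. The per-relation formula used here is the standard rough-set form of the knowledge amount, which is an assumption standing in for the paper's exact formula; the `cv` helper implements the usual Cv = σ/mean with the population standard deviation.

```python
import math

def knowledge_amount(partition, n):
    # Standard rough-set form (an assumption; the paper's formula may
    # differ): sum over blocks X of (|X|/n) * (1 - |X|/n).
    return sum((len(X) / n) * (1 - len(X) / n) for X in partition)

def cv(values):
    # Coefficient of variation: population standard deviation / mean.
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return std / mean

# A toy knowledge structure: one partition of W (|W| = 6) per relation.
knowledge_structure = [
    [{1, 2}, {3, 4}, {5, 6}],   # W/R1
    [{1}, {2, 3}, {4, 5, 6}],   # W/R2
    [{1, 2, 3}, {4, 5, 6}],     # W/R3
]
measure_set = [knowledge_amount(p, 6) for p in knowledge_structure]
uncertainty = cv(measure_set)   # dispersion of the measure set
```

A smaller coefficient of variation over the measure set indicates a more stable, less dispersed measurement, which is the yardstick used in the dispersion analysis below.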

Categories of four measurement functions

In this paper, we focus on four currently popular measurement functions for measuring the uncertainty of knowledge bases. Specifically, these methods include: granularity-based measures (i.e., the knowledge granulation in Definition 8); entropy-based measures (i.e., the knowledge entropy in Definition 9 and the rough entropy in Definition 10); and knowledge amount-based measures (i.e., the knowledge amount in Definition 11).

The core idea of four measurement functions

The core idea of granularity-based measures: the granulation of knowledge in the KB is mainly quantified by counting the number of elements in the equivalence classes induced by the relations in R. Specifically, given a KB [W, R], the granulation of R can be formalized as a mapping function from R to the real numbers. The core idea of entropy-based measures: in classical thermodynamics, entropy is a measurable physical property that reveals the disorder of a system (the higher the entropy, the higher the disorder). In information theory, entropy (e.g., Shannon entropy) is used to measure the uncertainty of a system. Similarly, a large number of studies have applied the concept of entropy to measure the uncertainty of KBs. The core idea of knowledge amount-based measures: these measures are a variation of the entropy-based measures described above, which introduces a probability measure (e.g., the probability of an equivalence class in the universe W). This makes it possible to measure both the uncertainty and the fuzziness of the KB.

Formalization of four measurement functions

Definition 8 ([1] Knowledge granulation of K(R))

For a knowledge base [W, R], the knowledge granulation of K(R) is quantified by (4), where W = {w_i}_n (i.e., |W| = n) and R = {R_i} is the set of equivalence relations on W.

Definition 9 ([1] Knowledge entropy of K(R))

For a knowledge base [W, R], the knowledge entropy of K(R) is quantified by (5), where W = {w_i}_n (i.e., |W| = n) and R = {R_i} is the set of equivalence relations on W.

Definition 10 ([1] Rough entropy of K(R))

For a knowledge base [W, R], the rough entropy of K(R) is quantified by (6), where W = {w_i}_n (i.e., |W| = n) and R = {R_i} is the set of equivalence relations on W.

Definition 11 ([1] Knowledge amount of K(R))

For a knowledge base [W, R], the knowledge amount of K(R) is quantified by (7), where W = {w_i}_n (i.e., |W| = n) and R = {R_i} is the set of equivalence relations on W.
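For reference, all four measures admit compact implementations for a single partition W/R. The formulas below are the standard forms from the rough-set literature and are an assumption here: the exact normalization in the paper's (4)-(7) may differ.

```python
import math

def measures(partition, n):
    """Four uncertainty measures for one partition W/R of a universe with
    |W| = n, in their standard rough-set forms (an assumption; the
    paper's (4)-(7) may differ in normalization)."""
    sizes = [len(block) for block in partition]
    assert sum(sizes) == n, "partition must cover the universe exactly"
    kgr = sum(s * s for s in sizes) / (n * n)               # knowledge granulation
    ken = -sum(s / n * math.log2(s / n) for s in sizes)     # knowledge entropy
    ren = sum(s / n * math.log2(s) for s in sizes)          # rough entropy
    kam = sum(s / n * (1 - s / n) for s in sizes)           # knowledge amount
    return kgr, ken, ren, kam
```

Under these forms, the coarsest partition (one block) gives zero knowledge entropy and zero knowledge amount while maximizing granulation and rough entropy, and the finest partition (all singletons) does the opposite, consistent with the boundedness in Lemma 1 below.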

The main properties of KGR, KEN, REN, and KAM

Lemma 1 ([1] Boundedness)

Suppose that [W, R] is a KB. Then the inequalities in (8) hold; they reveal the boundedness of the knowledge granulation, knowledge entropy, rough entropy, and knowledge amount on [W, R].

Lemma 2 ([1] Monotonicity)

Let [W, R] and [W, O] be two KBs. If the knowledge structure of [W, R] is strictly finer than that of [W, O], then the monotonicity relations in (9) hold. For rigorous proofs of Lemmas 1 and 2, the reader is referred to [1].

Dispersion analysis

In this section, we first review the conclusion of the numerical experiments of [1]. The authors construct four measure sets (the principles of measure set construction and an example are provided in Section 8) on three datasets (Nursery, Solar Flare, and Tic-Tac-Toe Endgame in Table 3). Then, they compare the performance of the four measurement functions (i.e., Definitions 8-11) by dispersion analysis. In their numerical experiments, they use the coefficient of variation to compare the performance differences between the four measurement functions. The experimental results are shown in Table 3.
Table 3

Cv values of measure sets M(KGR), M(REN), M(KEN), and M(KAM)

Dataset                Cv(M(KGR))   Cv(M(REN))   Cv(M(KEN))   Cv(M(KAM))
Nursery                2.0431       0.6978       0.4750       0.1141
Solar Flare            0.9857       0.3219       0.2806       0.0615
Tic-Tac-Toe Endgame    1.7882       0.9015       0.4340       0.1186
According to Table 3, the results imply an interesting conclusion, i.e., Cv(M(KAM)) < Cv(M(KEN)) < Cv(M(REN)) < Cv(M(KGR)) (Inequality (10)), which shows that M(KAM) has much better performance, since a smaller coefficient of variation indicates a more stable measurement. The conclusion of Inequality (10) and Table 3 may reflect a kind of regularity, which naturally leads to further thinking about the following questions: Does the conclusion of (10) apply to most datasets? Does (10) reveal general laws? What is the mathematical principle behind (10)? This motivates us to gain deeper insight into the different measurement functions. In the next section, we give answers to these three questions.
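The ordering asserted by Inequality (10) can be checked directly against the Cv values reported in Table 3; the snippet below does so for two of the rows (the numbers are copied from the table, not recomputed).

```python
# Cv values copied from Table 3 (Nursery and Solar Flare rows).
nursery = {"KGR": 2.0431, "REN": 0.6978, "KEN": 0.4750, "KAM": 0.1141}
solar_flare = {"KGR": 0.9857, "REN": 0.3219, "KEN": 0.2806, "KAM": 0.0615}

# A smaller coefficient of variation means a less dispersed, more stable
# measure set; (10) asserts KAM < KEN < REN < KGR in terms of Cv.
for row in (nursery, solar_flare):
    assert sorted(row, key=row.get) == ["KAM", "KEN", "REN", "KGR"]
```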

Theoretical analysis of measurement functions

In this section, we answer the above three questions. We provide a unified framework to prove Inequality (10) and theoretically show that Inequality (10) holds generally for most KBs. These conclusions provide a rigorous theoretical basis for measuring the uncertainty of KBs. Before giving the conclusions, we review the mathematical tools and notation used in our proofs: for a given finite measure set, we use σ(⋅) and Cv(⋅) to denote its standard deviation and coefficient of variation, respectively (i.e., (11)). Next, we provide our core theorems, Theorems 1, 2, 3, and 4. These conclusions give a strict theoretical proof of the experimental conclusion in [1], thereby solving the three questions raised in Section 5.

Theorem 1

Suppose that [W, R] is a KB. Let M(KGR) be the measure set on [W, R], where W = {w_i}_k, which can be divided by the relations in R. Then Cv(M(KGR)) can be equivalently described by the measurement function λ(x), where λ(⋅) is defined in (12).

Proof

Suppose that [W, R] is a KB, and let M(KGR) be the measure set on [W, R] based on knowledge granulation. According to (11), we obtain Cv(M(KGR)). According to (4), for the set W (i.e., |W| = k), we compute each element of M(KGR), and substituting these values into (11) yields (18). By (18), we establish the mapping relationship between Cv(M(KGR)) and λ(⋅), where λ(⋅) satisfies (12). The proof is completed. □

Theorem 2

Suppose that [W, R] is a KB. Let M(REN) be the measure set on [W, R], where W = {w_i}_k, which can be divided by the relations in R. Then Cv(M(REN)) can be equivalently described by the measurement function λ(x) (i.e., (12)).

Proof

Suppose that [W, R] is a KB, and let M(REN) be the measure set on [W, R] based on rough entropy. According to (11), we obtain Cv(M(REN)). According to (6), for the set W (i.e., |W| = k), we compute each element of M(REN), and substituting these values into (11) yields (25). By (25), we establish the mapping relationship between Cv(M(REN)) and λ(⋅), where λ(⋅) satisfies (12). The proof is completed. □

Theorem 3

Suppose that [W, R] is a KB. Let M(KEN) be the measure set on [W, R], where W = {w_i}_k, which can be divided by the relations in R. Then Cv(M(KEN)) can be equivalently described by the measurement function λ(x) (i.e., (12)).

Proof

Suppose that [W, R] is a KB, and let M(KEN) be the measure set on [W, R] based on knowledge entropy. According to (11), we obtain Cv(M(KEN)). According to (5), for the set W (i.e., |W| = k), we compute each element of M(KEN), and substituting these values into (11) yields (32). According to (32), we establish the mapping relationship between Cv(M(KEN)) and λ(⋅), where λ(⋅) satisfies (12). The proof is completed. □

Theorem 4

Suppose that [W, R] is a KB. Let M(KAM) be the measure set on [W, R], where W = {w_i}_k, which can be divided by the relations in R. Then Cv(M(KAM)) can be equivalently described by the measurement function λ(x) (i.e., (12)).

Proof

Suppose that [W, R] is a KB, and let M(KAM) be the measure set on [W, R] based on knowledge amount. According to (11), we obtain Cv(M(KAM)). According to (7), for the set W (i.e., |W| = k), we compute each element of M(KAM). Therefore, we establish the mapping relationship between Cv(M(KAM)) and λ(⋅), where λ(⋅) satisfies (12). The proof is completed. □

The relation between λ(⋅) and the four measurement functions

According to Theorems 1-4, we summarize the intrinsic properties of the function λ(⋅). Specifically, we can capture the following three important pieces of information. Universality: the measurement function λ(⋅) establishes an internal relationship with Cv(⋅) (e.g., (19)), and in the final mathematical expression the underlying set does not affect (12). In other words, (12) applies to any finite set (it only requires that the set can be divided according to some relation), which means that the function λ(⋅) has universality. One-to-one correspondence between the four measurement functions and the inputs of λ(⋅): each measurement function corresponds to one of the four different inputs of λ(⋅), so λ(⋅) achieves a formal unification of the four different measurement functions. Monotonicity: the function λ(⋅) can uniformly describe these four different measurement tools in a two-dimensional plane; KGR, KEN, REN, and KAM can be described by their four different parameter inputs, and they are all elementary functions in a two-dimensional plane.

Equivalent representation

According to λ(⋅) and C(⋅), we use λ(⋅) to describe C(⋅) equivalently. In addition, according to (12), we see that the differences among , , and depend entirely on their different inputs , , and . Therefore, the differences among the four mathematical tools for measuring the uncertainty of can be represented by x, , , and .

Interval range

Observably, considering the monotonicity of each function, we can deduce that Inequality (10) always holds in the interval [α,β], where α satisfies (i.e., ), and β satisfies β = x2 = 2k or x2 = k (i.e., ). Consequently, we obtain an initial range, that is . However, this contradicts (because ). The value of β should therefore be subject to , i.e., . Therefore, we obtain that,

Corollary 1

If , where ⌈⋅⌉ is the ceiling function (e.g., ⌈2.4⌉ = 3), then . For an intuitive illustration, we provide two visualizations of the different evaluation functions of x, , , and under different values of k. According to Fig. 1 (k = 16) and Fig. 2 (k = 25), we can clearly see the differences between the four measurement functions.
Fig. 1

A visualization of the different evaluation functions x, , , and at k = 16

Fig. 2

A visualization of the different evaluation functions x, , , and at k = 25


Note

We provide two visual examples to illustrate the unified representation of these four measurement functions, which correspond to the four different inputs of the unified metric function λ(⋅). In the previous section, we provided an explicit interval within which Inequality (10) holds strictly. However, as shown in Figs. 1 and 2, the magnitude relations of the four measurement functions are not unique if . In summary, we conclude the following:

When , Inequality (10) holds strictly. In other words, KAM() has much better performance for measuring the uncertainty of KBs.

When , the four measurement functions do not show regularity in their results, although KAM() almost always performs best. Note that since k represents the number of samples in the dataset, the interval does not occur in practice, so we do not discuss this situation further.

Comparison analysis

λ(⋅) formally unifies , , , and . Next, we visualize the similarities and differences between λ(⋅) and , , , and by Figs. 3 and 4.
Fig. 3

Comparison of the measure values of the four measurement functions

Fig. 4

Comparison of the outputs in λ(⋅) corresponding to the four different inputs

It is worth noting that λ(⋅) is not a new measurement function; it is used as a unified equivalent form of , , , and . Therefore, the following analysis does not involve a comparison of performance, but focuses on the differences between λ(⋅) and each measurement function in terms of principle and interpretability. Specifically, as shown in Figs. 3 and 4, we summarize the comparison between λ(⋅) and , , , and as follows:

Measurement principle: , , , and focus only on outputting specific numerical results (e.g., coefficients of variation) in their studies of measures of uncertainty for knowledge bases. In other words, the comparison of performance between these measurement functions is limited to the magnitudes of the statistical values they compute. Unfortunately, such a comparison at the level of results alone does not reflect why the four measurement functions differ. For example, when the potential associations among , , , and are not considered, the comparison can show that the value of “pink” is (almost always) greater than the value of “blue” (as shown on the left in Fig. 3), but it cannot reveal the reason.

Interpretability: As shown in Fig. 4, λ(⋅) integrates the four measurement functions into a unified measurement framework, where different inputs correspond to different outputs. In Theorem 1, we proved that λ(⋅) has the following form, i.e., . Obviously, for determined x, n, and k (which can be determined from the knowledge base), λ(⋅) involves only changes in values and therefore does not change the monotonicity of the original input. This excellent property allows the comparison between different outputs of λ(⋅) to be translated into a comparison of their corresponding inputs, i.e., x, , , and .
Fortunately, each of the above four inputs corresponds to four more primitive functions and can be compared (as shown in Figs. 1 and 2). Thus, although λ(⋅) is not a new measurement function, as a unified integrated framework for , , , and , it explains the differences in the metric values of different measurement functions by comparing x, , , and .

Limitations

In RST, knowledge reflects the ability to classify objects [45]. Specifically, in a KB, the set of entities of interest in a certain field can be regarded as a finite set (or universe) , and any subset is called a category (or concept) in , which contains many entities. A concept family, which contains many concepts, is called abstract knowledge about . A KB over is equivalent to a family of classifications over . Objects in a KB can be divided according to their different attributes. For example, given a set containing many candies, and supposing these candies have different colors (e.g., white, yellow, red) and shapes (e.g., round, square, triangle), these candies can be described by attributes such as color and shape, e.g., red round candies or yellow triangle candies. Hence, we can obtain two equivalence relations (or attributes) from the above example, i.e., . According to these equivalence relations, the corresponding equivalence classes can be further obtained: the elements in the set are divided and recombined according to the equivalence relations, e.g., candies are divided by color.
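The candy example can be made concrete. A minimal Python sketch of dividing a universe into equivalence classes by an attribute (the candy records and attribute values below are hypothetical, invented for illustration):

```python
from collections import defaultdict

# Hypothetical universe of candies, each described by two attributes.
candies = [
    {"id": 1, "color": "red",    "shape": "round"},
    {"id": 2, "color": "red",    "shape": "square"},
    {"id": 3, "color": "yellow", "shape": "triangle"},
    {"id": 4, "color": "white",  "shape": "round"},
    {"id": 5, "color": "yellow", "shape": "triangle"},
]

def equivalence_classes(universe, attribute):
    """Partition the universe by one attribute (an equivalence relation)."""
    classes = defaultdict(list)
    for obj in universe:
        classes[obj[attribute]].append(obj["id"])
    return dict(classes)

by_color = equivalence_classes(candies, "color")
by_shape = equivalence_classes(candies, "shape")
print(by_color)  # {'red': [1, 2], 'yellow': [3, 5], 'white': [4]}
```

Each attribute induces its own partition, matching the idea that a KB is a family of classifications over the universe.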

Measures of uncertainty for KBs without attribute information

In the previous section, we analyzed the performance of different measurement functions in measuring the uncertainty of KBs. A limitation of previous research is that the division of instances in a KB can often only depend on their attributes. However, the types of knowledge bases have changed with the needs of real applications, and some knowledge bases do not contain the attributes of the instances or lack sufficient attribute relations to classify the instances (e.g., ProBase). In this section, we first provide the definition of the concept structure of ProBase (see Definition 6). Then, we provide an effective strategy to induce KBs from ProBase, such that instances in the induced KBs can be classified by their concepts.

Inducing KBs from ProBase: intuition

According to Definition 4, for simplicity of description, we use to represent a KB induced by ProBase. In fact, all KBs induced from ProBase are induced by the same strategy; hence, in the rest of this paper, we unify all knowledge bases as for theoretical analysis. More precisely, is a set containing a large number of instances, which refer to nodes that no longer have hyponyms in ProBase, and is the family of hypernym (or concept) sets of the instances. Therefore, in this paper, we do not strictly distinguish between InstanceOf and SubClass; in most downstream tasks, the two can be unified as the isA relationship.

Definition 12 (ProBase [34])

ProBase is a probabilistic taxonomy that contains hundreds of millions of instances, concepts, and isA relationships. An isA relationship can be specified as an InstanceOf relation between a concept and an instance (e.g., (Snoopy, isA, dog)) or a SubClass relation between a pair of concepts (e.g., (fruit, isA, botany)).
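A minimal sketch of how the two flavors of isA triples can be told apart, assuming (hypothetically) that the concept vocabulary is known; the triples are the two examples from the definition:

```python
# Toy isA triples from the definition; the concept set is a hypothetical
# stand-in for ProBase's real concept vocabulary.
triples = [("Snoopy", "isA", "dog"), ("fruit", "isA", "botany")]
concepts = {"dog", "fruit", "botany"}

def relation_type(triple):
    head, _, tail = triple
    # If the head itself is a concept, the pair is SubClass; otherwise InstanceOf.
    return "SubClass" if head in concepts else "InstanceOf"

print([relation_type(t) for t in triples])  # ['InstanceOf', 'SubClass']
```

As the text notes, in most downstream tasks this distinction can be collapsed and both treated as a single isA relationship.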

Classifications

We first use a simple example to illustrate the intuition that the instances in ProBase can be classified according to their concepts.

Example 2

Given a finite set , if is divided by the equivalence relation H = {carnivore}, the equivalence classes of form an independent set, i.e., , where . If is divided by the equivalence relation , then can be divided into , where . As can be seen from Example 2, can be divided by to obtain and . For ProBase, the dimension of can be determined by ; hence, can be regarded as a vector in a vector space. Note that, supposing is a KB induced by ProBase, where is the set of instances and is the family consisting of the sets of hypernyms (i.e., concepts) of the instances, the choice of concepts is constrained. This means that the instances in can be divided by . Therefore, in this paper, we regard an equivalence relation (i.e., attribute) in the KB as a concept (i.e., hypernym) in ProBase. Li et al. [33] define the vector as the knowledge structure of KBs. Similarly, we provide the definition of the concept structure of as follows:

Definition 13 (Concept structures of )

Suppose is a KB induced by ProBase. If the finite set can be divided by relations , then we call the vector the concept structure of . In Example 2, let t1 = tiger, t2 = lion, and H2 = {felidae}; then , which means that tiger and lion are equivalent under relation H2. Similarly, and are equivalent under relation H.
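Under one hedged reading of this definition (each relation assigns every instance to a class, and two instances are equivalent under a relation when their assignments coincide; the exact vector components are given by the definition's formula, which is not reproduced in this excerpt), Example 2 can be sketched as follows. The animal data are illustrative only:

```python
# Hypothetical instances and concept-based relations (hedged reading of
# Definition 13: each relation assigns every instance to one class).
instances = ["tiger", "lion", "sheep"]
relations = {
    "H":  {"tiger": "carnivore", "lion": "carnivore", "sheep": "herbivore"},
    "H2": {"tiger": "felidae",   "lion": "felidae",   "sheep": "bovidae"},
}

def structure(instance):
    """Vector of class labels of one instance under every relation."""
    return tuple(relations[r][instance] for r in sorted(relations))

# tiger and lion are equivalent under H2 (both felidae), as in Example 2.
print(structure("tiger") == structure("lion"))  # True
```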

Inducing KBs from ProBase: strategy

Strategy

It is worth noting that in ProBase, most instances belong to many hypernyms; in other words, two or more different concepts may share identical instances (e.g., the hypernyms of apple can be company, fruit, etc.). Therefore, intuitively, ProBase can divide instances based on different levels of hypernyms to obtain multiple KBs. The specific division strategy is:

1. Select an instance that has at least three hypernym hierarchies (denoted as ), i.e., , where x→y means x is the hyponym of y. For example, .

2. Repeat the above step to obtain all satisfying (45), i.e., . For example, . Collect all the instances in each to form the set T1.

3. Repeat the selection strategy; similarly, collect all the instances in each to form the set T2. For example, .

4. Terminate the search when t no longer satisfies (45). The final acquired dataset can be viewed as a sub-dataset induced by ProBase based on instance t.

The condition Ti ∩ Tj = ∅ (i ≠ j) ensures that the same instance is strictly divided according to its hypernyms; for example, a candy cannot be both red and blue. The condition hypo(h2(t,p)) ∩ hypo(h2(t,q)) ≠ ∅ ensures the presence of instances under any combination of hypo(h2(t,q)), q ∈ {1,2,...,q}.
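The selection step can be sketched on a toy hypernym graph. Everything below (the graph and the helper names `hypernym_chains` and `selectable`) is a hypothetical illustration of the "at least three hypernym hierarchies" test in (45), not ProBase's actual implementation:

```python
# Toy hypernym map: child -> list of direct hypernyms (hypothetical data).
hypernyms = {
    "golden delicious": ["apple"],
    "apple": ["fruit", "company"],
    "fruit": ["botany"],
    "company": ["organization"],
}

def hypernym_chains(node, path=None):
    """All upward hypernym chains starting at `node`."""
    path = (path or []) + [node]
    ups = hypernyms.get(node, [])
    if not ups:
        return [path]
    chains = []
    for h in ups:
        chains += hypernym_chains(h, path)
    return chains

# Keep an instance only if some chain places at least three hypernym
# levels above it, mirroring the requirement in (45).
def selectable(instance):
    return any(len(chain) >= 4 for chain in hypernym_chains(instance))

print(selectable("golden delicious"))  # True (-> apple -> fruit -> botany)
print(selectable("company"))           # False (only -> organization)
```

Applying this test repeatedly and grouping the surviving instances by their hypernyms yields the disjoint sets Ti described above.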

Rationality analysis

The strategy is not unique. Similarly, we can also select a concept (the concept must have enough hypernym hierarchies and hyponym hierarchies) conforming to the selection strategy of (45); we do not repeat the details here. Obviously, multiple KBs can be induced from ProBase based on the above strategy, and the instances in these KBs can be divided according to their selected concepts. As a comparison, in , an “h2(t,q)” plays the role of an attribute, and “” represents the attribute value. Therefore, based on the above strategy and analysis, we theoretically provide a strategy for inducing a KB from ProBase whose instances can be strictly classified based on their selected concepts. Our results indicate that λ(⋅) provides valuable insights for integrating the four measurement functions into a unified framework for measuring the uncertainty of KBs.

Experiments

KBs with attribute information

Comparison of four measurement functions

We conduct experiments on the datasets in Table 4 with the aim of comparing the performance of the four measurement functions, KGR(⋅), REN(⋅), KEN(⋅), and KAM(⋅), across different knowledge bases.
Table 4

Data sets from UCI,a where “#X” represents the number of “X”

Datasets                        Area             #Attributes   #Instances
Tic-Tac-Toe Endgame             Game             9             958
Chess                           Game             36            3,196
Dota2 Games                     Game             116           102,944
Lymphography                    Life Science     18            148
Mushroom                        Life Science     22            8,124
SPECT Heart                     Life Science     22            267
Abalone                         Life Science     8             4,177
Estimation of obesity levels    Life Science     17            2,111
Primary Tumor                   Life Science     17            339
Breast Cancer                   Life Science     10            116
Congressional Voting Records    Social Science   16            435
Balance Scale                   Social Science   4             625
Nursery                         Social Science   8             12,960
Student Performance             Social Science   33            649
Letter Recognition              Computer         16            20,000
Solar Flare                     Physical         10            1,389
Car Evaluation                  Other            6             1,728
MONK's Problems                 Other            7             432

a https://archive.ics.uci.edu/ml/index.php


The measure sets construction

Specifically, for a KB , we denote , where ind(⋅) stands for the indiscernibility relation, such as . Let be the set consisting of R, where satisfies (e.g., ). Obviously, is the knowledge base induced by . Therefore, we obtain four measure sets on as follows:

Example 3

For example, “Lymphography” in Table 4 can be viewed as an information system with , . We can obtain four measure sets on “Lymphography” as follows: and the values of , , and are calculated by (4)–(7).
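The construction of a measure set and its coefficient of variation can be sketched in Python. Since formulas (4)–(7) are not reproduced in this excerpt, the sketch substitutes the textbook rough-set knowledge granulation GK(R) = Σ|Xi|²/|U|² as a stand-in measure, and the small information system below is invented for illustration:

```python
from statistics import pstdev, mean

# Hypothetical information system: 6 objects, 3 attributes (a1..a3).
table = [
    {"a1": 0, "a2": 1, "a3": 0},
    {"a1": 0, "a2": 1, "a3": 1},
    {"a1": 1, "a2": 0, "a3": 0},
    {"a1": 1, "a2": 0, "a3": 1},
    {"a1": 1, "a2": 1, "a3": 0},
    {"a1": 0, "a2": 0, "a3": 1},
]
attrs = ["a1", "a2", "a3"]

def partition(objs, subset):
    """Equivalence classes of the indiscernibility relation ind(subset)."""
    classes = {}
    for i, o in enumerate(objs):
        key = tuple(o[a] for a in subset)
        classes.setdefault(key, []).append(i)
    return list(classes.values())

def granulation(objs, subset):
    """Knowledge granulation GK = sum(|Xi|^2) / |U|^2 (stand-in measure)."""
    n = len(objs)
    return sum(len(c) ** 2 for c in partition(objs, subset)) / n ** 2

# Measure set over the nested attribute families {a1}, {a1,a2}, {a1,a2,a3}.
measure_set = [granulation(table, attrs[: i + 1]) for i in range(len(attrs))]
cv = pstdev(measure_set) / mean(measure_set)  # coefficient of variation
print(measure_set, round(cv, 4))
```

Adding attributes refines the partition, so the stand-in measure decreases monotonically along the nested family; the coefficient of variation of the resulting measure set is what the tables below report for each real measurement function.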

Experimental results and analysis on multi-domain datasets

Experimental results

The experimental results are shown in Table 5 and Fig. 5.
Table 5

Coefficient of variation values of measure sets , , , and

Index   Datasets                        Cv(MKGR(Wi))   Cv(MREN(Wi))   Cv(MKEN(Wi))   Cv(MKAM(Wi))
W1      Tic-Tac-Toe Endgame             1.7879         0.9015         0.4340         0.1186
W2      Chess                           1.5765         0.6719         0.5865         0.1276
W3      Dota2 Games                     4.6868         2.5229         1.5775         0.6037
W4      Lymphography                    1.5971         0.7135         0.4518         0.0946
W5      Mushroom                        2.8592         0.6501         0.3279         0.0807
W6      SPECT Heart                     0.8096         0.5593         0.2969         0.1384
W7      Abalone                         2.0676         1.7854         0.6837         0.1041
W8      Estimation of obesity levels    3.6314         3.0076         0.2442         0.1288
W9      Primary Tumor                   1.8870         0.8839         0.3288         0.1289
W10     Breast Cancer                   1.5247         0.9560         0.3517         0.0980
W11     Congressional Voting Records    1.5189         0.6481         0.3574         0.1253
W12     Balance Scale                   1.2943         0.7453         0.4472         0.0861
W13     Nursery                         2.0431         0.6978         0.4750         0.1141
W14     Student Performance             3.1088         1.9325         0.2946         0.1643
W15     Letter Recognition              3.1032         1.3883         0.2953         0.0380
W16     Solar Flare                     0.9537         0.4224         0.1988         0.0204
W17     Car Evaluation                  1.5439         1.0686         0.2148         0.0556
W18     MONK's Problems                 1.3650         0.9847         0.3916         0.1201
Fig. 5

Coefficient of variation values of four measure sets on datasets (a)–(r)


Analysis

From the results, we conclude the following:

Consistency of results: We select datasets from different domains, containing different numbers of instances and attributes, to validate our theoretical analysis. Specifically, all 18 datasets, covering 6 domains (i.e., game, life science, social science, computer, physical, and other), consistently confirm our theoretical analysis, i.e., .

Metric performance: Across the datasets of different domains, the value of fluctuates the most, so it has the worst performance for measuring the uncertainty of KBs. By contrast, the value of is highly stable, so it has the best performance for measuring the uncertainty of KBs.

Comparison of and : As shown in Fig. 5, the gap between and is not significant in most of the datasets, which is consistent with our analysis of the measurement functions and in the previous section. For example, as shown in Figs. 1 and 2, when the value of x is in the interval , the gap between and is not very significant in most cases.

Comparison of and : In contrast to the above conclusion, the gap between and is significant on almost all datasets, which is also consistent with our analysis in the previous section. For example, as shown in Figs. 1 and 2, when the value of x is in the interval , the gap between and increases as x increases.
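The consistency claim can be checked mechanically against the published coefficient-of-variation values; a minimal sketch using three rows of Table 5 (the full table behaves the same way):

```python
# Cv values from Table 5, in the order (KGR, REN, KEN, KAM).
cv_rows = {
    "Tic-Tac-Toe Endgame": (1.7879, 0.9015, 0.4340, 0.1186),
    "Abalone":             (2.0676, 1.7854, 0.6837, 0.1041),
    "Letter Recognition":  (3.1032, 1.3883, 0.2953, 0.0380),
}

# Cv(KGR) > Cv(REN) > Cv(KEN) > Cv(KAM) holds on every row.
for name, (kgr, ren, ken, kam) in cv_rows.items():
    assert kgr > ren > ken > kam, name
print("ordering holds on all rows")
```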

KBs induced by ProBase

In this section, we induce several KBs from ProBase based on the above strategy and perform uncertainty measurement on the induced KBs. Specifically, we induce three KBs of different sizes (denoted as D1, D2, and D3); the details of D1 (induced from the concept fruit), D2 (induced from the concept corn, containing 123 instances), and D3 (induced from the concept corn, containing 1290 instances) are shown in Table 6. The measure sets on D1, D2, and D3 are constructed in the same way as the construction (49) on the general datasets.
Table 6

Statistical information of D1, D2 and D3

Datasets   #Concepts (h2(ti,q))   #Instances
D1         3                      72
D2         3                      123
D3         3                      1290

Experimental results and analysis on ProBase

The experimental results are shown in Table 7 and Fig. 6.
Table 7

Coefficient of variation values of measure sets MKGR(D), MREN(D), MKEN(D), and MKAM(D) on dataset D

Datasets   Cv(MKGR(Di))   Cv(MREN(Di))   Cv(MKEN(Di))   Cv(MKAM(Di))
D1         0.6217         0.3554         0.4246         0.2498
D2         0.8889         0.5106         0.4073         0.1239
D3         0.2705         0.0891         0.2397         0.0658
Fig. 6

Coefficient of variation values of four measure sets on datasets D1, D2 and D3

From the results, we conclude the following: In datasets D1 and D3, the results show the relationship , which is in line with our analysis. As shown in Figs. 1 and 2, we find that, in the interval , there can be a situation where . This fully validates the rigor of our theoretical analysis. Moreover, this conclusion also reveals that and are greatly affected by the parameter k. In dataset D2, the results reveal the relationship , which further verifies that has stable and excellent performance in measuring the uncertainty of the KB. Consistent with the experimental conclusions on the public datasets, has the worst performance in measuring the uncertainty of KBs, while maintains the best performance.
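The D1/D3 exception can likewise be checked directly on the values in Table 7; a small sketch (row values copied from the table):

```python
# Cv values from Table 7, in the order (KGR, REN, KEN, KAM).
probase_rows = {
    "D1": (0.6217, 0.3554, 0.4246, 0.2498),
    "D2": (0.8889, 0.5106, 0.4073, 0.1239),
    "D3": (0.2705, 0.0891, 0.2397, 0.0658),
}

for name, (kgr, ren, ken, kam) in probase_rows.items():
    assert kam == min(kgr, ren, ken, kam)  # KAM is always the most stable
    assert kgr == max(kgr, ren, ken, kam)  # KGR always fluctuates the most

# The REN/KEN order flips on D1 and D3, matching the analysis above.
print([name for name, (_, ren, ken, _) in probase_rows.items() if ken > ren])
# ['D1', 'D3']
```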

Case study

In this section, we provide a small-scale case to visually demonstrate how to use rough set theory and induction strategy (i.e., Section 7.2) to induce a measurable knowledge base (denoted as D4) from ProBase. Dataset D4 contains 19 concepts about fruit, and their corresponding hypernyms in ProBase (the selection of hypernyms is based on the induction strategy in Section 7.2). The statistical information of D4 is summarized in Table 8.
Table 8

Statistical information of D4

Fruits        Hard   Soft   Non-citrus   Citrus
Apple
Apricot
Banana
Berry
Cherry
Gooseberry
Grape
Grapefruit
Kiwi
Melon
Orange
Papaya
Peach
Pear
Pineapple
Plum
Raspberry
Tomato
Further, as in the above experiments, we construct measure sets on D4 and calculate the coefficient of variation of each measure set; the results are shown in Fig. 7.
Fig. 7

Coefficient of variation values of measure sets on dataset D4

Obviously, the experimental results on dataset D4 are consistent with the previous theoretical analysis and experimental conclusions: KGR(D4) has the worst performance in measuring the uncertainty of KBs, while KAM(D4) maintains the best performance. In particular, the case study also captures the situation where Cv(MKEN(D4)) is greater than Cv(MREN(D4)).

Discussion

In this section, we hope to bring some guidance and insight to the study of knowledge base uncertainty through the results of the theoretical analysis in this paper. According to Table 5 and Fig. 5, although , , , and exhibit the behavior predicted by our theoretical analysis on all 18 public datasets, i.e., , a more detailed analysis reveals significant differences between the measurement functions (e.g., on the dataset “Letter Recognition”, is 0.0380, but reaches 3.1032). Therefore, a conclusion based on a single measurement function is not sufficient. Based on the theoretical analysis and experimental validation in this paper, we advocate evaluating the uncertainty of a knowledge base by combining the four measurement functions. For example, the datasets “Solar Flare” and “Letter Recognition” differ only slightly in and , but differ significantly in and ; comprehensively considering these measurement functions may therefore be more reasonable. The rapid development of deep neural networks (DNNs) in recent years has reached almost every field of AI, and many researchers have begun to think deeply about the reliability of prediction results based on neural networks. There is already evidence that uncertainty (e.g., data uncertainty and model uncertainty) imposes many limitations on DNNs, such as the lack of transparency of a DNN's inference framework [46]. In the previous sections, we focused on measures of uncertainty for knowledge bases, aiming to provide a rigorous theoretical analysis for existing conclusions (e.g., uncovering the reasons for performance differences between measurement functions). We hope these results provide insights into understanding the essence of uncertainty (e.g., uncertainty quantification [47]) for knowledge bases.

Conclusion and further work

The work of this paper is inspired by the experimental conclusions of [1], in which the authors verify, through experiments on three datasets, the superiority of measuring the uncertainty of KBs based on the knowledge amount. Although that conclusion lacks a rigorous theoretical analysis, it encourages us to study why the knowledge-amount-based measurement function performs best in measuring the uncertainty of a knowledge base. This paper therefore provides deeper insights into the uncertainty measurement of knowledge bases. We first review four popular measurement functions for measuring the uncertainty of KBs. Then, at the theoretical level, we integrate the four measurement functions into a unified measurement function, which provides valuable insights for measuring the uncertainty of KBs. At the experimental level, the results on 18 public datasets are consistent with our theoretical conclusions, which fully demonstrates the correctness of our theoretical analysis. In addition, some special datasets (e.g., ProBase), although containing a large amount of structured knowledge, do not provide enough attributes to classify their instances, which prevents the above measurement functions from measuring their uncertainty. To solve this issue, we propose an effective strategy that induces sub-datasets from ProBase such that all the instances in a sub-dataset can be divided according to their concepts. Comparative experimental results justify the effectiveness of the strategy and its consistency with the theoretical conclusions.

Further work

The knowledge base, as an indispensable carrier for the development of today's artificial intelligence technology, provides far-reaching resources for smart devices. With the increase in downstream real-world tasks and the diversification of application scenarios, various types of knowledge bases have appeared one after another, and their knowledge structures have become more and more complicated. Therefore, how to measure the uncertainty of these knowledge bases is an important direction for future work. In addition, the timeliness, accuracy, and redundancy of a knowledge base are also important indicators for evaluating it; whether a complete theoretical analysis of these indicators can be established is another of our future efforts.