Literature DB >> 27747563

Identifying HIV-induced subgraph patterns in brain networks with side information.

Bokai Cao1, Xiangnan Kong2, Jingyuan Zhang3, Philip S Yu3,4, Ann B Ragin5.   

Abstract

Investigating brain connectivity networks for neurological disorder identification has attracted great interest in recent years, most of which focus on the graph representation alone. However, in addition to brain networks derived from the neuroimaging data, hundreds of clinical, immunologic, serologic, and cognitive measures may also be documented for each subject. These measures compose multiple side views encoding a tremendous amount of supplemental information for diagnostic purposes, yet are often ignored. In this paper, we study the problem of subgraph selection from brain networks with side information guidance and propose a novel solution to find an optimal set of subgraph patterns for graph classification by exploring a plurality of side views. We derive a feature evaluation criterion, named gSide, to estimate the usefulness of subgraph patterns based upon side views. Then we develop a branch-and-bound algorithm, called gMSV, to efficiently search for optimal subgraph patterns by integrating the subgraph mining process and the procedure of discriminative feature selection. Empirical studies on graph classification tasks for neurological disorders using brain networks demonstrate that subgraph patterns selected by the multi-side-view-guided subgraph selection approach can effectively boost graph classification performances and are relevant to disease diagnosis.


Keywords:  Brain network; Graph mining; Side information; Subgraph pattern

Year:  2015        PMID: 27747563      PMCID: PMC4737668          DOI: 10.1007/s40708-015-0023-1

Source DB:  PubMed          Journal:  Brain Inform        ISSN: 2198-4026


Introduction

Modern neuroimaging techniques have enabled us to model the human brain as a brain connectivity network, or connectome. Unlike traditional data with vector-based feature representations, brain networks are inherently graphs, composed of brain regions as the nodes, e.g., insula, hippocampus, thalamus, and functional/structural connectivities between the brain regions as the links. The linkage structure in these brain networks can encode tremendous information concerning the integrated activity of the human brain. For example, in brain networks derived from functional magnetic resonance imaging (fMRI), connections/links can encode correlations between brain regions in functional activity, while structural links in diffusion tensor imaging (DTI) can capture white matter fiber pathways connecting different brain regions. The complex structures and the lack of vector representations in these graph data raise a challenge for data mining; an effective model for mining graph data should be able to extract a set of subgraph patterns for further analysis. Motivated by such challenges, graph mining research problems, in particular graph classification, have received considerable attention in the last decade. Conventional approaches focus on mining discriminative subgraphs from the graph view alone. This is usually feasible for applications like molecular graph analysis, where a large set of labeled graph instances is available. For brain network analysis, however, we usually have only a small number of graph instances, ranging from 30 to 100 brain networks [19]. In these applications, the information from the graph view alone may not be sufficient for mining important subgraphs.
Commonly, however, in neurological studies, hundreds of clinical, serologic, and cognitive measures are available for each subject in addition to the brain networks derived from the neuroimaging data [4, 5]. These measures comprise multiple side views. This supplemental information, although generally ignored, provides a plurality of side views that can guide the process of subgraph mining in brain networks.
Fig. 1

An example of multiple side views associated with brain networks in medical studies

Despite its value and significance, the feature selection problem for graph data using auxiliary views has not been studied in this context so far. There are two major difficulties in learning from multiple side views for graph classification, as follows:

The primary view in graph representation

Graph data naturally compose the primary view for graph mining problems, from which we want to select discriminative subgraph patterns for graph classification. However, it raises a challenge for data mining with the complex structures and the lack of vector representations. Conventional feature selection approaches in vector spaces usually assume that a set of features are given before conducting feature selection. In the context of graph data, however, subgraph features are embedded within the graph structures and usually it is not feasible to enumerate the full set of subgraph features for a graph dataset before feature selection. Actually, the number of subgraph features grows exponentially with the size of graphs.

The side views in vector representations

In many applications, side information is available along with the graph data and usually exists in the form of vector representations. That is to say, an instance is represented by a graph and additional vector-based features at the same time. This introduces the problem of how to leverage the relationship between the primary graph view and a plurality of side views, and how to facilitate the subgraph mining procedure by exploring the vector-based auxiliary views. For example, in brain networks, discriminative subgraph patterns for neurological disorders indicate brain injuries associated with particular regions. Such changes can potentially be expressed in other medical tests of the subject, e.g., clinical, immunologic, serologic, and cognitive measures. Thus, it would be desirable to select subgraph features that are consistent with these side views. Figure 2 illustrates two strategies of leveraging side views in the process of selecting subgraph patterns. Conventional graph classification approaches treat side views and subgraph patterns separately and may only combine them at the final stage of training a classifier. Obviously, the valuable information embedded in side views is then not fully leveraged in the feature selection process. Moreover, most subgraph mining approaches focus on the drug discovery problem, where a great amount of graph data for chemical compounds is available. For neurological disorder identification, however, there are usually limited subjects, with a small sample size of brain networks available. Therefore, it is critical to learn knowledge from other possible sources.
We note that transfer learning can borrow supervision knowledge from a source domain to help learning on the target domain, e.g., by finding a good feature representation [10], mapping relational knowledge [24, 25], and learning across graph databases [29]. However, to the best of our knowledge, these methods do not consider transferring complementary information from vector-based side views to a graph database whose instances are complex structural graphs.
Fig. 2

Two strategies of leveraging side views in feature selection process for graph classification: late fusion and early fusion [6]

To solve the above problems, in this paper, we introduce a novel framework that fuses heterogeneous data sources at an early stage. In contrast to existing subgraph mining approaches that focus on a single view of the graph representation, our method can explore multiple vector-based side views to find an optimal set of subgraph features for graph classification. We first verify side information consistency via statistical hypothesis testing. Based on the auxiliary views and the available label information, we design an evaluation criterion for subgraph features, named gSide. By deriving a lower bound, we develop a branch-and-bound algorithm, called gMSV, to efficiently search for optimal subgraph features with pruning, thereby avoiding exhaustive enumeration of all subgraph features. In order to evaluate our proposed model, we conduct experiments on graph classification tasks for neurological disorders, using fMRI and DTI brain networks. The experiments demonstrate that our subgraph selection approach using multiple side views can effectively boost graph classification performance. Moreover, we show that gMSV is more efficient by pruning the subgraph search space via gSide.

Problem formulation

A motivation for this work is the premise that side information could be strongly correlated with neurological status. Before presenting the subgraph feature selection model, we first introduce the notation that will be used throughout this paper. Let $\mathcal{D}=\{G_1,\ldots,G_n\}$ denote the graph dataset, which consists of n graph objects. The graphs within $\mathcal{D}$ are labeled by $\mathbf{y}=[y_1,\ldots,y_n]^\top$, where $y_i\in\{-1,+1\}$ denotes the binary class label of $G_i$.

Definition 1

(Graph) A graph is represented as $G=(V,E)$, where $V$ is the set of vertices and $E\subseteq V\times V$ is the set of edges.

Definition 2

(Subgraph) Let $g=(V',E')$ and $G=(V,E)$ be two graphs. $g$ is a subgraph of $G$ (denoted as $g\subseteq G$) iff $V'\subseteq V$ and $E'\subseteq E$. If $g$ is a subgraph of $G$, then $G$ is a supergraph of $g$.
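Since brain-network nodes are uniquely labeled anatomical regions, the containment test of Definition 2 can be checked directly on vertex and edge sets; a minimal sketch under that simplifying assumption (general subgraph isomorphism, needed for unlabeled nodes, is harder):

```python
def is_subgraph(g_prime, g):
    """Definition 2: g' = (V', E') is a subgraph of g = (V, E) iff V' is a subset
    of V and E' is a subset of E. Graphs are (vertices, edges) pairs with hashable
    node labels; undirected edges are compared as frozensets so (u, v) == (v, u)."""
    v1, e1 = g_prime
    v2, e2 = g
    edges = lambda e: {frozenset(pair) for pair in e}
    return set(v1) <= set(v2) and edges(e1) <= edges(e2)

# Toy brain network over labeled regions (hypothetical example):
G = ({"insula", "hippocampus", "thalamus"},
     [("insula", "hippocampus"), ("hippocampus", "thalamus")])
g = ({"insula", "hippocampus"}, [("hippocampus", "insula")])
```

Here `is_subgraph(g, G)` holds, and G is accordingly a supergraph of g.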

Definition 3

(Side view) A side view is a set of vector-based features $\mathbf{z}_i\in\mathbb{R}^d$ associated with each graph object $G_i$, where d is the dimensionality of this view. A side view is denoted as $\mathcal{Z}=\{\mathbf{z}_1,\ldots,\mathbf{z}_n\}$. We assume that multiple side views $\{\mathcal{Z}^{(1)},\ldots,\mathcal{Z}^{(v)}\}$ are available along with the graph dataset $\mathcal{D}$, where v is the number of side views. We employ kernels $\kappa^{(p)}$ on $\mathcal{Z}^{(p)}$, such that $\kappa^{(p)}_{ij}=\kappa^{(p)}(\mathbf{z}_i^{(p)},\mathbf{z}_j^{(p)})$ represents the similarity between $G_i$ and $G_j$ from the perspective of the p-th view. The RBF kernel is used as the default kernel in this paper, unless otherwise specified:

$$\kappa(\mathbf{z}_i,\mathbf{z}_j)=\exp\left(-\frac{\Vert\mathbf{z}_i-\mathbf{z}_j\Vert^2}{2\sigma^2}\right)$$

In this paper, we adopt the idea of subgraph-based graph classification approaches, which assume that each graph object $G_j$ is represented as a binary vector $\mathbf{x}_j=[x_{1j},\ldots,x_{mj}]^\top$ associated with the full set of subgraph patterns $\mathcal{S}=\{g_1,\ldots,g_m\}$ for the graph dataset $\mathcal{D}$. Here $x_{ij}$ is the binary feature of $G_j$ corresponding to the subgraph pattern $g_i$, and $x_{ij}=1$ iff $g_i$ is a subgraph of $G_j$ ($g_i\subseteq G_j$), otherwise $x_{ij}=0$. Let $X=[x_{ij}]^{m\times n}=[\mathbf{x}_1,\ldots,\mathbf{x}_n]=[\mathbf{f}_1,\ldots,\mathbf{f}_m]^\top\in\{0,1\}^{m\times n}$ denote the matrix consisting of binary feature vectors using $\mathcal{S}$ to represent the graph dataset $\mathcal{D}$. The full set $\mathcal{S}$ is usually too large to be enumerated, and usually only a subset of subgraph patterns is relevant to the task of graph classification. We briefly summarize the notations used in this paper in Table 1.
Table 1

Important notations

Symbol | Definition and description
$|\cdot|$ | Cardinality of a set
$\Vert\cdot\Vert$ | Norm of a vector
$\mathcal{D}=\{G_1,\ldots,G_n\}$ | Given graph dataset; $G_i$ denotes the i-th graph in the dataset
$\mathbf{y}=[y_1,\ldots,y_n]^\top$ | Class label vector for graphs in $\mathcal{D}$, $y_i\in\{-1,+1\}$
$\mathcal{S}=\{g_1,\ldots,g_m\}$ | Set of all subgraph patterns in the graph dataset $\mathcal{D}$
$\mathbf{f}_i=[f_{i1},\ldots,f_{in}]^\top$ | Binary vector for subgraph pattern $g_i$; $f_{ij}=1$ iff $g_i\subseteq G_j$, otherwise $f_{ij}=0$
$\mathbf{x}_j=[x_{1j},\ldots,x_{mj}]^\top$ | Binary vector for $G_j$ using subgraph patterns in $\mathcal{S}$; $x_{ij}=1$ iff $g_i\subseteq G_j$, otherwise $x_{ij}=0$
$X=[x_{ij}]^{m\times n}$ | Matrix of all binary vectors in the dataset, $X=[\mathbf{x}_1,\ldots,\mathbf{x}_n]=[\mathbf{f}_1,\ldots,\mathbf{f}_m]^\top\in\{0,1\}^{m\times n}$
$\mathcal{T}$ | Set of selected subgraph patterns, $\mathcal{T}\subseteq\mathcal{S}$
$\mathcal{I}_{\mathcal{T}}\in\{0,1\}^{m\times m}$ | Diagonal matrix indicating which subgraph patterns are selected from $\mathcal{S}$ into $\mathcal{T}$
min_sup | Minimum frequency threshold; frequent subgraphs are contained by at least min_sup $\times|\mathcal{D}|$ graphs
k | Number of subgraph patterns to be selected
$\lambda^{(p)}$ | Weight of the p-th side view (default: 1)
$\kappa^{(p)}$ | Kernel function on the p-th side view (default: RBF kernel)
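To make the notation in Table 1 concrete, a minimal sketch that builds the binary pattern-by-graph matrix X and an RBF kernel matrix for one side view on toy data (all data and helper names are illustrative):

```python
import numpy as np

def rbf_kernel_matrix(Z, sigma=1.0):
    """kappa(z_i, z_j) = exp(-||z_i - z_j||^2 / (2 sigma^2)) for one side view;
    Z is (n, d): one d-dimensional side-view vector per subject."""
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def binary_feature_matrix(patterns, graphs, contains):
    """X = [x_ij] in {0,1}^(m x n): x_ij = 1 iff pattern g_i is contained in G_j.
    `contains(g, G)` is any subgraph test supplied by the caller."""
    return np.array([[1 if contains(g, G) else 0 for G in graphs]
                     for g in patterns], dtype=int)

# Toy example: 3 "graphs" and 2 candidate patterns, each given as an edge set.
graphs = [{(0, 1), (1, 2)}, {(0, 1)}, {(1, 2), (2, 3)}]
patterns = [{(0, 1)}, {(1, 2)}]
X = binary_feature_matrix(patterns, graphs, lambda g, G: g <= G)
K = rbf_kernel_matrix(np.array([[0.1, 0.2], [0.1, 0.25], [0.9, 0.8]]))
```

On this toy data, X has one row per pattern and one column per graph, and K is a symmetric similarity matrix with unit diagonal.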
The key issue of discriminative subgraph selection using multiple side views is how to find an optimal set of subgraph patterns for graph classification by exploring the auxiliary views. This is non-trivial due to the following problems: (1) How to leverage the valuable information embedded in multiple side views to evaluate the usefulness of a set of subgraph patterns? (2) How to efficiently search for the optimal subgraph patterns without exhaustive enumeration in the primary graph space? In the following sections, we will first introduce the optimization framework for selecting discriminative subgraph features using multiple side views. Next, we will describe our subgraph mining strategy using the evaluation criterion derived from the optimization solution.

Data analysis

A motivation for this work is that the side information could be strongly correlated with the health state of a subject. Before proceeding, we first introduce real-world data used in this work and investigate whether the available information from side views has any potential impact on neurological disorder identification.

Data collections

In this paper, we study real-world datasets collected from the Chicago Early HIV Infection Study at Northwestern University [27]. The clinical cohort includes 56 HIV-positive subjects (positive class) and 21 seronegative controls (negative class). Demographic information is presented in Table 2. The HIV and seronegative groups did not differ in age, gender, racial composition, or education level. More detailed information about data acquisition can be found in [5]. The datasets contain functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) scans for each subject, from which brain networks can be constructed.
Table 2

Demographic characteristics

 | HIV | Control | p
Age (mean years ± SD) | 33.3 ± 10.1 | 31.4 ± 8.9 | 0.45
Gender (% male) | 89 % | 76 % | 0.22
Race (% white) | 62 % | 76 % | 0.22
Education (% college) | 81 % | 90 % | 0.29
For the fMRI data, we used the DPARSF toolbox to extract a sequence of responses from each of the 116 anatomical volumes of interest (AVOIs), where each AVOI represents a different brain region. The correlations of brain activity among different brain regions are computed, and positive correlations are used as links among brain regions. In detail, functional images were realigned to the first volume, slice-timing corrected, normalized to the MNI template, and spatially smoothed with an 8-mm Gaussian kernel. The linear trend of the time series was removed and temporal band-pass filtering (0.01–0.08 Hz) was applied. Before the correlation analysis, several sources of spurious variance were also removed from the data through linear regression: (i) six parameters obtained by rigid-body correction of head motion, (ii) the whole-brain signal averaged over a fixed region in atlas space, (iii) signal from a ventricular region of interest, and (iv) signal from a region centered in the white matter. Each brain is represented as a graph with 90 nodes corresponding to 90 cerebral regions, excluding the 26 cerebellar regions.
For the DTI data, we used the FSL toolbox to extract the brain networks. The processing pipeline consists of the following steps: (i) correct the distortions induced by eddy currents in the gradient coils and apply affine registration to a reference volume to correct for head motion, (ii) delete non-brain tissue from the image of the whole head [15, 30], (iii) fit the diffusion tensor model at each voxel, (iv) build up distributions on diffusion parameters at each voxel, and (v) repeatedly sample from the distributions of voxel-wise principal diffusion directions. As with the fMRI data, the DTI images were parcellated into 90 regions (45 for each hemisphere) by propagating the Automated Anatomical Labeling (AAL) atlas to each image [34]. Min-max normalization was applied to the link weights.
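A minimal sketch of the final network-construction steps described above (correlation links for fMRI, min-max normalization of link weights), assuming regional time series have already been extracted; array shapes are illustrative:

```python
import numpy as np

def fmri_network(timeseries):
    """Functional brain network from regional time series (n_regions x n_timepoints):
    links are pairwise correlations of brain activity; only positive correlations
    are kept, and self-loops are removed."""
    A = np.corrcoef(timeseries)
    np.fill_diagonal(A, 0.0)
    A[A < 0] = 0.0
    return A

def minmax_normalize(W):
    """Min-max normalization of link weights, as applied to the DTI networks."""
    lo, hi = W.min(), W.max()
    return (W - lo) / (hi - lo) if hi > lo else np.zeros_like(W)
```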
In addition, for each subject, hundreds of clinical, imaging, immunologic, serologic, and cognitive measures were documented. Seven groups of measurements were investigated in our datasets: neuropsychological tests, flow cytometry, plasma luminex, FreeSurfer, overall brain microstructure, localized brain microstructure, and brain volumetry. Each group can be regarded as a distinct view that partially reflects subject status, and measurements from different medical examinations can provide complementary information. We preprocessed the features by min-max normalization before employing the RBF kernel on each view.

Verifying side information consistency

We study the potential impact of side information on selecting subgraph patterns via statistical hypothesis testing. Side information consistency suggests that the similarity of side-view features between instances with the same label should, with high probability, be larger than that between instances with different labels. We use hypothesis testing to validate whether this statement holds on the fMRI and DTI datasets. For each side view, we first construct two vectors $\mathbf{a}^{(p)}$ and $\mathbf{b}^{(p)}$ with an equal number of elements, sampled from the sets $\mathcal{A}^{(p)}=\{\kappa^{(p)}_{ij}\mid y_iy_j=1\}$ and $\mathcal{B}^{(p)}=\{\kappa^{(p)}_{ij}\mid y_iy_j=-1\}$, respectively. Then, we perform a two-sample one-tailed t test to validate the existence of side information consistency: we test whether there is sufficient evidence to support the hypothesis that the similarity scores in $\mathbf{a}^{(p)}$ are larger than those in $\mathbf{b}^{(p)}$. The null hypothesis is $H_0:\mu_a=\mu_b$, and the alternative hypothesis is $H_1:\mu_a>\mu_b$, where $\mu_a$ and $\mu_b$ represent the mean similarity scores of the two groups, respectively. The resulting p values are summarized in Table 3. All p values fall well below the 0.05 significance level, giving strong evidence to reject the null hypothesis on both datasets. In other words, we validate the existence of side information consistency in neurological disorder identification, thereby paving the way for our next study of leveraging multiple side views for discriminative subgraph selection.
Table 3

Hypothesis testing results (p values) to verify side information consistency

Side views | fMRI dataset | DTI dataset
Neuropsychological tests | 1.3220e−20 | 3.6015e−12
Flow cytometry | 5.9497e−57 | 5.0346e−75
Plasma luminex | 9.8102e−06 | 7.6090e−06
FreeSurfer | 2.9823e−06 | 1.5116e−03
Overall brain microstructure | 1.0403e−02 | 8.1027e−03
Localized brain microstructure | 3.1108e−04 | 5.7040e−04
Brain volumetry | 2.0024e−04 | 1.2660e−02
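The consistency test can be sketched as follows for a single side view, assuming a precomputed kernel matrix; the subsampling scheme and sample size are illustrative choices, not the paper's exact procedure:

```python
import numpy as np
from scipy import stats

def side_view_consistency_pvalue(K, y, n_samples=200, seed=0):
    """One-tailed two-sample t test for side information consistency on one view.
    K: (n, n) kernel similarity matrix for the view; y: labels in {-1, +1}.
    H0: same-label pairs are no more similar than different-label pairs;
    H1: same-label pairs are more similar. Both groups are subsampled to an
    equal number of elements, as in the procedure above."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(len(y), k=1)           # each unordered pair once
    sims = K[iu]
    same_mask = y[iu[0]] == y[iu[1]]
    same, diff = sims[same_mask], sims[~same_mask]
    m = min(n_samples, len(same), len(diff))
    a = rng.choice(same, m, replace=False)
    b = rng.choice(diff, m, replace=False)
    return stats.ttest_ind(a, b, equal_var=False, alternative='greater').pvalue
```

On synthetic data where same-label subjects are clustered in the side view, the returned p value is small, matching the pattern in Table 3.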

Multi-side-view discriminative subgraph selection

In this section, we address the first problem discussed in Sect. 2 by formulating the discriminative subgraph selection problem as a general optimization framework:

$$\min_{\mathcal{T}\subseteq\mathcal{S}}\; q(\mathcal{T})\quad \text{s.t.}\;\; |\mathcal{T}|\le k \qquad (4)$$

where $|\cdot|$ denotes the cardinality, k is the maximum number of features selected, and $q(\mathcal{T})$ is the evaluation criterion that estimates the score (the lower the better in this paper) of a subset of subgraph patterns $\mathcal{T}\subseteq\mathcal{S}$. $\mathcal{T}^*$ denotes the optimal set of subgraph patterns.

Exploring multiple side views: gSide

Following the observations in Sect. 3.2 that the side-view information is clearly correlated with the prespecified label information, we assume that the set of optimal subgraph patterns should have the following property: the similarity/distance between instances in the space of subgraph features should be consistent with that in the space of each side view. That is to say, if two instances are similar in the space of the p-th view (i.e., a high $\kappa^{(p)}_{ij}$ value), they should also be close to each other in the space of subgraph features (i.e., a small distance between subgraph feature vectors). On the other hand, if two instances are dissimilar in the space of the p-th view (i.e., a low $\kappa^{(p)}_{ij}$ value), they should be far away from each other in the space of subgraph features (i.e., a large distance between subgraph feature vectors). Therefore, our objective is to minimize the distance between the subgraph features of each pair of similar instances in each side view, and to maximize the distance between dissimilar instances. This idea is formulated as follows:

$$\min_{\mathcal{T}}\;\sum_{p=1}^{v}\lambda^{(p)}\sum_{i,j}\Vert \mathcal{I}_{\mathcal{T}}\mathbf{x}_i-\mathcal{I}_{\mathcal{T}}\mathbf{x}_j\Vert^2\,\Theta^{(p)}_{ij} \qquad (5)$$

where $\mathcal{I}_{\mathcal{T}}\in\{0,1\}^{m\times m}$ is a diagonal matrix indicating which subgraph features are selected into $\mathcal{T}$ from $\mathcal{S}$: $(\mathcal{I}_{\mathcal{T}})_{ii}=1$ iff $g_i\in\mathcal{T}$, otherwise $(\mathcal{I}_{\mathcal{T}})_{ii}=0$. The parameters $\lambda^{(p)}\ge 0$ are employed to control the contributions from each view, and

$$\Theta^{(p)}_{ij}=\kappa^{(p)}_{ij}-\bar{\kappa}^{(p)}$$

where $\bar{\kappa}^{(p)}$ is the mean value of $\kappa^{(p)}_{ij}$, i.e., $\bar{\kappa}^{(p)}=\frac{1}{n^2}\sum_{i,j}\kappa^{(p)}_{ij}$. This normalization balances the effect of similar instances and dissimilar instances. Intuitively, Eq. (5) will minimize the distance between subgraph features of similar instance-pairs with $\Theta^{(p)}_{ij}>0$, while maximizing the distance between dissimilar instance-pairs with $\Theta^{(p)}_{ij}<0$ in each view. In this way, the side-view information is effectively used to guide the process of discriminative subgraph selection. The fact verified in Sect. 3.2 that the side-view information is clearly correlated with the prespecified label information can be very useful, especially in the semi-supervised setting.
With prespecified information for labeled graphs, we further consider that the optimal set of subgraph patterns should satisfy the following constraints: labeled graphs in the same class should be close to each other, and labeled graphs in different classes should be far away from each other. Intuitively, these constraints tend to select the most discriminative subgraph patterns based on the graph labels. Such an idea has been well explored in the context of dimensionality reduction and feature selection [2, 32]. The constraints above can be mathematically formulated as minimizing the loss function:

$$\frac{1}{|\mathcal{M}|}\sum_{(i,j)\in\mathcal{M}}\Vert \mathcal{I}_{\mathcal{T}}\mathbf{x}_i-\mathcal{I}_{\mathcal{T}}\mathbf{x}_j\Vert^2-\frac{1}{|\mathcal{C}|}\sum_{(i,j)\in\mathcal{C}}\Vert \mathcal{I}_{\mathcal{T}}\mathbf{x}_i-\mathcal{I}_{\mathcal{T}}\mathbf{x}_j\Vert^2 \qquad (7)$$

where $\mathcal{M}=\{(i,j)\mid y_iy_j=1\}$ denotes the set of pairwise constraints between graphs with the same label, and $\mathcal{C}=\{(i,j)\mid y_iy_j=-1\}$ denotes the set of pairwise constraints between graphs with different labels. By defining the matrix $W$ as

$$W_{ij}=\sum_{p=1}^{v}\lambda^{(p)}\Theta^{(p)}_{ij}+\begin{cases}1/|\mathcal{M}| & (i,j)\in\mathcal{M}\\ -1/|\mathcal{C}| & (i,j)\in\mathcal{C}\\ 0 & \text{otherwise}\end{cases} \qquad (9)$$

we can combine and rewrite the functions in Eq. (5) and Eq. (7) as

$$\min_{\mathcal{T}}\;\mathrm{tr}(\mathcal{I}_{\mathcal{T}}XLX^\top\mathcal{I}_{\mathcal{T}})$$

where $\mathrm{tr}(\cdot)$ is the trace of a matrix, D is a diagonal matrix whose entries are the column sums of $W$, i.e., $D_{ii}=\sum_j W_{ij}$, and $L=D-W$ is a Laplacian matrix.
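The combined matrix W and the trace-form objective can be sketched as follows; the exact weighting of the label-pair terms is an illustrative assumption, and all names are hypothetical:

```python
import numpy as np

def combined_W(kernels, y, lambdas):
    """W sums the centered side-view similarities (Theta^(p) = kappa^(p) - mean),
    weighted by lambda^(p), plus label-pair terms: +1/|M| for same-label pairs
    and -1/|C| for different-label pairs (an illustrative weighting)."""
    n = len(y)
    W = np.zeros((n, n))
    for lam, K in zip(lambdas, kernels):
        W += lam * (K - K.mean())
    same = np.equal.outer(y, y)
    off = ~np.eye(n, dtype=bool)            # exclude self-pairs
    M, C = (same & off).sum(), (~same).sum()
    W += np.where(same, 1.0 / M, -1.0 / C) * off
    return W

def trace_objective(X, W, selected):
    """tr(I_T X L X^T I_T) with L = D - W and D_ii = sum_j W_ij; `selected`
    indexes the rows of X (the subgraph patterns kept in T)."""
    L = np.diag(W.sum(axis=1)) - W
    Xs = X[selected, :]
    return float(np.trace(Xs @ L @ Xs.T))
```

For a symmetric W, the trace form equals half the pairwise-distance sum weighted by W, which is the identity used to combine Eq. (5) and Eq. (7).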

Definition 4

(gSide) Let $\mathcal{D}=\{G_1,\ldots,G_n\}$ denote a graph dataset with multiple side views. Suppose $W$ is the matrix defined in Eq. (9), and $L$ is the Laplacian matrix defined as $L=D-W$, where $D$ is a diagonal matrix with $D_{ii}=\sum_j W_{ij}$. We define an evaluation criterion q, called gSide, for a subgraph pattern $g$ as

$$q(g)=\mathbf{f}_g^\top L\,\mathbf{f}_g$$

where $\mathbf{f}_g\in\{0,1\}^n$ is the indicator vector for subgraph pattern $g$: $f_g(i)=1$ iff $g\subseteq G_i$, otherwise $f_g(i)=0$. Since the Laplacian matrix L is positive semi-definite, $q(g)\ge 0$ for any subgraph pattern $g$. Based on gSide as defined above, the optimization problem in Eq. (4) can be written as

$$\min_{\mathcal{T}\subseteq\mathcal{S}}\;\sum_{g\in\mathcal{T}}q(g)\quad \text{s.t.}\;\; |\mathcal{T}|\le k \qquad (12)$$

The optimal solution to the problem in Eq. (12) can be found by using gSide to conduct feature selection on the set of subgraph patterns in $\mathcal{S}$. Suppose the gSide values of all subgraph patterns, in sorted order, are $q(g_{(1)})\le\cdots\le q(g_{(m)})$; then the optimal solution to the optimization problem in Eq. (12) is $\mathcal{T}^*=\{g_{(1)},\ldots,g_{(k)}\}$.
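A direct implementation of the gSide criterion and the sorted top-k selection can be sketched as follows, assuming the Laplacian matrix L has already been computed; names are illustrative:

```python
import numpy as np

def gside(f, L):
    """gSide of one subgraph pattern: q(g) = f^T L f, with f in {0,1}^n
    indicating which graphs contain g."""
    return float(f @ L @ f)

def select_top_k(F, L, k):
    """Rank candidate patterns by gSide (lower is better) and keep the k
    smallest; F stacks one indicator vector per pattern, row-wise."""
    scores = [gside(f, L) for f in F]
    return sorted(range(len(F)), key=lambda i: scores[i])[:k]
```

With a classical graph Laplacian (nonnegative W), every gSide value is nonnegative, matching the positive semi-definiteness argument above.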

Searching with a lower bound: gMSV

Now we address the second problem discussed in Sect. 2, and propose an efficient method to find the optimal set of subgraph patterns from a graph dataset with multiple side views. A straightforward solution is exhaustive enumeration: we could first enumerate all subgraph patterns from the graph dataset, and then calculate the gSide values of all subgraph patterns. In the context of graph data, however, it is usually not feasible to enumerate the full set of subgraph patterns before feature selection, since the number of subgraph patterns grows exponentially with the size of graphs. Inspired by recent advances in graph classification [7, 20, 21, 37], which nest the evaluation criterion into the subgraph mining process and develop constraints to prune the search space, we adopt a similar approach by deriving a different constraint based upon gSide. By adopting the gSpan algorithm proposed by Yan and Han [38], we can enumerate all the subgraph patterns of a graph dataset in a canonical search space. In order to prune the subgraph search space, we now derive a lower bound of the gSide value:

Theorem 1

Given any two subgraph patterns $g,g'\in\mathcal{S}$ such that $g'$ is a supergraph of $g$, i.e., $g\subseteq g'$, the gSide value of $g'$ is bounded by $\hat{q}(g)$, i.e., $q(g')\ge\hat{q}(g)$. $\hat{q}(g)$ is defined as

$$\hat{q}(g)=\mathbf{f}_g^\top\hat{L}\,\mathbf{f}_g$$

where the matrix $\hat{L}$ is defined as $\hat{L}_{ij}=\min(0,L_{ij})$.

Proof

According to Definition 4,

$$q(g')=\mathbf{f}_{g'}^\top L\,\mathbf{f}_{g'}=\sum_{i,j\in I(g')}L_{ij}$$

where $I(g')=\{i\mid g'\subseteq G_i\}$. Since $g\subseteq g'$, by the anti-monotonic property we have $I(g')\subseteq I(g)$. Also, since $\hat{L}_{ij}=\min(0,L_{ij})$, we have $\hat{L}_{ij}\le 0$ and $\hat{L}_{ij}\le L_{ij}$. Therefore,

$$q(g')=\sum_{i,j\in I(g')}L_{ij}\ge\sum_{i,j\in I(g')}\hat{L}_{ij}\ge\sum_{i,j\in I(g)}\hat{L}_{ij}=\hat{q}(g)$$

Thus, for any $g\subseteq g'$, $q(g')\ge\hat{q}(g)$. □
We can now nest this lower bound into the subgraph mining steps of gSpan to efficiently prune the DFS code tree. During the depth-first search through the DFS code tree, we always maintain the current top-k best subgraph patterns $\mathcal{T}$ according to gSide, together with the temporarily suboptimal gSide value $\theta$, i.e., the largest gSide value among those in $\mathcal{T}$. If $\hat{q}(g)>\theta$, the gSide value of any supergraph $g'$ of $g$ can be no less than $\hat{q}(g)$ according to Theorem 1, i.e., $q(g')\ge\hat{q}(g)>\theta$; thus, we can safely prune the subtree rooted at $g$ in the search space. If $\hat{q}(g)\le\theta$, we cannot prune this subtree, since there might exist a supergraph $g'$ of $g$ with $q(g')\le\theta$. As long as a subgraph can improve on the gSide value of any subgraph in $\mathcal{T}$, it is added into $\mathcal{T}$ and the least promising subgraph is removed from $\mathcal{T}$. We then recursively search for the next subgraph in the DFS code tree. The branch-and-bound algorithm gMSV is summarized in Algorithm 1.
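The pruning loop can be sketched as follows on an explicit pattern tree; `children` and `support` stand in for gSpan's DFS-code extension and support computation, and all names are illustrative assumptions rather than the paper's implementation:

```python
import heapq
import numpy as np

def branch_and_bound(roots, children, support, L, k):
    """Sketch of gMSV-style pruning: depth-first search over a pattern tree,
    keeping the current top-k patterns by gSide and skipping any subtree whose
    lower bound f_g^T L_hat f_g (L_hat = min(L, 0) elementwise) already exceeds
    theta, the worst gSide value currently kept."""
    L_hat = np.minimum(L, 0.0)
    best = []                                  # max-heap of kept patterns: (-q, g)
    stack = list(roots)
    while stack:
        g = stack.pop()
        f = support(g)                         # binary containment vector f_g
        q = float(f @ L @ f)                   # gSide value of g
        if len(best) < k:
            heapq.heappush(best, (-q, g))
        elif q < -best[0][0]:
            heapq.heapreplace(best, (-q, g))   # evict the worst kept pattern
        theta = -best[0][0]                    # current worst kept gSide value
        bound = float(f @ L_hat @ f)           # lower bound for all supergraphs
        if len(best) < k or bound <= theta:
            stack.extend(children(g))          # cannot prune: explore subtree
        # else: prune the whole subtree rooted at g
    return [g for _, g in sorted(best, key=lambda t: -t[0])]
```

Because the bound is valid whenever pattern supports shrink anti-monotonically along the tree, the pruned search returns the same top-k scores as exhaustive enumeration.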

Experiments

In order to evaluate the performance of the proposed solution to the problem of feature selection for graph classification using multiple side views, we tested our algorithm on brain network datasets derived from neuroimaging, as introduced in Sect. 3.1.

Experimental setup

To the best of our knowledge, this paper is the first work to leverage side information in the feature selection problem for graph classification. In order to evaluate the performance of the proposed method, we compare it with other methods using different statistical measures and discriminative score functions. For all the compared methods, gSpan [38] is used as the underlying search strategy. Note that although alternative algorithms are available [17, 18, 37], search-step efficiency is not the focus of this paper. The compared methods are summarized as follows:

gMSV: the proposed discriminative subgraph selection method using multiple side views. Following the observation in Sect. 3.2 that side information consistency is significant in all the side views, the view-weight parameters in gMSV are simply set to equal values for experimental purposes. In cases where some side views are suspected to be redundant, an alternating optimization strategy can be adopted to iteratively select discriminative subgraph patterns and update the view weights.

gSSC: a semi-supervised feature selection method for graph classification based upon both labeled and unlabeled graphs. The parameters in gSSC are set as in [21] unless otherwise specified.

Discriminative subgraphs (Conf, Ratio, Gtest, HSIC): supervised feature selection methods for graph classification based upon confidence [12], frequency ratio [16-18], G-test score [37], and HSIC [20], respectively. The top-k discriminative subgraph features are selected in terms of each discrimination criterion.

Frequent subgraphs (Freq): the evaluation criterion for subgraph feature selection is frequency; the top-k most frequent subgraph features are selected.

For each baseline, we append the side-view data to the subgraph-based graph representations before feeding the concatenated feature vectors to the classifier. An additional baseline that uses only the side-view data is denoted MSV.
For a fair comparison, we used LibSVM [9] with a linear kernel as the base classifier for all the compared methods. In the experiments, 3-fold cross-validation was performed on balanced datasets. To obtain binary links, we performed simple thresholding over the weights of the links; the threshold was 0.9 for the fMRI dataset and 0.3 for the DTI dataset.
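The preprocessing step above can be sketched as follows; the function names are illustrative rather than taken from the authors' code, and the second helper mirrors the baseline construction of appending side-view measurements to the subgraph-based representation:

```python
def binarize_network(weights, threshold):
    """Threshold a weighted connectivity matrix into binary links:
    keep a link wherever the weight exceeds the threshold
    (0.9 for fMRI, 0.3 for DTI in the experiments above);
    self-loops are dropped."""
    n = len(weights)
    return [[1 if i != j and weights[i][j] > threshold else 0
             for j in range(n)] for i in range(n)]

def concatenate_features(subgraph_indicator, side_views):
    """Append side-view measurements to a subgraph-based binary
    feature vector before feeding it to the linear classifier."""
    out = list(subgraph_indicator)
    for view in side_views:
        out.extend(view)
    return out
```

The concatenated vectors would then be passed to the linear-kernel classifier under 3-fold cross-validation.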

Performance on graph classification

The experimental results on the fMRI and DTI datasets are shown in Figs. 3 and 4, respectively. The average performance of each method with different numbers of features is reported. Classification accuracy is used as the evaluation metric.
Fig. 3

Classification performance on the fMRI dataset with different numbers of features.

Fig. 4

Classification performance on the DTI dataset with different numbers of features

In Fig. 3, our method gMSV achieves classification accuracy as high as 97.16% on the fMRI dataset, significantly better than the union of other subgraph-based features and side-view features. The black solid line denotes MSV, the simplest baseline, which uses only the side-view data. Conf and Ratio perform slightly better than MSV. Freq adopts an unsupervised process for selecting subgraph patterns, resulting in performance comparable to MSV and indicating that its selected subgraphs carry no additional information. The other methods, which use different discrimination scores without leveraging the guidance of side views, perform even worse than MSV in graph classification, because they evaluate the usefulness of subgraph patterns solely on the limited label information from a small sample of brain networks. The selected subgraph patterns can then be redundant or irrelevant, thereby compromising the effect of the side-view data. Importantly, gMSV outperforms the semi-supervised approach gSSC, which explores unlabeled graphs based on the separability property. This indicates that, rather than simply requiring unlabeled graphs to be separated from each other, it is better to regularize such separability/closeness to be consistent with the available side views. Similar observations hold in Fig. 4, where gMSV outperforms the other baselines, achieving accuracy as high as 97.33% on the DTI dataset. We notice that only gMSV does better than MSV by adding complementary subgraph-based features to the side-view features. Moreover, the performance of the other schemes is not consistent across the two datasets: Conf and Ratio, the second- and third-best schemes on fMRI, do not perform as well on DTI.
These results support our premise that exploring a plurality of side views can boost the performance of graph classification, and that the gSide evaluation criterion in gMSV finds subgraph patterns more informative for graph classification than those selected by frequency or other discrimination scores.

Time and space complexity

Next, we evaluate the effectiveness of pruning the subgraph search space with the lower bound of gSide in gMSV. In this section, we compare the runtime performance of two implementations of gMSV: the pruning gMSV uses the lower bound of gSide to prune the search space of subgraph enumeration, as shown in Algorithm 1; the unpruning gMSV omits the pruning step in the subgraph mining process, i.e., line 13 in Algorithm 1 is deleted. We tested both approaches and recorded the average CPU time used and the average number of subgraph patterns explored during subgraph mining and feature selection. The comparisons with respect to time complexity and space complexity are shown in Figs. 5 and 6, respectively. On both datasets, the unpruning gMSV has to explore an exponentially larger subgraph search space as the min_sup value decreases in the subgraph mining process. When min_sup is too low, the subgraph enumeration step of the unpruning gMSV can run out of memory. The pruning gMSV, however, remains effective and efficient at very low min_sup values: because the lower bound of gSide reduces the subgraph search space, its running time and memory requirements grow far more slowly than those of the unpruning gMSV.
Fig. 5

Average CPU time for pruning versus unpruning with varying min_sup

Fig. 6

Average number of subgraph patterns explored in the mining procedure for pruning versus unpruning with varying min_sup
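The effect of bound-based pruning can be reproduced in miniature with frequent-itemset enumeration, a stand-in for subgraph enumeration, since support is anti-monotonic in both settings (every superpattern of an infrequent pattern is infrequent). This toy sketch, with illustrative names, only counts explored search-tree nodes:

```python
def count_explored(transactions, min_sup, prune):
    """Count search-tree nodes explored while enumerating itemsets with
    support >= min_sup. With prune=True, a subtree is cut as soon as its
    root falls below min_sup, exploiting anti-monotonicity of support.
    Itemsets stand in for the exponentially larger subgraph space."""
    items = sorted({i for t in transactions for i in t})

    def support(pattern):
        return sum(1 for t in transactions if set(pattern) <= t)

    explored = 0

    def dfs(pattern, rest):
        nonlocal explored
        explored += 1
        if prune and support(pattern) < min_sup:
            return  # branch-and-bound cut: no superset can be frequent
        for k in range(len(rest)):
            dfs(pattern + (rest[k],), rest[k + 1:])

    for k in range(len(items)):
        dfs((items[k],), items[k + 1:])
    return explored
```

On a small dataset where only a few itemsets are frequent, the unpruned search visits every nonempty subset while the pruned search visits only a small fraction, mirroring the gap between the two curves in Figs. 5 and 6.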

The focus of this paper is to investigate side information consistency and to explore multiple side views in discriminative subgraph selection. As potential alternatives to the gSpan-based branch-and-bound algorithm, more sophisticated search strategies could be employed with our proposed multi-side-view evaluation criterion, gSide; for example, gSide could replace the G-test score in LEAP [37] or the log ratio in COM [17] and GAIA [18]. However, as shown in Figs. 5 and 6, our proposed solution with pruning, gMSV, remains tractable even at very low min_sup values; considering the limited number of subjects in medical experiments, as introduced in Sect. 3.1, gMSV is efficient enough for neurological disorder identification, where subgraph patterns supported by too few graphs are not desired.

Effects of side views

In this section, we investigate the contributions of the different side views. The well-known precision, recall, and F1 measures are used as metrics: precision is the fraction of positive predictions that are positive subjects; recall is the fraction of positive subjects that are predicted as positive; F1 is the harmonic mean of precision and recall. Table 4 shows the performance of gMSV on the fMRI dataset when only one side view is considered at a time. In general, the best performance is achieved by simultaneously exploring all side views. Specifically, we observe that the side view flow cytometry can independently provide the most informative side information for selecting discriminative subgraph patterns on the fMRI brain networks. This is plausible, as it implies that HIV brain alterations in terms of functional connectivity are most likely to be expressed in this side view (i.e., in measures of immune function, the HIV hallmark). It is consistent with our finding in Sect. 3.2 that the side view flow cytometry is the most significantly correlated with the prespecified label information. Similar results on the DTI dataset are shown in Table 5.
Table 4

Average classification performances of gMSV on the fMRI dataset with different single-side views

Side views | Precision | Recall | F1
Neuropsychological tests | 0.851 | 0.679 | 0.734
Flow cytometry | 0.919 | 0.872 | 0.892
Plasma luminex | 0.769 | 0.682 | 0.710
Freesurfer | 0.851 | 0.737 | 0.785
Overall brain microstructure | 0.824 | 0.500 | 0.618
Localized brain microstructure | 0.686 | 0.605 | 0.637
Brain volumetry | 0.739 | 0.737 | 0.731
All side views | 1.000 | 0.949 | 0.973
Table 5

Average classification performances of gMSV on the DTI dataset with different single-side views

Side views | Precision | Recall | F1
Neuropsychological tests | 0.630 | 0.705 | 0.662
Flow cytometry | 0.847 | 0.808 | 0.822
Plasma luminex | 0.801 | 0.705 | 0.744
Freesurfer | 0.664 | 0.632 | 0.644
Overall brain microstructure | 0.626 | 0.679 | 0.647
Localized brain microstructure | 0.717 | 0.775 | 0.741
Brain volumetry | 0.616 | 0.679 | 0.644
All side views | 1.000 | 0.951 | 0.974
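For reference, the three metrics reduce to simple counts over binary predictions; a minimal sketch, with labels coded 1 for HIV-positive subjects:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

As a consistency check, the all-side-views row of Table 5 (precision 1.000, recall 0.951) gives F1 = 2·(1.000·0.951)/(1.000 + 0.951) ≈ 0.975, agreeing with the reported 0.974 up to rounding of the tabulated precision and recall.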

Feature evaluation

Figures 7 and 8 display the most discriminative subgraph patterns selected by gMSV from the fMRI dataset and the DTI dataset, respectively. These findings for functional and structural networks are consistent with other in vivo studies [8, 35] and with the pattern of brain injury at autopsy [11, 23] in HIV infection. With the approach presented in this analysis, alterations in the brain can be detected in the initial stages of injury and in the context of clinically meaningful information, such as host immune status and immune response (flow cytometry), immune mediators (plasma luminex), and cognitive function (neuropsychological tests). This approach capitalizes on the valuable information inherent in complex clinical datasets. Strategies for combining various sources of clinical information have promising potential for informing an understanding of disease mechanisms, for identifying new therapeutic targets, and for discovering biomarkers to assess risk and evaluate response to treatment.
Fig. 7

Discriminative subgraph patterns that are associated with HIV, selected from the fMRI dataset

Fig. 8

Discriminative subgraph patterns that are associated with HIV, selected from the DTI dataset


Related work

To the best of our knowledge, this paper is the first work exploring side information in the task of subgraph feature selection for graph classification. Our work is related to subgraph mining techniques and to multi-view feature selection; we briefly discuss both.

Mining subgraph patterns from graph data has been studied extensively, and a variety of filtering criteria have been proposed. A typical evaluation criterion is frequency, which searches for subgraph features that appear in a graph dataset with frequency at least a prespecified min_sup value. Most frequent subgraph mining approaches are unsupervised. For example, Yan and Han developed a depth-first search algorithm, gSpan [38], which builds a lexicographic order among graphs and maps each graph to a unique minimum DFS code as its canonical label; based on this order, gSpan mines frequent connected subgraphs efficiently with a depth-first strategy. Many other approaches for frequent subgraph mining have been proposed, e.g., AGM [14], FSG [22], MoFa [3], FFSM [13], and Gaston [26]. Moreover, the problem of supervised subgraph mining has been studied in recent work examining how to search efficiently for discriminative subgraph patterns for graph classification. Yan et al. introduced the two concepts of structural leap search and frequency-descending mining, and proposed LEAP [37], one of the first works in discriminative subgraph mining. Thoma et al. proposed CORK, which yields a near-optimal solution using greedy feature selection [33]. Ranu and Singh proposed a scalable approach, GraphSig, capable of mining discriminative subgraphs with a low frequency threshold [28]. Jin et al. proposed COM, which takes the co-occurrences of subgraph patterns into account, thereby facilitating the mining process [17], and further proposed an evolutionary computation method, GAIA, to mine discriminative subgraph patterns with a randomized search strategy [18]. Our proposed criterion gSide can be combined with these efficient search algorithms to speed up discriminative subgraph mining, by substituting for the G-test score in LEAP [37] or the log ratio in COM [17] and GAIA [18]. Zhu et al. designed a diversified discrimination score based on the log ratio, which reduces the overlap between selected features by considering embedding overlaps in the graphs [39]; a similar idea can be integrated into gSide to improve feature diversity.

There has also been recent work combining multi-view learning and feature selection. Tang et al. studied unsupervised multi-view feature selection by constraining similar data instances in each view to have similar pseudo-class labels [31]. Cao et al. explored the tensor product to bring different views together in a joint space and presented a dual method of tensor-based multi-view feature selection [4]. Aggarwal et al. considered side information for text mining [1]. However, these methods require a set of candidate features as input and are therefore not directly applicable to graph data. Wu et al. considered the scenario where one object is described by multiple graphs generated from different feature views and proposed an evaluation criterion to estimate the discriminative power and the redundancy of subgraph features across all views [36]. In contrast, in this paper we assume that one object has other data representations, the side views, in addition to the primary graph view. In the context of graph data, the subgraph features are embedded within complex graph structures, and it is usually not feasible to enumerate the full set of features before feature selection, since the number of subgraph features grows exponentially with the size of the graphs. In this paper, we explore the side information from multiple views to facilitate discriminative subgraph mining: our feature selection for graph data is integrated into the subgraph mining process, which efficiently prunes the search space and thereby avoids exhaustive enumeration of all subgraph features.

Conclusion and future work

We presented an approach for selecting discriminative subgraph features using multiple side views. By leveraging the information available in multiple side views together with the graph data, the proposed method gMSV achieves very good performance on the problem of feature selection for graph classification, and the selected subgraph patterns are relevant to disease diagnosis. This approach has broad applicability for yielding new insights into brain network alterations in neurological disorders and for early diagnosis. A potential extension of our method is to combine fMRI and DTI brain networks to find subgraph patterns that are discriminative with respect to both functional and structural connections. Other extensions include better exploiting weighted links in the multi-side-view setting. It would also be interesting to apply our model to other domains where graph data come with aligned side information. For example, in bioinformatics, chemical compounds can be represented by graphs based on their inherent molecular structures and are associated with side information such as drug-repositioning properties, side effects, and ontology annotations. Leveraging all this information to find discriminative subgraph patterns could be transformative for drug discovery.