How well do network models predict observations? On the importance of predictability in network models.

Jonas M. B. Haslbeck, Lourens J. Waldorp

Abstract

Network models are an increasingly popular way to abstract complex psychological phenomena. While studying the structure of network models has led to many important insights, little attention has been paid to how well they predict observations. This is despite the fact that predictability is crucial for judging the practical relevance of edges: for instance in clinical practice, predictability of a symptom indicates whether an intervention on that symptom through the symptom network is promising. We close this methodological gap by introducing nodewise predictability, which quantifies how well a given node can be predicted by all other nodes it is connected to in the network. In addition, we provide fully reproducible code examples of how to compute and visualize nodewise predictability both for cross-sectional and time series data.

Keywords:  Clinical relevance; Network analysis; Network models; Predictability

Year:  2018        PMID: 28718088      PMCID: PMC5880858          DOI: 10.3758/s13428-017-0910-x

Source DB:  PubMed          Journal:  Behav Res Methods        ISSN: 1554-351X


Introduction

Network models graphically describe interactions between a potentially large number of variables: each variable is represented as a dot (node) and interactions are represented by lines (edges) connecting the nodes (for an illustration, see Fig. 1a). These models have been a popular way to abstract complex systems in a large variety of disciplines such as statistical mechanics (Albert & Barabási, 2002), biology (Friedman, Linial, Nachman, & Pe’er, 2000), and neuroscience (Huang et al., 2010), and have recently also been applied in psychology (Costantini et al., 2015) and psychiatry (Borsboom & Cramer, 2013).
Fig. 1

a Example network with six nodes. An edge between two nodes indicates a pairwise interaction between those two nodes. b Illustration of predicting node A by all its neighboring nodes (C and E)

Particularly in psychology, network models are attractive because many psychological phenomena are considered to depend on a large number of variables and the interactions between them. In this situation, the graphical representation allows an intuitive interpretation even if the number of variables is large. In addition, network models open up the possibility to study the network structure: for instance, one can use network summary measures like density or centrality to describe the global structure of the network (Newman, 2010). Such measures could allow inferences about the behavior of the whole network that would not be possible when looking at all edge parameters separately. Another possibility is to run generative models on the network, e.g., diffusion models of diseases to explain how symptoms of psychological disorders activate each other (Shulgin, Stone & Agur, 1998). Currently, most applications are in the field of clinical psychology (e.g., Fried et al., 2015; Fried, Epskamp, Nesse, Tuerlinckx, & Borsboom, 2016; Beard et al., 2016; McNally et al., 2015; Boschloo et al., 2015), but network models are also applied in health psychology (Kossakowski, Epskamp, et al., 2016) and personality psychology (Cramer et al., 2012; Costantini et al., 2015). While initially they were used to model cross-sectional data, there is increasing interest in analyzing data obtained via the experience sampling method (ESM), which consists of repeated measurements of the same person (e.g., Bringmann et al., 2013; Pe et al., 2015). The focus in these papers is on the global network structure and the connectedness of specific nodes in the network, which provide a new perspective on many psychological phenomena.
For instance, Cramer and colleagues (Cramer et al., 2010) suggested an alternative view of comorbidity by analyzing how symptoms of different psychological disorders are connected to each other.

The key idea of this paper is to analyze the predictability of nodes in the network in addition to the network structure. By predictability of node A we mean how well node A can be predicted by all its neighboring nodes in the network (see Fig. 1b). The predictability of nodes is important for several reasons:

1. The edges connected to node A should be interpreted taking into account how much of the variance of A is explained by those edges. For instance, edges will be interpreted differently depending on whether they explain 0.5% or 50% of the variance of A. This issue is particularly important for networks estimated on a large number of observations, where small edge weights can be detected that might be practically meaningless.

2. In many areas of psychology, the goal is to design effective interventions. Using the predictability of node A, one can estimate to what extent A can be influenced by intervening on the nodes connected to it.

3. Predictability across nodes tells us whether a (part of a) network is largely determined by itself through strong mutual interactions between nodes (high predictability) or whether it is mostly determined by factors that are not included in the network (low predictability).

The problem addressed here is similar to the problem of modeling only the covariance matrix in structural equation modeling (SEM) (Byrne, 2013): one might find a model that perfectly fits the covariance matrix, but if the variance of the variables is much larger than their covariance, the model might be meaningless in practice. Predictability in general cannot be inferred from the network structure but has to be computed from the network model and the data.
Unfortunately, there is currently no easy-to-use tool available for researchers to compute and present predictability in network models. In the present paper, we close this methodological gap by making the following contributions: we present a method to compute easy-to-interpret nodewise predictability measures for state-of-the-art network models (“Methods”), and we provide a step-by-step description of how to use the R packages mgm and qgraph to compute and visualize nodewise predictability, both for cross-sectional (“Predictability in cross-sectional networks”) and time-series networks (“Predictability in temporal networks”). The provided code is fully reproducible, which means that the reader can run the code and reproduce all figures while reading. The data in our applications come from two published studies and are downloaded automatically by the provided code.

Methods

In order to determine the predictability of a given node A, we need to know which nodes are connected to A in the network model. Therefore, the first step is to estimate a network model, which we describe in “Network models”. In a second step, we use the network model to predict the given node A from the nodes that are connected to it (its neighbors); in “Making predictions”, we describe in detail how to compute these predictions. Finally, we quantify how close these predictions are to the actual values of A: the closer the predictions are to the actual values, the higher the predictability of A. A description of predictability measures for both continuous and categorical variables is given in “Quantifying predictability”. In “Predictability and model parameters” we discuss the relationship between predictability and the parameters of the network model. Finally, in “Application to datasets” we describe the data used in the application examples in “Predictability in cross-sectional networks” and “Predictability in temporal networks”.

Network models

We model cross-sectional data using pairwise Mixed Graphical Models (MGMs) (Yang, Baker, Ravikumar, Allen, & Liu, 2014; Haslbeck & Waldorp, 2015b), which generalize well-known exponential family distributions such as the multivariate Gaussian distribution or the Ising model (Wainwright & Jordan, 2008). This is the model used in all papers mentioned in the introduction. MGMs are estimated via ℓ1-regularized (LASSO) neighborhood regression as implemented in the R package mgm by the authors (Haslbeck & Waldorp, 2015a). In this approach, one estimates the neighborhood of each node and combines all neighborhoods to obtain the complete graph (network) (Meinshausen & Bühlmann, 2006). The neighborhood of a node is the set of nodes connected to that node; for example, in Fig. 1a, the neighborhood of node A consists of the nodes E and C. The ℓ1 regularization sets spurious edge parameters to exactly zero, which makes the network model easier to interpret. The parameter that controls the strength of the regularization is selected via 10-fold cross-validation.

For time-series data, we use the Vector Autoregressive (VAR) model, which is a popular model for multivariate time series in many disciplines (see e.g., Hamilton, 1994; Pfaff, 2008). The VAR model differs from the MGM in that associations are defined between time-lagged variables. Specifically, in its simplest form with a time lag of order one, all variables at time t − 1 are regressed on each variable X_i at time t, where i indexes the variables. Note that this also includes the variable X_i itself at the earlier time point: that is, one predicts X_i at time t from X_i itself and all other variables at time t − 1. For the analyses in this paper, we use the implementation of mixed VAR models in the R package mgm (Haslbeck & Waldorp, 2015a).
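The neighborhood-regression idea can be sketched in a few lines of R. This is a conceptual illustration of the Meinshausen–Bühlmann approach using the glmnet package for the ℓ1-regularized fits, not the actual routine implemented inside mgm; the toy data and variable names are illustrative:

```r
# Conceptual sketch: nodewise (neighborhood) regression for a Gaussian
# graphical model, with lambda selected by 10-fold cross-validation.
library(glmnet)

set.seed(1)
n <- 500; p <- 6
X <- matrix(rnorm(n * p), n, p)   # toy data: six nodes, here all independent

neighborhoods <- vector("list", p)
for (j in 1:p) {
  # Regress node j on all other nodes with an l1 (LASSO) penalty
  cvfit <- cv.glmnet(X[, -j], X[, j], alpha = 1, nfolds = 10)
  beta  <- as.numeric(coef(cvfit, s = "lambda.min"))[-1]  # drop intercept
  # Nonzero coefficients define the estimated neighborhood of node j
  neighborhoods[[j]] <- (1:p)[-j][beta != 0]
}
# Combining the p neighborhoods (e.g., via an AND- or OR-rule on the
# pairwise estimates) yields the complete graph.
```

The estimated neighborhoods are exactly what is needed later for prediction: the neighbors of a node are the only nodes with nonzero weight in its regression equation.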

Making predictions

We are interested in how well a node can be predicted by all adjacent nodes in the network. This means that we would like to compute the mean of the conditional distribution of the node at hand given all its neighbors. We illustrate this by showing how to compute predictions for node A in Fig. 1b, for (i) the case of A being a continuous (Gaussian) variable and (ii) the case of A being binary.

We begin with (i): the conditional distribution of A given its neighbors C and E is Gaussian with mean

μ = β_0 + β_C C + β_E E,

a linear combination of the two neighbors C and E. This conditional distribution is obtained from the multivariate exponential family distribution of the MGM; for details, see Yang et al. (2014) and Haslbeck and Waldorp (2015b). This prediction problem corresponds to the familiar linear regression problem with Gaussian noise. Now, how can one make predictions? Let’s say the intercept is β_0 = 0.25 and β_C = 0.1, β_E = −0.5. Then, if the i-th case in the sample is C_i = 2, E_i = 1, we predict for the i-th value of A: Â_i = 0.25 + 0.1 × 2 − 0.5 × 1 = −0.05. A measure of predictability should evaluate how close this prediction is to the actual observation A_i.

In example (ii), where A is categorical, we compute a predicted probability for each category using a multinomial distribution

P(A = k | C, E) = exp(μ_k) / Σ_{k'=1}^{K} exp(μ_{k'}),

where k indicates the category, K is the number of categories, and μ_k = β_{0,k} + β_{C,k} C + β_{E,k} E. Now let’s assume A is binary (K = 2) with β_{0,1} = 0, β_{C,1} = 0.5, β_{E,1} = 1 and β_{0,2} = 0, β_{C,2} = −0.5, β_{E,2} = −1, and that for the i-th case we have C_i = 1 and E_i = 1. Filling in the numbers, we get P(A = 1 | C, E) ≈ 0.95 and P(A = 2 | C, E) ≈ 0.05, and we predict category k = 1 for the i-th value of A because it has the largest predicted probability. Of course, all probabilities have to add up to 1, so we have 1 − P(A = 1 | C, E) = P(A = 2 | C, E).
This direct approach of modeling the probabilities of categories is possible due to the regularization used in estimation (see e.g., Hastie, Tibshirani, & Wainwright, 2015); otherwise this model would not be identified. Note that predicting A by all its neighbors is the same as predicting A by all nodes in the network: all nodes that are not in the neighborhood of A have a zero weight associated with them in the regression equation for A in (i) or (ii) and can hence be dropped. In the case of other exponential family distributions, such as Poisson or exponential, one similarly uses the univariate conditional distribution to make predictions (Yang et al., 2014). Importantly, the joint distribution of the MGM can be represented as a factorization of p conditional distributions, and hence our method to compute predictions is based on a proper representation of the joint distribution. Indeed, this factorization is used when estimating the MGM in the neighborhood-regression approach (see “Network models”).
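The two worked examples above can be checked in a few lines of base R (the coefficient values are taken from the text):

```r
# (i) Continuous node A with neighbors C and E: conditional-mean prediction
b0 <- 0.25; bC <- 0.1; bE <- -0.5
C <- 2; E <- 1
A_hat <- b0 + bC * C + bE * E   # 0.25 + 0.2 - 0.5 = -0.05

# (ii) Binary node A: multinomial (softmax) probabilities over K = 2 categories
C <- 1; E <- 1
mu <- c(0 + 0.5 * C + 1 * E,    # mu_1 = beta_{0,1} + beta_{C,1} C + beta_{E,1} E
        0 - 0.5 * C - 1 * E)    # mu_2
p  <- exp(mu) / sum(exp(mu))    # p[1] ~ 0.95, p[2] ~ 0.05
pred_category <- which.max(p)   # predict the most probable category: 1
```

Note that the probabilities sum to 1 by construction, matching the remark above.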

Quantifying predictability

After computing predictions, we would like to know how close they are to the observed values in the data. Because it is of interest how well a given node can be predicted by all other nodes in the network, we need to remove any effects of the intercept (for continuous variables) and of the marginal (for categorical variables). The marginal gives the probabilities of the categories when ignoring all other variables; for example, the marginal of a binary variable is described by the relative frequency of observing category 1, e.g., P(X = 1) = 0.7.

Predictability of continuous variables

For continuous variables, we choose the proportion of explained variance as predictability measure, since it is well known in the literature and easy to interpret:

R² = 1 − var(A − Â) / var(A),

where var denotes the variance, Â is the vector of predictions for A as described in “Making predictions”, and A is the vector of observed values in the data. In order to remove any influence of the intercepts, all variables are centered to mean zero; hence, all intercepts are zero and cannot affect the predictability measure. We can thus interpret R² as follows: a value of 0 means that a node cannot be predicted at all by its neighboring nodes in the network, whereas a value of 1 means that a node can be perfectly predicted by its neighboring nodes.
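The definition above translates directly into base R. The toy data here are illustrative, assuming we already have a vector of observations A and a vector of model predictions:

```r
# Nodewise R^2: proportion of variance of A explained by the predictions
r2 <- function(A, A_hat) 1 - var(A - A_hat) / var(A)

set.seed(1)
signal <- rnorm(100)
A      <- signal + rnorm(100)     # observed values = signal + noise

r2(A, signal)                     # roughly 0.5: half the variance is noise
r2(A, rep(mean(A), 100))          # intercept-only predictions give R^2 = 0
```

The second call illustrates why centering matters: a model that only reproduces the mean explains nothing, and the measure correctly returns 0.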

Predictability of categorical variables

For categorical variables, it is slightly more difficult to obtain a measure with the same interpretation as the R² for continuous variables, because there is no way to center categorical variables. The following example shows that it is, however, important to take the marginal into account: let’s say we have 100 observations of a binary variable A and observe 10 zeros and 90 ones. This means that the marginal probabilities of A are p_0 = 0.1 and p_1 = 0.9. Now, if all other nodes contribute nothing to predicting whether a given case A_i is a 0 or a 1, one can simply predict a 1 for all cases and obtain a proportion of correct classifications (or accuracy, see below) of 90%. For our purpose of determining how well a node can be predicted by all other nodes, this is clearly misleading, because nothing at all is predicted by the other nodes. We therefore compute a normalized accuracy that removes the accuracy achieved by the trivial prediction from the marginal of the variable (p_1 = 0.9) alone.

Let CC = (1/n) Σ_{i=1}^{n} 𝟙(A_i = Â_i) be the proportion of correct predictions (or accuracy), where 𝟙(A_i = Â_i) is the indicator function for the event that the prediction Â_i equals the observation A_i, and let p_0, p_1, …, p_{K−1} be the marginal probabilities of the categories; in the binary case, these are p_0 and p_1 = 1 − p_0. We then define the normalized accuracy as

nCC = (CC − max_k p_k) / (1 − max_k p_k).

Hence, nCC indicates how much the node at hand can be predicted by all other nodes in the network, beyond what is trivially predicted by the marginal distribution. nCC = 0 means that none of the other nodes adds anything to the marginal in predicting the node at hand, while nCC = 1 means that all other nodes (together with the marginal) perfectly predict the node at hand. Let’s return to the above example: in contrast to the high accuracy of CC = 0.9, the normalized accuracy is nCC = 0, indicating that the node at hand cannot be predicted by the other nodes in the network. However, notice that both CC and nCC are important for interpretation.
For instance, if we have a marginal of p_1 = .9 for a binary variable, then it is less impressive if all other predictors account for 80% of the remaining achievable accuracy (.98 instead of .9) than in a situation where p_1 = .5, where accounting for 80% of the remaining accuracy would mean an improvement from .5 to .9. We therefore visualize both CC and nCC for the binary variable in Fig. 2.
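The accuracy and normalized accuracy from the example above can be computed in base R as follows (a minimal sketch following the definitions in this section):

```r
# Accuracy (CC): proportion of correct predictions
cc  <- function(A, A_hat) mean(A == A_hat)

# Normalized accuracy (nCC): accuracy beyond the best marginal guess
ncc <- function(A, A_hat) {
  p_max <- max(table(A) / length(A))   # accuracy of the marginal-only model
  (cc(A, A_hat) - p_max) / (1 - p_max)
}

# Example from the text: 10 zeros, 90 ones, and the trivial all-ones prediction
A     <- c(rep(0, 10), rep(1, 90))
A_hat <- rep(1, 100)
cc(A, A_hat)    # 0.9: looks good ...
ncc(A, A_hat)   # 0: ... but nothing beyond the marginal is predicted
```

This makes the misleading nature of the raw accuracy concrete: CC is high purely because the marginal is skewed, while nCC correctly reports zero predictive contribution from the other nodes.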
Fig. 2

Mixed graphical model estimated on the data from Fried et al. (2015). Green edges indicate positive relationships and red edges indicate negative relationships. The blue ring shows the proportion of explained variance (for continuous nodes). For the binary variable “loss”, the orange part of the ring indicates the accuracy of the intercept model; the red part of the ring is the additional accuracy achieved by all remaining variables, and the sum of both is the accuracy of the full model (CC). The normalized accuracy (nCC) is the ratio between the additional accuracy due to the remaining variables (red) and one minus the accuracy of the intercept model (white + red)


Predictability and model parameters

Given the above definitions of predictability measures, it is evident that there is a close relationship between the parameters of the network model and predictability: if a node is not connected to any other node, then its explained variance/normalized accuracy has to be 0. Also, the more edges are connected to a node, the higher its predictability tends to be. There is a strong linear relationship between predictability and edge parameters for Gaussian graphical models (GGMs), where the edge parameters (partial correlations) are restricted to [−1, 1]. This linear relationship is much weaker for models including categorical variables, where the model parameters are only constrained to be finite. This implies that centrality measures (like degree centrality), which are a function of the edge parameters, are also strongly correlated with predictability for GGMs, but much less so for MGMs (e.g., Haslbeck & Fried, 2017). However, note that even if a given centrality measure correlated perfectly with predictability, it would not be a substitute: it would only allow us to order nodes by predictability, not to determine the predictability of any particular node. Hence, while centrality measures are related to predictability, they are not a good proxy for it.

Application to datasets

We illustrate how to compute and visualize nodewise predictability in network models for both cross-sectional and time-series data. We use a cross-sectional dataset from Fried et al. (2015) (N = 515) with 11 variables on the relationship between bereavement and depressive symptoms. To illustrate how to compute predictability for VAR models, we use a dataset from a single individual, consisting of up to ten daily measurements of nine mood-related variables over a long period of time (N = 1478) (Wichers, Groot, Psychosystems, & Group, 2016). A detailed description of the time-series data can be found in Kossakowski, Groot, Haslbeck, Borsboom, and Wichers (2016).

Predictability in cross-sectional networks

Here we show how to obtain the proposed predictability measures using the mgm package. We give the code below so that all steps can be reproduced exactly by the reader. First, we download the preprocessed data; the raw data and the preprocessing file can be found in the Github repository https://github.com/jmbh/NetworkPrediction. Next, we fit an MGM using the mgm package: in addition to the data, one has to specify the type and the number of categories for each variable. The remaining arguments are tuning parameters and are selected such that the original results in Fried et al. (2015) are reproduced. For the general usage of the mgm package, see Haslbeck and Waldorp (2015a).

After estimating the model, which is saved in fit_obj, we use the predict() function to compute the predictability of each node in the network. For categorical variables, we specify the predictability measures accuracy/correct classification ("CC") and normalized accuracy ("nCC"). In addition, we request the accuracy of the intercept (marginal) model ("CCmarg"), which we use to visualize the decomposition of the total accuracy into the intercept model and the contribution of the other variables. For continuous variables, we specify explained variance ("R2") as predictability measure.

To display both the accuracy of the intercept model and the normalized accuracy (the contribution of the other variables), we require a list for the ring segments and a list for the corresponding colors. We then provide the weighted adjacency matrix and the list containing the nodewise predictability measures to qgraph, resulting in Fig. 2. The color of the ring around each node can be controlled via the pieColor argument. The remaining arguments are not necessary but improve the visualization: layout="spring" specifies that the placement of the nodes is determined by the force-directed Fruchterman–Reingold algorithm (Fruchterman & Reingold, 1991).
Note that there is no analytic relation between the distance of nodes in the plotted layout and the model parameters; however, the algorithm tends to group strongly connected nodes together in order to avoid edge crossings. Green and red edges indicate positive and negative relationships, respectively, and the width of an edge is proportional to the absolute value of its weight. For a detailed description of the qgraph package, see Epskamp et al. (2012).

This code returns a network that is very similar to the one in the original paper (Fried et al., 2015). The network is not identical because we did not dichotomize ordinal variables but treated them as continuous instead. For the 11 continuous variables, the proportion of explained variance is indicated by the blue part of the ring. For the single binary variable, the ring shows the accuracy of the intercept model (orange) and the full accuracy (orange + red); the normalized accuracy is the ratio red / (red + white). As expected, nodes with more or stronger edges can be predicted better (e.g., lonely) than nodes with fewer or weaker edges (e.g., unfriendly, abbreviated unfr). While this trivially follows from the construction of the predictability measure (see “Predictability and model parameters”), it does not mean that one can use the network structure to infer the predictability of a node: looking at the network visualization in Fig. 2, we can be quite certain that the predictability of lonely is higher than that of unfr, but we do not know how high the predictability of either node is (0.55 and 0.13, respectively), which is highly relevant for interpretation and practical applications.

Because we used the same data for estimating the network and calculating the predictability (or error) measures, we estimated the within-sample prediction error. In order to see how well the model generalizes, one has to calculate the out-of-sample prediction error. This can be done by splitting the data into two parts (or using a cross-validation scheme) and providing one part to the estimation function and the other part to the prediction function.
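The workflow described in this section can be sketched in R roughly as follows. This is a reconstruction, not the verbatim code from the paper: the argument names follow the mgm and qgraph documentation, but the exact tuning settings, the pie/color lists, and the object names are illustrative assumptions, and `data`, `type`, and `level` are assumed to hold the downloaded dataset and its variable specification:

```r
# Sketch of the cross-sectional workflow: fit an MGM, compute nodewise
# predictability, and visualize both in one plot.
library(mgm)
library(qgraph)

# Fit the MGM; type gives "g" (Gaussian) / "c" (categorical) per variable,
# level the number of categories, lambda selected by 10-fold CV
fit_obj <- mgm(data = data, type = type, level = level,
               lambdaSel = "CV", lambdaFolds = 10)

# Nodewise predictability: R2 for continuous nodes; CC, nCC and the
# marginal accuracy CCmarg for categorical nodes
pred_obj <- predict(object = fit_obj, data = data,
                    errorCon = "R2",
                    errorCat = c("CC", "nCC", "CCmarg"))

# Build ring segments per node: R2 for continuous nodes; for the binary
# node, the marginal accuracy plus the additional accuracy (illustrative)
pie_list <- as.list(pred_obj$errors[["R2"]])

# Network plus predictability rings (cf. Fig. 2)
qgraph(fit_obj$pairwise$wadj,
       pie = pie_list,
       pieColor = "lightblue",
       layout = "spring")
```

Splitting the data and passing one half to mgm() and the other half to predict() would yield the out-of-sample version of these measures mentioned above.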

Predictability in temporal networks

In this section, we show how to compute nodewise predictability measures for VAR models. Note that the interpretation of predictability is slightly different for VAR networks, because we predict each node from all nodes at the previous time point, which includes the predicted node itself. We begin again by downloading the example dataset. Next, we provide the data and the type and number of categories of the variables as input; in addition, we specify that we would like to estimate a VAR model with lag 1 and compute the predictability of each node similarly to above. Finally, we visualize the network structure together with the nodewise predictability measures, which results in Fig. 3. Because we have only one predictability measure per node, we can provide them in a vector via the pie argument:
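A sketch of these steps in R, assuming as before that `data`, `type`, and `level` describe the downloaded time-series dataset; the exact arguments and object names are illustrative assumptions based on the mgm and qgraph documentation:

```r
# Sketch of the time-series workflow: lag-1 mixed VAR model, nodewise
# predictability, and visualization (cf. Fig. 3).
library(mgm)
library(qgraph)

# Estimate a mixed VAR model with a single lag
fit_var <- mvar(data = data, type = type, level = level, lags = 1)

# Explained variance per node, predicting each node from all nodes
# (including itself) at the previous time point
pred_var <- predict(object = fit_var, data = data, errorCon = "R2")

# wadj[, , 1] holds the lag-1 weighted adjacency matrix; one R2 per node
# is passed as a vector via the pie argument
qgraph(t(fit_var$wadj[, , 1]),
       pie = pred_var$errors[["R2"]],
       pieColor = "lightblue",
       layout = "spring",
       directed = TRUE)
```

The self-loops in the resulting plot correspond to the diagonal of the lag-1 adjacency matrix, i.e., the effect of each variable on itself over one time lag.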
Fig. 3

Visualization of VAR network of the mood variables in Wichers et al. (2016). Green edges indicate positive relationships, red edges indicate negative relationships. The self-loops refer to the effect of the variable on itself over one time lag. The blue rings around the nodes indicate the proportion of explained variance in that node by all other nodes

We see two groups of self-engaging mood variables in Fig. 3: (a) the positive mood variables Cheerful, Enthusiastic, and Satisfied, and (b) the negative mood variables Irritated, Agitated, Restless, and Suspicious. Worrying seems to be influenced by both groups, and Relaxed is rather disconnected, with a weak negative influence on group (b). These insights can be used to judge the effectiveness of possible interventions on these mood variables: for instance, if the goal is to change variables in group (a), one can do so by intervening on other variables in (a). In addition, we would expect an effect on Worrying when intervening on groups (a) and (b); the reverse, however, is not true. Relaxed has a small influence on group (b) but is itself not influenced by any of the variables in the network. Hence, in order to intervene on Relaxed, one has to search for additional variables that influence Relaxed and were not yet taken into account in the present network.

Discussion

In this paper, we introduced a method and easy-to-use software to compute nodewise predictability in network models and to visualize it in a typical network visualization. Predictability is an important concept that complements the network structure when interpreting network models: it measures how well a node can be predicted by all its neighboring nodes and is hence crucial information whenever one needs to judge the practical significance of a set of edges. An example is clinical practice, where it is important to make predictions of intervention outcomes on an interpretable scale in order to optimally select treatments.

The analyses shown in the present paper can be extended to networks that change over time, which allows one to investigate how edge parameters and nodewise predictability change over time. The time-varying parameters can then be modeled by a second model, which could include variables from inside and outside the time-varying network. With this modeling approach, it would be possible to gather evidence that one (or several) variables caused the system to transition into another state, possibly reflected by a different network structure and different nodewise predictability. For details on how to fit time-varying network models and compute time-varying predictability measures, see Haslbeck and Waldorp (2015a).

It is important to be clear about the limitations of interpreting nodewise predictability. First, we can only interpret the predictability of a node as the influence of its neighboring nodes if the network model is an appropriate model. A network model can be inappropriate for a number of reasons:

1. Two or more variables in the network are caused by a variable that is not included in the network. This results in estimated edges between these variables, even though they are only related via an unobserved common cause. In this situation, we cannot interpret predictability as influence by neighboring nodes, because the nodes do not influence each other but are caused by a third variable outside the network.

2. Some variables are logically dependent: for instance, age and age of diagnosis are always related, because one cannot be diagnosed before being born. Clearly, in this situation the relation between the variables must be interpreted differently.

3. Two or more variables measure the same underlying construct (e.g., five questions about sad mood). In this situation, the edge parameters indicate how similar the variables are and do not reflect mutual causal influence. Consequently, we would not interpret the predictability of these variables as the degree of determination by neighboring nodes. See Fried and Cramer (2016) for a discussion of this problem. Solutions could be to determine the topological overlap (Zhang et al., 2005) and choose only one variable in case of large overlap, or to incorporate measurement models into the network model (Epskamp, Rhemtulla, & Borsboom, 2016).

Second, interpreting the predictability of node A as a measure of how much it is determined by its neighbors assumes that all edges are directed towards node A. However, the direction of edges is generally unknown when the model is estimated from cross-sectional data. Estimates of the direction of edges can be obtained using causal search algorithms like the PC algorithm (Spirtes, Glymour, & Scheines, 2000) or from substantive theory. This means that the predictability of a node is an upper bound and in practice often lower, because some edges might be bi-directional or point away from the node at hand. While this is a major limitation, note that the direction of edges is unknown for any model estimated on cross-sectional data.
In models with lagged predictors, like the VAR model, this problem does not exist, because we use the direction of time to determine the direction of edges. Finally, it is important to stress a topic we did not cover here: how well node A can be predicted by a single node B. This is different from the problem studied in this paper, where the interest was in how well node A can be predicted by all other nodes. Unfortunately, there are no straightforward solutions to the former problem in the situation of correlated predictors, which is always the case in practice. For linear regression, there is work on decomposing explained variance (Grömping, 2012), and in the machine-learning literature there are methods to determine variable importance by replacing predictor variables by noise and investigating the drop in predictability (e.g., Breiman, 2001). It would certainly be interesting to extend these ideas to the general class of network models.

To sum up, if the network model is an appropriate model for the phenomenon at hand, predictability is an easy-to-interpret measure of how strongly a given node is influenced by its neighbors in the network. This allows researchers to judge the practical relevance of the edges connected to a node A on an absolute scale (0 = no influence on A at all, 1 = A fully determined) and may thereby help to predict intervention outcomes. In addition, the predictability of (parts of) the network is interesting on a theoretical level, as it indicates how self-determined the network is.
References (16 in total)

1.  Using Bayesian networks to analyze expression data.

Authors:  N Friedman; M Linial; I Nachman; D Pe'er
Journal:  J Comput Biol       Date:  2000       Impact factor: 1.479

2.  Comorbidity: a network perspective.

Authors:  Angélique O J Cramer; Lourens J Waldorp; Han L J van der Maas; Denny Borsboom
Journal:  Behav Brain Sci       Date:  2010-06       Impact factor: 12.579

3.  Critical Slowing Down as a Personalized Early Warning Signal for Depression.

Authors:  Marieke Wichers; Peter C Groot
Journal:  Psychother Psychosom       Date:  2016-01-26       Impact factor: 17.659

4. [Review] Network analysis: an integrative approach to the structure of psychopathology.

Authors:  Denny Borsboom; Angélique O J Cramer
Journal:  Annu Rev Clin Psychol       Date:  2013       Impact factor: 18.561

5.  Network analysis of depression and anxiety symptom relationships in a psychiatric sample.

Authors:  C Beard; A J Millner; M J C Forgeard; E I Fried; K J Hsu; M T Treadway; C V Leonard; S J Kertz; T Björgvinsson
Journal:  Psychol Med       Date:  2016-09-14       Impact factor: 7.723

6.  Moving Forward: Challenges and Directions for Psychopathological Network Theory and Methodology.

Authors:  Eiko I Fried; Angélique O J Cramer
Journal:  Perspect Psychol Sci       Date:  2017-09-05

7.  From loss to loneliness: The relationship between bereavement and depressive symptoms.

Authors:  Eiko I Fried; Claudi Bockting; Retha Arjadi; Denny Borsboom; Maximilian Amshoff; Angélique O J Cramer; Sacha Epskamp; Francis Tuerlinckx; Deborah Carr; Margaret Stroebe
Journal:  J Abnorm Psychol       Date:  2015-03-02

8.  What are 'good' depression symptoms? Comparing the centrality of DSM and non-DSM symptoms of depression in a network analysis.

Authors:  Eiko I Fried; Sacha Epskamp; Randolph M Nesse; Francis Tuerlinckx; Denny Borsboom
Journal:  J Affect Disord       Date:  2015-10-01       Impact factor: 4.839

9.  The application of a network approach to Health-Related Quality of Life (HRQoL): introducing a new method for assessing HRQoL in healthy adults and cancer patients.

Authors:  Jolanda J Kossakowski; Sacha Epskamp; Jacobien M Kieffer; Claudia D van Borkulo; Mijke Rhemtulla; Denny Borsboom
Journal:  Qual Life Res       Date:  2015-09-14       Impact factor: 4.147

10.  A network approach to psychopathology: new insights into clinical longitudinal data.

Authors:  Laura F Bringmann; Nathalie Vissers; Marieke Wichers; Nicole Geschwind; Peter Kuppens; Frenk Peeters; Denny Borsboom; Francis Tuerlinckx
Journal:  PLoS One       Date:  2013-04-04       Impact factor: 3.240

