Xu Wang1, Ali Shojaie1. 1. Department of Biostatistics, University of Washington, Seattle, WA 98195, USA.
Abstract
Thanks to technological advances leading to near-continuous time observations, emerging multivariate point process data offer new opportunities for causal discovery. However, a key obstacle in achieving this goal is that many relevant processes may not be observed in practice. Naïve estimation approaches that ignore these hidden variables can generate misleading results because of unadjusted confounding. To fill this gap, we propose a deconfounding procedure for estimating high-dimensional point process networks in which only a subset of the nodes is observed. Our method allows flexible connections between the observed and unobserved processes. It also allows the number of unobserved processes to be unknown and potentially larger than the number of observed nodes. Theoretical analyses and numerical studies highlight the advantages of the proposed method in identifying causal interactions among the observed processes.
Learning causal interactions from observational multivariate time series is generally impossible [1,2]. Among many challenges, two of the most important are that (i) the data acquisition rate may be much slower than the underlying rate of changes; and (ii) there may be unmeasured confounders [1,3]. First, due to cost or technological constraints, the data acquisition rate may be much slower than the underlying rate of changes. In such settings, the most commonly used procedure for inferring interactions among time series, Granger causality, may both miss true interactions and identify spurious ones [4,5,6]. Second, the available data may only include a small fraction of potentially relevant variables, leading to unmeasured confounders. Naïve connectivity estimators that ignore these confounding effects can produce highly biased results [7]. Therefore, reliably distinguishing causal connections between pairs of observed processes from correlations induced by common inputs from unobserved confounders remains a key challenge.

Learning causal interactions between neurons is critical to understanding the neural basis of cognitive functions [8,9]. Many existing neuroscience data, such as data collected using functional magnetic resonance imaging (fMRI), have relatively low temporal resolution and are thus of limited utility for causal discovery [10]. This is because many important neuronal processes and interactions happen at finer time scales [11]. New technologies, such as calcium fluorescence imaging, which generates spike train data, make it possible to collect 'live' data at high temporal resolution [12]. Spike train data, which are multivariate point processes containing the spiking times of a collection of neurons, are increasingly used to learn latent brain connectivity networks and to glean insight into how neurons respond to external stimuli [13].
For example, Bolding and Franks [14] collected spike train data on neurons in the mouse olfactory bulb region at 30 kHz under multiple laser intensity levels to study the odor identification mechanism. Despite progress in recording the activity of massive populations of neurons [15], simultaneously monitoring a complete network of spiking neurons at high temporal resolution is still beyond the reach of current technology. In fact, most experiments only collect data on a small fraction of neurons, leaving many neurons unobserved [16,17,18]. These hidden neurons may interact with the neurons in the observed set and cannot be ignored. Nevertheless, given its high temporal resolution, spike train data provide an opportunity for causal discovery if we can account for the unmeasured confounders.

When unobserved confounders are a concern, causal effects among the observed variables can be learned using causal structural learning approaches, such as the Fast Causal Inference (FCI) algorithm and its variants [1,19]. However, these algorithms may not identify all causal edges. Specifically, instead of learning the directed acyclic graph (DAG) of causal interactions, FCI learns the maximal ancestral graph (MAG). This graph includes causal interactions between variables that are connected by directed edges, but also bi-directed edges among some other variables, leaving the corresponding causal relationships undetermined. As a result, causal discovery using these algorithms is not always satisfactory. For example, Malinsky and Spirtes [20] recently applied FCI to infer the causal network of time series and found a low recall for identifying the true causal relationships.
Additionally, despite recent efforts [21], causal structure learning remains computationally intensive, because the space of candidate causal graphs grows super-exponentially with the number of network nodes [22].

The Hawkes process [23] is a popular model for analyzing multivariate point process data. In this model, the probability of future events for each component can depend on the entire history of events of other components. Under straightforward conditions, the multivariate Hawkes process reveals Granger causal interactions among multivariate point processes [24]. Moreover, assuming that all relevant processes are observed in a linear Hawkes process, causal interactions among components can also be inferred [25]. The Hawkes process thus provides a flexible and interpretable framework for investigating the latent network of point processes and is widely used in neuroscience applications [26,27,28,29,30,31,32].

In modern applications, it is common for the number of measured components, e.g., the number of neurons, to be large compared to the observation period, e.g., the duration of neuroscience experiments. The high-dimensional nature of data in such applications poses challenges to learning the connectivity network of a multivariate Hawkes process. To address this challenge, Hansen et al. [33] and Chen et al. [34] proposed $\ell_1$-regularized estimation procedures, and Wang et al. [35] recently developed a high-dimensional inference procedure to characterize the uncertainty of these regularized estimators. However, due to confounding from unobserved neurons in practice, existing estimation and inference procedures, which assume complete observations from all components, may not provide reliable estimates.

Accounting for unobserved confounders in high-dimensional regression has been the subject of recent research. Two such examples are HIVE [36] and trim regression [37], which facilitate causal discovery using high-dimensional regression with unobserved confounders.
However, these methods are designed for linear regression with independent observations and do not apply to the long-history temporal dependence setting of Hawkes processes. Moreover, they rely on specific assumptions on the observed and unobserved causal effects, which may not hold in neuronal network settings.

In this paper, we consider learning causal interactions among high-dimensional point processes with (potentially many) hidden confounders. Considering the generalization of the above two approaches to the setting of Hawkes processes, we show that the assumption required by trim regression is more likely to hold in a stable point process network, especially when the confounders affect many observed nodes. Motivated by this finding, we propose a generalization of trim regression, termed hp-trim, for causal discovery from high-dimensional point processes in the presence of (potentially many) hidden confounders. We establish a non-asymptotic convergence rate for estimating the network edges using this procedure. Unlike the previous result for independent data [37], our result accounts for both the temporal dependence of the Hawkes processes and the network sparsity. Using simulated and real data, we also show that hp-trim has superior finite-sample performance compared to the corresponding generalization of HIVE for point processes and the naïve approach that ignores the unobserved confounders.
2. The Hawkes Processes with Unobserved Components
2.1. The Hawkes Process
Let $\{t_k\}_{k \ge 1}$ be a sequence of real-valued random variables, taking values in $[0, T]$, with $t_{k+1} > t_k$ and $t_1 > 0$ almost surely. Here, time $0$ is a reference point in time, e.g., the start of an experiment, and $T$ is the duration of the experiment. A simple point process $N$ on $[0, T]$ is defined as a family $\{N(A)\}_{A \in \mathcal{B}(\mathbb{R})}$, where $\mathcal{B}(\mathbb{R})$ denotes the Borel $\sigma$-field of the real line and $N(A) = \sum_k \mathbb{1}\{t_k \in A\}$. The process $N$ is essentially a simple counting process with isolated jumps of unit height that occur at $\{t_k\}_{k \ge 1}$. We write $N(dt)$ as $dN(t)$, where $dt$ denotes an arbitrarily small increment of $t$.

Let $N = (N_1, \ldots, N_p)^\top$ be a $p$-variate counting process, where, as above, each $N_j$ satisfies $N_j(A) = \sum_k \mathbb{1}\{t_{j,k} \in A\}$ for $A \in \mathcal{B}(\mathbb{R})$, with $\{t_{j,k}\}_{k \ge 1}$ denoting the event times of $N_j$. Let $\mathcal{H}_t$ be the history of $N$ prior to time $t$. The intensity process $\lambda(t) = (\lambda_1(t), \ldots, \lambda_p(t))^\top$ is a $p$-variate $\mathcal{H}_t$-predictable process, defined as
$$\lambda_i(t)\,dt = \mathbb{P}\big(dN_i(t) = 1 \mid \mathcal{H}_{t^-}\big).$$
Hawkes [23] proposed a class of point process models in which past events can affect the probability of future events. The process is a linear Hawkes process if the intensity function for each unit $i$ takes the form
$$\lambda_i(t) = \mu_i + \sum_{j=1}^{p} \int_0^t \omega_{ij}(t - s)\, dN_j(s),$$
where $\mu_i$ is the background intensity of unit $i$ and $\omega_{ij}$ is the transfer function. In particular, $\omega_{ij}(t - t_{j,k})$ represents the influence from the $k$th event of unit $j$ on the intensity of unit $i$ at time $t$.

Motivated by neuroscience applications [38,39], we consider a parametric transfer function of the form
$$\omega_{ij}(\Delta) = \beta_{ij}\,\kappa(\Delta),$$
with a transition kernel $\kappa$ that captures the decay of the dependence on past events. This leads to $\lambda_i(t) = \mu_i + \sum_{j=1}^{p} \beta_{ij}\, x_j(t)$, where the integrated stochastic process
$$x_j(t) = \int_0^t \kappa(t - s)\, dN_j(s) \qquad (5)$$
summarizes the entire history of unit $j$ of the multivariate Hawkes process. A commonly used example is the exponential transition kernel, $\kappa(t) = e^{-t}$ [40].

Assuming that the model holds and all relevant processes are observed, it follows from [40] that the connectivity coefficient $\beta_{ij}$ represents the strength of the causal dependence of unit $i$'s intensity on unit $j$'s past events. A positive $\beta_{ij}$ implies that past events of unit $j$ excite future events of unit $i$ and is often considered in the literature (see, e.g., [40,41]). However, we might also wish to allow for negative values to represent inhibitory effects [34,42], which are expected in neuroscience applications [43].

Denoting $\beta_i = (\beta_{i1}, \ldots, \beta_{ip})^\top$ and $x(t) = (x_1(t), \ldots, x_p(t))^\top$, we can write
$$\lambda_i(t) = \mu_i + \beta_i^\top x(t). \qquad (6)$$
Furthermore, let $\mu = (\mu_1, \ldots, \mu_p)^\top$ and $B = [\beta_{ij}]_{1 \le i,j \le p}$. Then the linear Hawkes process can be written compactly as
$$\lambda(t) = \mu + B\, x(t). \qquad (7)$$
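To make the model concrete, a linear Hawkes process of this form, $\lambda(t) = \mu + B\,x(t)$ with exponential kernel, can be simulated by Ogata's thinning algorithm. The sketch below is illustrative (the function name and parameter values are ours, not from the paper) and assumes non-negative connectivities, so that the intensity only decays between events and the current total intensity is a valid thinning bound:

```python
import numpy as np

def simulate_hawkes(mu, B, T, seed=0):
    """Simulate a linear multivariate Hawkes process with intensity
    lambda(t) = mu + B x(t) and exponential kernel kappa(t) = exp(-t),
    using Ogata's thinning algorithm. Assumes mu > 0 and B >= 0 entrywise,
    so the intensity decays between events and the current total intensity
    bounds the intensity ahead. Returns one event-time array per unit."""
    rng = np.random.default_rng(seed)
    p = len(mu)
    x = np.zeros(p)                    # integrated processes x_j(t) at time t
    t = 0.0
    events = [[] for _ in range(p)]
    while True:
        lam_bar = float(np.sum(mu + B @ x))   # bound for the upcoming interval
        w = rng.exponential(1.0 / lam_bar)    # proposed waiting time
        if t + w > T:
            break
        t += w
        x *= np.exp(-w)                       # kernel decay of integrated processes
        lam = mu + B @ x                      # intensities at the proposed time
        if rng.uniform() < lam.sum() / lam_bar:   # thinning: accept proposal
            i = rng.choice(p, p=lam / lam.sum())  # attribute the event to a unit
            events[i].append(t)
            x[i] += 1.0                           # unit jump of x_i
    return [np.array(e) for e in events]
```

With connectivities of moderate size (spectral radius below one, as required for stationarity), the simulated event counts remain stable over long horizons.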
2.2. The Confounded Hawkes Process
Because of technological constraints, neuroscience experiments usually collect data from only a small portion of neurons. As a result, many other neurons that potentially interact with the observed neurons will be unobserved. Consider a network of $p + q$ counting processes, of which we only observe the first $p$ components. The number of unobserved neurons, $q$, is usually unknown and likely much greater than $p$. Extending (7) to include the unobserved components, we obtain the confounded Hawkes model,
$$\lambda_i(t) = \mu_i + \beta_i^\top x(t) + \delta_i^\top z(t), \qquad (8)$$
in which $z(t) \in \mathbb{R}^q$ denotes the integrated processes of the hidden components, defined analogously to (5), and $\delta_i \in \mathbb{R}^q$ denotes the connectivity coefficients from the unobserved components to unit $i$.

Unless the observed and unobserved processes are independent, the naïve estimator that ignores the unobserved components will produce misleading conclusions about the causal relationships among the observed components. This is illustrated in the simple linear vector autoregressive process of Figure 1. This example includes three continuous random variables generated according to the following set of equations
$$X_{1,t} = a\, X_{3,t-1} + \varepsilon_{1,t}, \qquad X_{2,t} = b\, X_{1,t-1} + \varepsilon_{2,t}, \qquad X_{3,t} = c\, X_{3,t-1} + \varepsilon_{3,t},$$
where $\varepsilon_{1,t}, \varepsilon_{2,t}, \varepsilon_{3,t}$ are mean-zero innovation or error terms. The Granger causal network corresponding to the above process is shown in Figure 1A. Figure 1B shows that if $X_3$ is not observed, the conditional means of the observed variables $X_1$ and $X_2$, namely,
$$\mathbb{E}\big[X_{1,t} \mid X_{1,t-1}, X_{2,t-1}\big] \quad \text{and} \quad \mathbb{E}\big[X_{2,t} \mid X_{1,t-1}, X_{2,t-1}\big],$$
lead to incorrect Granger causal conclusions: in this case, a spurious autoregressive effect from the past values of $X_1$. The same phenomenon occurs in Hawkes processes with unobserved components.
Figure 1
Illustration of the effect of hidden confounders on inferred causal interactions among the observed variables. (A) The true causal diagram for the complete process. (B) The causal structure of the observed process when the hidden component, $X_3$, is ignored, including a spurious autoregressive effect of $X_1$ on its future values.
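The spurious effect is easy to reproduce numerically. The following sketch simulates a hypothetical confounded autoregression of this type (the coefficients are illustrative, not from the paper): the hidden, autocorrelated series $X_3$ drives $X_1$, so a regression of $X_1$ on its own past finds a spurious autoregressive effect that disappears once $X_3$ is included as a regressor:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x1, x2, x3 = np.zeros(n), np.zeros(n), np.zeros(n)
for t in range(1, n):
    x3[t] = 0.8 * x3[t - 1] + rng.normal()   # hidden, autocorrelated confounder
    x1[t] = 0.9 * x3[t - 1] + rng.normal()   # observed, driven by the confounder
    x2[t] = 0.7 * x1[t - 1] + rng.normal()   # observed, driven by x1

def lag1_coef(y, x):
    """OLS coefficient of y_t regressed on x_{t-1} alone."""
    return float(x[:-1] @ y[1:] / (x[:-1] @ x[:-1]))

# Ignoring x3: x1 seems to Granger-cause itself (coefficient clearly positive).
naive = lag1_coef(x1, x1)

# Adjusting for the confounder: the coefficient on x1's own past vanishes.
Z = np.column_stack([x1[:-1], x3[:-1]])
adjusted, *_ = np.linalg.lstsq(Z, x1[1:], rcond=None)
print(naive, adjusted[0])   # naive is far from zero; adjusted[0] is near zero
```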
Throughout this paper, we assume that the confounded linear Hawkes model in (8) is stationary, meaning that for all units, the spontaneous rates and the strengths of transition are constant over the time range $[0, T]$ [44,45].
3. Estimating Causal Effects in Confounded Hawkes Processes
3.1. Extending Trim Regression to Hawkes Processes
Let $b_i \in \mathbb{R}^p$ be the projection coefficient of $\delta_i^\top z(t)$ onto $x(t)$, such that
$$\delta_i^\top z(t) = b_i^\top x(t) + e_i(t). \qquad (9)$$
We can then write the confounded linear Hawkes model in (8) in the form of the perturbed linear model [37]:
$$\lambda_i(t) = \mu_i + (\beta_i + b_i)^\top x(t) + e_i(t), \qquad (10)$$
where $e_i(t) = \delta_i^\top z(t) - b_i^\top x(t)$. By the construction of $b_i$, $e_i(t)$ is uncorrelated with the observed processes, and $b_i$ represents the bias, or the perturbation, due to the confounding from $z(t)$. In general, $b_i \neq 0$ unless $\delta_i = 0$.

The perturbed model in (10) is generally unidentifiable because we can only estimate $\beta_i + b_i$ from the observed data, e.g., by regressing the events of unit $i$ on $x(t)$. The trim regression [37] is a two-step deconfounding procedure to estimate $\beta_i$ for independent and Gaussian-distributed data. The method first applies a simple spectral transformation, called the trim transformation (described below), to the observed data. It then estimates $\beta_i$ using penalized regression. When $\|b_i\|_2$ is sufficiently small, the method consistently estimates $\beta_i$. Although this condition is not guaranteed to hold for Gaussian-distributed data, previous work on Hawkes processes [34] implies that the confounding magnitude cannot be large when the underlying network is stable, particularly when the confounders affect many observed components (see the discussion following Corollary 1 in Section 4). This allows us to generalize the trim regression to learn the network of multivariate Hawkes processes.

Assume, without loss of generality, that the first $p$ components are observed at times indexed from 1 to $T$. Let $X \in \mathbb{R}^{T \times p}$ be the design matrix of the observed integrated processes and $y^i \in \mathbb{R}^T$ be the vector of observed outcomes for unit $i$. Further, let $X = U D V^\top$ be the singular value decomposition of $X$, where $U \in \mathbb{R}^{T \times r}$, $D \in \mathbb{R}^{r \times r}$, and $V \in \mathbb{R}^{p \times r}$; here, $r$ is the rank of $X$. Denoting the non-zero diagonal entries of $D$ by $d_1 \ge \cdots \ge d_r > 0$, the spectral transformation maps each $d_\ell$ to a shrunken value $\tilde d_\ell \le d_\ell$. Denoting by $F$ a diagonal matrix with entries $\tilde d_\ell / d_\ell$, the first step of hp-trim involves applying the spectral transformation to the observed data to obtain
$$\tilde X = U F D V^\top \quad \text{and} \quad \tilde y^i = U F U^\top y^i.$$
The spectral transformation is designed to reduce the magnitude of confounding. In particular, when $X b_i$ aligns with the top eigenvectors of $X$, for an appropriate $F$, e.g., as used in previous work [37], the magnitude of $\tilde X b_i$ is small compared with $X b_i$. Here, $\tau$ is a threshold parameter, and the trim transformation is the special case of the spectral transformation with $\tilde d_\ell = \min(d_\ell, \tau)$.
See Ćevid et al. [37] for additional details.

In the second step, we estimate the network connectivities using the transformed data by solving the following optimization problem:
$$\hat\beta_i = \arg\min_{\beta \in \mathbb{R}^p} \frac{1}{T}\,\big\| \tilde y^i - \tilde X \beta \big\|_2^2 + \lambda \|\beta\|_1,$$
which is an instance of lasso regression [46] and can be solved separately for each unit $i = 1, \ldots, p$.
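A compact sketch of the two-step procedure on a generic design matrix follows (the helper names are ours; we use a simple proximal-gradient lasso solver rather than a packaged one, and take the threshold to be the median singular value, as in the default trim transformation):

```python
import numpy as np

def trim_transform(X, y, tau=None):
    """Trim transformation: cap the singular values of X at tau (default:
    the median singular value) and apply the same linear map F to y;
    directions orthogonal to the column space of X are left untouched."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    if tau is None:
        tau = np.median(d)
    f = np.minimum(d, tau) / d                 # shrinkage factor per direction
    X_t = (U * (f * d)) @ Vt                   # F X: singular values capped at tau
    y_t = y + U @ ((f - 1.0) * (U.T @ y))      # F y
    return X_t, y_t

def lasso_ista(X, y, lam, n_iter=1000):
    """Proximal-gradient (ISTA) solver for
    (1/(2n)) * ||y - X b||_2^2 + lam * ||b||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        b -= X.T @ (X @ b - y) / (n * L)       # gradient step
        b = np.sign(b) * np.maximum(np.abs(b) - lam / L, 0.0)  # soft threshold
    return b

def hp_trim(X, y, lam, tau=None):
    """Two-step deconfounding: trim transformation followed by the lasso."""
    X_t, y_t = trim_transform(X, y, tau)
    return lasso_ista(X_t, y_t, lam)
```

In the Hawkes setting, X holds the integrated processes of the observed units evaluated on a time grid and y holds the corresponding outcomes for unit i; the problem is solved separately for each i.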
3.2. An Alternative Approach
HIdden Variable adjustment Estimation (HIVE) [36] is an alternative method for estimating the coefficients of a linear model with independent and Gaussian-distributed data in the presence of latent variables. Adapted to the network of multivariate point processes, HIVE first estimates the latent column space of the unobserved connectivity matrix, $\Delta = (\delta_1, \ldots, \delta_p)^\top$, with $\delta_i$ defined in (8). It then projects the outcome vector, $y^i$, onto the space orthogonal to the column space of $\Delta$. Assuming that the column space of the observed connectivity matrix, $B$, is orthogonal to that of $\Delta$, HIVE consistently estimates $\beta_i$ using the transformed data. While the orthogonality assumption might be satisfied when the hidden processes are external, such as experimental perturbations in genetic studies [47], it might be too stringent in a network setting. When the orthogonality assumption fails, HIVE may exhibit poor edge selection performance, potentially worse than the naïve method that ignores the hidden processes. HIVE also requires the number of hidden variables to be known. Although methods for selecting the number of hidden variables have been proposed, the resulting theoretical guarantees are only asymptotic; an over- or under-estimated number can either miss true edges or generate false ones. Given these limitations, we outline the extension of HIVE to Hawkes processes in Appendix A and refer the interested reader to Bing et al. [36] for details.
4. Theoretical Properties
In this section, we establish the recovery of the network connectivity in the presence of hidden processes. Technical proofs for the results in this section are given in Appendix B.

We start by stating our assumptions. For a square matrix $A$, let $\Lambda_{\max}(A)$ and $\Lambda_{\min}(A)$ be its maximum and minimum eigenvalues, respectively.

Assumption 1. Let $\Omega$ be the $(p+q) \times (p+q)$ matrix with entries $\Omega_{jk} = |\beta_{jk}|$. The spectral radius of $\Omega$ is bounded by a constant $\gamma < 1$.

Assumption 1 is necessary for stationarity of a Hawkes process [34]. The constant $\gamma$ does not depend on the dimension $p + q$. For any fixed dimension, Brémaud and Massoulié [44] show that given this assumption the intensity process of the form (6) is stable in distribution and, thus, a stationary process exists. Since our connectivity coefficients of interest are ill-defined without stationarity, this assumption provides the necessary context for our estimation framework.

Assumption 2. There exist constants $0 < \lambda_{\min} \le \lambda_{\max} < \infty$ such that $\lambda_{\min} \le \lambda_i(t) \le \lambda_{\max}$ for all $i$ and all $t \in [0, T]$.

Assumption 2 requires that the intensity rate is strictly bounded, which prevents degenerate processes for all components of the multivariate Hawkes processes. This assumption has been considered in previous analyses of Hawkes processes [33,34,35,42,48].

Assumption 3. The transition kernel $\kappa$ is non-negative, bounded, and integrable on $[0, \infty)$.

Assumption 4. There exist constants bounding the maximum in- and out-intensity flows; that is, $\max_i \sum_j |\beta_{ij}|$ and $\max_j \sum_i |\beta_{ij}|$ are bounded, with the sums ranging over all $p + q$ components.

Assumption 3 implies that the integrated process in (5) is bounded. Assumption 4 requires the maximum in- and out-intensity flows to be bounded, which provides a sufficient condition for bounding the eigenvalues of the cross-covariance of $x(t)$ [35]. A similar assumption is considered by Basu and Michailidis [49] in the context of VAR models. Together, Assumptions 3 and 4 imply that the model parameters are bounded, which is often required in time-series analysis [50]. Specifically, these assumptions restrict the influence of the hidden processes from being too large.

Define the set of active indices among the observed components, $S_i = \{j \le p : \beta_{ij} \neq 0\}$, and let $s_i = |S_i|$ and $s = \max_{1 \le i \le p} s_i$.
Our first result provides a fixed-sample bound on the error of estimating the connectivity coefficients.

Theorem 1. Suppose the $p$-variate Hawkes process with intensity function defined in (6) satisfies Assumptions 1–4, and let $\hat\beta_i$ be the hp-trim estimator with an appropriately chosen tuning parameter $\lambda$. Then, with probability at least $1 - c_1 p^{-c_2}$ for positive constants $c_1, c_2$,
$$\big\|\hat\beta_i - \beta_i\big\|_2 \lesssim \sqrt{\frac{s \log p}{T}} + \|b_i\|_2.$$

Compared to the case with independent and Gaussian-distributed data ([37], Theorem 2), we obtain a slower convergence rate because of the complex dependency of the Hawkes processes. Our rate takes into account the network sparsity among the observed components. It also does not depend on the number of unobserved components, $q$, which is critical in neuroscience experiments because $q$ is often unknown and potentially very large.

The result in Theorem 1 differs from the corresponding result obtained when all processes are observed ([35], Lemma 10). More specifically, our result includes an extra error term, $\|b_i\|_2$, which captures the effect of unobserved processes. Next, we show that when $\|b_i\|_2$ is sufficiently small, we obtain a similar rate of convergence as the one obtained when all processes are observed.

Corollary 1. Under the same assumptions as in Theorem 1, suppose, in addition, that $\|b_i\|_2 \lesssim \sqrt{s \log p / T}$. Then, with probability at least $1 - c_1 p^{-c_2}$,
$$\big\|\hat\beta_i - \beta_i\big\|_2 \lesssim \sqrt{\frac{s \log p}{T}}.$$

The spectral transformation empirically reduces the magnitude of the confounding term, especially when the confounding vector, $b_i$, stays in the sub-space spanned by the top right singular vectors of $X$; however, this is not guaranteed to hold for an arbitrary $b_i$. Corollary 1 specifies a condition on $\|b_i\|_2$ that leads to consistent estimation of $\beta_i$, regardless of the empirical performance of the spectral transformation. While the condition does not always hold for an arbitrary stochastic process, it is satisfied for a stable network of high-dimensional multivariate Hawkes processes when the confounding is dense. Specifically, by the construction of $b_i$ in (9), Assumption 4 implies that $\|b_i\|_1$ is bounded by a constant. When the confounding effects are relatively dense, i.e., $b_i$ has many non-zero entries of comparable magnitude, meaning that there are a large number of interactions from unobserved nodes to the observed ones, we obtain $\|b_i\|_2 \lesssim \|b_i\|_1 / \sqrt{p}$.
Therefore, the constraint on $\|b_i\|_2$ is likely satisfied in a high-dimensional network, where $p$ is large relative to $T$. The high-dimensional network setting is common in modern neuroscience experiments, where the number of neurons is often large compared to the duration of the experiment.

Next, we introduce an additional assumption to establish edge selection consistency. To this end, we consider the thresholded connectivity estimator,
$$\tilde\beta_{ij} = \hat\beta_{ij}\, \mathbb{1}\big\{|\hat\beta_{ij}| > \zeta\big\},$$
for a threshold $\zeta > 0$. Thresholded estimators are used for variable selection in high-dimensional network estimation [51], as they alleviate the need for restrictive irrepresentability assumptions [52].

Assumption 5. There exists a constant $\beta_{\min} > 0$ such that $\min_{(i,j):\, \beta_{ij} \neq 0} |\beta_{ij}| \ge \beta_{\min}$.

Assumption 5 is called the $\beta$-min condition [53] and requires sufficient signal strength for the true edges in order to distinguish them from 0. Let the estimated edge set be $\hat E = \{(i,j) : \tilde\beta_{ij} \neq 0\}$ and the true edge set be $E = \{(i,j) : \beta_{ij} \neq 0\}$. The next result shows that the estimated edge set consistently recovers the true edge set.

Theorem 2. Under the same conditions as in Theorem 1, assume Assumption 5 is satisfied with $\beta_{\min} > 2\zeta$, where $\zeta$ is of the order of the estimation error bound in Theorem 1. Then, with probability at least $1 - c_1 p^{-c_2}$, $\hat E = E$.

Theorem 2 guarantees the recovery of causal interactions among the observed components. As before, the result is valid irrespective of the number of unobserved components, which is important in neuroscience applications.
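The thresholded estimator and the resulting edge sets are straightforward to compute; a small sketch (the function names are ours):

```python
import numpy as np

def edge_set(B_hat, zeta):
    """Edges retained by the thresholded estimator: keep the directed edge
    j -> i whenever |B_hat[i, j]| exceeds the threshold zeta."""
    rows, cols = np.nonzero(np.abs(B_hat) > zeta)
    return {(int(i), int(j)) for i, j in zip(rows, cols)}

def selection_counts(estimated, true):
    """True-positive and false-positive counts for an estimated edge set."""
    return len(estimated & true), len(estimated - true)
```

Sweeping the threshold (or the lasso penalty) traces out the true-positive/false-positive curves reported in the simulations of the next section.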
5. Simulation Studies
We compare our proposed method, hp-trim, with two alternatives: HIVE and the naïve approach that ignores the unobserved nodes. To this end, we compare the methods in terms of their ability to identify the correct causal interactions among the observed components.

We consider a point process network consisting of 200 nodes with half of the nodes being observed; that is, $p = q = 100$. The observed nodes are connected in blocks of five nodes, and half of the blocks are connected with the unobserved nodes (see Figure 2a). This setting exemplifies neuroscience applications, where the orthogonality assumption of HIVE is violated. As a sensitivity analysis, we also consider a second setting similar to the first, in which we remove the connections of the blocks that are not connected with the unobserved nodes. This setting, shown in Figure 3a, satisfies HIVE's orthogonality assumption.
Figure 2
Edge selection performance of the proposed hp-trim approach compared with estimators based on HIVE (run with the known (oracle) number of latent features) and the naïve approach. Here, $p = q = 100$. (a) Visualization of the connectivity matrix, with unobserved connectivities colored in gray and entries corresponding to edges shown in black. This setting violates the orthogonality condition of HIVE because of the connections between the observed and the hidden nodes (represented by the non-zero coefficients colored in red). (b) Average number of true positive and false positive edges detected using each method over 100 simulation runs.
Figure 3
Edge selection performance of the proposed hp-trim approach compared with estimators based on HIVE and the naïve approach. Here, $p = q = 100$. (a) Visualization of the connectivity matrix, with unobserved connectivities colored in gray and entries corresponding to edges shown in black. This setting satisfies the orthogonality condition of HIVE, which is run both with and without assuming a known number of latent features; these two versions are denoted HIVE-oracle and HIVE-empirical, respectively. In HIVE-empirical, the number of latent factors is estimated based on the estimate with the highest frequency over the 100 simulation runs. (b) Average number of true positive and false positive edges detected using each method over 100 simulation runs.
To generate point process data, we use fixed values of the non-zero connectivity coefficients in the setting of Figure 2a and in the setting of Figure 3a. The background intensity, $\mu$, is set to a common constant in both settings, and the transfer kernel function is chosen to be the exponential kernel. These settings satisfy the assumptions of stationary Hawkes processes. In both settings, we consider a range of observation lengths $T$.

The results in Figure 2b show that hp-trim offers superior performance for both small and large sample sizes in the first setting. For example, with a large sample size, hp-trim is able to detect almost all 200 true edges at the expense of about 50 falsely detected edges; this is almost twice as many true edges as detected by HIVE and the naïve method, which only detect half of the true edges at the same level of falsely detected edges. The naïve method eventually detects all true edges, but at a much bigger cost of about 400 falsely detected edges. In this case, HIVE performs poorly and detects at most half of the true edges, regardless of the tolerated number of falsely detected edges. The poor performance of HIVE stems from the violation of its stringent orthogonality condition in this simulation setting. When the orthogonality condition is satisfied (Figure 3a), HIVE shows the best performance. Specifically, with a large sample size, HIVE detects all true edges while identifying almost no false edges (the red solid line in Figure 3b). However, this advantage requires knowledge of the correct number of latent features. When the number of latent features is unknown and estimated from data, HIVE's performance deteriorates, especially with an insufficient sample size. For example, HIVE with an empirically estimated number of latent features only detects about 40 true edges (out of a total of 100) at the expense of 100 falsely detected edges (pink lines in Figure 3b). In contrast, hp-trim's performance with both moderate and large sample sizes is close to the oracle version of HIVE (HIVE-oracle).
Specifically, with a large sample size, hp-trim captures all 100 true edges at the expense of 50 falsely detected edges, again more than twice as many true edges as HIVE-empirical.

Although our main focus is on the edge selection relevant for causal discovery, in Appendix C we also examine the estimation performance of our algorithm on the connectivity coefficients associated with the observed processes. Not surprisingly, the results indicate that hp-trim can also offer advantages in estimating the parameters, especially in settings where it offers improved edge selection.
6. Analysis of Mouse Spike Train Data
We consider the task of learning causal interactions among an observed population of neurons, using the spike train data from Bolding and Franks [14]. In this experiment, spike times are recorded at 30 kHz in a region of the mouse olfactory bulb (OB), while a laser pulse is applied directly to the OB cells of the subject mouse. The laser pulse is applied at increasing intensities from 0 to 50 (mW/mm). The laser pulse at each intensity level lasts 10 s and is repeated 10 times on the same set of neurons of the subject mouse.

The experiment includes spike train data from multiple mice, and we consider data from the subject mouse with the most detected neurons (25) under the laser (20 mW/mm) and no-laser conditions. In particular, we use the spike train data from one laser pulse at each intensity level. Since one laser pulse spans 10 s and the spike train data are recorded at 30 kHz, there are 300,000 time points per experimental replicate.

The population of observed neurons is a small subset of all the neurons in the mouse's brain. Therefore, to discover causal interactions among the observed neurons, we apply our estimation procedure, hp-trim, along with the HIVE and naïve approaches, separately for each intensity level, and obtain the estimated connectivity coefficients for the observed neurons. For ease of comparison, the tuning parameters for all methods are chosen to yield about 30 estimated edges; moreover, for HIVE, $q$ is estimated following the procedure in Bing et al. [36], which is based on the maximum decrease in the eigenvalues of the covariance matrix of the errors in (A1).

Figure 4 shows the estimated connectivity coefficients specific to each laser condition in a graph representation. In this representation, each node represents a neuron, and a directed edge indicates a non-zero estimated connectivity coefficient.
We see different network connectivity structures when laser stimulus is applied, which agrees with the observation by neuroscientists that the OB response is sensitive to the external stimuli [14].
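In practice, fitting the model to spike trains first requires evaluating the integrated processes $x_j(t)$ on a time grid. With the exponential kernel, this can be done in a single recursive pass over the grid; a sketch (assuming the kernel $\kappa(t) = e^{-t}$, with a hypothetical function name):

```python
import numpy as np

def integrated_process(spike_times, grid):
    """Evaluate x(t) = sum_{t_k < t} exp(-(t - t_k)) on an increasing grid.
    The exponential kernel allows a one-pass recursion: decay the running
    sum between grid points and fold in newly elapsed spikes."""
    spikes = np.sort(np.asarray(spike_times, dtype=float))
    x = np.zeros(len(grid))
    val, t_prev, k = 0.0, grid[0], 0
    for m, t in enumerate(grid):
        val *= np.exp(-(t - t_prev))              # decay since the last grid point
        while k < len(spikes) and spikes[k] < t:  # spikes that occurred before t
            val += np.exp(-(t - spikes[k]))
            k += 1
        x[m] = val
        t_prev = t
    return x
```

Stacking these vectors over the observed neurons yields the design matrix used by the estimation procedures of Section 3.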
Figure 4
Estimated functional connectivities among neurons using mouse spike train data from laser and no-laser conditions [14]. Common edges estimated by the three methods are in red and the method-specific edges are in blue. Thicker edges indicate estimated connectivity coefficients of larger magnitudes.
The network estimated by the naïve approach is more similar to that of our proposed method than is the network estimated by HIVE under both laser and no-laser conditions; nevertheless, the edges on which the naïve approach and hp-trim disagree likely reflect unadjusted confounding, indicating that the naïve estimate is incorrect in this application.

As discussed in Section 4, our inference procedure is asymptotically valid. In other words, with a large enough sample size, if the other assumptions in Section 4 are satisfied, the estimated edges should represent the true edges. Assessing the validity of the assumptions and selecting the true edges in real data applications is challenging. However, we can assess the sample size requirement and the validity of the assumptions by estimating the edges over a subset of neurons, treating the removed neurons as unobserved. If the sample size is sufficient and the other assumptions are satisfied, we should obtain similar connectivities among the observed subset of neurons, even when some neurons are hidden. Figure 5 shows the result of such a stability analysis for the laser condition using hp-trim. Comparing the connectivities in this graph with those in Figure 4 indicates that the estimated edges using the subsets of neurons are all consistent with those estimated using all neurons. Thus, the assumptions are likely satisfied in this application.
Figure 5
Estimated functional connectivities using hp-trim among multiple subsets of neurons. Here, the data are the same as those used in Figure 4 under the laser condition, except that 5, 10, and 15 neurons (shown in gray) are considered hidden. Thicker edges indicate estimated connectivity coefficients of larger magnitudes. All estimated edges using the subsets of neurons are also found in the estimated network using all neurons (a–c).
7. Conclusions and Future Work
We proposed a causal-estimation procedure with theoretical guarantees for high-dimensional networks of multivariate Hawkes processes in the presence of hidden confounders. Our method extends the trim regression [37] to the setting of point process data. The choice of trim regression as the starting point was motivated by the fact that its assumptions are less stringent than the conditions required for the alternative HIVE procedure, especially for a stable point process network with dense confounding effects. Empirically, our procedure, hp-trim, shows superior performance in identifying edges in the causal network compared with HIVE and a naïve method that ignores the unobserved nodes.

Causal discovery from observational time series is a challenging problem, and the success of our method is not without limitations. First, the theoretical guarantees for hp-trim require the magnitude of the hidden confounding to be bounded. As discussed in the paper, this condition is likely met for a stable network of high-dimensional multivariate Hawkes processes when the confounding is dense. Nonetheless, a careful examination of this condition is required when applying the method in other settings. When certain structure exists between the observed and hidden network connectivities, more structure-specific methods, such as HIVE, may better exploit the structural properties of the network for improved performance in identifying the causal effects. Second, our estimates assume a linear Hawkes process with a particular parametric form of the transition function. We also assume the underlying Hawkes process is stationary, which requires certain structural conditions on the process (specified as assumptions in Section 4). The proposed method is guaranteed to identify causal effects only if these modeling assumptions are valid. When the modeling assumptions are violated, the estimated effects may not be causal.
In other words, the method is primarily designed to generate causal hypotheses, that is, to facilitate causal discovery, and the results should be interpreted with caution. Extending the proposed approach to model the transition function nonparametrically, learning its form adaptively from data, and capturing time-varying processes would be important future research directions. Finally, given that non-linear link functions are often used when analyzing spike train data [54,55], it would also be of interest to develop causal-estimation procedures for non-linear Hawkes processes.