Literature DB >> 34852277

Spatial sampling of MEG and EEG based on generalized spatial-frequency analysis and optimal design.

Joonas Iivanainen, Antti J. Mäkinen, Rasmus Zetter, Matti Stenroos, Risto J. Ilmoniemi, Lauri Parkkonen.

Abstract

In this paper, we analyze spatial sampling of electro- (EEG) and magnetoencephalography (MEG), where the electric or magnetic field is typically sampled on a curved surface such as the scalp. By simulating fields originating from a representative adult-male head, we study the spatial-frequency content of EEG as well as of on- and off-scalp MEG. This analysis suggests that on-scalp MEG, off-scalp MEG and EEG can benefit from up to 280, 90 and 110 spatial samples, respectively. In addition, we suggest a new approach to obtain sensor locations that are optimal with respect to prior assumptions. The approach also allows one to control, e.g., the uniformity of the sensor locations. Based on our simulations, we argue that for a low number of spatial samples, model-informed non-uniform sampling can be beneficial. For a large number of samples, uniform sampling grids yield nearly the same total information as the model-informed grids.
Copyright © 2021. Published by Elsevier Inc.


Keywords:  Electroencephalography; Magnetoencephalography; On-scalp MEG; Optimal design; Spatial frequency; Spatial sampling


Year:  2021        PMID: 34852277      PMCID: PMC8752968          DOI: 10.1016/j.neuroimage.2021.118747

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


Introduction

Electro- (EEG) and magnetoencephalography (MEG) are noninvasive neuroimaging techniques for measuring brain activity on a millisecond time scale (Hämäläinen, Hari, Ilmoniemi, Knuutila, Lounasmaa, 1993, Nunez, Srinivasan, et al., 2006). EEG and MEG measure the electric and magnetic field, respectively, due to neuronal currents. Adequate spatial sampling is necessary to capture the spatial detail of the continuous field due to brain activity. EEG setups typically involve 16–128 electrodes placed uniformly on the subject’s scalp, while state-of-the-art MEG systems use around 300 superconducting quantum interference device (SQUID) sensors placed rigidly around the subject’s head (Baillet, 2017). Spatial sampling of EEG and MEG has again become a topic of interest. High-density EEG (hdEEG) with an electrode count of 128–256 or even more has been suggested to be beneficial (e.g., Brodbeck, Spinelli, Lascano, Wissmeier, Vargas, Vulliemoz, Pollo, Schaller, Michel, Seeck, 2011, Grover, Venkatesh, 2016, Hedrich, Pellegrino, Kobayashi, Lina, Grova, 2017, Petrov, Nador, Hughes, Tran, Yavuzcetin, Sridhar, 2014, Robinson, Venkatesh, Boring, Tarr, Grover, Behrmann, 2017). In MEG, in contrast to conventional liquid-helium-cooled SQUID sensors that measure the field about 2 cm off the scalp, novel sensors such as optically-pumped magnetometers (OPMs; Budker and Romalis 2007) and high-Tc SQUIDs (Faley et al., 2017) have enabled on-scalp field sensing within millimetres of the head surface. Simulations have shown that the closer proximity of the sensors to the brain improves the spatial resolution of MEG and provides more information about neuronal currents (Boto, Bowtell, Krüger, Fromhold, Morris, Meyer, Barnes, Brookes, 2016, Iivanainen et al., 2017, Riaz, Pfeiffer, Schneiderman, 2017, Schneiderman, 2014). The number of sensors that is beneficial in on-scalp MEG is currently not well known.
A recent simulation study suggested that fewer sensors are needed in on-scalp MEG than with SQUIDs to achieve similar spatial discrimination performance (Tierney et al., 2019), while another experimental study suggested that 50 OPMs give comparable results to a state-of-the-art SQUID system (Hill et al., 2020). The number of sensors or, equivalently, sensor spacing in EEG has been extensively studied. Srinivasan and colleagues (1998) argued that to characterize the full range of spatial detail available for EEG, at least 128 electrodes are needed (spacing 3 cm). More recently, finite-element modeling in a realistic head model suggested a minimum electrode spacing of 50–59 mm determined by the distance at which the potential decreased to 10% of its peak (Slutzky et al., 2010). Experiments by Freeman and colleagues (2003) suggested that electrode spacing as small as 5–8 mm could be beneficial, if high spatial frequencies contain behaviorally relevant information above the noise level. The sensor spacing in MEG has been less studied. Analytical calculations in half-infinite homogeneous volume conductors suggest that the sensor spacing should be approximately equal to the distance of the sensors to the brain (Ahonen et al., 1993). In many MEG studies, different sensor arrays have been compared without necessarily formulating the comparison as a sampling problem (Boto, Bowtell, Krüger, Fromhold, Morris, Meyer, Barnes, Brookes, 2016, Iivanainen et al., 2017, Nenonen, Kajola, Simola, Ahonen, 2004, Riaz, Pfeiffer, Schneiderman, 2017, Wilson, Vrba, 2007). Previous studies have mainly focused on spatial sampling where the sensors cover the entire scalp uniformly. However, the field can also be sampled nonuniformly. In certain applications, such as brain–computer interfaces, targeted non-uniform sampling would be of value: one could reduce the sensor count while maintaining spatial resolution and sensitivity to the cortical area of interest. 
In this work, we analyze spatial sampling of EEG and MEG on realistically curved surfaces. We carry out a spatial-frequency analysis of continuous fields using the eigenfunctions of the surface Laplace–Beltrami operator (Bronstein, Bruna, LeCun, Szlam, Vandergheynst, 2017, Reuter, Biasotti, Giorgi, Patanè, Spagnuolo, 2009). The basis formed by these functions can be seen as a natural generalization of the 1-D Fourier basis on a surface. These functions have been used in a variety of applications from signal and geometry processing (Levy, 2006, Reuter, Biasotti, Giorgi, Patanè, Spagnuolo, 2009) to cortical analysis (Qiu et al., 2006) and deep learning (Bronstein et al., 2017). We further describe how more compact bases can be constructed using prior information of the neuronal field patterns. We discuss how these basis-function representations of the field patterns relate to the number of spatial samples. We investigate how to obtain optimal sample positions in EEG and MEG. We utilize Gaussian processes (Abrahamsen, Williams, Rasmussen, 2006) to encode prior knowledge about the field. This perspective is similar to kriging in geospatial sciences (Chilès, Desassis, 2018, Cressie, 1993) and Gaussian-process regression in machine learning (Williams and Rasmussen, 2006). We introduce measures from experimental design (Chaloner, Verdinelli, 1995, Krause, Singh, Guestrin, 2008, Lindley, 1956) that can be used to quantify the optimality of the sample positions. We suggest a method that maximizes Shannon’s information (Lindley, 1956, Shannon, 1949) for a given number of samples and prior assumptions. We use the method to generate sampling grids for different EEG and MEG experiments, where either the whole brain or parts of it are of interest.

Theory and methods

In this Section, we introduce the theoretical concepts behind our analysis of sampling of neuro-electromagnetic fields. In Section 2.1, we describe how a scalar field measured on a surface can be analyzed using a spatial-frequency basis. In Section 2.2, we introduce the spatial-frequency-limited representation of the field and analyze the reconstruction error due to the use of such a representation. This analysis motivates the 99%-energy threshold for the number of spatial-frequency components in the field, which we later use to quantify the number of samples beneficial in spatial sampling of MEG/EEG. In Section 2.3, we change the perspective to random fields, which provide another framework for field reconstruction and insight into the metrics we use to quantify the performance of a sampling grid in Section 2.5. The concept of a field kernel is introduced in Section 2.3 and further described in Section 2.4; we employ such a kernel in constructing optimized sampling grids in Section 2.6.

Spatial-frequency representation of dipole fields

The quasi-static electric potential or magnetic field component $\phi(\vec{r})$ is generated by distributed source activity inside the brain and can be written in terms of Green's function $\vec{G}(\vec{r},\vec{r}\,')$ as (Hämäläinen, Hari, Ilmoniemi, Knuutila, Lounasmaa, 1993, Nunez, Srinivasan, et al., 2006)

$$\phi(\vec{r}) = \int_V \vec{G}(\vec{r},\vec{r}\,') \cdot \vec{J}^{\,\mathrm{p}}(\vec{r}\,')\,\mathrm{d}v', \tag{1}$$

where $\vec{J}^{\,\mathrm{p}}$ is the primary current distribution. By discretizing the neural source distribution into a set of primary current dipoles $q_i\,\hat{e}_i\,\delta(\vec{r}\,'-\vec{r}_i)$, where $q_i$ is the amplitude of the $i$th source at $\vec{r}_i$ and $\hat{e}_i$ its orientation, Eq. (1) becomes

$$\phi(\vec{r}) = \sum_i q_i\, \vec{G}(\vec{r},\vec{r}_i)\cdot\hat{e}_i = \sum_i q_i\, l_i(\vec{r}), \tag{2}$$

i.e., the field is composed of dipole fields $l_i(\vec{r})$ corresponding to unit dipole sources. When the field is also discretized to (or sampled at) a set of points, the equation reduces to a linear matrix equation $\mathbf{y} = \mathbf{L}\mathbf{q}$, where $\mathbf{L}$ is the lead-field matrix (Hämäläinen et al., 1993). Please note that with an arrow we denote a Euclidean vector in 3-D space while bolded letters refer to column vectors and matrices. In MEG and EEG, we sample on a curved 2-D surface embedded in 3-D space. Conventional 1-D Fourier analysis can be extended to spatial-frequency (SF) analysis on such surfaces using a suitable orthonormal function basis. The generating equation for the spatial-frequency basis is the Helmholtz equation (Levy, 2006)

$$\Delta_{\mathrm{LB}}\,\psi_n = -k_n^2\,\psi_n, \tag{3}$$

which is the eigenvalue equation of the Laplace–Beltrami (LB) operator $\Delta_{\mathrm{LB}}$, i.e., the surface Laplacian. The eigenfunctions $\psi_n$ are modes of standing waves on the surface, with increasing spatial frequency and complexity towards higher $n$ (Fig. 3B). The square roots of the eigenvalues, $k_n$, generalize the concept of wavenumber: in 1-D, they correspond to the angular wavenumbers of sinusoids. Generalizing this relationship to eigenfunctions on a surface, the wavenumbers correspond to wavelengths approximately as $\lambda_n \approx 2\pi/k_n$. On a compact surface, the eigenvalue spectrum is discrete and the eigenfunctions form an orthonormal basis with respect to the inner product (Levy, 2006)

$$\langle \psi_n, \psi_m \rangle = \int_S \psi_n(\vec{r})\,\psi_m(\vec{r})\,\mathrm{d}S = \delta_{nm}, \tag{4}$$

where $\delta_{nm}$ is the Kronecker delta.
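The generalization from the Fourier basis to LB eigenfunctions can be illustrated numerically. In this minimal sketch (an illustration, not the paper's computation), a uniformly sampled ring plays the role of a closed 1-D "surface": the eigenvectors of its discrete Laplacian are exactly the discrete sinusoids, and the same construction on a triangulated surface yields the SF basis.

```python
import numpy as np

# Discrete Laplacian of a uniformly sampled ring (a closed 1-D "surface");
# its eigenvectors generalize to the SF basis on a triangulated surface.
n = 64
I = np.eye(n)
L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)
evals, evecs = np.linalg.eigh(L)        # ascending eigenvalues ~ k^2

assert evals[0] < 1e-10                 # constant mode, k = 0
assert np.all(np.diff(evals) > -1e-10)  # spectrum nondecreasing with spatial frequency
# The eigenvectors form an orthonormal basis: discrete analogue of Eq. (4).
assert np.allclose(evecs.T @ evecs, np.eye(n), atol=1e-8)
```

The near-degenerate eigenvalue pairs correspond to sine/cosine modes of the same wavenumber, mirroring the standing-wave modes on a surface.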
Fig. 3

A: Field covariances corresponding to two prior models: the field is bandlimited in the spatial-frequency basis (SF bandlimited) and the field is generated by identically and independently distributed neural sources (IID source prior). A, left: Prior coefficient covariance in the SF basis and in the eigenbasis of the kernel. The covariance matrix and its diagonal are shown. A, right: Spatial covariance. The spatial kernel and its decay are shown for one position on the measurement surface. The rightmost column shows the spatial variance. B: Comparison of the spatial-frequency (SF) basis and the eigenbasis of the IID-source kernel. Each column shows the SF function and eigenfunction with the corresponding index. Wavenumbers and approximate wavelengths of the SF functions are displayed. The energy spectrum of each eigenfunction in the SF basis is shown at the bottom. The SF basis was generated using the zero-Neumann boundary condition. The eigenfunctions comprise multiple SF components with an average wavenumber proportional to the index. C: Example fields and their energy spectra in both the SF basis and the eigenbasis. Energy is distributed to lower-index components in the eigenbasis than in the SF basis, indicating compression by the eigenbasis. For details on the computation, see Section 3.1. Units in the plots are arbitrary.

The SF basis can be used to analyze the spatial-frequency content of the fields. As the basis is orthonormal, any field $\phi$ on the surface can be expressed as a linear combination of SF basis functions

$$\phi(\vec{r}) = \sum_n c_n\,\psi_n(\vec{r}), \tag{5}$$

where the coefficients $c_n = \langle \phi, \psi_n \rangle$ are projections of $\phi$ on $\psi_n$. Due to orthonormality, the field energy can be decomposed as

$$\|\phi\|^2 = \langle \phi, \phi \rangle = \sum_n c_n^2, \tag{6}$$

a relation similar to Parseval's theorem for Fourier series. The squared coefficients $c_n^2$ comprise the energy spectrum of the field in the sense of spectral signal analysis. Sampling theorems typically assume that the signals are bandlimited [e.g., in 1-D (Shannon, 1949); on a sphere (McEwen and Wiaux, 2011); on a surface/manifold (Pesenson, Pesenson, 2015)]. In our context, a bandlimited field can be expressed with a limited number $N$ of SF basis functions, i.e., $c_n = 0$ for $n > N$ ($k_n > k_N$). On a surface, the sampling theorem by Pesenson (2015) states that a bandlimited field is uniquely determined by its values on a grid with a uniform sample spacing inversely proportional to the bandlimit wavenumber $k_N$, with a proportionality constant that depends on the surface geometry. Figure 1 shows a spatial-frequency representation of a dipole field in MEG and EEG. Although most of the field energy in general resides at low spatial frequencies, fields are not strictly bandlimited. Therefore, standard sampling theorems are not directly applicable.
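The energy decomposition and the notion of an effective bandlimit can be sketched numerically. This is an illustration under stated assumptions: a ring-Laplacian eigenbasis stands in for the surface SF basis, and a Gaussian bump stands in for a dipole field.

```python
import numpy as np

# Parseval-type energy decomposition and an effective (99%-energy) bandlimit.
n = 128
x = np.arange(n)
I = np.eye(n)
_, Psi = np.linalg.eigh(2*I - np.roll(I, 1, 0) - np.roll(I, -1, 0))  # columns: basis functions

phi = np.exp(-0.5 * ((x - n/2) / 4.0)**2)   # smooth field, not strictly bandlimited
c = Psi.T @ phi                             # coefficients c_n = <phi, psi_n>

assert np.isclose(phi @ phi, c @ c)         # energy is preserved in the basis

# Smallest N whose lowest-frequency components carry 99% of the energy:
N99 = int(np.searchsorted(np.cumsum(c**2) / (c @ c), 0.99)) + 1
assert N99 < n // 2                         # effective bandlimit is well below n
```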
Fig. 1

Spatial-frequency representations of dipole fields in MEG and EEG. ‘On scalp’ refers to MEG measurement within millimetres of the scalp and ‘off scalp’ to measurement about 2 cm from it. For details, see Section 3.1. Left: Representation of a dipole field with an increasing number of spatial-frequency components. The maximum wavenumber (1/cm) and the cumulative energy of the representation are shown. Right: Energy spectrum and cumulative energy of the field as a function of wavenumber and wavelength (upper x-axis; estimated from the wavenumber as $\lambda \approx 2\pi/k$).


Sampling and reconstruction assuming a bandlimit

Here, we analyze how an effective bandlimit for the field can be chosen by studying reconstruction of $\phi$ from noisy samples. This section introduces the methodology used in Results Section 3.2. The field can be represented as

$$\phi(\vec{r}) = \phi_N(\vec{r}) + \phi_{\mathrm{r}}(\vec{r}), \tag{7}$$

where $\phi_N$ is the $N$-bandlimited representation of the field and $\phi_{\mathrm{r}}$ the residual. We assume that the field can be approximated with the first $N$ SF basis functions as $\phi \approx \phi_N$. To simplify the notation of Eq. (7), we express the summation over the basis functions as a dot product $\phi_N(\vec{r}) = \boldsymbol{\psi}(\vec{r})^\top\mathbf{c}$, where $\boldsymbol{\psi}(\vec{r}) = [\psi_1(\vec{r}), \ldots, \psi_N(\vec{r})]^\top$ and $\mathbf{c} = [c_1, \ldots, c_N]^\top$. We model noisy samples of $\phi$ at locations $\vec{r}_1, \ldots, \vec{r}_P$ stacked in a column vector $\mathbf{y}$ as

$$\mathbf{y} = \boldsymbol{\Psi}\mathbf{c} + \mathbf{n}, \tag{8}$$

where $\boldsymbol{\Psi}$ is a $P \times N$ matrix with the vectors $\boldsymbol{\psi}(\vec{r}_p)^\top$ on its rows, and $\mathbf{n}$ is white measurement noise with variance $\sigma^2$. When $P \geq N$, we can estimate the coefficients optimally using the least-squares method (Kailath et al., 2000)

$$\hat{\mathbf{c}} = (\boldsymbol{\Psi}^\top\boldsymbol{\Psi})^{-1}\boldsymbol{\Psi}^\top\mathbf{y} = \boldsymbol{\Psi}^{+}\mathbf{y}, \tag{9}$$

and the bandlimited field can be reconstructed as

$$\hat{\phi}_N(\vec{r}) = \boldsymbol{\psi}(\vec{r})^\top\hat{\mathbf{c}} = \boldsymbol{\psi}(\vec{r})^\top\boldsymbol{\Psi}^{+}\mathbf{y}, \tag{10}$$

where the functions $\boldsymbol{\psi}(\vec{r})^\top\boldsymbol{\Psi}^{+}$ are interpolation functions for the samples $\mathbf{y}$. To study the reconstruction error, we update the measurement model as $\mathbf{y} = \boldsymbol{\Psi}\mathbf{c} + \mathbf{y}_{\mathrm{r}} + \mathbf{n}$, where $\mathbf{y}_{\mathrm{r}}$ corresponds to the residual field $\phi_{\mathrm{r}}$. Inserting into Eq. (9), the coefficient estimate can be expressed as

$$\hat{\mathbf{c}} = \mathbf{c} + \boldsymbol{\Psi}^{+}(\mathbf{y}_{\mathrm{r}} + \mathbf{n}) = \mathbf{c} + \boldsymbol{\epsilon}, \tag{11}$$

where $\boldsymbol{\epsilon}$ is the error in the estimated coefficients. The error consists of two parts: $\boldsymbol{\Psi}^{+}\mathbf{y}_{\mathrm{r}}$ is the error due to aliasing from components outside the band, and $\boldsymbol{\Psi}^{+}\mathbf{n}$ is the random error due to noise. The expected reconstruction error can now be evaluated as

$$\mathrm{E}\!\left[\|\phi - \hat{\phi}_N\|^2\right] = \|\phi_{\mathrm{r}}\|^2 + \mathrm{E}\!\left[\|\boldsymbol{\epsilon}\|^2\right]. \tag{12}$$

Since the basis functions are orthonormal and the noise is white with $\mathrm{E}[\mathbf{n}] = \mathbf{0}$ and $\mathrm{E}[\mathbf{n}\mathbf{n}^\top] = \sigma^2\mathbf{I}$, we can rewrite the error as

$$\mathrm{E}\!\left[\|\phi - \hat{\phi}_N\|^2\right] = \|\phi_{\mathrm{r}}\|^2 + \|\boldsymbol{\Psi}^{+}\mathbf{y}_{\mathrm{r}}\|^2 + \sigma^2\,\mathrm{tr}\!\left[(\boldsymbol{\Psi}^\top\boldsymbol{\Psi})^{-1}\right]. \tag{13}$$

If the noise term dominates the reconstruction error, $\hat{\phi}_N$ can be considered a reasonably good approximation of $\phi$; the white-noise variance can then be used to determine the number of components and the number of samples needed for the reconstruction. If the noise level is unknown, a threshold such as a 1% expected residual can be used to determine the effective bandlimit. This threshold may be obtained by finding the $N$ that covers at least 99% of the energy of every dipole field $l_i$. This bounds the expected error of a random distribution of such sources (Appendix A).
A similar threshold has been used previously (e.g., Ahonen, Hämäläinen, Ilmoniemi, Kajola, Knuutila, Simola, Vilkman, 1993, Grover, Venkatesh, 2016). Lastly, both the noise and the aliasing error depend on the sample positions via the matrix $\boldsymbol{\Psi}$. We will quantify the optimality of the sample positions in Section 2.5.
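The least-squares reconstruction from noisy samples (Eqs. (9)–(10)) can be sketched as follows. This is a minimal illustration under assumptions: a 1-D ring eigenbasis replaces the surface SF basis, and the field is taken strictly bandlimited so only the noise term of the error remains.

```python
import numpy as np

# Least-squares reconstruction of a bandlimited field from noisy point samples.
rng = np.random.default_rng(0)
n, N, P = 256, 20, 60                   # grid size, bandlimit, number of samples
I = np.eye(n)
_, basis = np.linalg.eigh(2*I - np.roll(I, 1, 0) - np.roll(I, -1, 0))

c_true = rng.standard_normal(N)
phi = basis[:, :N] @ c_true             # strictly N-bandlimited field

pts = rng.choice(n, size=P, replace=False)
Psi = basis[pts, :N]                    # P x N sampling matrix
sigma = 0.01
y = phi[pts] + sigma * rng.standard_normal(P)   # noisy samples

c_hat = np.linalg.lstsq(Psi, y, rcond=None)[0]  # least-squares coefficients, Eq. (9)
phi_hat = basis[:, :N] @ c_hat                  # reconstructed field, Eq. (10)

# With no out-of-band energy, no aliasing occurs and the relative
# reconstruction error stays near the noise level.
err = np.linalg.norm(phi - phi_hat) / np.linalg.norm(phi)
assert err < 0.2
```

Adding out-of-band components to `phi` while keeping `N` fixed would introduce the aliasing term of the error decomposition.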

Sampling and reconstruction of random fields

In this Section, we introduce random fields that will be used to model and quantify spatial field correlations as well as to introduce ways to optimize sample locations. We express the field as $\phi(\vec{r}) = \mathbf{f}(\vec{r})^\top\mathbf{x}$, where $\mathbf{f}(\vec{r})$ can be any set of (not necessarily orthonormal) basis functions (e.g., the SF functions or the dipole fields). We incorporate prior knowledge by assigning a prior distribution for the coefficients $\mathbf{x}$, which we consider random variables. We assume them to be Gaussian with a joint probability density $\mathcal{N}(\mathbf{x}_0, \boldsymbol{\Sigma}_0)$, where $\mathbf{x}_0$ is the prior mean of $\mathbf{x}$ and $\boldsymbol{\Sigma}_0$ is the covariance matrix of $\mathbf{x}$. A linear combination of basis functions with Gaussian random coefficients is a Gaussian random field, an extension of the Gaussian process (Williams and Rasmussen, 2006) to 3-D space. A random field can be described by its mean field

$$m(\vec{r}) = \mathrm{E}[\phi(\vec{r})] = \mathbf{f}(\vec{r})^\top\mathbf{x}_0 \tag{14}$$

and covariance kernel

$$C(\vec{r}, \vec{r}\,') = \mathrm{Cov}[\phi(\vec{r}), \phi(\vec{r}\,')] = \mathbf{f}(\vec{r})^\top\boldsymbol{\Sigma}_0\,\mathbf{f}(\vec{r}\,'). \tag{15}$$

Every finite collection of samples of a random field is jointly distributed as $\mathcal{N}(\mathbf{m}, \mathbf{C})$, where $\mathbf{C}$ is the sample covariance matrix defined elementwise as $C_{ij} = C(\vec{r}_i, \vec{r}_j)$, and $\mathbf{m}$ is the sample mean, i.e., $m_i = m(\vec{r}_i)$. The effect of measurement is encoded in the posterior distribution of the field, as illustrated in Fig. 2. Having a collection of noisy samples $\mathbf{y}$ at sampling points $\vec{r}_1, \ldots, \vec{r}_P$, modeling the noise as white with variance $\sigma^2$, and assuming $\mathbf{x}_0 = \mathbf{0}$, the posterior mean field and covariance are (Williams and Rasmussen, 2006)

$$\hat{\phi}(\vec{r}) = \mathbf{c}(\vec{r})^\top(\mathbf{C} + \sigma^2\mathbf{I})^{-1}\mathbf{y} \tag{16}$$

$$C_{\mathrm{post}}(\vec{r}, \vec{r}\,') = C(\vec{r}, \vec{r}\,') - \mathbf{c}(\vec{r})^\top(\mathbf{C} + \sigma^2\mathbf{I})^{-1}\mathbf{c}(\vec{r}\,'), \tag{17}$$

where $\mathbf{c}(\vec{r})$ is the covariance between the field at the sample points and the field at the point $\vec{r}$: $c_i(\vec{r}) = C(\vec{r}_i, \vec{r})$. Alternatively, the posterior mean and covariance of $\phi(\vec{r})$ can be expressed as [Kailath et al. (2000), Sec. 3.4] $\hat{\phi}(\vec{r}) = \mathbf{f}(\vec{r})^\top\hat{\mathbf{x}}$ and $C_{\mathrm{post}}(\vec{r}, \vec{r}\,') = \mathbf{f}(\vec{r})^\top\boldsymbol{\Sigma}_{\mathrm{post}}\,\mathbf{f}(\vec{r}\,')$, where $\hat{\mathbf{x}}$ is the posterior mean of the coefficients and $\boldsymbol{\Sigma}_{\mathrm{post}}$ is the associated posterior covariance matrix.
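The posterior update above can be sketched with a toy Gaussian-process regression on a 1-D domain. The squared-exponential kernel and the sample locations are illustrative assumptions, not the field kernels used in the paper.

```python
import numpy as np

# Gaussian-process posterior mean and variance from a few noisy samples.
x = np.linspace(0.0, 1.0, 200)

def kern(a, b, ell=0.1):                # prior covariance kernel C(r, r')
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

idx = np.array([20, 80, 140, 180])      # sampling points
sigma2 = 1e-4                           # sensor-noise variance
y = np.sin(2 * np.pi * x[idx])          # samples of a smooth underlying field

C = kern(x[idx], x[idx])                # sample covariance matrix
c = kern(x, x[idx])                     # covariance between grid and samples
A = np.linalg.inv(C + sigma2 * np.eye(len(idx)))
post_mean = c @ A @ y                   # posterior mean field
post_var = np.ones_like(x) - np.einsum('ij,jk,ik->i', c, A, c)  # posterior variance

assert post_var[idx].max() < 1e-2       # variance collapses at the sample points
assert post_var.max() > 0.5             # ...but stays near the prior level far away
```

Each added sample lowers the posterior variance in a neighbourhood set by the kernel's correlation length, as in Fig. 2.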
Fig. 2

Bayesian estimation of a random field. Left: Prior variance and a ground-truth field. Right: Posterior variance and the field reconstruction (the posterior mean field) after 1, 15, 30 and 100 noisy measurements (white dots) taken uniformly on the surface. Each additional sample decreases the posterior variance, yielding a better reconstruction of the field.

The posterior mean depends linearly on the samples and can be considered as the reconstruction of the field, similar to Eq. (10). The posterior covariance describes the uncertainty in the field after the measurements; the noisier the samples, the more uncertainty is left. Uncertainty is reduced at all points that are correlated with the field at the sampling location.

Kernels for bioelectric potential and magnetic field

In this Section, we outline how to construct prior covariance kernels (Eq. (15)) for a random field on the measurement surface. In the later Sections, we discuss how these kernels can be used to obtain sampling grids.

Spatial-frequency kernel

The SF basis functions can be used to express the prior kernel as $C(\vec{r}, \vec{r}\,') = \boldsymbol{\psi}(\vec{r})^\top\boldsymbol{\Sigma}_{\mathrm{SF}}\,\boldsymbol{\psi}(\vec{r}\,')$, where $\psi_n$ is the $n$th eigenfunction of the LB operator and $\boldsymbol{\Sigma}_{\mathrm{SF}}$ is the covariance matrix of the spatial-frequency coefficients. For more details of kernel representation with Laplacian eigenfunctions, see Solin and Särkkä (2020). The assumption of a bandlimited field (Section 2.1) can be encoded by setting the diagonal of $\boldsymbol{\Sigma}_{\mathrm{SF}}$ to a constant up to $N$ components and to zero above; the highest wavenumber of the field is then assumed to be $k_N$. The covariance structure of this kernel is visualized in Fig. 3A both in the coefficient and spatial domains. The spatial profiles are sinc-like, similar to reconstruction kernels in uniform 1-D sampling of bandlimited functions (Jerri, 1977), and the spatial variance is roughly uniform.

Dipole-field kernel

Considering the source amplitudes $q_i$ in Eq. (2) to be Gaussian random variables (de Munck et al., 1992), the covariance kernel is

$$C(\vec{r}, \vec{r}\,') = \mathbf{l}(\vec{r})^\top\boldsymbol{\Sigma}_q\,\mathbf{l}(\vec{r}\,'), \tag{18}$$

where $\boldsymbol{\Sigma}_q$ is the source covariance matrix. The dipole fields $l_i$ can be considered as the basis functions that assemble the kernel and $q_i$ as the associated (possibly correlated) random coefficients. Compared to the SF kernel, more detailed prior knowledge of the random field can be encoded in this form. For a set of sampling points, the kernel reduces to a covariance matrix $\mathbf{C} = \mathbf{L}\boldsymbol{\Sigma}_q\mathbf{L}^\top$, where $\mathbf{L}$ is the discrete lead-field matrix. A relevant kernel can be constructed by assuming the source amplitudes identically and independently distributed (IID source prior): $\boldsymbol{\Sigma}_q = \sigma_q^2\mathbf{I}$. This kernel has non-uniform spatial variance and the spatial profiles are asymmetric (Fig. 3A).

Kernel eigendecomposition and eigenbasis

Principal component analysis can be extended to random processes using the Karhunen–Loève decomposition, which corresponds to finding the eigenfunctions of the covariance kernel (Loève, 1978, Mercer, 1909). The Karhunen–Loève theorem (Stark and Woods, 1986, Section 7.6) states that the eigenbasis has the minimal truncation error among all possible bases for representing the random process with a given number of components. Any kernel can be written using its eigendecomposition as

$$C(\vec{r}, \vec{r}\,') = \sum_n \lambda_n\,u_n(\vec{r})\,u_n(\vec{r}\,') = \mathbf{u}(\vec{r})^\top\boldsymbol{\Lambda}\,\mathbf{u}(\vec{r}\,'), \tag{19}$$

where the eigenfunctions $u_n$ form an orthonormal basis on the measurement surface and $\boldsymbol{\Lambda}$ is a diagonal matrix with the variances $\lambda_n$ of the eigenfunctions on the diagonal. When studying the degrees of freedom in a random field generated by IID random sources, it is useful to express the total field variance as a sum of the eigenvalues, i.e., $\int_S C(\vec{r}, \vec{r})\,\mathrm{d}S = \sum_n \lambda_n$. With similar arguments as in Section 2.2, we can get an estimate for the number of samples needed to reconstruct the random field, e.g., by truncating the series up to the number of components that capture 99% of the total variance. A similar method has been applied before to study the sampling of EEG (Vaidyanathan and Buckley, 1997). Additionally, the number of components in the dipole fields can be analyzed by projecting them on the eigenbasis of the kernel as done with the SF basis in Section 2.1. In Fig. 3B, we compare the SF basis and the eigenbasis of the IID-source kernel.

Noise kernels and SNR

Measurement noise can be modeled as a random field with an associated covariance kernel $C_{\mathrm{n}}(\vec{r}, \vec{r}\,')$. Sensor noise is typically independent across the sensors with an equal variance $\sigma^2$, which can be modeled as an equivalent spatial white noise, i.e., a random field with the kernel $C_{\mathrm{n}}(\vec{r}, \vec{r}\,') = \sigma^2\delta(\vec{r} - \vec{r}\,')$. Projected to any orthonormal basis, the covariance of white noise is proportional to the identity matrix. Noise that originates from sources other than the sensors is generally colored (i.e., not white) and can be modeled with a covariance function $C_{\mathrm{n}}(\vec{r}, \vec{r}\,') = \mathbf{g}(\vec{r})^\top\boldsymbol{\Sigma}_{\mathrm{n}}\,\mathbf{g}(\vec{r}\,')$, where $\mathbf{g}$ contains the basis functions for the noise field and $\boldsymbol{\Sigma}_{\mathrm{n}}$ is the prior covariance of the noise coefficients. We can also define the signal-to-noise ratio (SNR) of a random field. We first diagonalize the noise kernel as in Eq. (19): $C_{\mathrm{n}}(\vec{r}, \vec{r}\,') = \mathbf{u}_{\mathrm{n}}(\vec{r})^\top\boldsymbol{\Lambda}_{\mathrm{n}}\,\mathbf{u}_{\mathrm{n}}(\vec{r}\,')$, where $u_{\mathrm{n},i}$ are the eigenfunctions of the noise kernel and $\boldsymbol{\Lambda}_{\mathrm{n}}$ contains the associated noise variances on its diagonal. The noise eigenfunctions can be used to compose a whitening kernel $W(\vec{r}, \vec{r}\,') = \mathbf{u}_{\mathrm{n}}(\vec{r})^\top\boldsymbol{\Lambda}_{\mathrm{n}}^{-1/2}\,\mathbf{u}_{\mathrm{n}}(\vec{r}\,')$, which, when applied to the noise field, yields spatial white noise. Applying the whitener to a random field $\phi$, we get a whitened field $\tilde{\phi}$ with kernel $\tilde{C}$. The spatial SNR, i.e., the expected SNR for a sample at $\vec{r}$, is the variance of the whitened field

$$\mathrm{SNR}(\vec{r}) = \tilde{C}(\vec{r}, \vec{r}). \tag{20}$$

With white noise, the whitening operation only acts as scaling with $1/\sigma$. With colored noise, it also modifies the covariance structure of the random field.
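The kernel eigendecomposition and the 99%-variance component count can be sketched numerically. In this illustration the "lead field" is a random smooth mixing matrix standing in for a head model (an assumption); with an IID source prior the discretized kernel is simply L Lᵀ.

```python
import numpy as np

# Eigendecomposition of a discretized covariance kernel and the number of
# components capturing 99% of the total variance.
rng = np.random.default_rng(2)
P, M = 120, 500                               # field points, sources
L = np.cumsum(rng.standard_normal((P, M)), axis=0)
L /= np.sqrt(np.arange(1, P + 1))[:, None]    # normalize the random-walk variance

C = L @ L.T / M                               # discretized kernel, IID source prior
lam = np.linalg.eigvalsh(C)[::-1]             # component variances, descending

assert np.isclose(lam.sum(), np.trace(C))     # total variance = sum of eigenvalues
n99 = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99)) + 1
assert n99 < P                                # 99% of variance needs fewer components
```

The decaying eigenvalue spectrum is what limits the effective degrees of freedom of the field, and hence the number of beneficial samples.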

Optimality criteria of sampling grids

In this and the following Section, we discuss how an optimal sampling grid can be constructed. A classical approach would be to choose the sampling locations to minimize the expected reconstruction error or the posterior variance, involving the inversion either of $\boldsymbol{\Psi}^\top\boldsymbol{\Psi}$ (Section 2.2) or of $\mathbf{C} + \sigma^2\mathbf{I}$ (Section 2.3). One way to obtain an optimal sampling is thus to find a grid that maximizes the ”size” of either of these matrices. The matrix size can be quantified in multiple ways, giving different optimality criteria in the context of experimental design (Chaloner, Verdinelli, 1995, Krause, Singh, Guestrin, 2008). A common criterion is A-optimality (Krause et al., 2008), measured as the trace of the (possibly weighted) posterior covariance matrix. However, when studying sampling on a bounded surface as in MEG and EEG, we should rather measure how well the sampling pattern captures the overall variance on the surface. To measure optimality in this sense, we define the fractional explained variance

$$\mathrm{FEV} = 1 - \frac{\int_S C_{\mathrm{post}}(\vec{r}, \vec{r})\,\mathrm{d}S}{\int_S C(\vec{r}, \vec{r})\,\mathrm{d}S}, \tag{21}$$

which ranges from 0 to 1, i.e., from no to all variance explained. This measure is related to I-optimality or integrated optimality (Atkinson, 2014). The matrix size can also be measured using the determinant; this criterion is called D-optimality (Chaloner and Verdinelli, 1995). This leads to maximization of the total information (Appendix B)

$$\mathrm{TI} = \frac{1}{2}\log_2\det\!\left(\mathbf{I} + \tilde{\mathbf{C}}\right), \tag{22}$$

where $\tilde{\mathbf{C}}$ is the whitened sample covariance matrix. The D-optimal grid minimizes the posterior entropy of the random-field coefficients (Sebastiani and Wynn, 2000), i.e., maximizes the information gained, e.g., from the neural sources. Previously, total information has been used in studies comparing MEG sensor arrays (Iivanainen et al., 2017, Kemppainen, Ilmoniemi, 1989, Nenonen, Kajola, Simola, Ahonen, 2004, Riaz, Pfeiffer, Schneiderman, 2017, Schneiderman, 2014).
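The total-information criterion can be sketched as a log-determinant of the whitened sample covariance. The toy covariance below is an assumption standing in for the field and noise kernels of the paper; unit sensor-noise variance makes the covariance already whitened.

```python
import numpy as np

# Total information TI = 1/2 log2 det(I + C~) for nested sampling grids.
rng = np.random.default_rng(3)
P, M = 60, 400
Lf = np.cumsum(rng.standard_normal((P, M)), axis=0)   # smooth toy lead field
C = Lf @ Lf.T / M                                     # field covariance, unit noise

def total_info(idx):
    Ct = C[np.ix_(idx, idx)]                          # whitened sample covariance
    return 0.5 * np.linalg.slogdet(np.eye(len(idx)) + Ct)[1] / np.log(2)

ti = [total_info(list(range(k))) for k in (5, 15, 30, 60)]
# Adding a sample can never remove information, so TI grows monotonically.
assert all(b > a for a, b in zip(ti, ti[1:]))
```

`slogdet` is used instead of `det` to keep the computation numerically stable for large, well-correlated grids.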

Sampling grid construction

We now propose a method to obtain sampling grids that maximize the total information for given prior assumptions. In Appendix B, we show that this corresponds to maximizing the diagonal elements of the whitened sample covariance matrix while simultaneously minimizing the absolute values of the non-diagonal elements. In other words, maximizing total information is equivalent to finding the sampling grid with the least correlations and maximal SNR. As an illustrative example, we first discuss how the sample spacing in the Shannon–Nyquist theorem (Jerri, 1977, Shannon, 1949) can be seen to result from information maximization. When treating bandlimited functions as random processes with a uniform prior variance for the spatial-frequency coefficients up to a cutoff wavenumber, the kernel can be calculated as the Fourier transform of a boxcar function. This results in a sinc-function kernel with zeros at equispaced intervals, illustrated in Fig. 4. When the samples are placed at the zero crossings, the sample covariance matrix becomes diagonal, maximizing the total information.
Fig. 4

Illustration of covariance kernel and sampling. Left: Coefficient variance as a function of spatial frequency for a strictly bandlimited process and a process with a smooth variance decay. Middle: The covariance functions and samples. The red and cyan curves describe the covariance functions for two samples while the spikes show the sample positions. Equispaced samples fit the zeros of the sinc functions. For a non-sinc covariance, there is no trivial choice for the optimal sampling distance. Right: Sample covariance matrices for the two cases, i.e., the covariance functions sampled at the equispaced locations. Dashed lines indicate the rows corresponding to the covariance functions on the left.

For a general case, we reformulate the problem using the kernel eigenfunctions. We consider a (whitened) kernel with an eigendecomposition $\tilde{C}(\vec{r}, \vec{r}\,') = \mathbf{u}(\vec{r})^\top\boldsymbol{\Lambda}\,\mathbf{u}(\vec{r}\,')$. The decomposition can be rewritten as $\tilde{C}(\vec{r}, \vec{r}\,') = \mathbf{v}(\vec{r})^\top\mathbf{v}(\vec{r}\,')$, where $\mathbf{v}(\vec{r}) = \boldsymbol{\Lambda}^{1/2}\mathbf{u}(\vec{r})$, i.e., the covariance between points $\vec{r}_i$ and $\vec{r}_j$ can be calculated as a Euclidean dot product between $\mathbf{v}(\vec{r}_i)$ and $\mathbf{v}(\vec{r}_j)$. Rewriting the squared distance between these vectors as

$$\|\mathbf{v}(\vec{r}_i) - \mathbf{v}(\vec{r}_j)\|^2 = \tilde{C}(\vec{r}_i, \vec{r}_i) + \tilde{C}(\vec{r}_j, \vec{r}_j) - 2\tilde{C}(\vec{r}_i, \vec{r}_j), \tag{23}$$

we can see that maximizing this distance with respect to the sample positions $\vec{r}_i$ and $\vec{r}_j$ corresponds to maximizing the diagonal elements of the sample covariance matrix while simultaneously minimizing the non-diagonal elements. The sample configuration that yields maximal information can be found with the farthest-point sampling algorithm (Eldar, Lindenbaum, Porat, Zeevi, 1997, Schlömer, Heck, Deussen, 2011), which attempts to maximize the pair-wise minimum distances between the sample points. Instead of maximizing Euclidean distances on the surface, we maximize the kernel distances $\|\mathbf{v}(\vec{r}_i) - \mathbf{v}(\vec{r}_j)\|$; otherwise the algorithm works similarly. If the covariances are positive for neighbouring samples, the algorithm leads to minimization of the non-diagonal elements, and thereby to maximization of the total information. Generally, the sampling grid given by the method follows the covariance structure of the kernel (Fig. 3).
For example, approximately uniform sampling grids can be generated using spatial-frequency-bandlimited covariance with circularly symmetric sinc-like kernels (Section 3.4). This showcases a connection between our sampling method and the sampling theorems that assume bandlimited functions and use uniform sampling (Pesenson, 2015, Shannon, 1949).
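The greedy construction can be sketched as follows. This is a minimal 1-D illustration under assumptions: a squared-exponential kernel stands in for the whitened field kernel, and the kernel-induced distance of the text replaces the Euclidean distance in farthest-point sampling.

```python
import numpy as np

# Farthest-point sampling in the kernel-induced distance
# d^2(r_i, r_j) = C(r_i, r_i) + C(r_j, r_j) - 2 C(r_i, r_j).
x = np.linspace(0.0, 1.0, 400)
C = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.05**2)

def kdist2(i, j):
    return C[i, i] + C[j, j] - 2.0 * C[i, j]

chosen = [0]                            # seed sample
for _ in range(9):                      # greedily add the farthest point
    d2 = np.array([min(kdist2(i, j) for j in chosen) for i in range(len(x))])
    chosen.append(int(np.argmax(d2)))

gaps = np.diff(np.sort(x[chosen]))
# For a translation-invariant kernel the greedy grid comes out near-uniform,
# echoing the link between bandlimited priors and uniform sampling.
assert gaps.max() / gaps.min() < 5.0
```

With a spatially varying kernel (e.g., the IID-source prior restricted to an ROI), the same loop concentrates samples where the prior variance is high.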

Simulations

In this Section, we simulate spatial-frequency (SF) spectra of dipole fields and covariance kernels of random fields due to random source distributions. Further, we construct optimal sampling grids for different kernels and evaluate their performances. We begin by outlining the numerical methods.

Models and computations

We computed the electric potential and magnetic field using the linear Galerkin boundary-element method with the isolated-source approach (Stenroos, Mäntynen, Nenonen, 2007, Stenroos, Sarvas, 2012). The head was assumed to be a piecewise-constant isotropic conductor with four compartments: brain (the white and gray matter as well as the cerebellum), cerebrospinal fluid (CSF), skull and scalp. The conductivity of the soft tissues (brain and scalp) was set to 0.33 S/m, that of CSF to 1.79 S/m, and that of the skull to 0.33/50 S/m. The head model was built from the example data of the SimNIBS software (version 2.0; Windhoff et al., 2013) in an earlier study by Stenroos and Nummenmaa (2016). In addition to the conductivity interfaces, the model included the gray–white-matter boundary. We discretized the source-current distribution into point-like dipolar source-current elements placed on the gray–white-matter boundary at 3-mm average spacing. The 20 324 sources were oriented normal to the surface, following the anatomical orientation of the apical dendrites of pyramidal neurons. We calculated the electric potential and the normal component of the magnetic field at the nodes of triangular meshes that represented the measurement surfaces. The measurement surfaces were generated from a dense scalp mesh by cutting the mesh above a plane defined roughly by the ears and nose. The cut mesh was resampled to 5 404 nodes and 10 421 triangles. The mesh for the electric potential (EEG) was obtained by projecting the node positions onto the scalp. An ‘on-scalp’ MEG surface was generated by inflating the mesh 4.5 mm away from the scalp. A more distant ‘off-scalp’ MEG surface was obtained by further inflating the mesh and smoothing it with the function smoothsurf in the iso2mesh MATLAB toolbox (Fang and Boas, 2009). The median distance of the nodes of the off-scalp MEG surface to the scalp was 2.4 cm (range 2.0–4.4 cm).
We further computed the lead-field matrices for the 102 magnetometers of a commercial SQUID-MEG system (MEGIN Oy, Helsinki, Finland) and for a 60-channel custom version of the ANT waveguardTM EEG cap (ANT Neuro, Hengelo, The Netherlands). The sensor locations were derived from a measurement file of the MEGIN system and manually coregistered to the head geometry used here. Of the 60-channel EEG layout, 57 electrodes were employed in the measurement; we used those for the lead-field computation. We denote these arrays as SQUID102 and EEG57, respectively (Fig. 10C & D). The SQUID magnetometers were modeled with four integration points as described in the MNE software (Gramfort et al., 2014).
Fig. 10

Spatial sampling of on-scalp MEG (left panel; A, C and E) and EEG (right; B, D and F) fields due to uncorrelated sources in an ROI around the motor cortex (shown in the center in red on the inflated cortical surface). A & B: Distribution of total information (TI) among the eigencomponents and the spatial distribution of SNR computed with spatial white noise as well as with colored noise (brain background activity + white noise). C & D: Example grids constructed with IID source and bandlimited SF (uniform sampling) priors. With the IID source prior, two noise priors (white and colored) were considered. Sensor layouts of the SQUID102 and EEG57 arrays are also shown. E & F: TI as a function of the number of samples in the grids computed using the two noise models. Wholehead 100 refers to a uniform wholehead grid with 100 spatial samples.

The SF basis vectors were computed by discretizing the LB operator on the triangle mesh in the weak form (Reuter et al., 2009). The discrete form of the eigenvalue Eq. (3) is Lψ = λMψ, where ψ contains the nodal values of the SF function, M is a mass matrix that takes into account the overlap of the piecewise-linear basis functions, and L is the discrete LB operator. The matrices L and M were computed using the MATLAB functions cotmatrix and massmatrix included in gptoolbox (Jacobson et al., 2018). The zero-Neumann boundary condition, which sets the outwards-facing derivative of ψ to zero at the mesh boundary, was used in the spectral energy analysis, while the zero-Dirichlet boundary condition, setting ψ to zero at the boundary, was used in the grid construction. The energy spectra were calculated by discretizing the inner product in Eq. (4) as ψᵀMf, where f contains the nodal field values. The dipole-field kernel (Section 2.4) was constructed by assuming the sources on the cortically-constrained source space to be identically and independently distributed. The eigenbasis of the kernel was computed by discretizing the kernel on the surface and solving for the discrete eigenvectors.
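The discrete LB eigenproblem and the discretized inner product can be sketched in Python for a small closed mesh as follows. This is an illustrative re-implementation (the paper used gptoolbox's cotmatrix/massmatrix in MATLAB), and the symbol names are ours:

```python
import numpy as np
from scipy.linalg import eigh

def lb_matrices(V, F):
    """Cotangent stiffness matrix L and lumped (barycentric) mass matrix M
    for a triangle mesh; dense matrices for simplicity."""
    n = len(V)
    L = np.zeros((n, n)); M = np.zeros((n, n))
    for tri in F:
        for c in range(3):
            k, i, j = tri[c], tri[(c + 1) % 3], tri[(c + 2) % 3]
            # cotangent of the angle at vertex k, opposite edge (i, j)
            u, w = V[i] - V[k], V[j] - V[k]
            cot = np.dot(u, w) / np.linalg.norm(np.cross(u, w))
            L[i, j] -= cot / 2; L[j, i] -= cot / 2
            L[i, i] += cot / 2; L[j, j] += cot / 2
        area = 0.5 * np.linalg.norm(np.cross(V[tri[1]] - V[tri[0]],
                                             V[tri[2]] - V[tri[0]]))
        for v in tri:
            M[v, v] += area / 3.0
    return L, M

# toy closed surface: a unit octahedron
V = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
              [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
F = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
     (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
L, M = lb_matrices(V, F)
lam, Psi = eigh(L, M)              # solves L psi = lam M psi
f = np.arange(len(V), dtype=float) # some nodal field values
coeffs = Psi.T @ M @ f             # SF coefficients, discretized inner product
energies = coeffs ** 2             # energy spectrum of f
```

For a closed surface the first eigenvalue is zero (constant eigenfunction), the eigenvectors returned by `eigh` are M-orthonormal, and Parseval's identity fᵀMf = Σ‖coeffs‖² holds, which makes the energy-spectrum computation above consistent.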

Energy spectra and decompositions of dipole fields

Methods. We quantified the dipole-field energy spectra to estimate the beneficial numbers of spatial samples as discussed in Section 2.2. Specifically, we analyzed the number of SF components needed to capture 99% of the energy of each dipole field. We denote this number by N₉₉ and define the corresponding wavenumber as the 99% bandwidth, k₉₉. The sensor spacing in each modality corresponding to the largest N₉₉ values across the sources was estimated using uniform sampling grids (see Section 3.4). Further, to quantify the compressibility of the dipole fields, we calculated energy spectra in the eigenbasis of the dipole-field kernel; we denote the number of eigencomponents needed to achieve 99% energy by M₉₉. We also inspected how many SF components of the dipole fields were above spatial white noise for a given SNR. We defined the dipole-field SNR as the ratio of the energy of the dipole field to the expected energy of a sensor-noise-equivalent white-noise field. For every modality and source, we set the ratio of the dipole amplitude and the white-noise level so that the source had the given SNR. We then computed the number of SF components whose energy was above the flat white-noise spectrum. Three values of SNR were considered: 0.1, 1 and 10.

Results. Fig. 5 displays N₉₉ and M₉₉ for sources on the left cortical hemisphere. Generally, N₉₉ decreases as a function of source depth. For the sources in the most superficial bin, the 97.5th percentile of N₉₉ is 276, 93 and 109 for on-scalp MEG, off-scalp MEG and EEG, respectively. The corresponding bandwidths are about 2.1, 1.0 and 1.3 1/cm, with uniform sensor spacings of 1.6, 3.3 and 2.6 cm. Generally, M₉₉ is lower than N₉₉, indicating compressibility of the dipole fields; e.g., for the most superficial sources, the 97.5th percentiles of M₉₉ are 177, 63 and 72.
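The two component counts used above can be computed from an energy spectrum as in this short Python sketch (function names are ours, not the paper's code):

```python
import numpy as np

def n_components_99(energies, frac=0.99):
    """Number of leading components needed to capture `frac` of the total
    energy, keeping the given component ordering (N99 in the text)."""
    cum = np.cumsum(energies) / np.sum(energies)
    return int(np.searchsorted(cum, frac) + 1)

def n_above_noise(energies, snr):
    """Number of components whose energy exceeds a flat white-noise
    spectrum scaled so that the total field-to-noise energy ratio is `snr`."""
    noise_per_component = np.sum(energies) / (snr * len(energies))
    return int(np.sum(energies > noise_per_component))

spectrum = 0.5 ** np.arange(10)         # toy, geometrically decaying spectrum
n99 = n_components_99(spectrum)         # -> 7
n_snr1 = n_above_noise(spectrum, 1.0)   # -> 3
```

For the toy spectrum, seven components capture 99% of the energy, and with SNR = 1 only the three strongest components rise above the flat white-noise floor.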
Fig. 5

The effective number of spatial components in dipole fields of on-scalp MEG, off-scalp MEG, and EEG for each source position on the cortical surface. Left: The number of spatial-frequency (SF) components needed to reach 99% of the energy (N₉₉). Center: The number of kernel eigencomponents needed to reach 99% of the energy (M₉₉). Right: N₉₉ and M₉₉ as a function of source depth. The sources have been distributed into 2.5-mm-wide bins according to their depth. The solid lines plot the median values of N₉₉ and M₉₉ for the bins while the dashed lines indicate the 2.5th and 97.5th percentiles.

Field energies and numbers of SF components above noise are shown in Fig. 6. In MEG, the energy–depth curves have an inverted-V shape for the superficial sources, as radial sources on gyri have smaller energies than tangential sources on sulcal walls. Additionally, the MEG energy curves decay faster than those of EEG. For sources with SNR = 10, at most 205, 109 and 91 components are above the noise in on-scalp MEG, off-scalp MEG and EEG, respectively.
Fig. 6

Dipole-field energies and numbers of spatial-frequency (SF) components above noise. Left: Normalized dipole-field energies for on-scalp MEG, off-scalp MEG, and EEG for each source on the left cortical hemisphere and the same data as a function of source depth (the bins and the statistics are the same as in Fig. 5). Right: The number of SF components in the dipole field above the noise floor for each source position at different dipole-field SNR levels. Each dipole field has the same SNR, i.e., the source variance varies between the dipoles.


Field kernels

Methods. We analyzed the spatial variance (amount of signal) and correlations (sample spacing; Section 2.4) of the field kernels. The number of eigencomponents explaining 99% of the total variance was calculated to estimate the spatial degrees of freedom of the random source distribution (Section 2.4). The field correlation length was quantified for each measurement point by computing the distance along the surface at which the kernel had decayed to half of its maximum value (the ‘half-maximum width’). The distances along the surfaces were computed using a method based on the heat equation (Crane et al., 2017) implemented as the MATLAB function heat_geodesic in gptoolbox (Jacobson et al., 2018).

Results. Fig. 7 illustrates the kernels due to the IID random source distribution for the three modalities. To explain 99% of the total variance, 88, 35 and 26 eigencomponents are needed in on-scalp MEG, off-scalp MEG and EEG, respectively. The variance is distributed nonuniformly on the measurement surface, with the highest values around the temporal cortex. The kernels are asymmetric and the correlation length varies across the surface; the least correlated area is found on top of the temporal cortex. The correlation lengths are shortest in on-scalp MEG, as indicated by smaller half-maximum widths compared to off-scalp MEG and EEG.
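Given kernel values sampled along the surface at known geodesic distances (in the paper, from heat_geodesic), the half-maximum width can be computed as in this sketch with plain linear interpolation (our notation):

```python
import numpy as np

def half_maximum_width(dist, kvals):
    """Distance at which the kernel (maximal at distance 0) first decays to
    half of its maximum, with linear interpolation between sample points."""
    order = np.argsort(dist)
    d, k = np.asarray(dist)[order], np.asarray(kvals)[order]
    half = k[0] / 2.0                  # maximum assumed at the point itself
    idx = int(np.argmax(k <= half))    # first sample at or below half max
    d0, d1, k0, k1 = d[idx - 1], d[idx], k[idx - 1], k[idx]
    return d0 + (k0 - half) * (d1 - d0) / (k0 - k1)

# toy exponentially decaying kernel profile: half max at ln(2)
d = np.linspace(0.0, 5.0, 501)
width = half_maximum_width(d, np.exp(-d))
```

For the exponential toy profile the function recovers ln 2 ≈ 0.693 to within the interpolation error.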
Fig. 7

Analysis of the dipole-field kernel due to IID Gaussian sources. A: Variance as a function of the kernel eigencomponent. The inset shows the normalized eigenvalues (variances). B: Distribution of the variance on the measurement surface. C: Spatial profile of the field kernel at one position (red dot) on the measurement surface and the distance at which the covariance has decayed to half (half-maximum width). D: Half-maximum width of the kernel across the measurement surface.


Sampling grid construction and evaluation

Methods. The sampling grids were constructed by subsampling the nodal positions of the triangle meshes using the method described in Section 2.6. For a given kernel discretized on the mesh, we calculated its eigendecomposition and utilized a farthest-point sampling algorithm implemented in gptoolbox (Jacobson et al., 2018) to obtain the optimal sampling positions.

Grid construction. We constructed sampling grids for scenarios where random source distributions were defined in the brain: a global scenario, where the whole brain was assumed active and of interest, and a local scenario, where a region of interest (ROI) in the brain was defined. We compared the performance of uniform sampling to model-informed sampling for different numbers of spatial samples. Uniform sampling was generated using bandlimited priors in the SF basis (white covariance for the lowest SF coefficients; SF bandlimited prior; Section 2.4). Model-informed sampling was generated by encoding the prior in the covariance of the cortically-constrained sources (IID source prior; Section 2.4). The grids were evaluated by computing the total information (TI) and the fractional explained variance (FEV) according to Eqs. (22) and (21) from the ‘ground-truth’ model of the signal and noise. Because the output of the optimization algorithm was sensitive to the initial condition, we ran it several times with random initial configurations. In the global scenario, the number of runs depended on the sample count: 20 runs for the smallest sample counts, ten for intermediate counts and two for the largest, as TI has higher variance for small sample counts. To seek a globally optimal grid with model-informed sampling, we took the grid with the maximum TI for each sample count. For the uniform grids, we computed the average TI across the runs, as we wanted to quantify the general average performance of uniform sampling.
In the local scenario, the algorithm was run once for each sample count; the counts ranged in steps of one, so that a single run per count was deemed sufficient to extract the general trend of TI as a function of the sample count.
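The paper's construction maps the kernel eigendecomposition to a farthest-point sampling. As a conceptually simpler stand-in (not the authors' algorithm), total information can also be maximized greedily over the candidate points, assuming a Shannon-type TI of ½ Σ log₂(1 + λᵢ) over the eigenvalues of the whitened sampled kernel:

```python
import numpy as np

def total_information(C, idx, noise_var):
    """TI (bits) of the samples `idx` of the signal kernel C under white
    sensor noise of variance noise_var (Shannon-type formula assumed)."""
    Cs = C[np.ix_(idx, idx)] / noise_var        # whitened sampled kernel
    return 0.5 * np.sum(np.log2(1.0 + np.linalg.eigvalsh(Cs)))

def greedy_grid(C, noise_var, n_samples):
    """Greedily add the candidate point that increases TI the most."""
    chosen = []
    for _ in range(n_samples):
        candidates = [j for j in range(C.shape[0]) if j not in chosen]
        chosen.append(max(candidates,
                          key=lambda j: total_information(
                              C, chosen + [j], noise_var)))
    return chosen

# toy kernel from a random lead field (hypothetical sizes)
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))   # 8 candidate points, 5 unit-variance sources
C = A @ A.T
grid = greedy_grid(C, 1.0, 3)
```

The greedy variant is transparent but O(n⁴) per added sample; the eigendecomposition-plus-farthest-point construction used in the paper scales far better on meshes with thousands of candidate nodes.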

Global scenario

Methods. Here, the ground-truth model corresponded to the IID source prior and spatial white noise. The grids were constructed on meshes with the regions of the eyes cropped out. We performed two analyses. In the first, we compared uniform and model-informed sampling in EEG and MEG as a function of SNR by varying the white-noise variance over a range around a reference level. The reference noise level was fixed in each modality so that it gave an average SNR (Eq. (20)) of 1 for a source standard deviation of 7.6 pAm. Using this convention, the reference noise level was determined to be 9 fT, 3.7 fT and 42 nV in on-scalp MEG, off-scalp MEG and EEG, respectively. TI was also computed for SQUID102 and EEG57, whose reference noise levels were 2.9 fT and 42 nV, respectively. In the second analysis, we compared uniform sampling of on-scalp and off-scalp MEG as a function of the spatial white-noise level. We set the reference source variance so that the average SNR of off-scalp MEG was 1 with a white-noise level of 3 fT. The white-noise level of on-scalp MEG ranged from 1 to 20 fT, and three source-variance levels were considered. On-scalp MEG grids comprising 10–600 points were generated, while off-scalp MEG had 100 or 300 points.

Results. Fig. 8 summarizes the first analysis in the global scenario. For model-informed sampling, the sample density is highest over the temporal lobe, which corresponds to the shorter correlation lengths and higher variance of the field kernel shown in the previous analysis (Fig. 7). Model-informed sampling is especially beneficial in MEG at low SNR (≲1) and small sample numbers, giving an approximately 10–25% increase in TI compared to uniform sampling.
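The FEV of a given grid can be evaluated from the kernel through the Gaussian-process posterior variance. The sketch below uses our notation and assumes Eq. (21) takes this standard GP form:

```python
import numpy as np

def fev(C, sample_idx, noise_var):
    """Fraction of the total field variance explained by noisy samples at
    sample_idx: trace of the GP posterior-variance reduction over trace(C)."""
    s = list(sample_idx)
    Ks = C[np.ix_(s, s)] + noise_var * np.eye(len(s))  # sampled kernel + noise
    Ccs = C[:, s]                                      # cross-covariance
    explained = np.trace(Ccs @ np.linalg.solve(Ks, Ccs.T))
    return explained / np.trace(C)

# toy full-rank kernel from a random lead field (hypothetical sizes)
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
C = A @ A.T
```

With all points sampled and negligible noise, FEV approaches 1; a noisy subset explains only part of the variance, which is the behavior the grid comparisons above rely on.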
Fig. 8

Comparison of uniform (SF bandlimited prior) and model-informed (IID source prior) sampling of the whole brain in on-scalp MEG, off-scalp MEG and EEG. A: Example on-scalp MEG sampling grids generated with the priors and their average and minimum sample spacing. B: Total information (TI) and fractional explained variance (FEV) of the uniform grids. TI values for SQUID102 and EEG57 are also shown. C: Ratio of the TI obtained by model-informed and uniform sampling with different SNRs (top) and the number of additional samples required with uniform sampling to achieve the same TI as with model-informed sampling (bottom).

With the same SNR and the same number of samples, TI is highest in on-scalp MEG among the modalities. When the SNR is low (0.1), approximately 60, 70 and 80% of the variance is explained in on-scalp MEG, off-scalp MEG and EEG, respectively, even with a sample number as high as 400. When the SNR is high (10), about 80, 30 and 20 samples suffice to explain 90% of the total variance. Figure 9 summarizes the second analysis, comparing on- and off-scalp MEG. With a similar noise level, the SNR averaged over the whole measurement surface is higher in on-scalp MEG by a factor of 6.0. Figure 9B shows TI and FEV as a function of the number of spatial samples and the noise level. With a comparable noise level, on-scalp MEG achieves the same information as 300 off-scalp samples with fewer samples.
Fig. 9

Spatial sampling of independent sources (fixed variance) distributed over the whole cortex with on-scalp and off-scalp MEG. A: Signal-to-noise ratio averaged over the whole measurement surface as a function of the spatial white-noise level. B: Total information (TI) and fractional explained variance (FEV). Colored lines give the values for on-scalp MEG as a function of sample number and noise level. The vertical lines show the values for off-scalp MEG with 100 and 300 samples at a noise level of 3 fT. C: Number of spatial samples needed in on-scalp MEG to achieve the same TI as in off-scalp MEG with a noise level of 3 fT, as a function of the on-scalp MEG white-noise level. The comparison to the TI of 100 off-scalp samples is plotted with the dashed line and to 300 samples with the solid line.

Figure 9C gives the number of spatial samples needed in on-scalp MEG to reach the same TI as in off-scalp MEG, as a function of the noise level. For example, with the reference source variance and on-scalp noise levels of 3, 7, 10 and 15 fT, about 30, 130, 260 and 590 samples are needed to yield the same TI as 300 off-scalp MEG samples at a 3-fT noise level.

Local scenario

Methods. In the local scenario, the ground-truth model consisted of IID sources distributed in an ROI defined around the motor cortex. Both white (sensor) and colored (sensor + background brain activity) noise were considered. The ROI consisted of sources within a patch that had a radius of approximately 3 cm along the cortical surface (387 source elements; Fig. 10A). Sampling grids were constructed for on-scalp MEG and EEG. The total information was analyzed, and SQUID102 and EEG57 were used as references. The source variance was set so that the maximum SNR of on-scalp MEG was 3 with a spatial white-noise level of 9 fT. The white-noise level of EEG (25 nV) was set so that its maximum SNR was also 3. To model colored noise due to brain background activity, the sources outside the ROI were also assumed active (IID Gaussian). The noise level of SQUID102 was set to 3 fT while that of EEG57 was 24 nV. The field kernel was whitened as described in Section 2.4 and the SNR was analyzed. The eigenvalues of the whitened kernel were converted to bits according to Eq. (22).
These values were used to analyze the number of eigencomponents contributing to the total information. Two uniform sampling schemes were considered. In the first, samples were distributed evenly over the whole head. In the second, the SF basis was constrained to a region of the surface within 9 cm of the center of the ROI. For model-informed sampling, a dipole-field kernel corresponding to the ground-truth model was used, while the noise kernel corresponded to either the white or the colored noise model.

Results. Fig. 10A shows the SNR distributions as well as the TI content across the eigencomponents. With white noise, the TI is 28 bits in MEG and 27 bits in EEG. With colored noise, the maximum SNR is 1.1 in MEG and 0.3 in EEG, while the TI is 18 and 12 bits, respectively. The TI and maximum SNR of SQUID102 are 4.6 bits and 1.1 with white noise and 3.6 bits and 0.8 with colored noise, respectively. The corresponding TIs of EEG57 are 5.7 and 3.8 bits; the maximum SNRs are 2.4 and 1.3. The grids constructed using the different priors for signal and noise are presented in Fig. 10B. Compared to uniform sampling, the model-informed grids are more densely distributed, especially when the colored-noise model is used. Compared to whole-head sampling, using the source prior to concentrate samples on the region with high SNR is beneficial. With colored noise, only 10 samples are needed in the model-informed MEG grids to reach the same TI as with 100 whole-head samples; in EEG, 20 samples are needed.
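Whitening the signal kernel by a colored-noise kernel and converting the eigenvalues to bits can be sketched as follows (our notation; we assume a Shannon-type ½ log₂(1 + λ) bits per eigencomponent):

```python
import numpy as np
from scipy.linalg import sqrtm, eigvalsh

def information_per_component(C_signal, C_noise):
    """Eigenvalues of the noise-whitened signal kernel converted to bits;
    the total information is their sum."""
    W = np.real(np.linalg.inv(sqrtm(C_noise)))        # N^{-1/2}
    lam = np.clip(eigvalsh(W @ C_signal @ W), 0.0, None)  # clip round-off
    return 0.5 * np.log2(1.0 + lam)

# toy example: three eigencomponents with SNR eigenvalues 0, 1 and 3
bits = information_per_component(np.diag([3.0, 1.0, 0.0]), np.eye(3))
```

In the toy case the components contribute 0, 0.5 and 1 bit, totalling 1.5 bits; with a colored noise kernel the whitening step reweights the components before the same conversion.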

Discussion

We quantified the number of spatial samples beneficial for MEG and EEG by analyzing the spatial-frequency content of fields due to dipolar sources. Based on a review of Gaussian processes and optimal design, we related the information metric used in MEG sensor-array designs to the covariance of random-field models. This relationship was used to derive an information-maximizing sampling algorithm. We applied the algorithm to generate sampling grids, which we used to quantify the benefit of model-informed sampling of random source distributions in comparison to uniform sampling.

Field analysis

To capture 99% of the field energy of every source in the brain of a representative adult male, approximately 280, 90 and 110 SF components were needed in on-scalp MEG, off-scalp MEG and EEG, respectively. These numbers represent the maximum number of spatial degrees of freedom in any field due to neural activity in the head model used. They also correspond to the number of uniform samples needed to keep the field-reconstruction error at roughly 1% at most in noiseless conditions. These results agree with the literature. For MEG, the numbers correspond to the ‘rule of thumb’ presented by Ahonen et al. (1993): the sensor spacing should be approximately the distance from the sensors to the closest source. For EEG, the results are in line with the arguments of Srinivasan et al. (1998): the sensor count should be at least roughly 120 for adequate sampling of the EEG in a typical adult head. The eigenbasis of the dipole-field kernel due to IID sources was found to reduce the number of components needed to capture 99% of the dipole-field energies, compressing the field representations. This compression and the variance analysis of IID sources suggest that the sample numbers given by the SF analysis entail oversampling and thereby give a good upper limit on the beneficial sample number.

Head models

We employed a four-compartment realistically-shaped piecewise-homogeneous head model. To our knowledge, previous sampling analyses have been carried out in simpler models such as half-infinite and spherically symmetric conductors. An essential property of the four-compartment model is the inclusion of geometrically complex gyral folding that strongly deviates from the simple layered structure represented by spherical models or the three-shell (3S) models commonly used in EEG source analysis. Compared to EEG, MEG is only weakly sensitive to the inclusion or omission of thin layer-like conductivity structures (Hamalainen, Sarvas, 1989, Stenroos, Hunold, Haueisen, 2014). The inclusion of CSF, however, breaks the simple layered structure: the omission of CSF has roughly as large an effect on MEG as on EEG dipole fields when compared with relative-error or correlation metrics (Stenroos and Nummenmaa, 2016). While a detailed analysis of different head models is beyond the scope of this manuscript, we performed a simple model comparison to see how the spatial-frequency content is affected by the inclusion or omission of the CSF and the skull. The comparison, presented in Appendix C, shows that the effective number of spatial-frequency components and the overall spatial spectrum in on- and off-scalp MEG are negligibly affected by the CSF. For EEG, we replicated the well-known low-pass-filter property of a brain–skull–scalp system (e.g., Srinivasan et al. 1996), or more generally, of a layered structure of high, low and high conductivities. The highly conductive CSF enhances the low-pass phenomenon. The effect of the CSF on the spatial-frequency decay can, however, be partially compensated by lowering the conductivity of the skull in a 3S model, as was demonstrated for EEG dipole fields by Stenroos and Nummenmaa (2016). The remaining difference in the energy spectrum is mainly of constant-gain nature.
Most of the smearing of EEG is thus characterized by the parameters of the layer-like low-pass-filter system, and the complex folding of the CSF plays a smaller role. Our conductor model did not contain separate white and gray matter or white-matter anisotropy, which affect EEG and MEG dipole fields (Guellmar, Haueisen, Reichenbach, 2010, Vorwerk, Cho, Rampp, Hamer, Knösche, Wolters, 2014, Wolters, Anwander, Tricoche, Weinstein, Koch, MacLeod, 2006). When white and gray matter are separated in a model that already contains the CSF, the added conductivity contrast and the deviation from the layered structure are smaller than those introduced by the inclusion of CSF. The separation of white and gray matter should thus have a smaller effect on MEG and EEG energy decays and component counts than the CSF has. White-matter anisotropy may have a non-negligible effect on sampling deeper sources with EEG. Finally, our simulation was based on a normal adult head. Strong deviations from the layered structure between the sources and sensors, such as fontanels in small children (Lew et al., 2013), surgical holes in the skull, or CSF-filled brain lesions, may have a strong effect on the spatial information, especially in EEG but also in MEG. These effects are a potential topic for future studies.

Grid construction

To generate sampling grids, we presented a method that utilizes prior information about the signal and noise in the form of a covariance kernel. Assuming random spatial-frequency coefficients with uniform variance up to some bandlimit, the kernel is isotropic and translation-invariant, and the method yields uniform sampling grids similar to those given by the sampling theorems introduced in Section 2.1. With this approach, uniform grids can also be constructed on more complex surfaces and domains. The kernel constructed from the dipole fields of IID random neural sources has a location-dependent shape and width, and the method then yields nonuniform sampling grids. In the whole-head scenario, nonuniform model-informed sampling was only slightly beneficial compared to uniform sampling in terms of total information. When the number of samples was small and the noise level high, nonuniform sampling yielded roughly 10–25% more information than uniform sampling in MEG. When the noise level was decreased or the number of samples increased, the performance of uniform and nonuniform sampling grids became roughly similar; with nonuniform sampling, however, the same total information was reached with fewer samples. In EEG, the benefits of nonuniform sampling were less clear; EEG may not benefit from it due to its long-range correlations. When a cortical region of interest was defined, local high-density (nonuniform) spatial sampling was beneficial: dense sampling of the local components yielded higher total information than uniform sampling covering a larger surface area or the whole head. Dense sampling may be especially useful when colored noise fields (such as those generated by background brain activity) are present and the region of interest is small.
Local dense arrays could be beneficial in applications where a certain part of the brain is of interest and the number of sensors is limited (e.g., Iivanainen et al., 2020) or in brain–computer interfacing where simple measurement setups are desirable.

Practical considerations and future directions

Our analysis corroborates the observation that the spatial-frequency content of MEG mostly depends on the source-to-sensor distance (Ahonen et al., 1993). Accordingly, for designing whole-scalp sensor arrays, the rule of thumb (sensor spacing approximately equal to the distance from the sensors to the closest source) gives the approximate uniform sensor spacing. For example, for an infant head where the source-to-sensor distance can be as small as 5 mm, this rule of thumb would suggest a sensor spacing of 5 mm. In addition to the spacing, the sensor array should cover the scalp adequately so that most of the energy of the topographies can be captured by the spatial samples. For example, in the case of SQUID helmets, the limited coverage of frontal sources often leads to a loss of signal as well as of spatial information from the frontal regions. Our quantitative results are based on the analysis of a single adult head model and thereby cannot be directly generalized, as there may be considerable variation in head sizes and shapes; for example, as mentioned earlier, smaller sensor spacing would be beneficial when measuring an infant. Thus, it would be valuable to perform similar computations in different subject populations and to extract statistics on the metrics introduced in this paper. We note that there are freely available head-model collections that could be used for this purpose (see, e.g., Htet et al. 2019). We demonstrated the developed method using a single ROI defined around the motor cortex. In future studies, the locations, sizes and number of ROIs could be varied, and correlations between the sources could be included. The ROI could be defined to match the cortical area of experimental interest (e.g., V1 of the visual cortex). The sampling-grid optimization presented here does not necessarily result in sensor arrays that can be readily implemented in practice.
For example, the minimum distance between the sensors was not constrained and the sensor dimensions were ignored. Sensor dimensions limit the minimum distance between sensors, and in many sensor types the noise level is related to the sensor dimensions (e.g., Kemppainen, Ilmoniemi, 1989, Mitchell, Alvarez, 2020). Spatial integration within a sensor can be viewed as spatial low-pass filtering, an effect that can be exploited to reduce aliasing due to high spatial frequencies (Roth et al., 1989). These matters should be taken into account when designing practical sensor arrays in future studies. The method could also be used with a population of head models to design arrays with the best average performance. We did not include the effects of systematic or random sensor-positioning errors in our simulation. Intuitively, the higher the spatial frequency, the more it is affected by a given sensor-position error; by the same argument, sensor-positioning errors might negate the benefits gained by using optimized sensor arrays. However, sensor-positioning error could be included in the optimization to some degree by blurring the kernel. We note that the spatial-frequency representation gives a convenient means to assess how the spatial components of the field are affected by sensor-position errors. We analyzed scalar fields, which is sufficient for EEG. In contrast, the magnetic field is a vector field, and the orthogonal field components may each provide additional information about the neural sources (Iivanainen et al., 2017). The sensor orientation could be included in the optimization by applying vector basis functions to assemble the kernel. External interference fields could also be taken into account; in this case, vector spherical harmonics could provide a useful basis (Taulu and Kajola, 2005).
Sampling grids with triaxial sensors in a single layer (Brookes et al., 2021) or in multiple layers at different distances from the scalp should be beneficial in separating brain signals from external interference (Nurminen, Taulu, Nenonen, Helle, Simola, Ahonen, 2013, Nurminen, Taulu, Okada, 2010). In fact, OPMs can provide dual- or tri-axial measurements of the field, making realistic implementations of such arrays feasible (Brookes et al., 2021). The methods presented here could be further developed and used to guide the design of such arrays. We note that the choice between a single- or multi-axis measurement for each sensor might not be trivial, as the sensitivity of an OPM typically decreases when multiple components of the field are measured simultaneously (see, e.g., Osborne et al. 2018). In MEG, it is common to use gradiometers instead of magnetometers. According to Ahonen et al. (1993), gradiometers that sample the tangential gradient of the normal field component can be placed at a larger spacing than magnetometers. On the other hand, long-baseline axial gradiometers have lead fields similar to those of magnetometers (Malmivuo and Plonsey, 1995), suggesting that the results for them would be similar to those we obtained in this paper. However, adding external interference fields to the stochastic model would probably result in differences between the optimal magnetometer and axial-gradiometer arrays. With the generalization of the SF basis to an arbitrary measurement surface, standard 1-D signal-processing methods such as filtering and interpolation should be straightforward to apply to spatial EEG/MEG data. Spatial-frequency filtering could be useful in noise and interference rejection, as demonstrated by Graichen et al. (2015); the continuous SF basis functions make the filtering less dependent on the sensor configuration. As the SF basis is both data- and model-independent, such filtering methods may be useful in real-time applications.
The SF filtering as a part of preprocessing would be similar to the spline Laplacian (Nunez et al., 2019) while providing flexibility in defining the filter coefficients. Filtering and interpolation may also help in data visualization. Estimating the neural sources that could have generated the data, i.e., solving the bioelectromagnetic inverse problem (Sarvas, 1987), is often of interest. We did not explicitly consider the inverse problem. However, the total information should predict source-estimation performance, as it quantifies the size (or the stability of the inversion) of the modelled or measured measurement covariance matrix. In other words, when the kernel is generated using dipole fields, maximizing the total information corresponds to minimizing the posterior entropy of the sources (Section 2.5). We conclude the discussion by summarizing the results from a practical perspective. Considering whole-scalp sampling, our results show that when the sensor count is sufficiently large (hundreds), the benefits from optimized sampling are minimal compared to uniform sampling. With lower sensor counts (tens), sampling optimized either for an individual subject or across a subject population might, however, be beneficial. Considering the sampling of an a-priori determined cortical region, on the other hand, our results show that the scalp area with the strongest signal is quite compact and that dense sampling of this area might therefore be beneficial. OPMs that use a single vapor cell with multiple densely packed channels (e.g., Xia et al., 2006) could be used to assess the benefits of dense sampling over a conventional array, e.g., in a classification task (for such an analysis with EEG, see Robinson et al., 2017).
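A total-information criterion of this kind lends itself to greedy sampling-position selection: from a set of candidate positions, repeatedly add the one that most increases the log-determinant of the modelled measurement covariance. The sketch below uses random topographies in place of a forward model and is not the authors' algorithm or data; all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical topographies: rows = 200 candidate sensor positions,
# columns = 50 source components. In practice these come from a forward model.
n_cand, n_src = 200, 50
A = rng.normal(size=(n_cand, n_src))
noise_var = 1.0

def total_information(rows):
    """log det of the modelled measurement covariance (signal covariance
    plus unit sensor-noise covariance) for the chosen sensor rows."""
    Asel = A[rows]
    C = Asel @ Asel.T / noise_var + np.eye(len(rows))
    return np.linalg.slogdet(C)[1]

# Greedy D-optimal design: add the candidate with the largest information gain.
chosen = []
for _ in range(10):
    remaining = [i for i in range(n_cand) if i not in chosen]
    gains = [total_information(chosen + [i]) for i in remaining]
    chosen.append(remaining[int(np.argmax(gains))])

# Compare against an arbitrary evenly strided ("uniform") 10-sensor subset.
uniform = list(range(0, n_cand, n_cand // 10))[:10]
print(total_information(chosen), total_information(uniform))
```

Adding a sensor can never decrease this objective, so the greedy path is monotone; with dipole-field kernels, the same objective corresponds to minimizing the posterior entropy of the sources, as noted above.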

Conclusions

We analyzed the spatial sampling of EEG and MEG and suggested a method for designing optimal sampling positions. Our simulations suggest that, for an adult male head, on-scalp MEG can benefit from up to roughly 300 spatial samples, while for off-scalp MEG and EEG this number is around 100. The theoretical framework we present in this manuscript as well as the sample-positioning method we introduce can be used to design sampling grids that convey the most information about the neuronal sources. Such designs may be useful when the sensor count is limited or a certain region of the brain is of interest.

Data and code availability statement

To facilitate further use in the research community, the information-maximizing sampling algorithm is available from the corresponding authors for academic use. The data and code sharing policy adopted by the authors complies with the requirements of Aalto University.

CRediT authorship contribution statement

Joonas Iivanainen: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Software, Writing – original draft, Writing – review & editing. Antti J. Mäkinen: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Software, Writing – original draft, Writing – review & editing. Rasmus Zetter: Conceptualization, Writing – review & editing. Matti Stenroos: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Supervision, Writing – review & editing. Risto J. Ilmoniemi: Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Writing – review & editing. Lauri Parkkonen: Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Writing – review & editing.

Declaration of Competing Interest

The authors declare that they have no conflict of interest.
References (37 in total)

1.  A Matlab library for solving quasi-static volume conduction problems using the boundary element method.

Authors:  M Stenroos; V Mäntynen; J Nenonen
Journal:  Comput Methods Programs Biomed       Date:  2007-12       Impact factor: 5.428

2.  Spatial sampling and filtering of EEG with spline Laplacians to estimate cortical potentials.

Authors:  R Srinivasan; P L Nunez; D M Tucker; R B Silberstein; P J Cadusch
Journal:  Brain Topogr       Date:  1996       Impact factor: 3.020

3.  A sampling theorem for EEG electrode configuration.

Authors:  C Vaidyanathan; K M Buckley
Journal:  IEEE Trans Biomed Eng       Date:  1997-01       Impact factor: 4.538

4.  A guideline for head volume conductor modeling in EEG and MEG.

Authors:  Johannes Vorwerk; Jae-Hyun Cho; Stefan Rampp; Hajo Hamer; Thomas R Knösche; Carsten H Wolters
Journal:  Neuroimage       Date:  2014-06-25       Impact factor: 6.556

5.  Electroencephalographic source imaging: a prospective study of 152 operated epileptic patients.

Authors:  Verena Brodbeck; Laurent Spinelli; Agustina M Lascano; Michael Wissmeier; Maria-Isabel Vargas; Serge Vulliemoz; Claudio Pollo; Karl Schaller; Christoph M Michel; Margitta Seeck
Journal:  Brain       Date:  2011-10       Impact factor: 13.501

6.  SPHARA – a generalized spatial Fourier analysis for multi-sensor systems with non-uniformly arranged sensors: application to EEG.

Authors:  Uwe Graichen; Roland Eichardt; Patrique Fiedler; Daniel Strohmeier; Frank Zanow; Jens Haueisen
Journal:  PLoS One       Date:  2015-04-17       Impact factor: 3.240

7.  Incorporating and Compensating Cerebrospinal Fluid in Surface-Based Forward Models of Magneto- and Electroencephalography.

Authors:  Matti Stenroos; Aapo Nummenmaa
Journal:  PLoS One       Date:  2016-07-29       Impact factor: 3.240

8.  Very high density EEG elucidates spatiotemporal aspects of early visual processing.

Authors:  Amanda K Robinson; Praveen Venkatesh; Matthew J Boring; Michael J Tarr; Pulkit Grover; Marlene Behrmann
Journal:  Sci Rep       Date:  2017-11-24       Impact factor: 4.379

9.  Pragmatic spatial sampling for wearable MEG arrays.

Authors:  Tim M Tierney; Stephanie Mellor; George C O'Neill; Niall Holmes; Elena Boto; Gillian Roberts; Ryan M Hill; James Leggett; Richard Bowtell; Matthew J Brookes; Gareth R Barnes
Journal:  Sci Rep       Date:  2020-12-10       Impact factor: 4.379

10.  Evaluation of realistic layouts for next generation on-scalp MEG: spatial information density maps.

Authors:  Bushra Riaz; Christoph Pfeiffer; Justin F Schneiderman
Journal:  Sci Rep       Date:  2017-08-01       Impact factor: 4.379
