
Orientationally-averaged diffusion-attenuated magnetic resonance signal for locally-anisotropic diffusion.

Magnus Herberthson1, Cem Yolcu2, Hans Knutsson2, Carl-Fredrik Westin2,3, Evren Özarslan2,4.   

Abstract

The diffusion-attenuated MR signal for heterogeneous media can be represented as a sum of signals from anisotropic Gaussian subdomains, to the extent that this approximation is permissible. Any effect of macroscopic (global or ensemble) anisotropy in the signal can be removed by averaging the signal values obtained by differently oriented experimental schemes. The resulting average signal is identical to what one would get if the microdomains were isotropically (e.g., randomly) distributed with respect to orientation, which is the case for "powdered" specimens. We provide exact expressions for the orientationally-averaged signal obtained via general gradient waveforms when the microdomains are characterized by a general diffusion tensor possibly featuring three distinct eigenvalues. This extends earlier results, which covered only axisymmetric diffusion and measurement tensors. Our results are expected to be useful not only in multidimensional diffusion MR but also in solid-state NMR spectroscopy, owing to the mathematical similarities between the two fields.


Year:  2019        PMID: 30894611      PMCID: PMC6426978          DOI: 10.1038/s41598-019-41317-8

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.379


Introduction

In MR examinations of porous media as well as biological tissues, one is often confronted with a medium comprising an isotropic ensemble of individually anisotropic domains. The effect of diffusion within such media on the MR signal has thus been considered since the 1970s[1-3]. The problem we tackle here concerns the situation where diffusion within each microdomain can be taken to be free, and thus characterized by a microscopic diffusion tensor D. This assumption has been widely employed in the recent development of multidimensional diffusion MR (see ref.[4] for a recent review and the references therein), which employs general gradient waveforms for diffusion sensitization. The level of diffusion-sensitivity is fully captured by a measurement tensor[5] B, yielding the signal attenuation

    E = exp(−tr(B D))    (1)

for the microdomain. When some residual anisotropy is present upon the inherent signal averaging over the sample (or voxel in image acquisitions), a series of diffusion signals can be acquired with rotated versions of the same gradient waveforms. Upon averaging such signals, one obtains the orientationally-averaged signal, which is devoid of any macroscopic (ensemble) anisotropy, i.e., the anisotropy of the orientation distribution function of the microdomains. As demonstrated in ref.[4], there is a close resemblance between the mathematics involved in this problem and that in multidimensional solid-state NMR spectroscopy[6]. In the latter, the local structure is described by the chemical shift tensor, and a measurement tensor can be introduced, which is determined by the orientation of the main magnetic field and manipulations of the sample orientation within it[7-11]. Our interest in this article is the averaged signal obtained by repeating a given measurement protocol in all orientations, which is relevant for both diffusion and solid-state MR applications.
Our results permit a more general analysis and modeling when the orientationally-averaged signal has been measured, as the common condition of axisymmetry is relaxed. The problem of determining the set of optimal diffusion gradient orientations for computing the orientationally-averaged signal is addressed elsewhere[12]. Here, we extend the existing literature[3,13,14] by providing explicit expressions for the orientational averages so as to accommodate measurement and/or structure tensors that are not axisymmetric. With R denoting an arbitrary rotation matrix (R ∈ SO(3)), the complete set of such measurements is spanned by the expression exp(−tr(B R D Rᵀ)), hence yielding the orientationally-averaged signal as

    Ē = ⟨exp(−tr(B R D Rᵀ))⟩    (2a)

where the average ⟨·⟩ is over the three-dimensional special orthogonal group SO(3), i.e., the space of all rotations. Since the matrix trace operation is invariant under cyclic permutation of the product in its argument, this is equivalent to the expression

    Ē = ⟨exp(−tr(Rᵀ B R D))⟩    (2b)

which demonstrates the utility of such an averaging over rotated protocols: the information is the same as that which would be obtained by a single measurement, had the specimen consisted of microdomains with the same diffusivity (D) distributed uniformly in orientation. For solids, such a specimen is obtained by grinding the material into a powder, eliminating any nonuniformity in the orientational dispersion (macroscopic, global, or ensemble anisotropy) the bulk specimen might have had. Hence we use the terms powder-averaged and orientationally-averaged interchangeably, given that the latter effectively achieves the same result. An alternative interpretation of the average in (2b) can be realized when one considers the Laplace transform of a function which takes matrix arguments[15], in our case a tensor distribution p(D),

    (Lp)(B) = ∫ exp(−tr(B D)) p(D) dD    (3)

where the integration is performed over Sym3, the space of all symmetric 3 × 3 matrices, and over those matrices B for which the integral converges.
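As a numerical sanity check, the orientational average over SO(3) can be estimated by brute-force Monte Carlo sampling of uniform rotations. This is only an illustrative sketch (the function name and the example tensors below are hypothetical, not from the paper); the exact series derived in this work avoid such sampling altogether.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def powder_average_mc(D, B, n=50000, seed=0):
    """Monte Carlo estimate of <exp(-tr(B R D R^T))> over SO(3)."""
    R = Rotation.random(n, random_state=seed).as_matrix()  # (n, 3, 3) uniform rotations
    # tr(B R D R^T) for every sampled rotation, in a single contraction
    exponents = np.einsum('ij,nik,kl,njl->n', B, R, D, R)
    return np.exp(-exponents).mean()

D = np.diag([2.0, 0.5, 0.1])   # hypothetical microscopic diffusion tensor
B = np.diag([0.8, 0.3, 0.1])   # hypothetical measurement (b-) tensor
print(powder_average_mc(D, B))
```

Note that when D is isotropic the exponent is the same for every rotation, so the estimate is exact, consistent with the averaging having no effect in that case.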
With Sym3⁺ denoting the set of all positive semi-definite matrices in Sym3, the applications of interest in this work are those in which all the matrices D lie in Sym3⁺. This means that p(D) = 0 if D is not in Sym3⁺, so that the integration in (3) can be performed over Sym3⁺ rather than Sym3; the transform will then exist for all B in Sym3⁺. In (3), dD denotes the integration measure on Sym3. Interestingly, when p(D) describes an ensemble of tensors consisting of copies of a given tensor D₀, rotated to lie in all possible orientations, the above expression is the same average as (2b). In this scenario p(D) = 0 unless D is a rotated copy of D₀, i.e., D = R D₀ Rᵀ for some R in SO(3), and uniformity requires all orientations to be weighted equally, which gives us (2b) (with D₀ replaced by D). In other words, the orientationally-averaged signal that we are evaluating in this work is nothing but the Laplace transform of the distribution describing a uniform ensemble of rotated copies of a given tensor. Parallels can be drawn with previous works[16-18] that have employed parametric Wishart distributions (or their generalizations) for representing the detected MR signal; in these studies, the resulting expression for the Laplace transform is borrowed from the mathematics literature[19]. However, the Laplace transform of the distribution we are tackling in this work (Eq. (2b)) for a general D is not available to our knowledge. Note that throughout the article, the phrase “general tensor” refers to a tensor whose ellipsoidal representation is not axisymmetric. Thus, a general tensor T is still to be understood as being real-valued and having the index symmetry Tᵢⱼ = Tⱼᵢ. Given that the matrices D and B are real and symmetric, and that the averages (2) are insensitive to individual rotations of these matrices, we are free to consider their diagonal forms, possibly in different bases:

    D = diag(a, b, c),   B = diag(d, e, f).    (4)

It turns out that the average signal (2) can take a particularly simple form if repeated eigenvalues occur in at least one of the matrices D and B.
With an ellipsoidal representation of these matrices in mind, we refer to these cases as axisymmetric or isotropic, depending on whether two or all eigenvalues coincide, respectively. The rank of the matrices appears to have no further significance for the simplicity of the calculation, except for rank-1 being a special case of axisymmetry. In the following sections, we consider various combinations of symmetries of D and B in evaluating the average (2). Keeping in mind that the roles of D and B can be interchanged without changing the average signal, the relevant cases are: (1) D general, B isotropic; (2) D and B axisymmetric, B rank 1; (3) D and B axisymmetric; (4) D general, B axisymmetric; (5) D and B general.

Results

Here, we give the resulting expressions for the average signal (2) in the cases listed above, with D and B given as in Eq. (4). All derivations are provided in Supplementary Section I, where the average signal is first derived for case (4) of a general D and an axisymmetric B, from which the more specialized cases (1–3) are deduced, while the most general case (5) is taken up last.

D general, B isotropic

This is the case where all eigenvalues (a, b, c) of D are possibly different, while B = d·I is proportional to the identity. One finds

    Ē = exp(−d(a + b + c)) = exp(−tr(B D)).    (5)

Indeed, when B = d·I, tr(B R D Rᵀ) = d·tr(D) for every rotation R, and hence the average in Eq. (2) has no effect.

D and B axisymmetric, B rank 1

Here, two eigenvalues of D coincide, which we choose as a = b, while B is rank-1 with eigenvalues (d, 0, 0). This is a widely-utilized case in diffusion MR[1,3,20-24]. The result follows as

    Ē = exp(−a d) · (√π/2) · erf(√((c − a) d)) / √((c − a) d)
      = exp(−a d) · (√π/2) · erfi(√((a − c) d)) / √((a − c) d)    (6)

where erf is the error function and erfi is the imaginary error function; erfi(x) = −i erf(i x). One can choose either of the formulas in (6), but depending on the sign of c − a, one expression will have real arguments and the other imaginary. For instance, with d > 0, the first form has real arguments if D is prolate (c > a) and the second if D is oblate (c < a). It is assumed that a ≠ c and d ≠ 0, since either equality implies that one of the matrices is isotropic, but that case can of course also be included here by continuity arguments.
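The closed form for this case can be checked against a direct average over orientations. The sketch below assumes the eigenvalue conventions D = diag(a, a, c) (prolate, c > a) and B with single eigenvalue d; for a rank-1 B, the SO(3) average reduces to an average over a unit vector u, since the exponent depends only on uᵀDu = a + (c − a)·u_z². Function names and numerical values are illustrative.

```python
import numpy as np
from scipy.special import erf

# Closed form (prolate branch): E = exp(-a d) * sqrt(pi)/2 * erf(s)/s,  s = sqrt((c-a) d)
def powder_average_axisym_stick(a, c, d):
    s = np.sqrt((c - a) * d)
    return np.exp(-a * d) * np.sqrt(np.pi) / 2 * erf(s) / s

# Monte Carlo over orientations: average of exp(-d * (a + (c-a) * u_z^2))
def powder_average_mc(a, c, d, n=200000, seed=1):
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n, 3))                 # isotropic directions
    z2 = u[:, 2]**2 / (u**2).sum(axis=1)        # squared cosine with the symmetry axis
    return np.exp(-d * (a + (c - a) * z2)).mean()

a, c, d = 0.1, 2.0, 3.0                         # hypothetical prolate tensor and b-value
print(powder_average_axisym_stick(a, c, d), powder_average_mc(a, c, d))
```

The two estimates agree to within Monte Carlo error, supporting the reduction of the orientational average to a one-dimensional integral in this axisymmetric case.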

D and B axisymmetric

When both matrices are axisymmetric, with D = diag(a, a, c) and B = diag(d, d, f), we have[13]

    Ē = exp(−(a d + a f + c d)) · (√π/2) · erf(√((c − a)(f − d))) / √((c − a)(f − d))
      = exp(−(a d + a f + c d)) · (√π/2) · erfi(√((a − c)(f − d))) / √((a − c)(f − d))    (7)

With remarks similar to those in the previous case, either of the two forms in (7) can be chosen, and if moreover a = c or d = f, one recovers the first (isotropic) case above. The first form has real arguments when both D and B are either prolate or oblate, while the second form may be preferable when one matrix is prolate and the other oblate.

D general, B axisymmetric

For the almost-general case where the only condition is axisymmetry of B, i.e., B = diag(d, d, f), the result is given in four alternative forms, Eqs (8a–d). Here, the symbol F stands for a hypergeometric function, 1F1 being the confluent hypergeometric function in particular. Equations (8a–c) were derived by a rather direct approach; see Supplementary Section I.A. The fourth expression (8d), which is perhaps the most interesting form when one matrix is axisymmetric, is derived from the general expression, i.e., the case accounted for next. As discussed below, it can lead to very efficient numerical evaluation. The expressions above feature hypergeometric functions. However, it is possible to express them using more familiar functions. For example, by using a scheme[25] that involves term-by-term comparison with the confluent hypergeometric function’s asymptotic behaviour[26], we were able to write Eq. (8b) in a form involving only elementary functions, in which (2n − 1)!! denotes the product of all odd numbers from 1 up to 2n − 1. Similarly, Eq. (8a) can be expressed via more familiar functions, using Eq. (11). We also note that in the literature the signal for this case has been reduced to a one-dimensional integral wherein the integrand is given by a complete elliptic integral[6,14].

D and B general

In the general case where neither matrix (necessarily) has any degenerate eigenvalue, the result can be expressed as in Eqs (10a,b), with three alternatives (10c–e) for the factor Y. The coefficients q contain Iₘ, i.e., the modified Bessel function of the first kind of various orders m, and a regularized hypergeometric function. Also note that the floor function ⌊·⌋ indicates the largest integer smaller than or equal to its argument. It is straightforward to check that Ē as given by Eq. (10) is invariant under the rescaling (D, B) → (λD, B/λ) for nonzero λ, verifying an obvious invariance it should have due to its definition (2). Implied by that definition, Ē must also be unaffected by permutations of (a, b, c) and of (d, e, f), as well as by swapping D ↔ B. This is not at all evident in the expansions above; in fact the ordering of eigenvalues can have drastic effects on the numerical behaviour of the series, even though the eventual sum does not change. Given D and B, and their eigenvalues, the remaining task is thus to assign them (in some order) to a, b, c and d, e, f in such a way that the series can be approximated well with only a few terms; see the next section.

Numerical Behaviour

In this section we give some comments and guidelines on how to evaluate Ē via the series in Eqs (8) and (10). The numerical behaviour of these series varies drastically depending on how the eigenvalues of D and B are ordered (i.e., which eigenvalue is named a, b, and so on). In the formulas in Eq. (10), there is also the option of interchanging D and B. The discussion here revolves around an example employing Eq. (8), but the guidelines apply to the general case of Eq. (10) as well, albeit in a more involved manner. When B is axisymmetric (d = e), f and d are fixed in Eq. (8). On the other hand, in these formulas, which all give the same result, one is free to permute a, b, c. Although this does not affect the answer, it affects the number of terms needed to get a good approximation. Hence, given the three eigenvalues of D, one wants to assign them to a, b, c in a clever way, as well as choose the most efficient formula. Looking at the expansions (8), while there appears to be no obvious “best” choice for assigning a given set of eigenvalues, it is still wise to (i) keep the series expansion parameter (e.g., the one appearing in Eq. (8d)) small, and (ii) avoid alternating signs, since an expansion in which all terms have the same sign cannot suffer from cancellation effects. For instance, it is worth paying attention to the magnitude and sign of the factors multiplying 2F1 in Eq. (8a), and so on for the rest. Therefore, even though a deep analysis of the properties of hypergeometric functions is beyond the scope of this work, it is useful to note the following as a guideline for their sign and magnitude. For the parameter values appearing here, the functions 2F1 are positive and decreasing functions of their argument, and happen to be polynomials. On the other hand, the functions 2F2 are positive and increasing. Finally, the functions 1F1 are positive and increasing for all arguments. (See Supplementary Section II for some elaboration on this.)
In Figs 1 and 2, the contributions of individual terms in the series are displayed, with the index of the terms running along the horizontal axis. The figures correspond, respectively, to the alternative forms (8a) and (8d), which were chosen as examples to illustrate clearly that, term by term, each series exhibits different behaviour depending on the allocation of the eigenvalues to the parameters a, b, c. For the particular case presented in the figures, the matrix D has eigenvalues 0.1, 0.2, 3 and the matrix B has eigenvalues 6, 0.5, 0.5; all the terms are normalized by their sum so that they add up to 1.
Figure 1

The first terms in the series in Eq. (8a), which uses the hypergeometric function 2F1. The terms are normalized by their sum so that they add up to 1. B has eigenvalues 6, 0.5, 0.5, while D has eigenvalues 0.1, 0.2, 3. Depending on how the latter are assigned to a, b and c, the trend of the series expansion varies. Displayed are three (instead of six) distinct choices, because each term is invariant under the interchange a ↔ b, which is not obvious from Eq. (8a), but can be shown using Eq. (19).

Figure 2

The first terms in the series in Eq. (8d), which uses the hypergeometric function 1F1. The terms are normalized by their sum so that they add up to 1. B has eigenvalues 6, 0.5, 0.5, while D has eigenvalues 0.1, 0.2, 3. Depending on how the latter are assigned to a, b and c, the trend of the series expansion varies. Displayed are three (instead of six) distinct choices, since individual terms of Eq. (8d) are manifestly invariant under the interchange a ↔ b.

The first alternative (8a) has the property that its individual terms are invariant under the interchange a ↔ b, which can be shown to follow from relation (11). Therefore, out of the six possible assignments between the eigenvalues of D and the parameters a, b, c, only three are distinct. These are what is depicted in Fig. 1. It is clearly seen that for these parameters this alternative does not afford a series expansion that can be truncated after the first few terms for a usable result. The terms making the most significant contribution to the sum are not even at the beginning, but in this example rather around term number 15. Furthermore, the two latter choices exhibit terms of alternating sign with magnitudes of about 10⁵ times the sum itself. Figure 2 depicts all three distinct choices for the alternative expression (8d). We see a very desirable feature here. When the largest eigenvalue is assigned to c, the terms are seen to converge quickly (note that the expression is invariant under the exchange a ↔ b). With this choice, the relevant 1F1 factors remain bounded with respect to n.
In fact, it follows from termwise comparison in the defining series for 1F1 (see Supplementary Section II) that these 1F1 factors are bounded. We also see that only even powers enter, so that all terms of the series are positive; no cancellation occurs. This choice leads to a sum that converges so quickly that Ē is approximated to within 2% by the first term alone, while the first two terms attain an error of less than 0.01%. In addition, it should be noted that for moderate values of n, the 1F1 factors are explicitly expressible in elementary functions, and that effective recursion relations are at hand (see Supplementary Section II).

Swapping a and b

By construction, all series are unaffected by permutations of a, b, c (although these affect the numerical behaviour), but in the series (8a), the individual terms are also unaffected by the interchange a ↔ b. This follows from relation (11), in which Pₙ is the nth-order Legendre polynomial. It is then readily verified that the corresponding 2F1 factors are indeed equal. The terms in the series (8d) are also unaffected by the interchange a ↔ b, but this is immediate from the expression.

Applications

The results in this paper are the exact expressions given in (8–10), which extend the earlier formulas (5–7), together with the asymptotic behaviour given by Theorem 1. These series can give exact answers in more general models where axisymmetry is not insisted upon. To further motivate the applicability of our findings, we discuss several situations in which the new results provided in this work can be used. We have presented explicit formulas for the orientationally-averaged signal (2) for general (symmetric) matrices D and B. These formulas complement the well-known cases (1), (2), and (3), i.e., the formulas in Eqs (5–7), where D and B have various symmetries. Even when one matrix, say B, is rank-1, the signal expression for a general D (Eq. (8)) is believed to be new. Eq. (10) raises the question of the usefulness of case (5), in which both measurement and diffusion tensors are general. We argue that this solution is indeed useful, for example, when a powdered specimen whose local structure is characterized by a general diffusion tensor is examined. In MRI, the imaging gradients lead to a rank-3 measurement tensor even when a standard Stejskal-Tanner sequence is employed. Moreover, even with a rank-1 measurement tensor B = diag(d, 0, 0) and a general D, knowledge of Ē(d) for sufficiently many (at least 3) values of d determines D. Below, we also discuss power-laws and the asymptotic behaviour of the signal in a more general setting. From a practical standpoint, many “independent” measurements are necessary in order to mitigate noise-related effects. With an axisymmetric measurement tensor B, the space of measurements is two-dimensional (two degrees of freedom in B), while in the general case the space of measurements is three-dimensional. Loosely speaking, this means that in the latter case, there are more “independent” measurement tensors available close to the origin of this space (or any other point common to both spaces, for that matter).
Since measurements close to the origin (i.e., B tensors with small eigenvalues) produce higher signal values, this is favorable from a signal-to-noise-ratio perspective. On the theoretical side, however, there are situations where a general B tensor is actually crucial in determining the diffusivity properties of the specimen; see the subsection below on the estimation of D from the series expansion of Ē.

Remarks on the white-matter signal at large diffusion-weighting

Recently, the orientationally-averaged signal in white-matter regions of the brain was observed to follow the power-law Ē ∝ d^(−1/2) at large d when B is rank-1 with eigenvalue d, e.g., in traditional Stejskal-Tanner measurements[27,28]. Because such decay is predicted for vanishing transverse diffusivity, these results have been interpreted to justify the “stick” model of axons[21,29,30] (a rank-1 D), while also suggesting that the signal from the extra-axonal space disappears at large diffusion-weighting. In a recent article[24], we showed that such a decay is indeed expected for one-dimensional curvilinear diffusion observed via narrow pulses. On the other hand, in acquisitions involving wide pulses, the curvature of the fibers has to be limited in order for the d^(−1/2) behaviour to emerge. We also noted that the d^(−1/2) dependence is valid for an intermediate range of diffusion weightings, as the true asymptotics of the signal decay is governed by a steeper decay[24]. Here, based on these findings, we consider a rank-1 diffusion tensor for representing intra-axonal diffusion, which is capable of reproducing the intermediate d^(−1/2) dependence for measurement tensors of the form B = diag(d, 0, 0). For rank-2, axisymmetric measurement tensors, B = diag(d, d, 0), the orientationally-averaged decay is instead characterized by a d^(−1) power-law whose coefficient involves a, the single nonzero eigenvalue of D; this can be shown to follow from Eq. (6) upon interchanging D and B. For non-axisymmetric, rank-2 measurement tensors B = diag(d, e, 0) with d ≥ e > 0, the signal is sandwiched between the axisymmetric cases, Ē(diag(d, d, 0)) ≤ Ē(diag(d, e, 0)) ≤ Ē(diag(e, e, 0)). Now, consider the case in which the measurement is tuned by “inflating” the B-matrix while preserving its shape, i.e., varying d while keeping the ratio λ = e/d fixed. Then, varying only d, both the lower and upper bounds in the above expression decay according to d^(−1). For the lower bound this is clear, and the upper bound corresponds to the eigenvalue e = λd (since λ is constant and non-zero).
To satisfy these inequalities, the sandwiched signal must thus also decay as d^(−1), which establishes the d^(−1) decay of the orientationally-averaged signal obtained via general rank-2 B matrices. Thus, an alternative validation of the stick model could be performed by acquiring data using planar encoding, in which case the expected decay of the orientationally-averaged signal would be characterized by a d^(−1) power-law regardless of whether or not the B matrix is axisymmetric.
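The two power-laws above can be probed numerically. For a stick D with single (hypothetical) eigenvalue a, the powder averages for linear (rank-1) and planar (rank-2 axisymmetric) encodings reduce to one-dimensional integrals over the orientation angle; quadrupling d should then roughly halve the linear-encoded average (d^(−1/2)) and quarter the planar-encoded one (d^(−1)). The reduction to these integrals is a sketch under the stated assumptions, not the paper's derivation.

```python
import numpy as np
from scipy.integrate import quad

a = 1.0  # hypothetical stick diffusivity (single nonzero eigenvalue of D)

# Linear encoding, B = diag(d, 0, 0): E(d) = integral_0^1 exp(-a d t^2) dt ~ d^(-1/2)
def E_linear(d):
    return quad(lambda t: np.exp(-a * d * t**2), 0.0, 1.0)[0]

# Planar encoding, B = diag(d, d, 0): E(d) = integral_0^1 exp(-a d (1 - t^2)) dt ~ d^(-1)
def E_planar(d):
    return quad(lambda t: np.exp(-a * d * (1.0 - t**2)), 0.0, 1.0)[0]

d = 100.0
print(E_linear(4 * d) / E_linear(d))   # close to 1/2: d^(-1/2) decay
print(E_planar(4 * d) / E_planar(d))   # close to 1/4: d^(-1)   decay
```

This mirrors the argument in the text: the stick's signature changes from a square-root power-law under linear encoding to an inverse power-law under planar encoding.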

Observation of the (non)axisymmetry of local diffusion tensors

Here, we focus our attention on rank-1 measurement tensors, B = diag(d, 0, 0), e.g., as obtained using the Stejskal-Tanner sequence, and investigate to what extent D is determined when Ē(d) is known for some values of d. Specifically, we ask: is it possible to distinguish an axisymmetric D from a non-axisymmetric D? The answer to this question is yes; in fact, it is not so hard to see that if Ē₁(d) and Ē₂(d) are equal for all d, then D₁ and D₂ must be equal—see for instance the remark near Eq. (17) in the next section. (Also recall that we identify the matrices according to their eigenvalues; two matrices are “equal” for the purposes of this article if one can be rotated into the other.) As an illustration, starting with a rank-1 measurement tensor B = diag(d, 0, 0), Fig. 3 shows the signal as a function of d for six different diffusion tensors D₁, …, D₆. In this example, all tensors Dᵢ have the same trace, yielding the same behaviour near the origin, and their common smallest eigenvalue leads to the same large-d behaviour. By varying the remaining two eigenvalues (whose values are found in Fig. 3) while keeping their sum constant, different curves are produced. The six curves shown in Fig. 3 are samples from a family (indexed by the diffusion tensor D) of curves characterized by their initial and asymptotic behaviour. It is interesting to note that no axisymmetric diffusion tensor D other than D₁ and D₆ can produce a curve in this family. In this context, the ‘same asymptotic behaviour’ refers to the same exponential decay as d → ∞, which is set by the smallest eigenvalue of the diffusion tensor. The initial behaviour, i.e., the slope at d = 0, is given by the trace of the diffusivity, and it is immediate to see that an axisymmetric tensor whose smallest eigenvalue is 0.2 μm²/ms and whose trace is 2.6 μm²/ms has to have the eigenvalues of either D₁ or D₆.
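The distinguishability claim can be illustrated numerically. The sketch below constructs two hypothetical tensors sharing the trace (2.6) and the smallest eigenvalue (0.2) quoted in the text, one axisymmetric and one not, and shows that their powder-averaged signals nevertheless differ at intermediate d; the specific eigenvalue choices are assumptions for illustration.

```python
import numpy as np

def E_rank1_B(D_eigs, d, n=400000, seed=2):
    """Powder average for rank-1 B = diag(d,0,0): mean over unit vectors u of exp(-d u^T D u)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    q = (u**2) @ np.asarray(D_eigs)          # u^T D u evaluated in D's eigenbasis
    return np.exp(-d * q).mean()

# Same trace (2.6) and same smallest eigenvalue (0.2); axisymmetric vs general:
D_axi = [0.2, 1.2, 1.2]
D_gen = [0.2, 0.4, 2.0]
for d in (0.5, 1.0, 2.0):
    print(d, E_rank1_B(D_axi, d), E_rank1_B(D_gen, d))
```

Near d = 0 the two curves coincide (same trace), but at moderate d they separate, consistent with Ē(d) over all d pinning down the eigenvalues of D.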
Figure 3

Given the rank-1 measurement tensor B = diag(d, 0, 0), the signal is plotted as a function of d for six different diffusion tensors D₁, …, D₆. These diffusion tensors all have the same trace; their eigenvalues (in μm²/ms) are listed in order for D₁ through D₆.

In the remaining part of this section, we consider the intermediate-d regime alluded to in the previous section. For a general rank-3 diffusion tensor D, the falloff of Ē(d) is exponential, which can be inferred by sandwiching Ē(d) between two expressions of the form (6), both of which decay exponentially fast for large d. Consequently, a reliable inference cannot be made in the large-d regime in typical acquisitions due to limited SNR. The only exception is when the smallest eigenvalue of D vanishes, i.e., when D is of rank 2. In this case, the arguments from the preceding section can be employed by interchanging the matrices D and B: for large d one expects a d^(−1) decay, so that the product d·Ē(d) should approach a finite limit (at least when the two nonzero eigenvalues of D coincide). This is indeed the case in general, as the following theorem shows.

Theorem 1.

Suppose that the diffusion tensor D has eigenvalues a, b, 0 with a, b > 0, and that the measurement tensor B has eigenvalues d, 0, 0. Denoting the corresponding powder average by Ē(d), it then holds that

    lim_{d→∞} d·Ē(d) = 1 / (2√(ab)).

Proof.

See Supplementary Section V. This limiting behaviour is illustrated in Fig. 4, where, starting with a measurement tensor B = diag(d, 0, 0), plots of the product d·Ē(d) are shown for six different rank-2 diffusion tensors D₁, …, D₆, all with the same trace. For each such tensor, the conditions of Theorem 1 are met, and the limits (as d → ∞) can be calculated. These limits are shown with dots in Fig. 4. The discrepancy between the curves and the corresponding dots is due to the finite values of d.
Figure 4

Given the rank-1 measurement tensor B = diag(d, 0, 0), the product d·Ē(d) is plotted as a function of d for six different diffusion tensors D₁, …, D₆. These diffusion tensors all have the same trace; their eigenvalues (in μm²/ms) are listed in order for D₁ through D₆. Using Theorem 1, the limits as d → ∞ are known, and the corresponding values are shown with dots. The discrepancy between the curves and the corresponding dots is due to the finite values of d.

Hence, for large d, Ē(d) ≈ 1/(2d√(ab)), featuring the geometric mean √(ab) of the two nonzero eigenvalues, which coincides with their arithmetic mean only when a = b. On the other hand, the small-d behaviour of the average signal is always Ē(d) ≈ 1 − d(a + b)/3, so that a + b can, in principle, be determined from the initial decay of the average signal. Any mismatch between this estimate and the estimate of √(ab) from the power-law decay of the signal can thus be attributed to transverse anisotropy of D. The above inference is a clear example of how the derived expressions involving general tensors can be utilized to gain new insight into the characterization of the local structure.

Estimation of D from the power series expansion of Ē

One of the main results of this paper is the series in Eq. (10), which makes it possible to calculate the powder average for general matrices D and B without numerical integration or sampling in SO(3). As the corresponding formula (8) for the case when one of the matrices, B, is axisymmetric is simpler, a relevant question is whether the use of a general B is of any help in determining D from a set of measurements. This question immediately splits into two: whether there is an advantage to a general B in principle, and whether there is one in practice. For instance, for a fixed B-shape and Ē regarded as a function of a scalar strength λ, the asymptotics of Ē for large λ may be of primary importance in principle, but in practice the actual values of Ē in that regime may be so small that noise and measurement errors make them impossible to use for determining D. To address the question above, we start by considering a general B and then the special case when B is axisymmetric. Since any axisymmetric matrix B is the sum of an isotropic matrix and a rank-1 matrix, and since the effect on Ē of an isotropic matrix is trivial, cf. Eq. (5), it is sufficient to consider rank-1 matrices B. So, with a general D, we consider a family of B-matrices given by B = λB₀, where B₀ is a fixed matrix and λ is a scalar strength parameter. For fixed D and B₀, the orientationally-averaged signal Ē is a function of λ, and we seek its dependence for small λ. Consider therefore the expansion Ē(λ) = Σₙ cₙ λⁿ, with c₀ = 1. The coefficients cₙ can be estimated from the knowledge of Ē(λ) for a suitable set of (small) values {λ}.
Up to third order (see Supplementary Section III), the coefficients cₙ are given in Eq. (16). For a generic matrix B₀ (with the isotropic choice being the singular exception), knowledge of these three coefficients suffices to determine the matrix D. For ease of illustration, consider the special case wherein B₀ is rank-1, yielding the simplified coefficients (17): this is a non-degenerate system that gives tr(D), tr(D²) and tr(D³) in terms of c₁, c₂ and c₃, and since (the eigenvalues of) D are determined by these three traces, c₁, c₂ and c₃ determine D. Returning to a general B₀, a convenient point of view is to decompose B₀ into two parts (cf. Supplementary Section I) and write B₀ = δI + Δ, where tr(Δ) = 0. Then the average signal (2) factors into two, each factor calculable from δI and Δ separately. We find that when δ = 0, the expansion coefficients (16) attain the simplified form (18), while an isotropic B₀ on the other hand yields (renaming the coefficients) a purely exponential factor. The resulting signal expansion is the product of these two factors. Note how the isotropic part is not all that helpful: it attenuates the signal without being sensitive to anything other than tr(D). When using a general B₀ as described above, Eq. (18) tells us that D cannot be determined from c₁ and c₂ alone, even if we use the extra degree of freedom in B by varying δ and Δ. This is so since c₁ and c₂ contain information only on two invariants (tr(D) and tr(D²)), which is not sufficient to determine D, which has three eigenvalues. That being said, in practice, varying δ and Δ may be a way of obtaining “independent” measurements to reduce the influence of noise, i.e., increase SNR and thereby provide more reliable estimates of the coefficients cₙ. Let us also mention that in the situation described by Eq. (18), where the combined freedom of δ and Δ offers no extra information, one can still vary these to test the assumption that the specimen is indeed described by a single diffusivity matrix D: for different values of δ and Δ, one should always get the same estimated matrix D.
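The statement that c₁ and c₂ see only two invariants can be illustrated for a rank-1 B₀ = uuᵀ, where the low-order coefficients reduce to moments of uᵀDu over the unit sphere. The standard isotropic moment identities ⟨uᵀDu⟩ = tr(D)/3 and ⟨(uᵀDu)²⟩ = ((tr D)² + 2 tr(D²))/15 are assumed here as a sketch; they are not reproduced from the paper's Eq. (16).

```python
import numpy as np

def sphere_moments(D_eigs, n=500000, seed=3):
    """MC estimates of <u^T D u> and <(u^T D u)^2> for u uniform on the unit sphere."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    q = (u**2) @ np.asarray(D_eigs)          # u^T D u in D's eigenbasis
    return q.mean(), (q**2).mean()

D = np.array([2.0, 0.5, 0.1])                # hypothetical eigenvalues
m1, m2 = sphere_moments(D)
trD, trD2 = D.sum(), (D**2).sum()
print(m1, trD / 3)                           # first moment:  tr(D)/3
print(m2, (trD**2 + 2 * trD2) / 15)          # second moment: ((tr D)^2 + 2 tr(D^2))/15
```

Only tr(D) and tr(D²) appear in these two moments, so a third coefficient (involving tr(D³)) is indeed needed to pin down all three eigenvalues.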

Standard model of white-matter with a general diffusion tensor for the extracellular compartment

The extra degree of freedom provided by the parameters δ and B̂_Δ may be crucial in certain relevant situations. In principle, such situations can be addressed by involving higher-order coefficients, but reliably estimating them would become increasingly difficult, and using measurements with various values of δ and B̂_Δ is more robust. We give the following example, which could be relevant for simplified models of white-matter microstructure. Suppose that our specimen is a mixture of two different substances, in unknown proportions, where one of the substances is (from a diffusivity perspective) a “stick”. Thus, in proportions p and q, with p + q = 1, we have the unknown diffusivity matrices D and a rank-one stick tensor. This model is similar to the commonly employed white-matter model (sometimes referred to as the “standard model”[31]), but differs from it in two ways: (i) we ignore the isotropic compartment, whose contribution to the orientationally-averaged signal is simply an exponential (see (5)); (ii) the contribution from the extracellular matrix is given by a general diffusion tensor, which is not necessarily axially symmetric. In this model, we therefore have five unknowns to determine: the three invariants of D together with the proportion p and the stick diffusivity. Since the averaged signal of the mixture is the p- and q-weighted sum of the two compartments’ averaged signals, Eq. (18) now becomes a correspondingly weighted system for the expansion coefficients, Eq. (20). By varying δ and/or B̂_Δ, these equations determine the unknowns “almost uniquely”, in the sense that there are (rare) situations where there are two sets of acceptable solutions to Eq. (20). However, if one also adds measurements with an isotropic measurement tensor, this possible ambiguity goes away. (See Supplementary Section IV for details.) Thus, it is possible to obtain all 5 unknowns of a multi-compartment white-matter model from the first three terms of the power-series representation of the orientationally-averaged signal.
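For the stick compartment specifically, the orientationally-averaged signal under a rank-one measurement tensor has the well-known closed form (√π/2)·erf(√(λμ))/√(λμ), where μ is the stick diffusivity. As a consistency check of ours (not code from the paper), the sketch below verifies that the small-λ Taylor coefficients of this closed form agree with the rank-1 series formulas evaluated with tr(Dⁿ) = μⁿ; the value of μ is an arbitrary example:

```python
import numpy as np
from scipy.special import erf

def stick_powder_signal(lam, mu):
    """Closed-form orientational average <exp(-lam*mu*cos^2 theta)> over the sphere."""
    x = lam * mu
    return 0.5 * np.sqrt(np.pi / x) * erf(np.sqrt(x))

def rank1_coeffs(t1, t2, t3):
    """Small-lam series coefficients for a rank-1 measurement tensor,
    given the power sums t_n = tr(D^n) of the compartment's diffusion tensor."""
    return (-t1 / 3.0,
            (t1**2 + 2.0 * t2) / 30.0,
            -(t1**3 + 6.0 * t1 * t2 + 8.0 * t3) / 630.0)

mu = 1.5                                       # stick diffusivity (example value)
c1, c2, c3 = rank1_coeffs(mu, mu**2, mu**3)    # series: 1 - x/3 + x^2/10 - x^3/42, x = lam*mu

lam = np.array([1e-3, 2e-3, 5e-3])
taylor = 1.0 + c1 * lam + c2 * lam**2 + c3 * lam**3
err = np.max(np.abs(stick_powder_signal(lam, mu) - taylor))
print(err)   # tiny: only the quartic remainder is left

# For a p,q mixture, the powder-averaged signal is p*S_D + q*S_stick, so each
# series coefficient of the mixture is the same p,q-weighted sum of the
# compartments' coefficients -- the structure exploited by Eq. (20).
```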

Discussion and Conclusions

Our elaborations on the numerical behaviour of the main results (8) and (10) were restricted to the case where one matrix is axisymmetric, corresponding to Eq. (8). In the more general case corresponding to Eq. (10), the same guidelines apply regarding how the ordering of the eigenvalues affects the numerical behaviour, with some additional caveats. First of all, since no eigenvalues necessarily coincide, the number of distinguishable orderings increases, by a factor of 6 in particular. Also, one not only has to choose among the three alternatives (10c–e) by the same guidelines that apply to the alternatives (8a–d), but must also specify the number of coefficients q. A more brute-force approach to calculating the orientational average (2) is to set up the necessary integrations in the space of rotations and perform them by some discretization scheme. However, such an approach requires the integrand to be relatively well-behaved in the integration domain in order to be accurate, which is not always the case. For instance, when the matrices D and B are very similar to each other, each with one eigenvalue dominating the others, the integrand is (virtually) zero for most rotations R, with most of the contribution stemming from a small subset. In such cases, a discretization of the average is not reliable. In this work, we represented local diffusion by employing a diffusion tensor along with the generalization[32] of the Stejskal-Tanner formula[33] for the signal contribution of each microdomain. The underlying assumption is that diffusion in each and every microdomain is unrestricted. This assumption[16] has been the building block of not only the microstructure models mentioned above, but also the techniques[5,34-36] developed within the multidimensional diffusion MRI framework. 
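The pitfall of discretizing the average can be made tangible with a small experiment of ours (the tensors and thresholds are illustrative choices, not from the paper): sample the integrand exp(−tr(B R D Rᵀ)) over random rotations for strongly anisotropic, nearly aligned tensors, and count how few samples carry most of the average.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Strongly anisotropic example tensors: D oblate, B a strong rank-1 probe along z
D = np.diag([1.0, 1.0, 0.02])
B = 400.0 * np.outer([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])

R = Rotation.random(100_000, random_state=1).as_matrix()
Q = R @ D @ np.swapaxes(R, 1, 2)
w = np.exp(-np.einsum('ij,nji->n', B, Q))    # integrand over SO(3)

# How concentrated is the average? Find the smallest fraction of samples
# that accounts for 90% of the total integrand mass.
ws = np.sort(w)[::-1]
n90 = np.searchsorted(np.cumsum(ws), 0.9 * ws.sum()) + 1
frac = n90 / len(w)
print(frac)   # a tiny fraction of rotations carries nearly all of the average
```

A coarse deterministic grid over SO(3) can easily miss this small contributing region entirely, which is why the series representations (8) and (10) are preferable in such regimes.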
Introduction of the confinement tensor concept[37,38] provides a viable direction that could achieve the same goal while accounting for the possibly restricted character of diffusion within the microdomains. This is the subject of future work. Another limitation of the present work is the assumption that there is no variation in the size of the microdomains making up the complex environment, an assumption also employed in many of the microstructure models. Previous studies[39,40] suggest the complexity of accounting for such variations in different contexts. We intend to address this issue in the future. In conclusion, we studied the orientationally-averaged magnetic resonance signal by extending the existing expressions to cases involving general tensors with no axisymmetry. This was accomplished by evaluating a challenging average (Eq. (2)), or equivalently the integral in (3), for the special class of p(D) distributions considered in this article. Although the results are given as sums of infinitely many terms, we showed that, with certain arrangements of the parameters, very accurate estimates can be obtained by retaining only a few terms of the series. These developments led to a number of interesting inferences on the properties of the signal decay curve as well as on the estimation of relevant parameters from the signal. The findings presented in this work could be useful in many contexts in which the expression (2) (or (3)) emerges. For example, though we employed the nomenclature of diffusion MR in this paper, our findings are applicable to solid-state NMR spectroscopy as well, due to the mathematical similarities of the two fields.
References (24 in total)

1.  Statistical model for diffusion attenuated MR signal.

Authors:  Dmitriy A Yablonskiy; G Larry Bretthorst; Joseph J H Ackerman
Journal:  Magn Reson Med       Date:  2003-10       Impact factor: 4.668

2.  Characterization and propagation of uncertainty in diffusion-weighted MR imaging.

Authors:  T E J Behrens; M W Woolrich; M Jenkinson; H Johansen-Berg; R G Nunes; S Clare; P M Matthews; J M Brady; S M Smith
Journal:  Magn Reson Med       Date:  2003-11       Impact factor: 4.668

3.  Measurement of fiber orientation distributions using high angular resolution diffusion imaging.

Authors:  Adam W Anderson
Journal:  Magn Reson Med       Date:  2005-11       Impact factor: 4.668

4.  Resolution of complex tissue microarchitecture using the diffusion orientation transform (DOT).

Authors:  Evren Ozarslan; Timothy M Shepherd; Baba C Vemuri; Stephen J Blackband; Thomas H Mareci
Journal:  Neuroimage       Date:  2006-03-20       Impact factor: 6.556

5.  NMR diffusion-encoding with axial symmetry and variable anisotropy: Distinguishing between prolate and oblate microscopic diffusion tensors with unknown orientation distribution.

Authors:  Stefanie Eriksson; Samo Lasič; Markus Nilsson; Carl-Fredrik Westin; Daniel Topgaard
Journal:  J Chem Phys       Date:  2015-03-14       Impact factor: 3.488

6.  NMR signal for particles diffusing under potentials: From path integrals and numerical methods to a model of diffusion anisotropy.

Authors:  Cem Yolcu; Muhammet Memiç; Kadir Şimşek; Carl-Fredrik Westin; Evren Özarslan
Journal:  Phys Rev E       Date:  2016-05-05       Impact factor: 2.529

Review 7.  On modeling.

Authors:  Dmitry S Novikov; Valerij G Kiselev; Sune N Jespersen
Journal:  Magn Reson Med       Date:  2018-03-01       Impact factor: 4.668

8.  Effective potential for magnetic resonance measurements of restricted diffusion.

Authors:  Evren Özarslan; Cem Yolcu; Magnus Herberthson; Carl-Fredrik Westin; Hans Knutsson
Journal:  Front Phys       Date:  2017-12-19

9.  Influence of the size and curvedness of neural projections on the orientationally averaged diffusion MR signal.

Authors:  Evren Özarslan; Cem Yolcu; Magnus Herberthson; Hans Knutsson; Carl-Fredrik Westin
Journal:  Front Phys       Date:  2018-03-02

10.  Multi-compartment microscopic diffusion imaging.

Authors:  Enrico Kaden; Nathaniel D Kelm; Robert P Carson; Mark D Does; Daniel C Alexander
Journal:  Neuroimage       Date:  2016-06-06       Impact factor: 6.556

Cited by (2 in total)

Review 1.  The sensitivity of diffusion MRI to microstructural properties and experimental factors.

Authors:  Maryam Afzali; Tomasz Pieciak; Sharlene Newman; Eleftherios Garyfallidis; Evren Özarslan; Hu Cheng; Derek K Jones
Journal:  J Neurosci Methods       Date:  2020-10-02       Impact factor: 2.390

2.  Direction-averaged diffusion-weighted MRI signal using different axisymmetric B-tensor encoding schemes.

Authors:  Maryam Afzali; Santiago Aja-Fernández; Derek K Jones
Journal:  Magn Reson Med       Date:  2020-02-21       Impact factor: 3.737

