
The modern structurator: increased performance for calculating the structure function.

Mojtaba Norouzisadeh (1,2), Mohammed Chraga (1), Giovanni Cerchiari (3), Fabrizio Croccolo (1)

Abstract

The autocorrelation function is a statistical tool often combined with dynamic light scattering (DLS) techniques to investigate the dynamical behavior of the scattered-light fluctuations and thus to measure, for example, the diffusive behavior of transparent particles dispersed in a fluid. An alternative approach to the autocorrelation function for the analysis of DLS data was proposed decades ago: it consists of calculating the so-called structure function, which is built from differences of the signal taken at different times. The structure-function approach has been proven more robust than the autocorrelation method in terms of noise and drift rejection. It has therefore gained visibility, in particular in combination with imaging techniques such as dynamic shadowgraphy and differential dynamic microscopy. Here, we show how the calculation of the structure function over thousands of images, typical of such techniques, can be accelerated, with the aim of achieving real-time analysis. The acceleration is realized by taking advantage of the Wiener–Khinchin theorem, i.e., by evaluating the correlations underlying the image differences through a Fourier transform in time. The new algorithm was tested on both CPU and GPU hardware, showing that the acceleration is particularly large in the case of the CPU.
© 2021. The Author(s).

Year:  2021        PMID: 34855019      PMCID: PMC8639561          DOI: 10.1140/epje/s10189-021-00146-2

Source DB:  PubMed          Journal:  Eur Phys J E Soft Matter        ISSN: 1292-8941            Impact factor:   1.890


Introduction

Dynamic light scattering (DLS) techniques have been used for decades to obtain information about the dynamical behavior of a variety of samples, spanning from soft matter physics to biology [1]. The main idea of DLS is to measure the intensity of the light scattered by a transparent sample at a given angle and to statistically analyze its fluctuations in time, in order to obtain information on the motion of the components inside the sample. For example, DLS analysis of the Brownian motion of particles dispersed in a fluid allows measuring their diffusion coefficient and then, ultimately, their size distribution, thanks to the Stokes–Einstein relation between the particles' mobility and their size. The quantity classically obtained in DLS instruments is the autocorrelation function, i.e., the direct output of "correlators" that compute the scalar product of the intensity signal coming from the light detector with the same quantity at different delay times.

An alternative approach to the autocorrelation function was proposed several decades ago and consists of computing the structure function, obtained by averaging the squared differences of the signal taken at different times [2]. At the same time, it was proposed to develop "structurators" in place of the more widely known correlators [3]. With the spread of pixelated detectors, imaging techniques like dynamic shadowgraphy, dynamic Schlieren [4-9], and differential dynamic microscopy (DDM) [10-12] have taken advantage of the structure function because of its improved robustness for data analysis: compared to the autocorrelation approach, it better rejects the background signal deriving from steady-state and slow-drift noise sources [2, 13]. This is due to the intrinsic nature of the structure function, which is based on differences of signal elements at increasing time delays, so that any spurious signal changing on time scales longer than the utilized time delay is subtracted out. By using spatial Fourier analysis, these imaging techniques allow scientists to investigate the temporal evolution of a sample at the different length scales present in a set of images recorded at different times [13]. For this reason, they have gained popularity, especially in the field of soft matter physics.

In fact, the combination of the robustness of the structure-function analysis with simple and/or already available optical setups has allowed their use both in traditional laboratories [10-12] and in orbiting experiments on the ISS [14-16], for investigating the dynamics of rather different samples: from colloidal particles [10] to bacteria [17], and from biological cells [18] to density fluctuations in and out of thermal equilibrium [4, 19], among many others, as witnessed by several review articles [12, 13, 20, 21]. As stated, the structure-function approach can be combined with imaging techniques, thus requiring an optical system, such as transmitted-light microscopy [10], fluorescence-based microscopy [22], or dark-field imaging [23], to acquire the series of images. The image series is then processed by custom-made software to compute the structure function, as defined by Schulz-DuBois and Rehberg [2] and later applied to Schlieren and shadowgraphy [8] and to optical microscopy [10, 24].
However, a rapid evaluation of the structure function is fundamental to achieve real-time analysis in laboratory conditions and may play a crucial role in bringing this approach to industrial and commercial applications. The available software programs calculate the structure function in different ways. Some process the images by first calculating the differences between pairs of images and then evaluating the bidimensional fast Fourier transforms (FFT) of the differences [11]. Others first compute the FFT of the images and then calculate the differences in Fourier space [25]. Since the number of images that can be acquired, and the number of pixels therein, have increased considerably over the last two decades, the computational load needed to evaluate the structure function has increased accordingly. In the meantime, the computational capabilities of modern computers have also grown, but a major breakthrough in reducing the computation time of the structure function came when researchers started to implement the calculation on graphics processing units (GPU) [22, 25]. The GPU implementation of this computational task decreased the computation time by a factor of 10-30, thereby reducing the data-analysis time from several hours to a few tens of minutes. In the present article, we present a different route to calculating the structure function of an image series, taking advantage of the Wiener–Khinchin theorem [26, 27]: the calculation is performed by using the Fourier transform in time rather than by evaluating differences of spatial FFTs. This approach enables a further optimization step and allows us to compute the structure function faster than state-of-the-art existing software. We obtain a considerable speed-up of the calculation, particularly when GPU acceleration is not available. The article is organized as follows. First, we provide an example of application by means of shadowgraph images, which are later utilized to test the software performance. Then, we discuss our method for calculating the structure function and compare it with state-of-the-art algorithms [25]. Finally, we discuss the results and provide conclusions.
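Since the FFT is a linear operation, the two orderings give identical structure functions, and the choice between them is purely a matter of computational cost. A minimal NumPy check of this equivalence (an illustrative sketch; the software discussed in this article is written in C++/CUDA):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
a = rng.random((256, 256))   # two synthetic "images"
b = rng.random((256, 256))

# The FFT of the difference of the images...
fft_of_diff = np.fft.fft2(a - b)
# ...equals the difference of the FFTs of the images (linearity of the FFT).
diff_of_ffts = np.fft.fft2(a) - np.fft.fft2(b)

print(np.allclose(fft_of_diff, diff_of_ffts))   # prints: True
```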

Test case: shadowgraph observation of density fluctuations

In this section, we describe a free-diffusion experiment obtained by carefully layering two miscible fluids, the denser one placed at the bottom of the container so as to obtain a gravitationally stable condition. The fluid system is investigated by shadowgraphy, i.e., an optical technique able to visualize the density fluctuations within the fluid as a series of images, from which one can extract the density-fluctuation structure function by means of the DDA algorithm. In the classical implementation of the DDA algorithm [2, 4, 8, 10], the structure function is calculated by first evaluating the differences among all pairs of images, then computing the power spectra of those differences, and finally averaging the power spectra over all the pairs of images acquired with the same time delay. This procedure can be defined as follows:

$$ d(m) \;=\; \frac{1}{N-m} \sum_{n=0}^{N-m-1} \left| \hat{i}_{n+m}(\vec{q}) - \hat{i}_{n}(\vec{q}) \right|^{2}, \qquad (1) $$

where the delay index m runs from 0 to N−1, the index n runs over the N−m image pairs available at delay m, and $\hat{i}_{n}(\vec{q})$ indicates the bidimensional FFT in space of the n-th image. The absolute value operation "|·|" is intended for every wave vector component of the FFT. (A minimal numerical sketch of this procedure is given at the end of this section.)

The initial condition is prepared in two steps: first, we introduce pure water, completely filling a glass cylindrical cell (Hellma, 120-OS-20); second, we slowly inject the glycerol and water solution (20% w/w) until reaching half of the cell. By this procedure, we obtain a two-layer sample in which the two miscible liquids are separated by a vanishing horizontal interface and are stabilized by the gravitational force, while the only mechanism relaxing the concentration gradient with time is mass diffusion. The dissolving concentration gradient provides a non-equilibrium condition that amplifies the spontaneous velocity fluctuations within the fluid [28]. This results in the appearance of non-equilibrium fluctuations at all wavelengths, which can be visualized by means of the shadowgraph setup, as done in several publications [4].

For the shadowgraph observation, the cell is illuminated by a collimated plane-parallel beam obtained from a super-luminous diode (Superlum, SLD-MS-261-MP2-SM) with a wavelength of  nm, propagating along the vertical axis. The light propagates through the sample, and the density fluctuations induce local fluctuations of the refractive index that scatter the light field. A charge-coupled device (CCD) records the interference between the primary beam and the light scattered by the refractive-index fluctuations inside the fluid. We acquired sets of 2000 images at a frame rate of 25 Hz.

In Fig. 1 we show: (a) a typical shadowgraph image, (b) a typical image difference, with enhanced contrast to make the tiny density fluctuations visible, and (c) its bidimensional power spectrum displayed on a logarithmic scale. The two images considered for the difference were taken 4 s apart, so that the signal is uncorrelated for most of the wave vectors and the structure function has already reached its maximum. In panel (d), the azimuthal averages of the structure functions are shown for several delay times m. The inset of panel (d) shows the structure function as a function of the delay time for three selected wave vectors. The strong oscillations as a function of the wave vector are related to the shadowgraph transfer function, as described in the literature [9]. The structure function increases as a function of the delay time at any wave vector and can be analyzed to investigate the diffusive behavior of concentration fluctuations during the diffusion process. The structure function is modeled as detailed in the literature [5], by providing a suitable model for the intermediate scattering function, which in the present case is a single exponential decay. The fitting procedure thus provides a measurement of the decay time for any wave vector q, as shown in panel (e). The right part of the plot shows the typical behavior of concentration non-equilibrium fluctuations, from which one can extract the value of the mass diffusion coefficient, as has been done for thermodiffusion experiments [5]. The left part of the plot shows the effect of gravity on the decay times of concentration non-equilibrium fluctuations, already reported in several ground-based experiments [8]. This latter part of the analysis is beyond the scope of the present paper and will be published in a separate work.
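As referenced above, the following sketch illustrates the classical DDA procedure of Eq. 1 in NumPy form. It is only an illustration of the formula: the authors' actual implementation is written in C++/CUDA, and the function and array names here are our own.

```python
import numpy as np

def structure_function_dda(images):
    """Classical DDA (Eq. 1): for each delay m, average the spatial power
    spectra of all image differences taken m frames apart."""
    images = np.asarray(images, dtype=np.float64)
    N = images.shape[0]
    d = np.zeros(images.shape)
    for m in range(1, N):
        diffs = images[m:] - images[:-m]             # all N-m pairs at delay m
        spectra = np.abs(np.fft.fft2(diffs)) ** 2    # 2D power spectrum of each
        d[m] = spectra.mean(axis=0)                  # average over the pairs
    return d                                         # d[m] is a 2D map over q
```

Computed this way, the procedure requires O(N^2) bidimensional FFTs for N images, which motivates the optimizations discussed in the next section.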
Fig. 1

Sample data adopted for the tests reported in this article. The data correspond to a measurement of the non-equilibrium concentration fluctuations in a free-diffusion experiment. (a) Sample image; the size of the image in real space is 19 mm. (b) Difference of two images taken 4 s apart. (c) Structure function averaged over 1000 differences of image pairs with a 4 s time delay. (d) Angular average of bidimensional structure functions like the one shown in (c), for different time delays. (d, inset) Structure function as a function of the time delay for three different wave vectors. (e) Decay time as obtained by fitting the structure functions with a model function containing a single exponential decay
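For concreteness, the single-exponential analysis used to extract the decay times in panel (e) can be sketched as follows. The model d(q, Δt) = A(q)[1 − exp(−Δt/τ(q))] + B(q) is the form commonly used in the structure-function literature for a single-exponential intermediate scattering function; the parameter names and fitting details below are our assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(dt, A, B, tau):
    # Structure function for a single-exponential intermediate scattering
    # function: amplitude A, baseline B, decay time tau.
    return A * (1.0 - np.exp(-dt / tau)) + B

def fit_decay_time(delays, d_at_q):
    """Fit the structure function at one wave vector q. 'delays' are the
    time delays in seconds, e.g., multiples of 1/25 s at a 25 Hz frame rate."""
    p0 = (d_at_q.max() - d_at_q.min(), d_at_q.min(), delays[len(delays) // 2])
    (A, B, tau), _ = curve_fit(model, delays, d_at_q, p0=p0)
    return tau

# Repeating the fit for every wave vector yields the decay-time curve tau(q)
# shown in panel (e) of Fig. 1.
```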

Different approaches to the structure function

The calculation of the structure function involves evaluating differences, FFTs, and averages, all of which can be performed efficiently on a GPU as parallel operations [22]. This approach can be optimized by exploiting the linearity of the FFT and the available hardware memory, as described in ref. [25]. The calculation of Eq. 1 can be approached via a two-step algorithm. First, all FFTs of the images are calculated and stored in the local memory. Second, each matrix d(m) is evaluated by averaging differences of the FFTs of the images, rather than FFTs of image differences, exploiting the linearity of the FFT operation. This reduces the number of operations to be performed, because the stored FFTs can be reused for different d(m): for N images, the number of FFTs to be computed drops from O(N^2) to O(N). While this optimization reduces the number of FFTs, the overall algorithm still has a global computational complexity of O(N^2). This can be seen from Eq. 1: there are as many time delays m as there are images, and for each m the matrix d(m) is obtained via a sum over up to N images.

In this work, we present a new approach that reduces the global computational complexity of the algorithm to O(N log N) by using the Wiener–Khinchin theorem [26, 27], which states that, for a stationary random process, the autocorrelation function can be calculated from the power spectrum (in time) of the process. We expand the square modulus operation of Eq. 1 in the following way:

$$ d(m) \;=\; \frac{1}{N-m} \sum_{n=0}^{N-m-1} \left[ \left|\hat{i}_{n}(\vec{q})\right|^{2} + \left|\hat{i}_{n+m}(\vec{q})\right|^{2} - 2\,\mathrm{Re}\!\left( \hat{i}_{n+m}(\vec{q})\,\hat{i}_{n}^{*}(\vec{q}) \right) \right], \qquad (2) $$

where the symbol "*" indicates complex conjugation. In the sum, the first term is the average of the first N−m spatial power spectra, while the second term is the average of the last N−m spatial power spectra. Both terms have a computational complexity of O(N). The last term, identified by the product $\hat{i}_{n+m}\hat{i}_{n}^{*}$, is the autocorrelation function of the image FFTs. The autocorrelation is the only term in Eq. 2 with a computational complexity of O(N^2). By applying the Wiener–Khinchin theorem [26, 27], the autocorrelation function can be evaluated via the power spectrum in the temporal-frequency Fourier space. The advantage of computing the autocorrelation function via the Fourier transform in time is the speed-up provided by the FFT algorithm, which reduces the computational complexity of this term from O(N^2) to O(N log N) [29].
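The following NumPy sketch implements this scheme: the two power-spectrum terms of Eq. 2 are obtained from running sums, and the correlation term is evaluated with a zero-padded FFT in time, as the Wiener–Khinchin theorem allows. It is a single-pass illustration under our own naming; the published program is written in C++/CUDA and additionally splits the wave vectors into groups when memory is insufficient.

```python
import numpy as np

def structure_function_with_ft(images):
    """Structure function via Eq. 2: the O(N^2) correlation sum is replaced
    by an O(N log N) temporal FFT (Wiener-Khinchin theorem)."""
    images = np.asarray(images, dtype=np.float64)
    N = images.shape[0]
    ihat = np.fft.fft2(images)          # step 1: spatial FFT of every image
    power = np.abs(ihat) ** 2           # |i_n(q)|^2
    csum = np.cumsum(power, axis=0)     # running sums over the time index
    total = csum[-1]

    # Step 2: linear (non-circular) autocorrelation of each wave vector's
    # time sequence; zero-padding to 2N makes the circular correlation
    # computed by the FFT coincide with the linear one required by Eq. 2.
    F = np.fft.fft(ihat, n=2 * N, axis=0)
    corr = np.fft.ifft(np.abs(F) ** 2, axis=0)[:N].real  # Re sum_n i_{n+m} i_n*

    d = np.empty_like(power)
    for m in range(N):
        tail = total - (csum[m - 1] if m > 0 else 0.0)   # sum |i_n|^2, n >= m
        head = csum[N - m - 1]                           # sum |i_n|^2, n < N-m
        d[m] = (tail + head - 2.0 * corr[m]) / (N - m)
    return d   # identical (up to rounding) to the pairwise result of Eq. 1
```

On our reading, the reduction in complexity comes entirely from the temporal FFT: every other ingredient of Eq. 2 is a running sum with O(N) cost per wave vector.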

Performance analysis

To compare the new algorithm with other available reference software [25, 30-32], we developed a new software program that implements both the algorithm described in ref. [25] and the new algorithm, on CPU and GPU hardware, for a total of four execution modes. Further comparisons with other software are presented in "Appendix D", showing that the method reported in [25] was already one of the fastest approaches to calculating the structure function before the present work. To distinguish the two algorithms, we will refer to the method reported in ref. [25] as WITHOUT_FT and to the technique discussed in this article as WITH_FT, where the label FT stands for Fourier transform in time. Both methods calculate the final result in two steps. The first step is common and consists of calculating and storing the FFTs of the images in the available free memory: RAM for the CPU versions and G-RAM (global RAM) for the GPU implementations. In the second step, the wave vectors are analyzed independently, according to the different schemes. If the wave vector data exceed the capacity of the available memory, both algorithms split the job into several groups, at the price of recalculating the image FFTs several times (see "Appendix C" for more details).

The program is written in C++11 and CUDA v.10.2, with graphical support from the OpenCV 3.0 library. We tested the program with the Fourier-transform libraries CUFFT (version provided in CUDA v.10.2) for GPU execution and FFTW 3.3.3 [33] for the CPU implementations. The code was compiled with the MS compiler v120 and the compiler of CUDA v.10.2 in Visual Studio 2019. The program was executed on a machine with the following specifications: CPU: Intel® i9-9880H; 32 GB DDR4 RAM; graphics card: NVIDIA Quadro RTX 4000 with 8 GB of dedicated G-RAM; 512 GB SSD drive (PCIe, performance class 40).

Test images were taken from the experiment described in Sect. 2. For other sizes, synthetic images with 16-bit depth, similar to the real images, were generated. In our tests, we considered image sets composed of at most 16384 images, and we limited the execution time of the program to less than 10^5 s. In the first test, we ran all the algorithms on CPU and GPU. For comparison, we allotted 8 GB of RAM for executing the program on the CPU, so that the CPU and the GPU could access the same amount of RAM and G-RAM, respectively. The execution times are presented in Fig. 2, in which the times for all four execution modes are plotted as a function of the number of images used for the test. As expected from the results reported in ref. [25], the WITHOUT_FT algorithm executes more than 30 times faster on GPU than on CPU. The GPU hardware is also faster than the CPU in executing the WITH_FT algorithm, but the speed-up factor never exceeds a factor of two. We see that the WITH_FT scheme is faster than the WITHOUT_FT method when the number of images processed in one run of the program is larger than about 10^3. If this condition is met, both the CPU and GPU versions of the WITH_FT algorithm execute faster than the GPU-WITHOUT_FT implementation, reaching a maximum speed-up factor of 10-12 for 16384 images.
Fig. 2

Execution time as a function of the total number of images for images of pixels. The data points corresponding to the WITH_FT algorithm have square markers. The data points corresponding to the WITHOUT_FT algorithm have circular markers. The markers have colored filling for the GPU modes, and have white filling for the CPU modes

Figure 3 presents the fractional time spent by the program, in the four modes, to compute the images' FFTs (step one), to process the time sequences (step two), and to perform memory IO operations (disk and host–device). The IO operations labeled host–device comprise the data transfers between the RAM and the G-RAM and exist only in the GPU implementations. In the figure, we normalized the times by the total execution time to highlight the workload of each part of the program. As the number of images increases, the workload of step two relative to the other operations remains balanced in the CPU-WITH_FT implementation and decreases in the GPU-WITH_FT implementation. Conversely, the WITHOUT_FT algorithm spends a growing fraction of time in the second step as the number of images increases, in both the CPU and the GPU modes. Combining the information of Figs. 2 and 3, we see the advantage of the new implementation applied to the problem of calculating the structure function: the WITH_FT algorithm is faster than the WITHOUT_FT scheme for a large number of images as a consequence of the reduced computational complexity in processing the time sequences of the wave vectors.
Fig. 3

Fractional execution time of the different tasks of the program as a function of the total number of images. The length of the colored bars represents the fractional time spent by the program to execute the different operations: disk IO, host–device data transfers, step 1 and step 2. The first row (graphs a, b) presents the fractional times in CPU mode and the second row (graphs c, d) in GPU mode. The first column shows the fractional times of the WITH_FT algorithm (graphs a, c) and the second column those of the WITHOUT_FT algorithm (graphs b, d). Data for the CPU-WITHOUT_FT version with 16384 images are not reported because the total execution time exceeded 10^5 s

In a second test, we analyzed the execution performance of the GPU-WITH_FT and GPU-WITHOUT_FT algorithms for square images of different sizes. Figure 4 presents the ratio of the execution time of the GPU-WITHOUT_FT run over that of the GPU-WITH_FT run for different numbers of images and different image sizes. In analogy with the previous example, the WITH_FT method is faster than the WITHOUT_FT technique for more than about 10^3 images. The red plane in the figure marks the condition in which both algorithms complete the execution in the same amount of time. We notice that small image sizes obtain a larger speed-up than large ones. In fact, the number of pixels per image affects the load of the data-transfer operations and of the image FFTs in two ways. First, calculating the bidimensional FFT requires more time for images composed of many pixels. Second, the FFTs are recalculated several times if the wave vector components of all the images exceed the available memory. Therefore, when processing images composed of many pixels, both the WITH_FT and the WITHOUT_FT algorithms must spend a large fraction of time preparing the time sequences before their analysis. Considering, for example, the WITH_FT algorithm processing 16384 images, the first step and the memory IO operations occupy 44% of the execution time for the smaller images and 62% of the execution time for the larger ones.
Fig. 4

Ratio of execution times on GPU of the WITHOUT_FT against the WITH_FT algorithm as a function of different numbers and sizes of images. The transparent red plane marks the condition in which both algorithms process the images within the same time

The performance loss caused by large datasets can be partially mitigated by allotting larger memory areas for storing the image FFTs. As a final test, we processed 16384 images with the CPU-WITH_FT algorithm, releasing 23 GB of RAM to the program. In this configuration, we obtained a speed-up of a factor of two compared to the previous tests, in which the RAM was limited to 8 GB, thanks to the larger available memory area. In fact, the image FFTs are recalculated six times when using 8 GB of RAM, but only two times when using 23 GB of RAM. We describe this test in more detail in "Appendix C".
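A back-of-envelope model of this grouping, reflecting our reading of the behavior described above and in "Appendix C" (the function and variable names are hypothetical):

```python
import math

def fft_recomputation_passes(n_images, bytes_per_image_fft, memory_budget):
    """If the FFTs of all images do not fit in the memory budget, the wave
    vectors are split into groups and the image FFTs are recomputed once per
    group: passes = ceil(total FFT volume / memory budget)."""
    total = n_images * bytes_per_image_fft
    return max(1, math.ceil(total / memory_budget))

# Consistency check with the reported test: six passes with 8 GB and two
# passes with 23 GB both point to a total FFT volume of roughly 45-48 GB.
```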

Conclusion

In this article, we presented a new algorithm to calculate the structure function for image sets obtained by means of suitable optical techniques, such as dynamic shadowgraphy, dynamic Schlieren, or differential dynamic microscopy. The algorithm is based on the temporal FFT of the 2D spatial FFTs of the images, rather than on differences of the latter. The software developed to implement the new algorithm has been tested against several other available programs, and it outperforms all of them by factors that depend on the image size and number. In particular, we compared the new software with the one we developed a few years ago [25]. While the old approach executes about 30 times faster in GPU mode than in CPU mode, the new method performs all the calculations within a similar amount of time on the GPU and on the CPU. This result can be valuable for scientists who are not equipped with GPU hardware. The increased performance in terms of time saving is in itself a non-negligible advantage. However, the main reason for developing faster software is to achieve real-time analysis of the images, so that the scientist can judge the quality of the measurement and modify the experimental parameters during the measurement itself. Analyzing with a delay of some hours means that the experiment must be performed again if the resulting data turn out to be poor in terms of signal-to-noise ratio or affected by other experimental issues. The source code of the program developed in this work, which executes the algorithm both on CPU and GPU, is released under the GNU General Public License v.3 [34] and is freely available for download at [35]. The program can readily be used for calculating the structure function from an arbitrary set of images. A more efficient version of the code (about 10 times faster on GPU) is currently under development and will be made commercially available in the near future.
Table 1

Execution times of the programs Soft_1, Soft_2, Soft_3, GPU-WITHOUT_FT (label “NO_FT”), GPU-WITH_FT (label “GPU”) and CPU-WITH_FT (label “CPU”)

N     |                      Time (s)
      | Soft_1 | Soft_2 | Soft_3 | NO_FT |  GPU |  CPU
64    |     24 |      9 |      6 |    <1 |   12 |    1
128   |     70 |     17 |     11 |     1 |   14 |    2
256   |    246 |     35 |     51 |     3 |   16 |    4
512   |    923 |     80 |    113 |     9 |   20 |    8
1024  |   3516 |    181 |    215 |    31 |   27 |   17
2048  |  14494 |    430 |    630 |   113 |   38 |   35
4096  |  51343 |   1022 |   1370 |   446 |   86 |  117
8192  |  >10^5 |   2091 |   2889 |  1740 |  190 |  314
16384 |  >10^5 |   4638 |   5783 |  6921 |  574 |  922

The label "N" indicates the number of images. For the comparison, we used images of equal size.

References: 18 in total

1.  Dark-field differential dynamic microscopy.

Authors:  Alexandra V Bayles; Todd M Squires; Matthew E Helgeson
Journal:  Soft Matter       Date:  2016-02-28       Impact factor: 3.679

2.  Use of dynamic schlieren interferometry to study fluctuations during free diffusion.

Authors:  Fabrizio Croccolo; Doriano Brogioli; Alberto Vailati; Marzio Giglio; David S Cannell
Journal:  Appl Opt       Date:  2006-04-01       Impact factor: 1.980

3.  Scattering information obtained by optical microscopy: differential dynamic microscopy and beyond.

Authors:  Fabio Giavazzi; Doriano Brogioli; Veronique Trappe; Tommaso Bellini; Roberto Cerbino
Journal:  Phys Rev E Stat Nonlin Soft Matter Phys       Date:  2009-09-21

4.  Nonlinear rheology of colloidal dispersions.

Authors:  J M Brader
Journal:  J Phys Condens Matter       Date:  2010-08-18       Impact factor: 2.333

5. (Review)  European Space Agency experiments on thermodiffusion of fluid mixtures in space.

Authors:  M Braibanti; P -A Artola; P Baaske; H Bataller; J -P Bazile; M M Bou-Ali; D S Cannell; M Carpineti; R Cerbino; F Croccolo; J Diaz; A Donev; A Errarte; J M Ezquerro; A Frutos-Pastor; Q Galand; G Galliero; Y Gaponenko; L García-Fernández; J Gavaldá; F Giavazzi; M Giglio; C Giraudet; H Hoang; E Kufner; W Köhler; E Lapeira; A Laverón-Simavilla; J -C Legros; I Lizarraga; T Lyubimova; S Mazzoni; N Melville; A Mialdun; O Minster; F Montel; F J Molster; J M Ortiz de Zárate; J Rodríguez; B Rousseau; X Ruiz; I I Ryzhkov; M Schraml; V Shevtsova; C J Takacs; T Triller; S Van Vaerenbergh; A Vailati; A Verga; R Vermorel; V Vesovic; V Yasnou; S Xu; D Zapf; K Zhang
Journal:  Eur Phys J E Soft Matter       Date:  2019-07-11       Impact factor: 1.890

6. (Review)  Non-local fluctuation phenomena in liquids.

Authors:  F Croccolo; J M Ortiz de Zárate; J V Sengers
Journal:  Eur Phys J E Soft Matter       Date:  2016-12-20       Impact factor: 1.890

7.  A light scattering study of non equilibrium fluctuations in liquid mixtures to measure the Soret and mass diffusion coefficient.

Authors:  F Croccolo; H Bataller; F Scheffold
Journal:  J Chem Phys       Date:  2012-12-21       Impact factor: 3.488

8.  Note: Quasi-real-time analysis of dynamic near field scattering data using a graphics processing unit.

Authors:  G Cerchiari; F Croccolo; F Cardinaux; F Scheffold
Journal:  Rev Sci Instrum       Date:  2012-10       Impact factor: 1.523

9.  Perspective: Differential dynamic microscopy extracts multi-scale activity in complex fluids and biological systems.

Authors:  Roberto Cerbino; Pietro Cicuta
Journal:  J Chem Phys       Date:  2017-09-21       Impact factor: 3.488

10.  Coupled non-equilibrium fluctuations in a polymeric ternary mixture.

Authors:  L García-Fernández; P Fruton; H Bataller; J M Ortiz de Zárate; F Croccolo
Journal:  Eur Phys J E Soft Matter       Date:  2019-09-12       Impact factor: 1.890

  2 in total

1.  Nonlinearities in shadowgraphy experiments on non-equilibrium fluctuations in polymer solutions.

Authors:  D Zapf; J Kantelhardt; W Köhler
Journal:  Eur Phys J E Soft Matter       Date:  2022-04-26       Impact factor: 1.624

2.  The Soret coefficients of the ternary system water/ethanol/triethylene glycol and its corresponding binary mixtures.

Authors:  M Schraml; H Bataller; C Bauer; M M Bou-Ali; F Croccolo; E Lapeira; A Mialdun; P Möckel; A T Ndjaka; V Shevtsova; W Köhler
Journal:  Eur Phys J E Soft Matter       Date:  2021-10-18       Impact factor: 1.624

