
Computational Methods for Parameter Identification in 2D Fractional System with Riemann-Liouville Derivative.

Rafał Brociek1, Agata Wajda2, Grazia Lo Sciuto3,4, Damian Słota1, Giacomo Capizzi4.   

Abstract

In recent times, many different types of systems have been based on fractional derivatives. Thanks to this type of derivative, it is possible to model certain phenomena in a more precise and desirable way. This article presents a system consisting of a two-dimensional fractional differential equation with the Riemann-Liouville derivative, together with a numerical algorithm for its solution based on the alternating direction implicit method (ADIM). Further, an algorithm for solving the inverse problem, consisting of the determination of unknown parameters of the model, is also described. For this purpose, the objective function was minimized using the ant colony algorithm and the Hooke-Jeeves method. Inverse problems with fractional derivatives are important in many engineering applications, such as modeling anomalous diffusion, designing electrical circuits with a supercapacitor, and fractional-order control theory. The paper presents a numerical example illustrating the effectiveness and accuracy of the described methods; the example also enables a comparison of the methods for searching for the minimum of the objective function. The presented algorithms can be used as a tool for parameter training in artificial neural networks.

Keywords:  computational methods; fractional derivative; fractional differential equation; fractional system; heuristic algorithm; inverse problem; parameter identification

Year:  2022        PMID: 35590840      PMCID: PMC9104792          DOI: 10.3390/s22093153

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.847


1. Introduction

Fractional calculus is widely used in various fields of science and technology, e.g., in the design of sensors, in signal processing, and in sensor networks [1,2,3,4,5]. In [2], the authors describe the use of fractional calculus in artificial neural networks. Fractional derivatives are mainly used for parameter training with optimization algorithms, system synchronization, and system stabilization. As the authors note, such systems have been applied in unmanned aerial vehicles (UAVs), circuit realization, robotics, and many other engineering applications. The paper [3] covers applications of fractional calculus in the sensing and filtering domains. The authors present the most important achievements in the fields of fractional-order sensors and fractional-order analog and digital filters. In [5], the authors present a new fractional sensor based on a classical accelerometer and the concepts of fractional calculus. Two synthesis methods were presented: in the first, successive stages follow an identical analytical recursive formulation, while in the second, a PSO algorithm determines the fractional system elements numerically. In addition to applications in electronics, neural networks, and sensors, fractional calculus is also used in the modeling of thermal processes [6,7], in the modeling of anomalous diffusion [8,9], in medicine [10], and in control theory [11,12]. The authors of [6] model heat transfer in a two-dimensional plate using the Caputo operator. Theoretical results are verified against experimental data from a thermal camera, and it is shown that the fractional model is more accurate than the integer-order model in the sense of a mean square error cost function. In applications of fractional calculus, differential equations with fractional derivatives often have to be solved numerically, which is why developing algorithms for this type of problem is important.
Many papers presenting numerical solutions of fractional partial differential equations have been published in recent years. In [13], the author used an artificial neural network to construct a solution method for the one-phase Stefan problem. In turn, Ref. [14] presented an algorithm for solving fractional-order delay differential equations. Bu et al., in [15], presented a space–time finite element method for solving a two-dimensional diffusion equation. The paper describes a fully discrete scheme for the considered equation, and the authors also present theorems on the existence and stability of the method, together with error estimates and numerical examples. Another interesting study is [16], which describes an ADI method for solving fractional reaction–diffusion equations with Dirichlet boundary conditions. The authors used a new fractional version of the alternating direction implicit method, and a numerical example was also presented. In the present paper, a solution to the inverse problem is presented, consisting of selecting the model input parameters in such a way that the system response fits the measurement data. Inverse problems are a very important part of all sorts of engineering problems [17]. In [18], an inverse problem is considered for a fractional partial differential equation with a nonlocal condition of integral type. The considered equation is a generalization of the Barenblatt–Zheltov–Kochina differential equation, which models the filtration of a viscoelastic fluid in fractured porous media. In [19], the authors considered two inverse problems with a fractional derivative. The first problem is to reconstruct the state function based on the knowledge of its value and the value of its derivative at the final moments of time. The second problem consists of recovering the source function in fractional diffusion and wave equations.
Additional information is provided by measurements in a neighborhood of the final time. The authors prove the uniqueness of the solutions to these problems and derive explicit solutions for some particular cases. In [20], a fractional heat conduction inverse problem is considered, consisting of finding the heat conductivity in the presented model. The authors also compare two optimization methods: an iterative method and a swarm algorithm. The learning algorithm constitutes the main part of deep learning. The number of layers differentiates a deep neural network from a shallow one: the higher the number of layers, the deeper the network. Each layer can be specialized to detect a specific aspect or feature. The goal of the learning algorithm is to find the optimal values of the weight vectors to solve a class of problems in a domain. Training algorithms achieve this goal by reducing a cost function. While the weights are learned by training on the dataset, there are additional crucial parameters, referred to as hyperparameters, that are not directly learned from the training dataset. These hyperparameters can take a range of values and add to the complexity of finding the optimal architecture and model [21]. Deep learning can be optimized in different areas: the training algorithms can be fine-tuned at different levels by incorporating heuristics, e.g., for hyperparameter optimization. The time needed to train a deep learning model is a major factor in gauging the performance of an algorithm or network, so the problem of training optimization in a deep learning application can be seen as the solution of an inverse problem. In fact, the inverse problem consists of selecting the appropriate model input parameters in order to obtain the desired data at the output. To solve the problem, we create an objective function that compares the desired values (targets) with the network outputs calculated for the determined values of the sought parameters (weights).
By finding the minimum of the objective function, we find the sought weights. In this paper, in Section 2, a system consisting of a 2D fractional partial differential diffusion equation with the Riemann–Liouville derivative is presented, supplemented with Dirichlet boundary conditions. This type of model can be used to describe heat conduction in porous media. In Section 2.2, a numerical scheme for the considered equation is presented, based on the alternating direction implicit method (ADIM). In Section 3, the inverse problem is formulated. It consists of the identification of two parameters of the presented model based on measurements of the state function at selected points of the domain, and it is reduced to solving an optimization problem. For this purpose, two algorithms were used and compared: the probabilistic ant colony optimization (ACO) algorithm and the deterministic Hooke–Jeeves (HJ) method. Section 4 presents a numerical example illustrating the operation of the described methods. Section 5 provides the conclusions.

2. Fractional Model

This section describes the considered anomalous diffusion model with a fractional derivative and then presents a numerical algorithm for solving the corresponding differential equation.

2.1. Model Description

Models using fractional derivatives have recently been widely used in various engineering problems, e.g., in electronics for modeling a supercapacitor, in mechanics for modeling heat flow in porous materials, in automation for describing problems in control theory, and in biology for modeling drug transport. In this study, we consider the following model of anomalous diffusion. The differential Equation (1) describes the anomalous diffusion phenomenon (e.g., heat conduction in porous materials [22,23,24]) and is defined in the area , where ,  are parameters defining material properties, u is the state function, and f is an additional component of the model. Using the terminology of heat conduction theory, c is the specific heat,  is the density,  is the heat conduction coefficient, and the function f describes an additional heat source. All parameters are multiplied by constants with a value of one and units that ensure the unit consistency of the entire equation. The state function u describes the temperature distribution in time and space. Equation (2) defines the initial and boundary conditions necessary to uniquely solve the differential equation. It is assumed that on the boundary the state function u has the value 0, and at the initial moment the value of u is determined by a known function. In Equation (1), there also occur fractional derivatives of orders  and . In the model under consideration, these derivatives are defined as Riemann–Liouville derivatives [25]: Formula (3) defines the left derivative, and Formula (4) defines the right derivative; in both cases, it is assumed that . In addition, the derivative with respect to y of order  in Equation (1) is defined as a Riemann–Liouville derivative.

2.2. Numerical Solution of Direct Problem

Now, let us present the numerical solution of the model defined by Equations (1) and (2). If we have all the data of the model, such as the parameters , the initial and boundary conditions, and the geometry of the area, then by solving Equation (1) we solve the direct problem. In order to solve the problem under consideration, we write Equation (1) as follows: Then, we discretize the area by creating a uniform mesh in each of the dimensions. Let us adopt the following symbols: , , , , , , , , , where  are the mesh sizes and  are the mesh points. The values of the functions at the grid points are denoted by . We approximate the Riemann–Liouville derivative using the shifted Grünwald formula [26]: where Similarly, we approximate the fractional derivative with respect to the spatial variable y. In the case of the derivative over time, we use the difference quotient: Let us use the following notation: where  denotes the first-order derivative (at ) of the function with respect to the x variable. We assume analogous symbols for the y variable. After applying Formulas (6)–(10) and some transformations, the difference scheme for Equation (5) can be written in the following form: where , and . In order to simplify the description of the numerical algorithm to be implemented, we present the difference scheme (11) in matrix form, so we introduce the following matrices: where Now we define two block matrices, S and H. First, we create the matrix S of dimension , which is a block diagonal matrix containing the matrices  on the main diagonal and zeros elsewhere. Second, we create the matrix H, which has the same dimension as the matrix S, in the following form: Now it is possible to write the difference scheme (11) in matrix form: where The matrices in the difference scheme (16) are large, so the obtained system of equations is time-consuming to solve.
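The shifted Grünwald coefficients used in the approximation above can be generated without evaluating binomial coefficients, via the recurrence g_0 = 1, g_k = g_{k-1}(1 - (α + 1)/k). A short sketch (the function name is illustrative):

```python
def grunwald_weights(alpha, n):
    """First n+1 coefficients g_k = (-1)^k * binom(alpha, k) of the
    (shifted) Grunwald-Letnikov expansion, generated by the recurrence
    g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

# For alpha = 1 the weights reduce to the classical two-point
# first-order difference stencil 1, -1, 0, 0, ...
stencil = grunwald_weights(1.0, 3)
```

For a fractional order 0 < α < 1, all weights beyond g_0 are negative and their magnitudes decay slowly, which is what makes the discretization nonlocal.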
Hence, we applied the alternating direction implicit method (ADIM) to the difference scheme (11), which significantly reduces the computation time (details can be found in [27]). This is an important issue in the case of inverse problems, where the direct problem must be solved many times. Let us write the scheme (11) in the form of a directional separation product: The numerical scheme (17) is split into two parts and solved, respectively, first in the direction x and afterwards in the direction y. With this approach, the resulting matrices of the systems of equations have significantly lower dimensions than in the case of the scheme (11). The numerical algorithm has two main steps. For each fixed , solve the numerical scheme in the direction x; as a consequence, we obtain a temporary solution : Then, for each fixed , solve the numerical scheme in the direction y: This process is depicted symbolically in Figure 1. For the boundary nodes and the initial condition, we applied:
Figure 1

Numerical solution in the horizontal direction (for a fixed node ) (a) and in the vertical direction (for a fixed node ) (b).
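The fractional scheme itself is not reproduced here, but the two-sweep structure of the alternating direction method can be illustrated on its classical integer-order analogue, the Peaceman–Rachford ADI scheme for u_t = u_xx + u_yy with zero Dirichlet boundaries. Everything below (names, grid, parameters) is an illustrative sketch, not the paper's scheme:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy with zero
    Dirichlet boundaries; u holds interior grid values, r = dt/h^2.
    Sweep 1 is implicit in x, sweep 2 is implicit in y."""
    n = len(u)
    half = r / 2.0
    a, b, c = [-half] * n, [1.0 + r] * n, [-half] * n
    ustar = [[0.0] * n for _ in range(n)]
    for j in range(n):  # implicit in x, explicit second difference in y
        rhs = [u[i][j] + half * ((u[i][j - 1] if j else 0.0)
               - 2.0 * u[i][j] + (u[i][j + 1] if j < n - 1 else 0.0))
               for i in range(n)]
        col = thomas(a, b, c, rhs)
        for i in range(n):
            ustar[i][j] = col[i]
    unew = [[0.0] * n for _ in range(n)]
    for i in range(n):  # implicit in y, explicit second difference in x
        rhs = [ustar[i][j] + half * ((ustar[i - 1][j] if i else 0.0)
               - 2.0 * ustar[i][j] + (ustar[i + 1][j] if i < n - 1 else 0.0))
               for j in range(n)]
        unew[i] = thomas(a, b, c, rhs)
    return unew

# Decay of the sin(pi*x)*sin(pi*y) mode; the exact rate is exp(-2*pi^2*t).
n, dt = 19, 0.001
h = 1.0 / (n + 1)
u = [[math.sin(math.pi * (i + 1) * h) * math.sin(math.pi * (j + 1) * h)
      for j in range(n)] for i in range(n)]
for _ in range(20):
    u = adi_step(u, dt / h ** 2)
peak = max(max(row) for row in u)
```

Each sweep only ever solves one-dimensional tridiagonal systems, which is exactly the dimension reduction the text attributes to the ADIM splitting; in the fractional case the one-dimensional matrices are dense lower-Hessenberg rather than tridiagonal, but the sweep structure is the same.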

In the case of the ADIM method, it is also possible to present the equations in matrix form, as shown below. First, for each , we define auxiliary vectors : where . Hence, we obtain an auxiliary matrix of dimension . Then, the numerical scheme (18) can be written in the following matrix form (for ): where the temporary solution has the form , and , . We obtain  systems of equations, each of dimension . Next, we present the scheme (19) in the direction y in matrix form (for ): where  and . At this stage of the algorithm, we solve  systems of equations of dimension  each. The Bi-CGSTAB method [28,29] is used to solve the systems of equations, which has a significant influence on the computation time. More implementation details and a comparison of computation times for the described method can be found in [27,30].
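Since each sweep reduces to a moderate-size linear system, an iterative Krylov solver such as Bi-CGSTAB applies directly. A minimal, unpreconditioned pure-Python sketch of the method (the tridiagonal test matrix below is illustrative, not the paper's system):

```python
def bicgstab(matvec, b, tol=1e-10, maxiter=200):
    """Minimal unpreconditioned Bi-CGSTAB (van der Vorst) for A x = b,
    where `matvec` computes the product A x.  Illustrative sketch only."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(x))]
    rhat = list(r)                       # fixed shadow residual
    rho = alpha = omega = 1.0
    v, p = [0.0] * n, [0.0] * n
    for _ in range(maxiter):
        rho_new = dot(rhat, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(p)
        alpha = rho_new / dot(rhat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        if dot(s, s) ** 0.5 < tol:       # early exit on a small residual
            return [xi + alpha * pi for xi, pi in zip(x, p)]
        t = matvec(s)
        omega = dot(t, s) / dot(t, t)
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if dot(r, r) ** 0.5 < tol:
            break
        rho = rho_new
    return x

# Tridiagonal test system of the kind each one-dimensional sweep produces
n = 20
def matvec(x):
    return [2.5 * x[i] - (x[i - 1] if i > 0 else 0.0)
            - (x[i + 1] if i < n - 1 else 0.0) for i in range(n)]

b = [1.0] * n
x = bicgstab(matvec, b)
```

In practice one would use a library implementation (e.g., SciPy's `scipy.sparse.linalg.bicgstab`); the sketch only shows why a matrix-free `matvec` interface fits the ADIM sweeps well.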

3. Inverse Problem

In many engineering problems, in particular in various types of simulations and mathematical modeling, there is a need to solve an inverse problem. In this case, the inverse problem consists of selecting the appropriate input parameters of the model (1) and (2) to obtain the desired data at the output. The values of the state function u at selected points of the domain (so-called measurement points) are treated as input data for the inverse problem. The task consists of selecting the unknown parameters of the model in such a way that the function u assumes the given values at the measurement points. Problems of this type are ill-conditioned, which may result in instability or non-uniqueness of the solution [31,32]. Details of the solution algorithm are presented in the following sections.

3.1. Parameter Identification

In the model (1) and (2), the following data are assumed: where . The inverse problem consists of appropriately finding the  and  parameters. The input data for the inverse problem are the values of the u function at selected points of the area. Additionally, in order to test the algorithm, the following are assumed: Location of the measurement points (see Figure 2):
Figure 2

Arrangement of the measurement points.

Two different grids (): , . Different levels of measurement data disturbance (errors with a normal distribution): . To solve the problem, we create an objective function that compares the values of the u function calculated at the measurement points for the determined values of the sought parameters with the measurement data. Therefore, we define the objective function as follows: where  and  are the number of measurement points and the number of measurements at a given measurement point, respectively. In the considered example, , and  depends on the mesh used. By , we denote the values of the u function obtained in the algorithm for the fixed parameters , and by  the measurement data. By finding the minimum of the objective function (25), we find the sought parameters.
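The least-squares objective described above can be sketched as follows; the `solve_direct` interface standing in for the ADIM solver of Section 2.2 is hypothetical, and the toy "solver" in the check is purely illustrative:

```python
def objective(params, solve_direct, data, points, times):
    """Least-squares objective J: sum of squared differences between the
    direct-problem solution at the measurement points and the measured
    data.  `solve_direct` is a hypothetical solver interface."""
    u = solve_direct(params)
    return sum((u(pt, t) - data[i][j]) ** 2
               for i, pt in enumerate(points)   # N1 measurement points
               for j, t in enumerate(times))    # N2 readings per point

# Toy check with a fake "solver" whose response is u(x, t) = lambda * t
toy = lambda p: (lambda x, t: p[0] * t)
data = [[2.0, 4.0], [2.0, 4.0]]
J_fit = objective((2.0,), toy, data, [0.3, 0.7], [1.0, 2.0])
J_off = objective((3.0,), toy, data, [0.3, 0.7], [1.0, 2.0])
```

The objective vanishes when the parameters reproduce the data exactly and grows with the misfit, which is all the minimizers of the next section require of it.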

3.2. Function Minimization

To minimize the objective function, any heuristic algorithm can be used (e.g., swarm algorithms). In this paper, we decided to use two algorithms: the ant colony optimization (ACO) algorithm and the Hooke–Jeeves (HJ) method. In this section, we describe both algorithms.

3.2.1. Ant Colony Optimization Algorithm

The presented ACO algorithm is probabilistic, so each execution gives a different result. Proper selection of the algorithm parameters should make the obtained results converge. The algorithm is inspired by the behavior of an ant swarm in nature; more about the ACO algorithm and its applications can be found in [33,34,35]. In order to describe the algorithm, we introduce the following notation: Algorithm 1 presents the ACO algorithm step by step. The number of objective function evaluations in the case of the ACO algorithm is equal to .
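A continuous-domain ACO of the kind described here (an archive of L pheromone spots, rank-based Gaussian weights, and M ants sampling new solutions per iteration) can be sketched as follows; the parameter values, weight kernel, and helper names are illustrative, not the paper's exact settings:

```python
import math
import random

def aco_minimize(f, bounds, L=8, M=4, iters=60, q=0.1, xi=0.85, seed=1):
    """Continuous ant colony optimization sketch: keep an archive of L
    solutions, pick a guide by rank-weighted probability, and sample new
    solutions from Gaussians centered on the guide."""
    rng = random.Random(seed)
    archive = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(L)]
    archive.sort(key=f)
    # weight of the l-th ranked solution (Gaussian kernel over the rank)
    w = [math.exp(-l * l / (2.0 * (q * L) ** 2)) for l in range(L)]
    probs = [wi / sum(w) for wi in w]
    for _ in range(iters):
        new = []
        for _ant in range(M):
            guide = archive[rng.choices(range(L), weights=probs)[0]]
            x = []
            for d, (lo, hi) in enumerate(bounds):
                # spread: mean distance from the guide to the archive
                sigma = xi * sum(abs(a[d] - guide[d]) for a in archive) / (L - 1)
                x.append(min(hi, max(lo, rng.gauss(guide[d], sigma))))
            new.append(x)
        # keep the L best of the union of old and new solutions
        archive = sorted(archive + new, key=f)[:L]
    return archive[0]

# Toy objective with minimum at (240, 0.8), mimicking the sought (lambda, alpha)
best = aco_minimize(lambda v: (v[0] - 240.0) ** 2 + (v[1] - 0.8) ** 2,
                    [(0.0, 500.0), (0.0, 1.0)])
```

Because the sampling spread is tied to the archive's own dispersion, the search contracts automatically as the archive converges, which is the behavior the text warns should not stall for dozens of iterations.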

3.2.2. Hooke–Jeeves Algorithm

The Hooke–Jeeves algorithm is a deterministic algorithm for searching for the minimum of an objective function. It is based on two main operations. Exploratory move: it tests the behavior of the objective function in a small selected area with the use of trial steps along all directions of the orthogonal basis. Pattern move: it consists of moving in a strictly determined manner to the next area, where the next trial step is considered, but only if at least one of the performed steps was successful. In this algorithm, we consider the following parameters: Pseudocode for the Hooke–Jeeves method is presented in Algorithm 2. The main drawback of the method is the possibility of falling into a local minimum for more complicated objective functions. More details about the algorithm and its applications can be found in [36,37].

Algorithm 1 (ACO):
1. Randomly generate L vectors from the domain of the solved problem (the so-called pheromone spots): .
2. Calculate the value of the objective function for each pheromone spot (for each solution vector).
3. Sort the set of solutions in descending order of quality (the lower the value of the objective function, the better the solution); each solution is assigned an index.
4. for iteration = 1, 2, …, I do
   Assign each pheromone spot (solution vector) a probability according to the formula: where  are weights related to the solution index l, expressed by the formula:
   for k = 1, 2, …, M do
      An ant randomly chooses the l-th solution with probability . Then the ant transforms each of the coordinates () of the selected solution using a Gaussian function: where .
   end for
   M new solutions are obtained. Divide the set of new solutions into groups and calculate the value of the objective function J for each solution in each group in a separate thread.
   From the union of the two sets of solutions (the new one and the previous one), remove the M worst solutions and sort the rest according to quality (the value of the objective function).
   end for

Algorithm 2 (Hooke–Jeeves):
1. Search the space around the current point along the directions of the orthogonal basis with step  (exploratory move).
2. If a better point is found, continue in that direction (pattern move).
3. If no better point is found, narrow the search space using the narrowing parameter .
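A minimal sketch of the two moves (exploration along the coordinate axes, a pattern jump when exploration succeeds, and step narrowing when it fails) might look as follows; the parameter names and defaults are illustrative:

```python
def hooke_jeeves(f, x0, step, eps=1e-6, shrink=0.5, maxiter=10000):
    """Minimal Hooke-Jeeves pattern search: exploratory moves along the
    coordinate axes, a pattern move when exploration improves the point,
    and step narrowing (by `shrink`) when it does not."""
    def explore(base, s):
        x = list(base)
        for i in range(len(x)):
            for d in (s[i], -s[i]):       # trial steps along axis i
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x, s = list(x0), list(step)
    for _ in range(maxiter):
        y = explore(x, s)
        if f(y) < f(x):
            # pattern move: jump along the improving direction, re-explore
            z = explore([2.0 * yi - xi for yi, xi in zip(y, x)], s)
            x = z if f(z) < f(y) else y
        elif max(s) < eps:                # stop criterion on the step size
            break
        else:
            s = [si * shrink for si in s]
    return x

best = hooke_jeeves(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2,
                    [0.0, 0.0], [1.0, 1.0])
```

As the text notes, the method is deterministic: the same starting point and stop criterion always give the same solution, and on a multimodal objective the returned point may be only a local minimum.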

4. Results—Numerical Examples

We consider the inverse problem described in Section 3.1. In the model (1) and (2), we set the data described by Equations (23) and (24). We used two different grids,  and , and different levels of measurement data disturbance (input data for the inverse problem): . The unknown data in the model are  and ; these need to be identified using the presented algorithm. To examine and test the algorithm, we know the exact values of these parameters, which are , . First, we present the results obtained using the ACO algorithm, with the following parameters of the ant algorithm: Based on these parameters, we can determine the number of calls to the objective function, which in our example is . The obtained results are presented in Table 1. The best results were obtained for exact input data: for the  mesh, the relative errors of the reconstructed parameters  and  are  and , respectively, and for the  mesh these errors are equal to  and . In the case of input data with a pseudo-random error, the obtained results are also very good, and the errors of the reconstructed parameters do not exceed the input data disturbance errors. In particular, the errors of reconstruction of the  coefficient are very small and do not exceed  (except in the case of input data disturbed with an error of  on the  grid). The relative errors of the reconstructed parameter  are greater than the  errors, most likely because the sought value  is significantly lower than . Of course, with increasing input data disturbance, the values of the minimized objective function also increased. Apart from a few cases, the mesh density did not significantly affect the results.
Table 1

Results of calculations in the case of the ACO algorithm: —reconstructed value of the thermal conductivity coefficient; —reconstructed value of the x-direction derivative order; —the relative error of reconstruction; J—the value of the objective function; —standard deviation of the objective function.

Mesh Size | Noise | λ¯ | δλ¯ [%] | α¯ | δα¯ [%] | J | σJ
100 × 100 × 200 | 0% | 240.06 | 2.83 × 10−2 | 0.8046 | 5.84 × 10−1 | 2.24 | 8.72
 | 2% | 240.71 | 2.95 × 10−1 | 0.7934 | 8.14 × 10−1 | 725.13 | 5.23
 | 5% | 241.49 | 6.21 × 10−1 | 0.7735 | 3.31 | 4994.21 | 14.72
 | 10% | 236.61 | 1.41 | 0.7798 | 2.52 | 19,424.61 | 6.44
160 × 160 × 250 | 0% | 239.63 | 1.51 × 10−1 | 0.8054 | 6.87 × 10−1 | 1.72 | 19.17
 | 2% | 239.11 | 3.71 × 10−1 | 0.8131 | 1.64 | 1020.84 | 11.39
 | 5% | 241.28 | 5.36 × 10−1 | 0.7943 | 7.03 × 10−1 | 5396.34 | 5.41
 | 10% | 241.76 | 7.34 × 10−1 | 0.7761 | 2.98 | 23,675.2 | 2.66
Figure 3 shows how the value of the objective function changed with the iteration number for the four input data cases. The figures do not include the objective function values for the initial iterations, because these values were relatively high and their inclusion would reduce the legibility of the plots. We can see that in the last few iterations (2–5), the values of the objective function no longer change. The appropriate selection of the parameters of the ACO algorithm affects the computation time and is not always a simple task; it depends on the complexity of the objective function and the number of sought parameters (the size of the problem). In particular, a situation in which the algorithm does not change the solution over the next dozen or so iterations should be avoided. As can be observed in the presented example, the selection of the ACO parameters, such as the number of iterations and the size of the population, seems appropriate.
Figure 3

Values of the objective function J in the iterations of the ACO algorithm for different levels of input data noise: (a) 0%, (b) 2%, (c) 5%, (d) 10%.

For comparison, we now use the deterministic Hooke–Jeeves algorithm with the following parameters: Since it is a deterministic algorithm, the resulting solution, as well as the number of calls to the objective function, depends on the starting point and the stop criterion . In our example, we consider four different starting points: . It turned out that regardless of the selected starting point, the same solution was always obtained; however, it should be noted that whenever the value of any of the reconstructed parameters exceeded the predetermined limits, a so-called penalty function was applied. This was significant in the case of the  starting point, for which the algorithm otherwise exceeded the limits and stopped at a local minimum; e.g., for the  grid and  disturbances, we obtained the results . Similar results were obtained for the remaining cases with the  start. Table 2 shows the results obtained using the Hooke–Jeeves algorithm. Comparing the results obtained from both algorithms, we can see that in most cases the parameter reconstruction errors are smaller for the Hooke–Jeeves algorithm; e.g., for the  grid and  input data disturbance errors, the errors in the sought parameters  and  for the HJ algorithm were  and , respectively, while for the ACO algorithm these errors were  and . In addition, the value of the objective function for the HJ algorithm was smaller: , . As mentioned earlier, without the penalty function the HJ algorithm returned unsatisfactory results for the  starting point. This should be kept in mind when the objective function is more complicated, for example, when the number of sought parameters increases.
Table 2

Results of calculations in the case of the Hooke–Jeeves algorithm: —reconstructed value of the thermal conductivity coefficient; —reconstructed value of the x-direction derivative order; —the relative error of reconstruction; J—the value of the objective function; —number of objective function evaluations; —starting point.

Mesh Size | Noise | SP | λ¯ | δλ¯ [%] | α¯ | δα¯ [%] | J | fe
100 × 100 × 200 | 0% | (100, 0.2) | 240.15 | 6.57 × 10−2 | 0.7993 | 8.33 × 10−2 | 0.0182 | 272
 | | (300, 0.1) | | | | | | 246
 | | (450, 0.5) | | | | | | 240
 | | (500, 0.9) | | | | | | 299
 | 2% | (100, 0.2) | 240.38 | 1.59 × 10−1 | 0.7971 | 3.61 × 10−1 | 724.57 | 254
 | | (300, 0.1) | | | | | | 217
 | | (450, 0.5) | | | | | | 235
 | | (500, 0.9) | | | | | | 270
 | 5% | (100, 0.2) | 241.44 | 6.03 × 10−1 | 0.7757 | 3.03 | 4993.85 | 230
 | | (300, 0.1) | | | | | | 203
 | | (450, 0.5) | | | | | | 257
 | | (500, 0.9) | | | | | | 255
 | 10% | (100, 0.2) | 236.86 | 1.31 | 0.7781 | 2.73 | 19,424.36 | 217
 | | (300, 0.1) | | | | | | 199
 | | (450, 0.5) | | | | | | 239
 | | (500, 0.9) | | | | | | 245
160 × 160 × 250 | 0% | (100, 0.2) | 240.06 | 2.51 × 10−2 | 0.7997 | 3.21 × 10−2 | 0.0036 | 265
 | | (300, 0.1) | | | | | | 225
 | | (450, 0.5) | | | | | | 221
 | | (500, 0.9) | | | | | | 292
 | 2% | (100, 0.2) | 239.95 | 1.98 × 10−2 | 0.8018 | 2.31 × 10−1 | 1014.21 | 257
 | | (300, 0.1) | | | | | | 231
 | | (450, 0.5) | | | | | | 233
 | | (500, 0.9) | | | | | | 284
 | 5% | (100, 0.2) | 240.85 | 3.55 × 10−1 | 0.7935 | 8.11 × 10−1 | 5393.44 | 241
 | | (300, 0.1) | | | | | | 213
 | | (450, 0.5) | | | | | | 243
 | | (500, 0.9) | | | | | | 266
 | 10% | (100, 0.2) | 241.44 | 6.02 × 10−1 | 0.7817 | 2.28 | 23,673.38 | 255
 | | (300, 0.1) | | | | | | 227
 | | (450, 0.5) | | | | | | 273
 | | (500, 0.9) | | | | | | 280
Now we present the errors of reconstruction of the state function u at the grid points. These results are summarized in Table 3. The mean errors of reconstruction of the state function u are at a low level and do not exceed  in any of the analyzed cases. We can also observe that the maximum errors are in most cases greater for the  grid; this is particularly visible for the input data noised by the  and  errors.
Table 3

Errors of reconstruction of the function u at the grid points in the case of reconstruction of two parameters (—average absolute error; —maximal absolute error).

Mesh 100 × 100 × 200
Algorithm | Errors | 0% | 2% | 5% | 10%
ACO | Δavg [K] | 3.04 × 10−2 | 2.94 × 10−2 | 1.37 × 10−1 | 2.59 × 10−1
 | Δmax [K] | 1.95 × 10−1 | 2.68 × 10−1 | 1.13 | 2.46
HJ | Δavg [K] | 6.28 × 10−3 | 1.36 × 10−2 | 1.24 × 10−1 | 2.59 × 10−1
 | Δmax [K] | 1.11 × 10−1 | 1.24 × 10−1 | 1.04 | 2.42
Mesh 160 × 160 × 250
Algorithm | Errors | 0% | 2% | 5% | 10%
ACO | Δavg [K] | 2.77 × 10−2 | 6.55 × 10−2 | 4.65 × 10−2 | 1.77 × 10−1
 | Δmax [K] | 2.19 × 10−1 | 5.27 × 10−1 | 3.11 × 10−1 | 9.96 × 10−1
HJ | Δavg [K] | 2.68 × 10−3 | 1.08 × 10−2 | 3.36 × 10−2 | 8.84 × 10−2
 | Δmax [K] | 4.72 × 10−2 | 7.43 × 10−2 | 2.53 × 10−1 | 7.55 × 10−1
Figure 4 and Figure 5 show the errors of reconstruction of the state function u at the measurement points . The plots of these errors are quite similar for both the ACO and HJ algorithms. It can be noticed that for the measurement points , greater errors were obtained for the input data noised by the  error than for the input data disturbed by the  error. The levels of the u reconstruction errors for the input data unaffected by error and affected by the  error (red and green colors) are much lower than for the other input data (blue and black colors).
Figure 4

Errors of reconstruction of the u state function at points  for the ACO algorithm.

Figure 5

Errors of reconstruction of the u state function at points  for the HJ algorithm.

Sensitivity Analysis

A sensitivity analysis was also performed for both reconstructed parameters [38]. The sensitivity coefficients are the derivatives of the measured quantity with respect to the reconstructed quantity: In the calculations, both of the above derivatives are approximated by central difference quotients: where  [39], and  denotes the state function determined for a given value of p. We considered a test case with  and . Figure 6 shows the variability of the sensitivity coefficients at the measurement points over the entire analyzed period of time. The obtained results are symmetrical with respect to the vertical axis of symmetry of the area—the line . Therefore, the coefficients at the measurement points , , and  are equal to the coefficients at the points , , and , respectively. The performed sensitivity analysis showed that the selected positions of the measurement points are correct: they ensure appropriate sensitivity of the state function to changes in the values of the reconstructed parameters.
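The central difference quotient used for the sensitivity coefficients can be sketched directly (the function names are illustrative):

```python
def sensitivity(u_of_p, p, dp):
    """Central-difference approximation of the sensitivity coefficient
    Z = du/dp, where u_of_p(p) returns the measured quantity (here, a
    stand-in for the ADIM solution at a measurement point) for a given
    parameter value p."""
    return (u_of_p(p + dp) - u_of_p(p - dp)) / (2.0 * dp)

# Sanity check on u(p) = p^2, whose derivative at p = 3 is 6; the central
# quotient is exact (up to rounding) for quadratics.
z = sensitivity(lambda p: p * p, 3.0, 1e-3)
```

Each evaluation of `u_of_p` corresponds to one solution of the direct problem, so computing both sensitivity coefficients at a test point costs four direct-problem solves.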
Figure 6

Sensitivity coefficient in measurement points along the time domain: (a) , (b) .

5. Conclusions

This paper presents algorithms for solving the direct and inverse problems for a model consisting of a differential equation with a fractional derivative with respect to space of the Riemann–Liouville type. Equations of this type are used to describe the phenomena of anomalous diffusion, e.g., anomalous heat transfer in porous media. The inverse problem was reduced to the search for the minimum of a properly constructed objective function. Two algorithms were used for this purpose: the ant colony optimization algorithm and the Hooke–Jeeves method. From the presented numerical example, we can draw the following conclusions. The obtained results are satisfactory, and the parameter reconstruction errors are small. Both presented algorithms returned similar results, but in the case of the HJ algorithm, it was necessary to use a penalty function for one of the starting points. The number of objective function evaluations was smaller for the HJ algorithm (250–300) than for the ACO algorithm (656). The used difference scheme is unconditionally stable and has an approximation order equal to  [26]. The convergence of the difference scheme is fast; already for sparse meshes, the approximation errors for the solution of the direct problem are small [27]. In addition, in the case of the inverse problem considered in this paper, it is enough to use a relatively sparse mesh to reconstruct the sought parameters very well. The presented method can be used as a tool for parameter training in artificial neural networks.