A. Alexiadis, M. J. H. Simmons, K. Stamatopoulos, H. K. Batchelor, I. Moulitsas.
Abstract
The algorithm behind particle methods is extremely versatile and is used in a variety of applications that range from molecular dynamics to astrophysics. For continuum mechanics applications, the concept of 'particle' can be generalized to include discrete portions of solid and liquid matter. This study shows that it is possible to further extend the concept of 'particle' to include artificial neurons used in Artificial Intelligence. This produces a new class of computational methods based on 'particle-neuron duals' that combines the ability of computational particles to model physical systems with the ability of artificial neurons to learn from data. The method is validated with a multiphysics model of the intestine that autonomously learns how to coordinate its contractions to propel the luminal content forward (peristalsis). Training is achieved with Deep Reinforcement Learning. The particle-neuron duality has the advantage of extending particle methods to systems where the underlying physics is only partially known, but where observations allow us to empirically describe the missing features in terms of a reward function. During the simulation, the model evolves autonomously, adapting its response to the available observations while remaining consistent with the known physics of the system.
Year: 2020 PMID: 33004941 PMCID: PMC7530753 DOI: 10.1038/s41598-020-73329-0
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1Functioning of particle methods and link with Artificial Neural Networks. (a) Typical flow of particle methods algorithms. (b) Particle methods like molecular dynamics, smoothed particle hydrodynamics or the discrete element method exchange forces among non-bonded particles. (c) Methods like the lattice spring model or peridynamics exchange forces among bonded particles. (d) In discrete multiphysics, heat transfer occurs by exchanging heat among neighbouring particles. (e) Artificial neural networks exchange information among interconnected neurons. (f) The algorithm for forward propagation in ANNs has the same flow as the particle methods algorithm.
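The analogy in Fig. 1f can be sketched in code: both a particle-method step and ANN forward propagation follow the same "gather contributions from connected neighbours, then update the state" flow. The functions and names below are illustrative, not the authors' implementation.

```python
import numpy as np

def particle_step(pos, neighbours, force_fn, dt=0.01):
    """One particle-method step: each particle sums pairwise forces
    from its neighbours, then updates its state (explicit update)."""
    forces = np.zeros_like(pos)
    for i, nbrs in enumerate(neighbours):
        for j in nbrs:
            forces[i] += force_fn(pos[i], pos[j])  # pairwise interaction
    return pos + dt * forces

def forward_step(activations, weights, bias):
    """One ANN layer: each neuron sums weighted inputs from the neurons
    it is connected to, then applies its update (activation function)."""
    return np.tanh(activations @ weights + bias)   # gather + update
```

In both cases the outer structure is identical: loop over elements, accumulate contributions across connections, apply an update rule. Only the meaning of the exchanged quantity differs (forces vs. weighted activations).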
Figure 2The DeepMP model combines a DMP model with an ANN. (a) DMP model: SPH particles model the fluid, LSM particles model the membrane. (b) Liquid particles are computational particles that only exchange forces; hidden neurons are computational neurons that only exchange (non-physical) information; solid particles are particle-neuron duals that exchange forces with the other computational particles, and information with the hidden neurons. Given the state of the membrane at time T, the ANN calculates which section of the membrane (and for how long) should be contracted next to maximize the amount of fluid moved from left to right.
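A particle-neuron dual, as described in Fig. 2b, carries both physical state (exchanged with the other computational particles) and a neural quantity (exchanged with the hidden neurons). The following minimal class is a hypothetical data layout for such a dual, not the authors' code; the `sense` mapping from physical state to ANN input is an assumption.

```python
import numpy as np

class ParticleNeuronDual:
    """A solid (membrane) particle that is simultaneously an ANN input node."""

    def __init__(self, position):
        self.position = np.asarray(position, dtype=float)  # physical state
        self.force = np.zeros_like(self.position)          # exchanged with particles
        self.activation = 0.0                              # exchanged with neurons

    def add_force(self, f):
        # Physical side: accumulate SPH/LSM interaction forces
        self.force += f

    def sense(self):
        # Neural side: expose the physical state to the hidden layer
        # (here, illustratively, the particle's distance from the origin)
        self.activation = float(np.linalg.norm(self.position))
        return self.activation
```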
Figure 3Training of the DeepMP model. Evolution of dimensionless cumulative reward during training showing catastrophic forgetting.
Architecture of the ANN and hyperparameters used for training.
| Layer | Size |
|---|---|
| Input layer | N = 10 |
| Hidden layer 1 | N = 50 |
| Hidden layer 2 | N = 50 |
| Output layer | N = 10 |

| Hyperparameter | Value |
|---|---|
| Loss | mse |
| Optimizer | adam |
| Metrics | mae |
| α | 1.0–0.1 |
| Episodes | 30,000 |
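The table fixes only the layer sizes (10-50-50-10) and the training settings (mse loss, adam optimizer, mae metric); it does not specify activation functions or initialisation, so those are assumptions in the NumPy forward-pass sketch below.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [10, 50, 50, 10]  # input, hidden 1, hidden 2, output (from the table)

# Assumed initialisation: small Gaussian weights, zero biases
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward propagation: ReLU on hidden layers (assumed), linear output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)       # hidden layers
    return x @ weights[-1] + biases[-1]      # output layer

state = rng.normal(size=10)   # state of the membrane fed to the 10 inputs
q_values = forward(state)     # one output per candidate contraction action
```

In the reinforcement-learning setting described in Fig. 2, each of the 10 outputs would score a candidate action (which membrane section to contract next), with mse/adam/mae used when fitting the network to the observed rewards.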