The use of neural networks (NNs) has become widespread in the past decade. For classical machine-learning tasks (image classification, speech recognition, …) many different NN topologies exist. In scientific computing, NNs are used to replace traditional models, such as ODEs and PDEs, and the corresponding solvers; the NNs then serve as surrogate models. This is often motivated by the need to speed up calculations.
In this project we will develop specialized NNs for different scientific problems and analyse the effects of incorporating problem-dependent constraints into the networks. The example constraints will be laws from fluid mechanics, as considered in subproject 5, and laws from celestial mechanics, from subproject 6.
The physics-constrained neural networks will be studied through mathematical analysis. Incorporating the constraints shrinks the search space of the training algorithms, but makes the optimization problem more complicated: the training becomes a constrained optimization problem in its own right.
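As a minimal sketch of how a physical law can enter the training objective, consider a penalty formulation: the data misfit is augmented with a term that punishes violation of the constraint. The toy linear surrogate, the divergence-free (incompressibility) constraint from fluid mechanics, and the penalty weight `lam` below are illustrative assumptions, not the project's actual networks or algorithms.

```python
import random

def model(params, x, y):
    """Hypothetical toy surrogate for a 2D velocity field: u = a x + b y, v = c x + d y."""
    a, b, c, d = params
    return a * x + b * y, c * x + d * y

def loss(params, data, lam):
    """Data misfit plus a penalty on the divergence du/dx + dv/dy = a + d."""
    a, b, c, d = params
    mse = 0.0
    for x, y, u_obs, v_obs in data:
        u, v = model(params, x, y)
        mse += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return mse / len(data) + lam * (a + d) ** 2

def grad_step(params, data, lam, lr=0.02, h=1e-6):
    """One step of gradient descent, with central finite-difference gradients."""
    g = []
    for i in range(len(params)):
        p_plus, p_minus = list(params), list(params)
        p_plus[i] += h
        p_minus[i] -= h
        g.append((loss(p_plus, data, lam) - loss(p_minus, data, lam)) / (2 * h))
    return [p - lr * gi for p, gi in zip(params, g)]

# Synthetic observations from a divergence-free field u = y, v = x.
random.seed(0)
data = [(x, y, y, x) for x, y in
        ((random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50))]

params = [0.5, 0.0, 0.0, 0.5]          # initial guess with divergence a + d = 1
for _ in range(500):
    params = grad_step(params, data, lam=10.0)
print(abs(params[0] + params[3]))      # residual divergence, driven toward 0
```

The penalty weight `lam` trades off data fit against constraint satisfaction; choosing it, or replacing the penalty by hard constraints built into the architecture, is exactly the kind of question the mathematical analysis has to address.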
The optimization algorithms will be hierarchical, i.e., they will respect and exploit the multiple scales present in the fluid-flow and astronomical systems at hand. Well-known hierarchical methods are the so-called multigrid methods. Multigrid methods are mathematically well-founded for the efficient solution of computing-intensive forward problems, such as large-scale nonlinear boundary-value problems and large linear systems of equations, and they respect the different scales present in the underlying physical problems. For backward (inverse) problems (for instance: given a solution of a boundary-value problem, what do the boundary conditions look like?), multigrid methods are less well-founded. We will carry over expertise in multigrid methods for forward problems to the inverse problems at hand in the physics-constrained optimization of the neuron weights.
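The coarse-to-fine idea behind such hierarchical optimization can be illustrated with a deliberately simple sketch, closer to nested iteration (cascadic multigrid) than to a full multigrid cycle: a piecewise-linear model is first fitted on a coarse grid, the result is prolongated to a finer grid, and optimization continues there. The grid sizes, step counts, and learning rate below are arbitrary choices for illustration.

```python
import math

def predict(values, x):
    """Piecewise-linear model with nodal `values` on a uniform grid over [0, 1]."""
    n = len(values) - 1
    i = min(int(x * n), n - 1)
    t = x * n - i
    return (1 - t) * values[i] + t * values[i + 1]

def mse(values, xs, ys):
    """Mean squared error of the model against samples (xs, ys)."""
    return sum((predict(values, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def fit(values, xs, ys, steps, lr):
    """Least-squares fit of the nodal values by plain gradient descent."""
    n = len(values) - 1
    for _ in range(steps):
        grad = [0.0] * len(values)
        for x, y in zip(xs, ys):
            i = min(int(x * n), n - 1)
            t = x * n - i
            r = (1 - t) * values[i] + t * values[i + 1] - y
            grad[i] += 2 * r * (1 - t)
            grad[i + 1] += 2 * r * t
        values = [v - lr * g / len(xs) for v, g in zip(values, grad)]
    return values

def prolong(coarse):
    """Linear interpolation from a coarse grid to the next finer grid."""
    fine = []
    for a, b in zip(coarse, coarse[1:]):
        fine += [a, 0.5 * (a + b)]
    return fine + [coarse[-1]]

# Dense samples of the smooth target y = sin(2 pi x).
xs = [i / 200 for i in range(201)]
ys = [math.sin(2 * math.pi * x) for x in xs]

coarse = fit([0.0] * 5, xs, ys, steps=300, lr=1.0)      # coarse level: 5 nodes
fine = fit(prolong(coarse), xs, ys, steps=300, lr=1.0)  # finer level: 9 nodes
```

The coarse level resolves the large-scale shape of the target cheaply; the prolongated solution then serves as a good starting point on the finer level, where only the small-scale corrections remain to be learned. Carrying this scale separation over to the training of the physics-constrained networks is the aim of the project.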