Mathematics is the language of the physical world, and Alex Townsend sees mathematical patterns everywhere: in the weather, in the way sound waves move, and even in the spots and stripes that form as zebrafish embryos develop.
“Since Newton wrote down calculus, we have been deriving calculus equations called differential equations to model physical phenomena,” said Townsend, associate professor of mathematics in the College of Arts and Sciences.
This way of deriving the equations works, Townsend said, if you already know the physics of the system. But what about systems for which the physics remains unknown?
In the new and growing field of partial differential equation (PDE) learning, mathematicians collect data from natural systems and then use trained computer neural networks to attempt to derive the underlying mathematical equations.
In a new paper, Townsend, with co-authors Nicolas Boullé of the University of Oxford and Christopher Earls, professor of civil and environmental engineering in the College of Engineering, advances PDE learning with a novel “rational” neural network, which reveals its findings in a way that mathematicians can understand: through Green’s functions – a right inverse of a differential operator.
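In standard notation (my gloss, not a quotation from the paper), a Green’s function G of a linear differential operator L, under suitable boundary conditions, acts as a right inverse in the following sense:

```latex
% G inverts L from the right: integrating the forcing f against G
% builds a solution of the differential equation L u = f.
L\,G(x, s) = \delta(x - s), \qquad
u(x) = \int G(x, s)\, f(s)\, \mathrm{d}s
\quad\Longrightarrow\quad L u = f .
```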
This machine-human partnership is a step toward the day when deep learning will enhance the scientific exploration of natural phenomena such as weather systems, climate change, fluid dynamics, genetics and more. “Data-Driven Discovery of Green’s Functions with Human-Understandable Deep Learning” was published in Scientific Reports, a Nature Portfolio journal.
A subset of machine learning, neural networks are inspired by the simple brain structure of animals, with its neurons and synapses – inputs and outputs, Townsend said. Neurons – called “activation functions” in the context of computerized neural networks – collect inputs from other neurons. Between the neurons are connections, called weights, that pass signals along to the next neuron.
“By combining these activation functions and weights, you can come up with very complicated maps that take inputs to outputs, just as the brain can take a signal from the eye and turn it into a thought,” Townsend said. “Specifically here, we’re looking at a system – a PDE – and trying to get it to infer the Green’s function that would predict what we’re seeing.”
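The idea of alternating weights and activation functions can be sketched in a few lines. This is a minimal illustration of a generic two-layer network, not the paper’s architecture; the sizes and random weights are hypothetical.

```python
import numpy as np

def activation(z):
    # The "neuron": a standard smooth activation function.
    return np.tanh(z)

def forward(x, W1, b1, W2, b2):
    # The "synapses": weight matrices route signals between layers.
    hidden = activation(W1 @ x + b1)  # first layer: weights, then activation
    return W2 @ hidden + b2           # output layer: a final weighted combination

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 2)), np.zeros(8)
W2, b2 = rng.standard_normal((1, 8)), np.zeros(1)

# Map a two-dimensional input to a one-dimensional output.
y = forward(np.array([0.5, -1.0]), W1, b1, W2, b2)
```

Training such a network means adjusting the weights so the input-to-output map matches observed data.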
Mathematicians have been working with Green’s functions for nearly 200 years, said Townsend, who specializes in them. He usually uses a Green’s function to solve a differential equation faster. Earls proposed using Green’s functions in the opposite direction: to understand a differential equation rather than solve it – an inverse problem.
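The “usual” direction – using a known Green’s function to solve a differential equation quickly – can be shown with a classical textbook case. This is my illustration, not code from the paper: for the operator L u = -u'' on [0, 1] with u(0) = u(1) = 0, the Green’s function is known in closed form, and the solution is a single integral against the forcing term.

```python
import numpy as np

def greens_function(x, s):
    # Classical Green's function for L u = -u'' on [0, 1], u(0) = u(1) = 0:
    # G(x, s) = x (1 - s) for x <= s, and s (1 - x) for x >= s.
    return np.where(x <= s, x * (1.0 - s), s * (1.0 - x))

def solve_with_green(f, n=2001):
    # Solve L u = f by quadrature: u(x) = integral of G(x, s) f(s) ds.
    s = np.linspace(0.0, 1.0, n)
    x = np.linspace(0.0, 1.0, n)
    G = greens_function(x[:, None], s[None, :])
    # Trapezoidal quadrature weights for the integral over s.
    w = np.full(n, s[1] - s[0])
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, G @ (f(s) * w)

# For constant forcing f = 1 the exact solution is u(x) = x (1 - x) / 2.
x, u = solve_with_green(lambda s: np.ones_like(s))
error = np.max(np.abs(u - x * (1.0 - x) / 2.0))
```

PDE learning runs this logic backward: given many input–output pairs (f, u), infer G itself.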
To do this, the researchers created an optimized rational neural network, in which the activation functions are more complicated but can capture the extreme physical behavior of Green’s functions. Townsend and Boullé introduced rational neural networks in a separate 2021 study.
“Like neurons in the brain, there are different types of neurons from different parts of the brain. They’re not all the same,” Townsend said. “In a neural network, choosing the activation function corresponds to choosing the type of neuron.”
Rational neural networks are potentially more flexible than standard neural networks because researchers can tune the activation functions themselves.
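A rational activation function is a ratio of polynomials whose coefficients can be trained along with the weights. The sketch below is only illustrative: the polynomial degrees and starting coefficients are hypothetical, chosen to show the shape of the idea rather than the initialization used in the paper.

```python
import numpy as np

def rational_activation(x, p_coeffs, q_coeffs):
    # r(x) = P(x) / Q(x): a trainable ratio of polynomials.
    return np.polyval(p_coeffs, x) / np.polyval(q_coeffs, x)

# Illustrative coefficients, highest degree first (a real network trains these).
p = np.array([1.0, 0.5, 2.0, 0.0])  # P(x) = x^3 + 0.5 x^2 + 2 x
q = np.array([1.0, 0.0, 1.0])       # Q(x) = x^2 + 1, never zero, so r is smooth

y = rational_activation(np.linspace(-3.0, 3.0, 7), p, q)
```

Because the coefficients are learned, the network can shape its own nonlinearities, including the sharp, singular behavior a Green’s function exhibits near its diagonal.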
“One of the important mathematical considerations here is that we can tailor that activation function into something that can actually capture what we expect from a Green’s function,” Townsend said. “The machine learns the Green’s function for a natural system. It doesn’t know what it means; it can’t interpret it. But now we as humans can examine the Green’s function, because we’ve learned something we can understand mathematically.”
Each system has its own physics, Townsend said. He is excited about this research because it puts his expertise in Green’s functions to work in a cutting-edge direction with new applications.
Source: Cornell University