New geoscientific modeling tool gives more holistic results in predictions

Geoscientific models allow researchers to test potential scenarios with numerical representations of the Earth and relevant systems, from predicting large-scale climate change effects to helping inform land management practices. Estimating parameters for traditional models, however, is computationally costly, and the results are tied to specific locations and scenarios that are difficult to extrapolate to other settings, according to Chaopeng Shen, associate professor of civil and environmental engineering at Penn State.

To address these issues, Shen and other researchers have developed a new approach known as differentiable parameter learning, which combines elements of traditional process-based models and machine learning into a method that can be applied broadly and leads to more holistic solutions. Their model, published in Nature Communications, is publicly available for researchers to use.

“A problem that traditional process-based models face has been that they all need some kind of parameters—the variables in the equation that describe certain attributes of the geophysical system, such as the conductivity of an aquifer or rainwater runoff—that they don’t have direct observations for,” Shen said. “Normally, you’d have to go through this process called parameter inversion, or parameter estimation, where you have some observations of the variables that the models are going to predict, and then you go back and ask, ‘What should be my parameter?’”
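To make the idea concrete, here is a minimal sketch of parameter inversion. The one-parameter runoff model, the synthetic observations, and all names below are invented for illustration; they are not the model or data used in the study.

```python
import numpy as np
from scipy.optimize import minimize

# Toy "process-based" model (illustrative only): predicted runoff as a
# simple function of rainfall, governed by one unobserved parameter.
def process_model(rainfall, runoff_coeff):
    return runoff_coeff * rainfall

# Synthetic observations for the sketch: the "true" coefficient (0.35)
# is hidden from the calibration and must be inverted from measurements.
rng = np.random.default_rng(0)
rainfall = rng.uniform(0.0, 50.0, size=200)
observed_runoff = 0.35 * rainfall + rng.normal(0.0, 1.0, size=200)

# Parameter inversion: search for the coefficient that minimizes the
# discrepancy between model output and observations.
def loss(theta):
    predicted = process_model(rainfall, theta[0])
    return np.mean((predicted - observed_runoff) ** 2)

result = minimize(loss, x0=[0.1])
print(f"Estimated runoff coefficient: {result.x[0]:.3f}")  # close to 0.35
```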

A common way to calibrate process-based models is an evolutionary algorithm, which tunes the parameters over many iterations. These algorithms, however, cannot handle large scales or be generalized to other contexts.
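As a rough illustration of that per-location calibration, the toy sketch below evolves a population of candidate parameters at each site independently. The sites, data, and mutation scheme are all hypothetical; the point is only that nothing learned at one site transfers to another.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data for two independent sites (illustrative only).
sites = {}
for name, true_coeff in [("site_A", 0.35), ("site_B", 0.60)]:
    rain = rng.uniform(0.0, 50.0, size=200)
    sites[name] = (rain, true_coeff * rain + rng.normal(0.0, 1.0, size=200))

def evolve_parameter(rainfall, observed, generations=100, pop_size=20):
    """Toy evolutionary calibration: keep the best half of a population
    of candidate runoff coefficients, then refill it with mutated copies."""
    population = rng.uniform(0.0, 1.0, size=pop_size)
    for _ in range(generations):
        errors = np.mean((population[:, None] * rainfall - observed) ** 2, axis=1)
        survivors = population[np.argsort(errors)[: pop_size // 2]]
        offspring = survivors + rng.normal(0.0, 0.02, size=survivors.size)
        population = np.concatenate([survivors, offspring])
    errors = np.mean((population[:, None] * rainfall - observed) ** 2, axis=1)
    return population[np.argmin(errors)]

# Each site solves its own optimization problem in isolation -- the
# inefficiency Shen describes below.
for name, (rain, obs) in sites.items():
    print(name, round(evolve_parameter(rain, obs), 3))
```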

“It’s like I’m trying to fix my house, and my neighbor has a similar problem and is trying to fix his house, and there’s no communication between us,” Shen said. “Everyone is trying to do their own thing. Likewise, when you apply evolutionary algorithms to an area—let’s say to the United States—you will solve a separate problem for every little piece of land, and there’s no communication between them, so there is a lot of effort wasted. Further, everyone can solve their problem in their own inconsistent ways, and that introduces lots of physical unrealism.”

To solve issues for wider regions, Shen’s model takes in data from all locations to arrive at one solution. Instead of inputting location A’s data to get location A’s solution, then location B’s data for location B’s solution, the model takes in data from locations A and B together and produces a single, more comprehensive solution.

“Our algorithm is much more holistic, because we use a global loss function,” he said. “This means that during the parameter estimation process, every location’s loss function—the discrepancy between the output of your model and the observations—is aggregated together. The problems are solved together at the same time. I’m looking for one solution to the entire continent. And when you bring more data points into this workflow, everyone is getting better results. While there were also some other methods that used a global loss function, humans were deriving the formula, so the results were not optimal.”
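A hedged sketch of what that global loss might look like in code follows, assuming a small neural network maps each site’s attributes to its model parameters—the general structure of differentiable parameter learning—while the specific names, shapes, and toy process model are invented for illustration and do not reflect the paper’s implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: many locations, each with static attributes (e.g., soil
# or terrain descriptors) and paired rainfall/runoff observations.
n_sites, n_attrs, n_obs = 500, 8, 50
attributes = torch.randn(n_sites, n_attrs)
rainfall = torch.rand(n_sites, n_obs) * 50.0
true_coeff = torch.sigmoid(attributes @ torch.randn(n_attrs, 1))  # hidden mapping
observed = true_coeff * rainfall + torch.randn(n_sites, n_obs)

# A small network g(attributes) -> parameters, shared across ALL sites,
# stands in for the learned parameterization.
param_net = nn.Sequential(
    nn.Linear(n_attrs, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)
optimizer = torch.optim.Adam(param_net.parameters(), lr=1e-2)

for step in range(2000):
    coeff = param_net(attributes)    # parameters for every site at once
    predicted = coeff * rainfall     # differentiable toy process model
    # Global loss: per-site discrepancies aggregated into ONE objective,
    # so all locations are solved together rather than one at a time.
    loss = torch.mean((predicted - observed) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final global loss: {loss.item():.3f}")
```

Because the network is shared, every observation at every site shapes the same parameterization, which is one way to read Shen’s point that more data points make all locations’ results better.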

Shen also noted that his method is far more computationally cost-effective than traditional methods. What would normally take a cluster of 100 processors two to three days now takes a single graphics processing unit (GPU) about one hour.

“The cost per grid cell dropped enormously,” he said. “It’s like economies of scale. If you have one factory that builds one car, but now you have the same one factory build 10,000 cars, your cost per unit declines dramatically. And that same thing happens as you bring more points into this workflow. At the same time, every location is now getting better service as a result of other locations’ participation.”

Pure machine learning methods can make good predictions for extensively observed variables, but they can produce results that are difficult to interpret because they do not assess causal relationships.

“A deep learning model might make a good prediction, but we don’t know how it did it,” Shen said, explaining that while a model may do a good job making predictions, researchers can misinterpret the apparent causal relationship. “With our approach, we are able to organically link process-based models and machine learning at a fundamental level to leverage all the benefits of machine learning and also the insights that come from the physical side.”

More information:
Wen-Ping Tsai et al, From calibration to parameter learning: Harnessing the scaling effects of big data in geoscientific modeling, Nature Communications (2021). DOI: 10.1038/s41467-021-26107-z

Provided by
Pennsylvania State University

Citation:
New geoscientific modeling tool gives more holistic results in predictions (2022, March 4)
