Increasingly, biologists are turning to computational modeling to make sense of complex systems. In neuroscience, researchers are adapting the kinds of algorithms used to forecast the weather or filter spam from your email to seek insight into how the brain’s neural networks process information.
New research from Cold Spring Harbor Laboratory Assistant Professor Tatiana Engel offers crucial guidance to biologists using such models. By testing various computational models of the nervous system, she and postdoctoral researcher Mikhail Genkin found that a model's ability to make good predictions about data does not mean it reflects the underlying logic of the biological system it represents. Relying on such models without carefully evaluating their validity could lead to wrong conclusions about how the actual system works, they say.
The work, published October 26, 2020 in Nature Machine Intelligence, concerns a type of machine learning known as flexible modeling, which gives users the freedom to explore a wide range of possibilities without formulating specific hypotheses beforehand. Engel’s lab has turned to such models to investigate how signaling in the brain gives rise to decision-making.
When it comes to forecasting the weather or predicting trends in the stock market, any model that makes good predictions is valuable. But Engel says that for biologists, the goals are different:
“Because we are interested in scientific interpretation, and in actually discovering hypotheses from the data, we not only need to fit the model to the data, we also need to analyze and understand the model we get. We want to look into the model’s structure and mechanism to make inferences about how the brain might work.”
It’s possible to make good predictions using wrong assumptions, Engel said, pointing to the ancient geocentric model of the solar system, which accurately predicted the movements of celestial bodies while positing that they revolved around the Earth rather than the Sun. That history makes it important to ask how far particular models of neural networks can be trusted.
By building and comparing several models of neural signaling, Engel and Genkin found that good predictive power does not necessarily indicate that a model is a good representation of real neural networks. They found that the best models were instead those that were most consistent across multiple datasets. This approach won’t necessarily work for all situations, however, and biologists may need alternative methods of evaluating their models. Most importantly, Genkin said, “We shouldn’t take anything for granted. We should check every assumption we have.”
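The paper itself deals with models of neural population dynamics, but the general idea of judging a flexible model by the consistency of what it infers, rather than by predictive accuracy alone, can be sketched with a toy example. The snippet below is a hypothetical illustration, not the authors' method: it fits polynomials of varying flexibility (a stand-in for any flexible model family) to two independently generated datasets and reports both held-out predictive error and how much the inferred structure, here the fitted coefficients, disagrees between the two fits.

```python
# Hypothetical illustration (not the paper's method): comparing flexible models
# by how consistently they recover structure across independent datasets,
# rather than by predictive accuracy alone.
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n=200):
    # Ground truth is a simple cubic curve plus noise; the "true" structure
    # is low-degree, but more flexible models can still predict well.
    x = rng.uniform(-1, 1, n)
    y = 1.5 * x - 0.8 * x**3 + rng.normal(0, 0.1, n)
    return x, y

def fit_poly(x, y, degree):
    # Flexible model: polynomial of a chosen degree.
    return np.polyfit(x, y, degree)

def predictive_error(coeffs, x, y):
    # Mean squared error on held-out data.
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

def disagreement(c1, c2):
    # How differently the two datasets "interpret" the system:
    # distance between the coefficient vectors inferred from each.
    return np.linalg.norm(c1 - c2)

(x1, y1), (x2, y2) = make_dataset(), make_dataset()
x_test, y_test = make_dataset()

for degree in (1, 3, 10):
    c1, c2 = fit_poly(x1, y1, degree), fit_poly(x2, y2, degree)
    err = predictive_error(c1, x_test, y_test)
    dis = disagreement(c1, c2)
    print(f"degree {degree:2d}: test error {err:.4f}, cross-dataset disagreement {dis:.3f}")
```

In runs of this sketch, an overly flexible model can match a simpler one on predictive error while the structure it infers varies far more from dataset to dataset, mirroring the article's point that good prediction alone does not certify a trustworthy interpretation.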
More information: Genkin, M., Engel, T.A. Moving beyond generalization to accurate interpretation of flexible models. Nature Machine Intelligence (2020). https://doi.org/10.1038/s42256-020-00242-6

Provided by Cold Spring Harbor Laboratory