A new method for reasoning about uncertainty could help artificial intelligence find safer options faster, for example in self-driving cars, according to a new study to be published shortly at AAAI involving researchers at Radboud University, the University of Texas at Austin, the University of California, Berkeley, and the Eindhoven University of Technology.
The researchers have defined a new approach to so-called 'uncertain partially observable Markov decision processes,' or uPOMDPs. In layman's terms, these are models of the real world that estimate the probability of events. A self-driving car, for example, will face many unknown situations when it starts driving. To validate the artificial intelligence of self-driving cars, extensive calculations are run to analyze how the AI would approach various situations. The researchers argue that their new approach makes these modeling exercises far more realistic, allowing AI to make better, safer decisions more quickly.
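To make the idea concrete, here is a minimal sketch of what 'uncertainty' means in a uPOMDP. It is our illustration, not the researchers' implementation (their paper synthesizes finite-state controllers), and all state names and numbers are hypothetical: instead of committing to a single probability for each transition, the model keeps an interval of plausible probabilities, and any guarantee must hold for every value in those intervals.

    # A toy "uncertain" transition model (hypothetical states and numbers):
    # for each (state, action) pair, every successor state is mapped to an
    # interval (low, high) of plausible probabilities instead of one number.
    UNCERTAIN_T = {
        ("safe", "drive"):  {"safe": (0.75, 0.90), "risky": (0.10, 0.25)},
        ("risky", "brake"): {"safe": (0.60, 0.80), "crash": (0.20, 0.40)},
    }

    def worst_case_prob(state, action, bad_state):
        """Pessimistic one-step bound: the highest probability of reaching
        bad_state that is still consistent with the intervals."""
        low, high = UNCERTAIN_T[(state, action)].get(bad_state, (0.0, 0.0))
        return high

    # Any safety guarantee must hold across the whole interval, so the
    # analysis plans against the worst case:
    print(worst_case_prob("risky", "brake", "crash"))  # prints 0.4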
Making the theoretical real
POMDPs are already used to simulate and model many situations. They can help to predict the spread of an epidemic, calculate how aircraft and spacecraft can avoid collisions, and can even be used to survey and protect endangered species. "We know that these models are very good at providing a realistic picture of the real world. However, the large amount of processing power they require means their use in practical applications is often still limited," says Nils Jansen, one of the main authors. "This new approach allows us to take all our calculations and theoretical information and use it in the real world on a more consistent, regular basis."
Self-driving cars
The team's breakthrough comes from explicitly including the uncertainty of the real world in the models. Jansen: "For example, current models might just tell you that there is an 80 percent chance that a drive in a self-driving car will be fully safe. It's unclear what might happen in the other 20 percent, and what type of risk can be expected. That is an unclear and vague approximation of risk. With this new approach, a system could give far more detailed explanations of what could go wrong and take those into account when making calculations. For users, this means they have more specific examples of what could go wrong and can make better-targeted adjustments to avoid those specific risks."
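The 80-percent example can be illustrated with a short sketch. The numbers and action names below are made up, and the selection rule shown is a generic worst-case comparison, not the paper's controller-synthesis algorithm: when each action's safety probability is an interval rather than a point estimate, a robust controller picks the action whose worst case is best.

    # Hypothetical safety-probability intervals for two driving actions.
    actions = {
        "overtake":  (0.75, 0.99),  # wide interval: poorly understood outcome
        "stay_lane": (0.82, 0.88),  # narrow interval: well-understood outcome
    }

    def best_by_midpoint(acts):
        # Classical view: collapse each interval to a single point estimate.
        return max(acts, key=lambda a: sum(acts[a]) / 2)

    def best_robust(acts):
        # Robust view: judge each action by its guaranteed worst case.
        return max(acts, key=lambda a: acts[a][0])

    print(best_by_midpoint(actions))  # 'overtake'  (midpoint 0.87)
    print(best_robust(actions))       # 'stay_lane' (guaranteed >= 0.82)

The point-estimate model prefers the action that looks best on average, while the robust model prefers the action with the stronger guarantee, which is the kind of distinction the researchers describe.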
Other researchers have previously considered this approach to uPOMDPs, but only in specific, limited situations and thought experiments. "However, for the first time we have been able to turn these theoretical thought experiments into a practical and realistic approach," explains Jansen. "It was considered a unique, difficult problem, but thanks to an interdisciplinary approach we were able to make real breakthroughs."
More information:
Murat Cubuktepe et al., Robust Finite-State Controllers for Uncertain POMDPs, arXiv:2009.11459 [cs.AI]. https://arxiv.org/abs/2009.11459
Provided by Radboud University
Citation: New approach to AI offers more certainty in the face of uncertainty (2021, January 21), retrieved 21 January 2021 from https://techxplore.com/news/2021-01-approach-ai-certainty-uncertainty.html