Researchers from the Skoltech AI Center, together with colleagues from the Institute for Information Transmission Problems of the Russian Academy of Sciences, have developed a method that allows neural networks to assess the “confidence” of their predictions more accurately.
The method uses a special set of confidence-aware training data and aims to improve the reliability of neural network models in high-risk tasks, for example, in medicine or manufacturing.
The results were presented at the Winter Conference on Applications of Computer Vision (WACV 2025) and published in the conference proceedings.
Modern neural network models often demonstrate high accuracy, but they are sometimes too confident in their predictions, even when the data is ambiguous or noisy. This can be critical in areas such as medicine, industrial safety, or autonomous systems. The developed approach can increase the reliability of models by controlling their behavior more precisely in complex and borderline scenarios.
The new method helps the neural network identify cases where its prediction may require human verification. The research team tested the technology on real-world data, including a medical diagnostic task of blood typing, and achieved a significant improvement in the accuracy of uncertainty estimation for classification and segmentation tasks.
Unlike classical approaches, in which training samples carry only binary labels (0 or 1), the new method additionally introduces “soft” labels: values between 0 and 1 that reflect the experts’ confidence in the accuracy of the annotation. This helps the model build a more cautious decision-making strategy and respond more effectively to situations with a high degree of uncertainty.
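To illustrate the idea (a minimal sketch, not the authors’ code), the example below trains a small classifier on hypothetical soft targets: where a hard label would be 0 or 1, a confidence-aware label such as 0.7 records that annotators leaned toward the positive class but were not certain. The model, data, and label values are placeholders.

```python
# Sketch only: training with "soft" confidence-aware labels instead of hard 0/1 targets.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 16)  # a toy batch of 8 samples with 16 features each
# Hard labels would be 0.0 or 1.0; confidence-aware labels encode how sure
# the annotators were, e.g. 0.7 = "probably positive, but not certain".
soft_targets = torch.tensor([1.0, 0.0, 0.9, 0.7, 0.3, 1.0, 0.55, 0.0]).unsqueeze(1)

logits = model(x)
# Binary cross-entropy accepts targets anywhere in [0, 1], so ambiguous
# examples pull the predicted probability toward the annotators' confidence
# rather than toward a hard 0 or 1.
loss = F.binary_cross_entropy_with_logits(logits, soft_targets)
loss.backward()
optimizer.step()
```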
The method also accounts for two types of uncertainty: epistemic uncertainty, which stems from insufficient or incomplete training data, and aleatoric uncertainty, which arises from natural noise or ambiguity in the data itself.
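One common way to separate these two components (a sketch under standard assumptions, not taken from the paper) uses an ensemble of models: the entropy of the averaged prediction gives the total uncertainty, the average per-model entropy approximates the aleatoric part, and the difference between them reflects the epistemic part. The probabilities below are toy values.

```python
# Sketch only: decomposing total predictive uncertainty into aleatoric and
# epistemic parts from an ensemble of classifiers.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

# Predicted class probabilities from 3 ensemble members for one input.
ensemble_probs = np.array([
    [0.9, 0.1],
    [0.6, 0.4],
    [0.8, 0.2],
])

total = entropy(ensemble_probs.mean(axis=0))   # total predictive uncertainty
aleatoric = entropy(ensemble_probs).mean()     # noise/ambiguity in the data itself
epistemic = total - aleatoric                  # disagreement between models (lack of data)
print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")
```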
“Our method helps the neural network understand where to be careful. In practice, this reduces the risk of its overconfidence when handling complex or borderline cases. We tested the method on real data and confirmed its effectiveness in estimating uncertainty,” said Aleksandr Yugay, a junior research engineer at the Skoltech AI Center.
The new technology can be applied in critical areas where the reliability of artificial intelligence is important, including medical diagnostic systems, industrial automation, technical control systems, and autonomous solutions.
“We focused on teaching the model not only to make decisions, but also to identify cases where the risk of error is particularly high. Thanks to the use of confidence-aware annotation, our solution stands out from existing ones. Such an assessment of ‘caution’ is critical for decision-making in medicine and other areas where the cost of error is high,” commented Alexey Zaytsev, an associate professor at Skoltech and head of the Skoltech-Sberbank Applied Research Laboratory.
More information:
Sergey Korchagin et al., Improving Uncertainty Estimation with Confidence-Aware Training Data, Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025. api.semanticscholar.org/CorpusID:276676168
Provided by Skolkovo Institute of Science and Technology