When groups make decisions—whether it’s humans aligning on a shared idea, robots coordinating tasks, or fish deciding where to swim—not everyone contributes equally. Some individuals have more reliable information, whereas others are more connected and have higher social influence.
A new study by researchers at the Cluster of Excellence Science of Intelligence shows that a combination of uncertainty and heterogeneity plays a crucial role in how groups reach consensus.
The findings, published in Scientific Reports by Vito Mengers, Mohsen Raoufi, Oliver Brock, Heiko Hamann, and Pawel Romanczuk, show that groups make faster and more accurate decisions when individuals factor in not only the opinions of their neighbors but also how confident those neighbors are in their opinions and how well connected they are within the group.
However, more confidence does not always equal smarter decisions. The study also shows that overconfident group members with wrong information might mislead the group.
Why some groups make better decisions than others: Diversity meets uncertainty
Classic models of decision-making assume that all individuals contribute equally to consensus, but in reality, groups are diverse and heterogeneous in both knowledge and influence. Just as some people are experts in a topic, or certain fish in a school have a clearer view of predators, some individuals have more accurate or reliable information than the rest of the group. Others might be more “connected,” which causes their opinions to spread more widely. Think of influential figures on social media, or robots with central roles in a swarm.
These two types of diversity, knowledge and connectedness, are not independent: uncertainty links them and shapes how each influences decision-making. Individuals with more initial knowledge tend to become more central and influential, helping others reduce their uncertainty, while those who interact with many others gather more information and thus become less uncertain over time. This dynamic allows groups to naturally filter out weak or biased information and converge on reliable conclusions, as long as central individuals don't become overconfident too quickly.
“From animal groups to human societies to robot swarms, collectives are inherently heterogeneous in many ways, with each individual bringing differences shaped by their individuality,” explains Mohsen Raoufi, one of the lead authors of the study. “And the really exciting part is that groups can harness this heterogeneity, without any central control, simply through making use of uncertainty.”
How the study tested group decision-making
To explore these effects, the researchers built a model where individuals—whether robots, fish, or humans—adjust their beliefs and certainty dynamically as new information comes in. Uncertain individuals leaned more on their peers, while confident ones shaped the group’s direction of opinion. But position within the network mattered just as much—highly connected agents spread their opinions widely, whether they were right or wrong.
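To make the mechanism concrete, here is a minimal Python sketch of uncertainty-weighted opinion pooling on a network. It is an illustrative toy under simple assumptions, not the update rule from the paper: each agent holds an estimate and a variance, and repeatedly averages with its neighbors, weighting every opinion by its precision (the inverse of its variance), so confident and well-connected agents pull the group estimate more strongly.

```python
# Illustrative toy model of uncertainty-weighted opinion pooling on a network.
# NOT the exact update rule from the paper; a minimal sketch of the idea.
import numpy as np

rng = np.random.default_rng(0)

n_agents = 50
true_value = 1.0

# Each agent starts with a noisy estimate and a variance reflecting its uncertainty.
noise_scale = rng.uniform(0.1, 2.0, n_agents)            # heterogeneous information quality
estimates = true_value + noise_scale * rng.standard_normal(n_agents)
variances = noise_scale ** 2                              # self-assessed uncertainty

# Random undirected network: some agents end up far more connected than others.
adjacency = rng.random((n_agents, n_agents)) < 0.1
adjacency = np.triu(adjacency, 1)
adjacency = adjacency | adjacency.T

for step in range(30):
    new_estimates = estimates.copy()
    new_variances = variances.copy()
    for i in range(n_agents):
        neighbors = np.flatnonzero(adjacency[i])
        if neighbors.size == 0:
            continue
        # Pool own estimate with neighbors', weighting each by inverse variance
        # (precision): confident opinions count more, uncertain ones less.
        idx = np.concatenate(([i], neighbors))
        weights = 1.0 / variances[idx]
        new_estimates[i] = np.sum(weights * estimates[idx]) / np.sum(weights)
        # Pooling information also reduces the agent's own uncertainty.
        new_variances[i] = 1.0 / np.sum(weights)
    estimates, variances = new_estimates, new_variances

print("collective estimate:", estimates.mean(), "true value:", true_value)
```

Running such a sketch illustrates both effects described here: pooling shrinks each agent's uncertainty over time, and an agent that starts out confident in a densely connected part of the network dominates the outcome, for better or worse.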
The researchers found that a mix of perspectives alone wasn't enough to improve decisions; groups reached smarter and faster decisions only when that mix was weighted by uncertainty. When everyone had equal certainty and connections, consensus was slow and unreliable. In heterogeneous groups, by contrast, uncertainty helped weigh opinions, making decisions both faster and more accurate.
One of the biggest surprises? When highly connected individuals became too certain too soon and thus overconfident, they began to dominate the group, even when they were wrong. This imbalance meant that bias could spread easily, flawed AI models could drive incorrect decisions, and groups could end up following a leader who was sure of themselves but ultimately misguided.
“We often assume that influential individuals should be confident in their decisions,” says Vito Mengers, the other lead author, “but our study shows that overconfidence is quickly detrimental. If central figures or algorithms become certain too soon, they can mislead the entire system, whether it’s a human group or a network of robots.”
Why this matters: AI, human networks, and lessons from nature
In artificial intelligence and robotics, this research offers a new way to design systems that make better collective decisions. Self-driving cars could assess not just sensor inputs, but also the confidence of other nearby vehicles, improving safety.
Many natural systems already follow the principle of adapting to uncertainty. Schools of fish, flocks of birds, and ant colonies don’t treat all input equally but adapt dynamically. By studying how that works, we can better understand our own world and use that knowledge to build better AI and improve human collaboration.
This study suggests that good decision-making doesn’t come from eliminating uncertainty—it comes from using it correctly. Groups—whether human, robotic, or biological—function best when they recognize and adjust for differences in knowledge and influence rather than ignoring them. The researchers showed that uncertainty isn’t a flaw—it’s a feature. And when groups use it wisely, they become far better at navigating complexity and reaching the right decisions.
More information:
Vito Mengers et al., Leveraging uncertainty in collective opinion dynamics with heterogeneity, Scientific Reports (2024). DOI: 10.1038/s41598-024-78856-8
Provided by Technical University of Berlin