In teams, organizations, markets, and online communities, pooling ideas from a larger group can help solve complex problems. Large language models (LLMs) are emerging as powerful tools for unlocking that collective potential. Picture an online forum where thousands of voices contribute to a solution, and an LLM synthesizes these diverse insights into a cohesive, actionable plan.
A new paper examines how LLMs can reshape collective intelligence: the shared, group-level intelligence that emerges from the collaboration, collective efforts, and competition of many individuals, and that underpins consensus decision making. The paper highlights both enhanced capabilities and potential risks. Co-authored by researchers at the Tepper School of Business at Carnegie Mellon University and several other institutions, it articulates the impact of LLMs on group decision making and problem solving.
Published in Nature Human Behaviour, the research highlights the dual nature of LLMs as both tools for and products of collective intelligence, emphasizing their role in aggregating information and facilitating communication.
“LLMs provide unique opportunities for enhancing group collaboration and decision making, but their use requires careful consideration to maintain diversity and avoid potential pitfalls,” said Anita Williams Woolley, a co-author on the paper and professor of organizational behavior at the Tepper School.
Woolley and her co-authors considered how LLMs process and create text, particularly their impact on collective intelligence. For example, LLMs can make it easier for people from different backgrounds and languages to communicate, which means groups can collaborate more effectively. This technology helps share ideas and information smoothly, leading to more inclusive and productive online interactions.
While LLMs offer many benefits, they also present challenges, such as ensuring that all voices are heard equally.
“Because LLMs learn from available online information, they can sometimes overlook minority perspectives or emphasize the most common opinions, which can create a false sense of agreement,” said Jason Burton, an assistant professor at Copenhagen Business School.
Another issue is that, because LLMs learn from vast and varied online content that often includes false or misleading material, they can spread incorrect information if not properly managed. Without careful oversight and regular updates to ensure data accuracy, LLMs can perpetuate and even amplify misinformation, so these tools must be managed responsibly to avoid misleading outcomes in collective decision-making processes.
The article emphasizes the importance of further exploring the ethical and practical implications of LLMs, especially in policymaking and public discussions. The researchers advocate for the development of guidelines for using LLMs responsibly, supporting group intelligence while maintaining individual diversity and expression.
More information: Jason W. Burton et al, "How large language models can reshape collective intelligence," Nature Human Behaviour (2024). DOI: 10.1038/s41562-024-01959-9

Provided by Carnegie Mellon University

Citation: How large language models are changing collective intelligence (2024, September 27)