Researchers from the Monash University Malaysia campus have announced they are developing a platform that aims to moderate and verify content shared on online forums and discussion boards.
The platform is being developed by the Monash University Malaysia School of Information Technology. According to the researchers, it uses a combination of graph algorithms and machine learning to extract tacit information from platforms like Reddit, StackExchange, and Quora, and assigns each post a score that estimates its reliability.
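The researchers have not published their code or algorithms, so purely as an illustration, a graph-based reliability score of the kind described could be sketched as follows. The interaction graph, edge meaning, and use of PageRank here are all assumptions for the sake of the example, not details released by Monash.

```python
# Illustrative sketch only -- the researchers' actual algorithm is not public.
# Assumes a user-interaction graph where an edge u -> v means u upvoted or
# replied approvingly to v; a centrality measure then acts as a crude
# stand-in for the "reliability" score described in the article.
import networkx as nx

interactions = [
    ("alice", "bob"),   # alice engaged positively with bob's post
    ("carol", "bob"),
    ("bob", "dave"),
]

graph = nx.DiGraph(interactions)

# PageRank over the interaction graph: users endorsed by well-endorsed users
# receive higher scores (a hypothetical proxy for post reliability).
reliability = nx.pagerank(graph)
print(sorted(reliability.items(), key=lambda kv: -kv[1]))
```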
The university explained that with people increasingly relying on social media for information, the dissemination of fake news related to issues like health and politics will “remain a constant challenge unless there is an urgent application that can appropriately moderate and verify content online”.
“By assigning numbers to users of various online discussion forums we’re able to reward those people who are sharing credible and trustworthy content, while punishing others who are pushing incorrect and misinformed content. The reward or punishment aspect is tied to the visibility and engagement of someone’s profile or content,” Dr Ian Lim Wern Han from the university’s School of IT said.
“If users are credible, their content will be placed higher up on the page for more visibility and their Reddit votes will be worth more when they vote on other threads or comments. If a user is deemed untrustworthy, their post will be placed lower on the page or even in some cases hidden from the public altogether and their votes have less worth.”
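The mechanism Lim describes amounts to credibility-weighted ranking and voting. A minimal sketch of that idea is below; the credibility scale, hiding threshold, and field names are hypothetical choices for illustration, not details of the Monash system.

```python
# Hypothetical sketch of credibility-weighted voting and ranking.
# Assumes each user carries a credibility score in [0, 1]; all thresholds
# and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    author_credibility: float   # 0.0 = untrusted, 1.0 = fully trusted
    upvotes: list               # credibility scores of the users who upvoted

    def weighted_score(self) -> float:
        # A vote from a credible user counts for more than one from a
        # low-credibility user.
        return sum(self.upvotes)

def rank_thread(posts, hide_below=0.2):
    # Posts by untrustworthy authors are hidden entirely; the rest are
    # ordered by author credibility and credibility-weighted votes.
    visible = [p for p in posts if p.author_credibility >= hide_below]
    return sorted(visible,
                  key=lambda p: (p.author_credibility, p.weighted_score()),
                  reverse=True)

posts = [
    Post(author_credibility=0.9, upvotes=[0.8, 0.7]),
    Post(author_credibility=0.1, upvotes=[0.9, 0.9, 0.9]),  # hidden despite many votes
]
print(rank_thread(posts))
```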
Lim collected more than 700,000 threads from almost two million users across a range of online forums. His research profiled each user with a rating, and those ratings were then used to predict the user's contributions on the following day. The ratings were updated daily, the university said, and the process repeated over subsequent days.
“Using measures of confidence and volatility on a complex network of interactions ensures the most credible sources of information or questions appear at the very top of a thread on online forums. On the other hand, the very same rating can be used to match the questioner with suitable and reliable responses,” the university added.
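The published details stop at "confidence and volatility", so the daily update below is a simplified, hypothetical illustration in that spirit (loosely inspired by rating systems such as Glicko), not the researchers' method. All parameters and the update rule are assumptions.

```python
# Hypothetical daily rating update using a confidence measure.
# Low confidence lets the rating move more in response to new evidence;
# volatility could be tracked similarly from the spread of recent observations.
def update_rating(rating, confidence, observed, learning_rate=0.1):
    """Nudge a user's reliability rating toward today's observed quality.

    rating     -- current reliability estimate (0..1)
    confidence -- certainty about the rating (0..1)
    observed   -- today's measured contribution quality (0..1)
    """
    step = learning_rate * (1.0 - confidence)
    new_rating = rating + step * (observed - rating)
    # Confidence grows as evidence accumulates.
    new_confidence = min(1.0, confidence + 0.05)
    return new_rating, new_confidence

rating, confidence = 0.5, 0.2
for quality in [0.9, 0.8, 0.85]:   # three days of high-quality posts
    rating, confidence = update_rating(rating, confidence, quality)
print(round(rating, 3), round(confidence, 3))
```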
Lim said the methodology could also be applied to online social media influencers to ensure that celebrities, sportspeople, and others with influence do not disseminate incorrect or misleading information or public service announcements.
“Social media platforms and online discussion forums have given a voice to users without holding them accountable for the accuracy of what they say,” the university said. “As a result, these platforms have become a fertile ground for individuals intentionally spreading misinformation and fake news.”
Elsewhere, Google has launched its latest attempt to help the media industry.
After announcing a $1 billion commitment to build partnerships with news publishers and invest in the “future of news” through News Showcase, the search giant on Wednesday launched Journalist Studio, touted as a suite of tools that uses technology to “help reporters do their work more efficiently, securely, and creatively”, along with two new products for reporters.
“Quality journalism is critical to our societies. In launching these tools, we look forward to continuing to use the best of Google to support that important work,” Google said in a blog post announcing the initiative.