Hundreds of thousands of Twitter accounts that amplified fake news and disinformation in the lead-up to the 2016 presidential election are still active on the site, tweeting about other fake news and conspiracies more than a million times every day, according to a report released Thursday by the Knight Foundation.

The results of the study, conducted by George Washington University associate professor Matthew Hindman and Vlad Barash, the science director at the network analysis service Graphika, showed that Twitter hasn’t cracked down on many of its fake news amplifiers. Eighty percent of Twitter accounts that were spreading false information during the campaign were still active on the platform, researchers found.

The researchers analyzed more than 10 million tweets from 700,000 accounts in an effort to better understand how the fake news ecosystem on Twitter has evolved over the past two years.

“There was a lot of confusion and a lot of uncertainty around basic questions about fake news: How much was there? Where does it come from? How is it spreading on platforms like Facebook and Twitter?” Hindman said. “This research project was really designed to answer some of those basic questions.”

The study found that most of the fake and conspiracy tweets on Twitter linked to only about 10 websites, including The Gateway Pundit and Truthfeed. That trend was largely unchanged from 2016. Additionally, about 60 percent of the accounts that shared and amplified fake news were estimated by researchers to have been automated accounts. Those accounts were densely connected, following each other at high rates and retweeting each other frequently, intensifying the impact and reach of each post.

“It raises questions about whether [tackling misinformation] is really a game of whack-a-mole,” Hindman said. “That’s what we expected going into this, but that’s not really what we found. We found that a large portion of fake and conspiracy news online really was from the same accounts and with links to the same sites.”

The study also found that fake and conspiracy news skewed more heavily right-leaning and conservative after the election, primarily because the amount of left-leaning fake news decreased substantially. Researchers also found evidence of coordinated efforts to share and amplify fake news stories on the platform, particularly from accounts that researchers associated with Russian propaganda.

Hindman said the report suggests that labeling bots, a step Twitter CEO Jack Dorsey said in September the company was considering, could be a useful tool in the fight to limit the influence of fake news.

“Our report is consistent with the notion that labeling of bots would make a big difference, or at least could make a noticeable dent and step toward managing the problem,” Hindman said.

A Twitter spokesperson did not immediately respond to a request for comment.

Sam Gill, the vice president of communities and impact at the Knight Foundation, said he hoped the results would help inform the public conversation about fake news and the best ways to tackle the problem.

He was optimistic about a case study included in the research that suggested aggressive action against accounts that link to and spread conspiracies can have a major impact on reducing the spread of misinformation.

“I think that’s a promising sign that we can actually get our arms around this challenge,” Gill said.
