If social networks and other platforms are to get a handle on disinformation, it’s not enough to know what it is — you have to know how people react to it. Researchers at MIT and Cornell have some surprising but subtle findings that may affect how Twitter and Facebook should go about treating this problematic content.
MIT’s contribution is a counterintuitive one. When someone encounters a misleading headline in their timeline, the logical thing to do would be to put a warning before it so that the reader knows it’s disputed from the start. Turns out that’s not quite the case.
The study had nearly 3,000 participants evaluate the accuracy of headlines after receiving different warnings about them, or none at all.
“Going into the project, I had anticipated it would work best to give the correction beforehand, so that people already knew to disbelieve the false claim when they came into contact with it. To my surprise, we actually found the opposite,” said study co-author David Rand in an MIT news article. “Debunking the claim after they were exposed to it was the most effective.”
When a person was warned beforehand that the headline was misleading, they improved in their classification accuracy by 5.7 percent. When the warning came simultaneously with the headline, that improvement grew to 8.6 percent. But if shown the warning afterwards, they were 25 percent better. In other words, debunking beat “prebunking” by a fair margin.
The team speculated as to the cause of this, suggesting that it fits with other indications that people are more likely to incorporate feedback into a preexisting judgment rather than alter that judgment as it’s being formed. They warned that the problem is far deeper than a tweak like this can fix.
“There is no single magic bullet that can cure the problem of misinformation,” said co-author Adam Berinsky. “Studying basic questions in a systematic way is a critical step toward a portfolio of effective solutions.”
The study from Cornell is equal parts reassuring and frustrating. People viewing potentially misleading information were reliably influenced by the opinions of large groups — whether or not those groups were politically aligned with the reader.
It’s reassuring because it suggests that people are willing to trust that if 80 out of 100 people thought a story was a little fishy, even if 70 of those 80 were from the other party, there might just be something to it. It’s frustrating because of how seemingly easy it is to sway an opinion simply by saying that a large group thinks it’s one way or the other.
“In a practical way, we’re showing that people’s minds can be changed through social influence independent of politics,” said graduate student Maurice Jakesch, lead author of the paper. “This opens doors to use social influence in a way that may de-polarize online spaces and bring people together.”
Partisanship still played a role, it must be said — people were about 21 percent less likely to have their view swayed if the group opinion was led by people belonging to the other party. But even so, people were very likely to be affected by the group’s judgment.
Part of why misinformation is so prevalent is that we don’t really understand why it’s so appealing to people, or what measures reduce that appeal, among other basic questions. As long as social media companies are blundering around in the dark, they’re unlikely to stumble upon a solution — but every study like this sheds a little more light.
from Social – TechCrunch https://ift.tt/3iK1psD