The spread of misinformation on social media is a serious societal problem that tech companies and policy makers grapple with, yet those who study the issue still lack a deep understanding of why and how false news spreads.
To shed some light on this murky topic, researchers at MIT developed a theoretical model of a Twitter-like social network to study how news is shared and to examine conditions under which a non-credible news item will spread more widely than the truth. Agents in the model are driven by a desire to persuade others to take on their point of view: the key assumption is that a person will share a news item with followers if it is persuasive enough to move others' mindsets closer to that person's own. Otherwise, they won't share it.
The researchers found that in such a setting, when a network is highly connected or the views of its members are sharply polarized, news that is likely to be false will spread more widely and travel deeper into the network than news with higher credibility.
This theoretical work could inform empirical studies of the relationship between the credibility of news and the size of its dissemination, which could help social media companies optimize networks to limit the spread of false information.
“We show that, even if people are rational in how they decide to share news, this can still lead to the amplification of information with low credibility. With this persuasion motive, no matter how extreme my beliefs are (given that the more extreme they are, the more I gain from moving others' opinions), there is always someone who will amplify [the information],” says senior author Ali Jadbabaie, professor and head of the Department of Civil and Environmental Engineering, a core faculty member of the Institute for Data, Systems, and Society (IDSS), and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).
Joining Jadbabaie on the paper are first author Chin-Chia Hsu, a graduate student in the Social and Engineering Systems program at IDSS, and Amir Ajorlou, a LIDS research scientist. The research will be presented this week at the IEEE Conference on Decision and Control.
The research builds on a 2018 study by Sinan Aral, the David Austin Professor of Management at the MIT Sloan School of Management; Deb Roy, professor of media arts and sciences in the Media Lab; and former postdoc Soroush Vosoughi (now an assistant professor of computer science at Dartmouth College). Their empirical study of Twitter data found that false news spreads more widely, faster, and deeper than real news.
Jadbabaie and his colleagues wanted to dig deeper into why this happens.
They hypothesized that persuasion may be a strong motive for sharing news (perhaps agents in the network want to persuade others to come around to their point of view) and decided to build a theoretical model that would let them explore this possibility.
In their model, agents have some prior beliefs about a policy, and their goal is to persuade their followers to move their beliefs closer to the agent's side of the spectrum.
A news item is initially released to a small, random subset of agents, each of whom must decide whether or not to share it with their followers. An agent weighs the item's newsworthiness and credibility, and updates its beliefs based on how surprising or convincing the news is.
“They will do a cost-benefit analysis to see whether, on average, this piece of news will move people closer to what they think or move them away. And we include a nominal cost for sharing. For instance, taking some action: if you are scrolling through social media, you have to stop to do that. Think of that as a cost. Or a reputation cost may come if I share something embarrassing. Everyone has this cost, so the more extreme and the more interesting the news is, the more you want to share it,” says Jadbabaie.
If the news confirms the agent's point of view and has persuasive power that outweighs the nominal cost, the agent will always share the news. But if an agent thinks the item is something other people may already have seen, the agent is discouraged from sharing it.
Since an agent's willingness to share news is a product of its perspective and how persuasive the news is, the more extreme an agent's views or the more surprising the news, the more likely the agent is to share it.
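The sharing rule described above can be sketched in a few lines of code. This is an illustrative toy version only: the linear "credibility times alignment" benefit, the opinion scale, and all names are assumptions made for the example, not the paper's actual utility function.

```python
# Toy sketch of the persuasion-driven sharing rule described above.
# The linear benefit term is an assumption for illustration, not the
# paper's actual model.

def shares(agent_belief: float, news_position: float,
           credibility: float, cost: float) -> bool:
    """Return True if the agent decides to share the news item.

    agent_belief:  the agent's position on a [-1, 1] opinion spectrum
    news_position: the position the item pushes readers toward
    credibility:   chance (0 to 1) that the item is truthful
    cost:          nominal cost of sharing (attention, reputation)
    """
    # Benefit is positive only when the item pushes followers toward the
    # agent's side; more extreme agents (larger |agent_belief|) gain more.
    alignment = agent_belief * news_position
    expected_benefit = credibility * alignment
    return expected_benefit > cost
```

Under this rule, an extreme agent (belief 0.9) shares a supportive item of modest credibility, a moderate agent (belief 0.1) does not, and an item pushing against the agent's side is never shared, which matches the intuition in the text.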
The researchers used this model to study how information spreads during a news cascade, an unbroken chain of sharing that rapidly permeates the network.
Connectivity and Polarization
The team found that when a network has high connectivity and the news is surprising, the credibility threshold for starting a news cascade is lower. High connectivity means that there are many connections among many users in the network.
Similarly, when the network becomes substantially polarized, there are many agents with extreme views who want to share the news item, starting a news cascade. In both of these cases, news with low credibility creates the largest cascades.
“For any network, there is a natural speed limit, a limit of connectivity, that facilitates good transmission of information, where the size of the cascade is maximized by true news. But if you exceed that limit, you will get into situations where false news or news with low credibility has a larger cascade size,” says Jadbabaie.
If the views of users in the network become more diverse, a piece of low-credibility news is less likely to spread more widely than the truth.
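The connectivity effect can be illustrated with a small simulation. This is a deliberately simplified sketch, not the paper's model: it uses an Erdős–Rényi random graph, a hard threshold in place of the agents' full cost-benefit calculation, and parameter values chosen purely for illustration.

```python
import random

def cascade_size(n: int, avg_degree: float, credibility: float,
                 cost: float, seed_frac: float = 0.05) -> int:
    """Toy news cascade on a random network of polarized agents.

    Agents sit near the ends of a [-1, 1] opinion spectrum and reshare
    an item pushing toward +1 whenever credibility * belief > cost.
    A hypothetical stand-in for the paper's model, for illustration only.
    """
    rng = random.Random(0)  # fixed seed so the sketch is repeatable
    # Polarized beliefs: half the agents near -1, half near +1.
    beliefs = [rng.uniform(0.5, 1.0) * rng.choice([-1, 1]) for _ in range(n)]
    # Erdos-Renyi follower graph with the requested average degree.
    p = avg_degree / (n - 1)
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbors[i].append(j)
                neighbors[j].append(i)

    def will_share(i: int) -> bool:
        return credibility * beliefs[i] > cost

    # Seed the item with a few random agents, then spread it along edges.
    seeds = rng.sample(range(n), max(1, int(seed_frac * n)))
    shared = {i for i in seeds if will_share(i)}
    frontier = list(shared)
    while frontier:
        nxt = []
        for i in frontier:
            for j in neighbors[i]:
                if j not in shared and will_share(j):
                    shared.add(j)
                    nxt.append(j)
        frontier = nxt
    return len(shared)
```

With these toy numbers, a sparsely connected network yields only small cascades, while a densely connected one lets the same low-credibility item sweep through an entire polarized half of the network; and once credibility drops below the sharing cost, the cascade dies entirely.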
Jadbabaie and his colleagues designed the agents in the network to behave rationally, so the model would better capture the actions real humans might take if they want to persuade others.
“One might say that is not why people share, and that is valid. Why people do certain things is a subject of intense debate in cognitive science, social psychology, neuroscience, economics, and political science,” he says. “Depending on your assumptions, you get different results. But it seems to me that this assumption of persuasion being the motive is a natural assumption.”
Their model also shows how costs can be manipulated to reduce the spread of false information. Agents conduct a cost-benefit analysis and will not share the news if the cost of doing so exceeds the benefit of sharing.
“We don't make any policy prescriptions, but one thing this work suggests is that, perhaps, having some cost associated with sharing news is not a bad idea. The reason you get so many of these cascades is that the cost of sharing news is actually very low,” he says.
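In the toy linear rule sketched earlier, the effect of such a cost is easy to see: the minimum credibility an item needs before an agent will share it scales directly with the sharing cost. Again, this functional form is an assumption made for illustration, not the paper's model.

```python
def credibility_threshold(extremity: float, cost: float) -> float:
    """Minimum credibility an item needs before an agent with the given
    belief extremity (0 to 1) shares it, under the toy rule
    benefit = credibility * extremity. Capped at 1.0: past that point,
    no item is credible enough to be worth sharing."""
    return min(1.0, cost / extremity)
```

Doubling the cost doubles the threshold for every agent, so fewer low-credibility items clear the bar, which is the mechanism behind the policy observation above; and for moderate agents a high enough cost shuts off sharing altogether.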
“The role of social networks in shaping thoughts and influencing behavior has been widely noted. Empirical research by Sinan Aral and his colleagues at MIT shows that false news spreads more widely than true news. In their new paper, Ali Jadbabaie and his colleagues give us an elegant model, and an explanation, for this puzzle,” says Sanjeev Goyal, professor of economics at the University of Cambridge, who was not involved in this research.
This work was supported by an Army Research Office Multidisciplinary University Research Initiative grant and a Vannevar Bush fellowship from the Office of the Secretary of Defense.