Our latest paper “Neutral bots probe political bias on social media” by Wen Chen, Diogo Pacheco, Kai-Cheng Yang & Fil Menczer just came out in Nature Communications. We find strong evidence of political bias on Twitter, but not the kind many assume: (1) it is conservative rather than liberal bias, and (2) it results from user interactions (and abuse) rather than platform algorithms. We tracked neutral “drifter” bots to probe political biases. In the figure, the drifters appear in yellow, with a sample of their friends and followers colored according to political alignment. Large nodes are accounts that share many low-credibility links.
UPDATE: This paper is ranked #3 most read among all articles published by Nature Communications in 2018
Analysis by CNetS researchers of information shared on Twitter during the 2016 U.S. presidential election has found that social bots played a disproportionate role in spreading misinformation online. The study, published in the journal Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017 — a period that spans the end of the 2016 presidential primaries and the presidential inauguration on Jan. 20, 2017. Among the findings: A mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the low-credibility information on the network. These accounts were also responsible for 34 percent of all articles shared from low-credibility sources. The study also found that bots played a major role in promoting low-credibility content in the first few moments before a story went viral.
UPDATE (21 Dec 2016): we just launched Hoaxy, our open platform to visualize the online spread of claims and fact checking.
Did more people see #thedress as blue and black or white and gold? How many Twitter users wanted pop star Katy Perry to take the #icebucketchallenge? The power to explore online social media movements — from the pop cultural to the political — with the same algorithmic sophistication as top experts in the field is now available to journalists, researchers and members of the public from a free, user-friendly online software suite released today by scientists at Indiana University. The Web-based tools, called the Observatory on Social Media, or “OSoMe” (pronounced “awesome”), give anyone with an Internet connection the power to analyze online trends, memes and other online bursts of viral activity. An academic pre-print paper on the tools is available in the open-access journal PeerJ.
“This software and data mark a major goal in our work on Internet memes and trends over the past six years,” said Filippo Menczer, director of the Center for Complex Networks and Systems Research and a professor in the IU School of Informatics and Computing. “We are beginning to learn how information spreads in social networks, what causes a meme to go viral and what factors affect the long-term survival of misinformation online. The observatory provides an easy way to access these insights from a large, multi-year dataset.” Read more.
In an interview aired on the ABC (Australian) evening news program “The World” on April 4, 2016, Filippo Menczer discussed with host Beverley O’Connor how information and misinformation spread throughout the Internet and the roles of network structure and social bubbles in determining meme virality. Video here.
The DESPIC team at the Center for Complex Networks and Systems Research (CNetS) presented a demo of a new tool named BotOrNot at a DoD meeting held in Arlington, Virginia on April 23-25, 2014. BotOrNot (truthy.indiana.edu/botornot) is a tool to automatically detect whether a given Twitter user is a social bot or a human. Trained on Twitter bots collected by our lab and the infolab at Texas A&M University, BotOrNot analyzes over a thousand features from the user’s friendship network, content, and temporal information in real time and estimates the degree to which the account may be a bot. In addition to the demo, the DESPIC team (including colleagues at the University of Michigan) presented several posters on Scalable Architecture for Social Media Observatory, Meme Clustering in Streaming Data, Persuasion Detection in Social Streams, High-Resolution Anomaly Detection in Social Streams, and Early Detection and Analysis of Rumors. See more coverage of BotOrNot on PCWorld, IDS, BBC, Politico, and MIT Technology Review.
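The shape of this kind of feature-based scoring can be illustrated with a minimal sketch. The features, weights, and account fields below are hypothetical stand-ins — the real BotOrNot evaluates over a thousand learned features — but the computation has the same outline: extract per-account features, combine them, and map the result to a bot-likeness score between 0 and 1.

```python
import math

def extract_features(account):
    """Derive a few illustrative features from raw account counts.
    (Hypothetical fields; the real tool uses network, content, and
    temporal features.)"""
    return {
        "tweets_per_day": account["tweet_count"] / max(account["age_days"], 1),
        "followers_to_friends": account["followers"] / max(account["friends"], 1),
        "retweet_fraction": account["retweets"] / max(account["tweet_count"], 1),
    }

# Hand-set illustrative weights; a real classifier learns these from
# labeled bot/human accounts.
WEIGHTS = {"tweets_per_day": 0.02, "followers_to_friends": -0.5, "retweet_fraction": 2.0}
BIAS = -1.0

def bot_score(account):
    """Return a value in (0, 1): the estimated degree of bot-likeness."""
    feats = extract_features(account)
    z = BIAS + sum(WEIGHTS[k] * v for k, v in feats.items())
    return 1 / (1 + math.exp(-z))
```

On this toy scale, an account that tweets constantly, follows far more users than follow it back, and mostly retweets scores near 1, while a typical human account scores low.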
Congratulations to Lilian Weng, who successfully defended her Informatics PhD dissertation titled Information diffusion on online social networks. The thesis provides insights into information diffusion on online social networks from three aspects: people who share information, features of transmissible content, and the mutual effects between network structure and diffusion process. The first part delves into the role of limited human attention. The second part of Dr. Weng’s dissertation investigates properties of transmissible content, particularly in topic space. Finally, the thesis presents studies of how network structure, particularly community structure, influences the propagation of Internet memes and how the information flow in turn affects social link formation. Dr. Weng’s work contributes to a better and more comprehensive understanding of information diffusion in online socio-technical systems and yields applications to viral marketing, advertisement, and social media analytics. Congratulations from her colleagues and committee members: Alessandro Flammini, YY Ahn, Steve Myers, and Fil Menczer!
Congratulations to Przemyslaw Grabowicz, Luca Aiello, and Fil Menczer for winning the WICI Data Challenge. A prize of $10,000 CAD accompanies this award from the Waterloo Institute for Complexity and Innovation at the University of Waterloo. The Challenge called for tools and methods that improve the exploration, analysis, and visualization of complex-systems data. The winning entry, titled Fast visualization of relevant portions of large dynamic networks, is an algorithm that selects subsets of nodes and edges that best represent an evolving graph and visualizes it either by creating a movie, or by streaming it to an interactive network visualization tool. The algorithm is deployed in the movie generation tool of the Truthy system, which allows users to create, in near-real time, YouTube videos that illustrate the spread and co-occurrence of memes on Twitter. Przemek and Luca worked on this project while visiting CNetS in 2011 and collaborating with the Truthy team. Bravo!
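A minimal sketch of the idea — not the winning algorithm itself: select the nodes that best represent each snapshot (here, simply the highest-degree ones; the actual entry uses a more sophisticated relevance criterion), keep only the edges among them, and emit one filtered frame per time step, ready to render as a movie or stream to a visualization tool.

```python
from collections import Counter

def top_k_subgraph(edges, k):
    """Keep the k highest-degree nodes and the edges among them."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    keep = {n for n, _ in degree.most_common(k)}
    return [(u, v) for u, v in edges if u in keep and v in keep]

def filter_stream(snapshots, k):
    """Apply the filter to each time step of an evolving graph."""
    return [top_k_subgraph(edges, k) for edges in snapshots]
```

The payoff is that each frame stays small enough to lay out and animate in near-real time, even when the underlying network has millions of edges.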
Read our latest paper titled Social Dynamics of Science in Nature Scientific Reports. Authors Xiaoling Sun, Jasleen Kaur, Staša Milojević, Alessandro Flammini & Filippo Menczer ask: How do scientific disciplines emerge? No quantitative model to date allows us to validate competing theories on the different roles of endogenous processes, such as social collaborations, and exogenous events, such as scientific discoveries. Here we propose an agent-based model in which the evolution of disciplines is guided mainly by social interactions among agents representing scientists. Disciplines emerge from splitting and merging of social communities in a collaboration network. We find that this social model can account for a number of stylized facts about the relationships between disciplines, scholars, and publications. These results provide strong quantitative support for the key role of social interactions in shaping the dynamics of science. While several “science of science” theories exist, this is the first account of the emergence of disciplines validated against empirical data.
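The splitting-and-merging mechanism can be sketched in a toy simulation. This is an illustrative stand-in, not the paper’s model (which detects communities in an evolving collaboration network): here disciplines are simply lists of agents, agents occasionally switch fields, oversized communities split in two, and undersized ones are absorbed into a larger one. All parameters are arbitrary.

```python
import random

def simulate_disciplines(n_agents=100, steps=300, split_size=40, merge_size=4, seed=7):
    """Toy split/merge dynamics: start from one field and let
    disciplines emerge and dissolve over time."""
    rng = random.Random(seed)
    disciplines = [list(range(n_agents))]
    for _ in range(steps):
        # Social move: a random agent switches to another discipline.
        if len(disciplines) > 1:
            src, dst = rng.sample(range(len(disciplines)), 2)
            if len(disciplines[src]) > 1:
                agent = disciplines[src].pop(rng.randrange(len(disciplines[src])))
                disciplines[dst].append(agent)
        # Split oversized communities in two.
        new = []
        for d in disciplines:
            if len(d) >= split_size:
                rng.shuffle(d)
                new.append(d[: len(d) // 2])
                new.append(d[len(d) // 2 :])
            else:
                new.append(d)
        disciplines = new
        # Absorb undersized communities into a random larger one.
        big = [d for d in disciplines if len(d) >= merge_size]
        for d in disciplines:
            if len(d) < merge_size and big:
                rng.choice(big).extend(d)
        disciplines = [d for d in disciplines if len(d) >= merge_size]
    return disciplines
```

Even this crude version reproduces the qualitative picture the paper starts from: a single field fragments into several coexisting communities whose sizes drift as members move between them.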
Research by our Truthy team was recently featured in New Scientist, USA Today, and the cover story of Science News. The Truthy project, developed by CNetS researchers and doctoral students, aims to study the factors affecting the spread of information — and misinformation — in social media.
The Truthy site charts tweet sentiment and volume related to themes such as social movements and news. It also monitors Twitter activity to build interactive networks that let visitors visualize the diffusion networks of memes, identify the most influential information spreaders, and explore those influential feeds and other information about their online activity, such as sentiment and language. Other tools let you map the geo-temporal diffusion of memes, generate YouTube movies that display how hashtags emerge and connect, and download data directly from Twitter. With these analytics, one can begin to ask questions such as: How does sentiment change in response to events and memes? What memes survive over time? Who are the most influential users on a particular topic?
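One of these analytics — ranking the most influential spreaders on a topic — can be sketched in a few lines. The data layout here is an assumption for illustration, not Truthy’s actual pipeline: given retweet records, count, per hashtag, how often each account’s posts were retweeted, and use that as a simple proxy for influence.

```python
from collections import Counter, defaultdict

def rank_spreaders(retweets):
    """retweets: iterable of (retweeter, original_poster, hashtag) records.
    Returns, per hashtag, accounts ranked by how often their posts
    were retweeted."""
    influence = defaultdict(Counter)
    for retweeter, source, tag in retweets:
        influence[tag][source] += 1
    return {tag: counts.most_common() for tag, counts in influence.items()}
```

In a real diffusion network the edges would also carry who retweeted whom, so the same records support both the influence ranking and the interactive network visualizations described above.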
For more press coverage go to the Truthy press page.