CNetS and the Observatory on Social Media invite applications for a Postdoctoral Fellow position. The anticipated start date is August 1, 2022. The position is initially for 12 months and can be renewed for up to 24 additional months, depending on performance and the availability of funds.
The Postdoctoral Fellow will work with Alessandro Flammini, Filippo Menczer, and collaborators at IU and other universities on sponsored research related to online influence campaigns, at the intersection of machine learning, network science, data science, and computational social science. The Fellow will be expected to coordinate a team of graduate students, conduct research on significant projects in the areas of online information diffusion and manipulation, present work in progress at professional conferences and sponsored workshops, and assist with the development of funding proposals and scientific papers.
Our latest paper “Neutral bots probe political bias on social media” by Wen Chen, Diogo Pacheco, Kai-Cheng Yang & Fil Menczer just came out in Nature Communications. We find strong evidence of political bias on Twitter, though not the kind many assume: (1) it is conservative rather than liberal bias, and (2) it results from user interactions (and abuse) rather than platform algorithms. We tracked neutral “drifter” bots to probe political biases. In the figure, the drifters appear in yellow, along with a sample of their friends and followers colored according to political alignment. Large nodes are accounts sharing many low-credibility links.
UPDATE: This paper is ranked #3 most read among all articles published by Nature Communications in 2018
Analysis by CNetS researchers of information shared on Twitter during the 2016 U.S. presidential election has found that social bots played a disproportionate role in spreading misinformation online. The study, published in the journal Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017 — a period that spans the end of the 2016 presidential primaries and the presidential inauguration on Jan. 20, 2017. Among the findings: A mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the low-credibility information on the network. These accounts were also responsible for 34 percent of all articles shared from low-credibility sources. The study also found that bots played a major role in promoting low-credibility content during the first few moments before a story goes viral. Continue reading Twitter bots play disproportionate role spreading misinformation→
Did more people see #thedress as blue and black or white and gold? How many Twitter users wanted pop star Katy Perry to take the #icebucketchallenge? The power to explore online social media movements — from the pop cultural to the political — with the same algorithmic sophistication as top experts in the field is now available to journalists, researchers and members of the public from a free, user-friendly online software suite released today by scientists at Indiana University. The Web-based tools, called the Observatory on Social Media, or “OSoMe” (pronounced “awesome”), provide anyone with an Internet connection the power to analyze online trends, memes and other online bursts of viral activity. An academic pre-print paper on the tools is available in the open-access journal PeerJ.
“This software and data mark a major milestone in our work on Internet memes and trends over the past six years,” said Filippo Menczer, director of the Center for Complex Networks and Systems Research and a professor in the IU School of Informatics and Computing. “We are beginning to learn how information spreads in social networks, what causes a meme to go viral and what factors affect the long-term survival of misinformation online. The observatory provides an easy way to access these insights from a large, multi-year dataset.” Read more.
In an interview aired on the ABC (Australian) evening news program “The World” on April 4, 2016, Filippo Menczer discussed with host Beverley O’Connor how information and misinformation spread throughout the Internet and the roles of network structure and social bubbles in determining meme virality. Video here.
The DESPIC team at the Center for Complex Networks and Systems Research (CNetS) presented a demo of a new tool named BotOrNot at a DoD meeting held in Arlington, Virginia on April 23-25, 2014. BotOrNot (truthy.indiana.edu/botornot) is a tool to automatically detect whether a given Twitter user is a social bot or a human. Trained on Twitter bots collected by our lab and the infolab at Texas A&M University, BotOrNot analyzes over a thousand features from the user’s friendship network, content, and temporal information in real time and estimates the degree to which the account may be a bot. In addition to the demo, the DESPIC team (including colleagues at the University of Michigan) presented several posters on Scalable Architecture for Social Media Observatory, Meme Clustering in Streaming Data, Persuasion Detection in Social Streams, High-Resolution Anomaly Detection in Social Streams, and Early Detection and Analysis of Rumors. See more coverage of BotOrNot on PCWorld, IDS, BBC, Politico, and MIT Technology Review.
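To give a flavor of the feature-based approach behind a tool like BotOrNot, here is a minimal, self-contained sketch — not the actual BotOrNot pipeline, which uses over a thousand features and a trained classifier. The two features, the synthetic "accounts," and the centroid-based scoring rule below are all illustrative assumptions chosen for brevity.

```python
import random

random.seed(42)

def synthetic_account(is_bot):
    """Toy feature vector: [tweets_per_day, retweet_fraction].
    Bots in this made-up dataset post faster and retweet more."""
    if is_bot:
        return [random.uniform(50, 500), random.uniform(0.7, 1.0)]
    return [random.uniform(0, 50), random.uniform(0.0, 0.5)]

# Labeled "training" accounts (synthetic, for illustration only).
bots = [synthetic_account(True) for _ in range(200)]
humans = [synthetic_account(False) for _ in range(200)]

def centroid(points):
    # Component-wise mean of a list of feature vectors.
    return [sum(col) / len(points) for col in zip(*points)]

bot_center, human_center = centroid(bots), centroid(humans)

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def bot_score(features):
    """Score in [0, 1]: relative closeness to the bot centroid,
    where values near 1 mean more bot-like."""
    d_bot = dist(features, bot_center)
    d_human = dist(features, human_center)
    return d_human / (d_bot + d_human)

# A fast-posting, retweet-heavy account scores as bot-like.
score = bot_score([300, 0.9])
```

A real system would replace the two toy features with network, content, and temporal signals, and the centroid rule with a supervised classifier trained on labeled bot and human accounts.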
Congratulations to Lilian Weng, who successfully defended her Informatics PhD dissertation titled Information diffusion on online social networks. The thesis provides insights into information diffusion on online social networks from three aspects: people who share information, features of transmissible content, and the mutual effects between network structure and the diffusion process. The first part delves into the role of limited human attention. The second part of Dr. Weng’s dissertation investigates properties of transmissible content, particularly in the topic space. Finally, the thesis presents studies of how network structure, particularly community structure, influences the propagation of Internet memes and how the information flow in turn affects social link formation. Dr. Weng’s work contributes to a better and more comprehensive understanding of information diffusion in online socio-technical systems and yields applications to viral marketing, advertisement, and social media analytics. Congratulations from her colleagues and committee members: Alessandro Flammini, YY Ahn, Steve Myers, and Fil Menczer!
Read our latest paper titled Social Dynamics of Science in Scientific Reports. Authors Xiaoling Sun, Jasleen Kaur, Staša Milojević, Alessandro Flammini & Filippo Menczer ask: How do scientific disciplines emerge? No quantitative model to date allows us to validate competing theories on the different roles of endogenous processes, such as social collaborations, and exogenous events, such as scientific discoveries. Here we propose an agent-based model in which the evolution of disciplines is guided mainly by social interactions among agents representing scientists. Disciplines emerge from splitting and merging of social communities in a collaboration network. We find that this social model can account for a number of stylized facts about the relationships between disciplines, scholars, and publications. These results provide strong quantitative support for the key role of social interactions in shaping the dynamics of science. While several “science of science” theories exist, this is the first account of the emergence of disciplines validated against empirical data.
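As a loose illustration of the split-and-merge idea — not the paper’s actual model, whose dynamics are driven by a collaboration network — here is a toy simulation in which communities of "scientists" repeatedly fragment and fuse. The population size, probabilities, and split/merge rules are all illustrative assumptions.

```python
import random

random.seed(1)

# Start with a single seed community of 100 "scientists".
communities = [list(range(100))]

def split(community):
    # A crowded community fragments into two sub-disciplines.
    random.shuffle(community)
    mid = len(community) // 2
    return community[:mid], community[mid:]

for step in range(200):
    if random.random() < 0.6 and any(len(c) >= 4 for c in communities):
        big = max(communities, key=len)
        communities.remove(big)
        communities.extend(split(big))
    elif len(communities) >= 2:
        # Two communities with overlapping interests merge.
        a, b = random.sample(communities, 2)
        communities.remove(a)
        communities.remove(b)
        communities.append(a + b)

# Agents are conserved; only the community structure changes.
total_agents = sum(len(c) for c in communities)
```

In the paper’s model the split and merge events are driven by social interactions on a collaboration network rather than the random coin flips used here; the sketch only shows how a population can reorganize into a changing set of disciplines under such dynamics.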