A preprint of the paper “Anatomy of an AI-powered malicious social botnet” by Yang and Menczer was posted on arXiv. Concerns have been raised that large language models (LLMs) could be used to produce fake content with deceptive intent, although evidence thus far has remained anecdotal. This paper presents a case study of a coordinated inauthentic network of over a thousand fake Twitter accounts that employ ChatGPT to post machine-generated content and stolen images, and to engage with each other through replies and retweets. The ChatGPT-generated content promotes suspicious crypto and news websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to discriminate between them and human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots and have been covered by Tech Policy Press, Business Insider, Wired, and Mashable, among others. And to no one’s surprise, versions of these articles likely summarized by ChatGPT already appear on plagiarized “news websites.”
Our latest paper “Neutral bots probe political bias on social media” by Wen Chen, Diogo Pacheco, Kai-Cheng Yang & Fil Menczer just came out in Nature Communications. We find strong evidence of political bias on Twitter, but not of the kind many assume: (1) it is conservative rather than liberal bias, and (2) it results from user interactions (and abuse) rather than platform algorithms. We tracked neutral “drifter” bots to probe political biases. In the figure, the drifters appear in yellow, and a sample of their friends and followers is colored according to political alignment. Large nodes are accounts sharing many low-credibility links.
CNetS alumnus Mihai Avram is the recipient of the 2020 Indiana University Distinguished Master’s Thesis Award for his work on Hoaxy and Fakey: Tools to Analyze and Mitigate the Spread of Misinformation in Social Media. This award recognizes a “truly outstanding” Master’s thesis based on criteria such as originality, documentation, significance, accuracy, organization, and style. Some of the findings in Mihai’s thesis have recently been published in the paper Exposure to social engagement metrics increases vulnerability to misinformation, in The Harvard Kennedy School Misinformation Review. Congratulations Mihai!
On 15 September 2020, The Washington Post published an article by Isaac Stanley-Becker titled “Pro-Trump youth group enlists teens in secretive campaign likened to a ‘troll farm,’ prompting rebuke by Facebook and Twitter.” The article reported on a network of accounts run by teenagers in Phoenix, who were coordinated and paid by an affiliate of conservative youth organization Turning Point USA. These accounts posted identical messages amplifying political narratives, including false claims about COVID-19 and electoral fraud. The same campaign was run on Twitter and Facebook, and both platforms suspended some of the accounts following queries from Stanley-Becker. The report was based in part on a preliminary analysis we conducted at the request of The Post. In this brief we provide further details about our analysis.
We are excited to announce version 1.3 of BotSlayer, our OSoMe cloud tool that lets journalists, researchers, citizens, and civil society organizations track narratives and detect potentially coordinated inauthentic information networks on Twitter in real time. Improvements and new features include better stability, a new alert system, a Mac installer, and many additions to the interface. This version is released in time for those who would like to use BotSlayer to monitor #Election2020 manipulation.
Indiana University’s Observatory on Social Media, funded in part last year with a $3 million grant from the John S. and James L. Knight Foundation, has named two new Knight Fellows. Matthew DeVerna and Harry Yaojun Yan will help advance the center’s ongoing investigations into how information and misinformation spread online. The Observatory on Social Media, or OSoMe (pronounced “awesome”), is a collaboration between CNetS in the Luddy School of Informatics, Computing and Engineering; The Media School; and the IU Network Science Institute. Congratulations to Harry and Matt! More…
Indiana University will establish a $6 million research center to study the role of media and technology in society. With leadership by CNetS faculty, the Observatory on Social Media will investigate how information and misinformation spread online. It will also provide students, journalists and citizens with resources, data and training to identify and counter attempts to intentionally manipulate public opinion. Major support for the center comes from the John S. and James L. Knight Foundation, which will contribute $3 million, as well as funds from the university. The center is a collaboration between the IU School of Informatics, Computing and Engineering, The Media School and the IU Network Science Institute. More…
UPDATE: This paper is ranked #3 most read among all articles published by Nature Communications in 2018
Analysis by CNetS researchers of information shared on Twitter during the 2016 U.S. presidential election has found that social bots played a disproportionate role in spreading misinformation online. The study, published in the journal Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017 — a period that spans the end of the 2016 presidential primaries and the presidential inauguration on Jan. 20, 2017. Among the findings: a mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the low-credibility information on the network. These accounts were also responsible for 34 percent of all articles shared from low-credibility sources. The study also found that bots played a major role in promoting low-credibility content in the first few moments before stories went viral.
Congratulations to Rion Correia, who successfully defended his PhD dissertation on Prediction of Drug Interaction and Adverse Reactions, with data from Electronic Health Records, Clinical Reporting, Scientific Literature, and Social Media, using Complexity Science Methods. Dr. Correia’s research used network science, machine learning, and data science to uncover population-level associations of drugs and symptoms, useful for public health surveillance. His findings show that social media (Instagram and Twitter) and the electronic health records of an entire city in southern Brazil are very useful for revealing how the drug interaction phenomenon varies across distinct groups. For instance, he identified gender biases and specific communities of interest in chronic diseases such as epilepsy and depression. In addition to complex networks and systems, his dissertation contributes to the fields of biomedical informatics and precision public health by leveraging heterogeneous data sources at multiple levels to understand population-level and individual differences in pharmacology and other public health problems.
Congratulations to Dimitar Nikolov, who successfully defended his PhD dissertation on Information Exposure Biases in Online Behaviors. Dr. Nikolov’s research explored the unintentional biases introduced by the filtering, ranking, and recommendation algorithms that mediate our online consumption of information. His findings show that our reliance on modern online technologies limits exposure to diverse points of view and makes us vulnerable to misinformation. In particular, he analyzed two massive Web traffic datasets to quantify the popularity and homogeneity bias of several popular online platforms, including social media, email, personalized news, and search engines. He also leveraged Twitter data to characterize the link between political partisanship and vulnerability to online pollution, such as fake news, conspiracy theories, and junk science. His dissertation contributes to the field of computational social science by putting the study of bias in information consumption — and of derived phenomena like political polarization, echo chambers, and online pollution — on a firmer quantitative foundation.