A preprint of the paper “Anatomy of an AI-powered malicious social botnet” by Yang and Menczer was posted on arXiv. Concerns have been raised that large language models (LLMs) could be used to produce fake content with deceptive intent, although evidence so far has been anecdotal. This paper presents a case study of a coordinated inauthentic network of more than a thousand fake Twitter accounts that employ ChatGPT to post machine-generated content and stolen images, and to engage with each other through replies and retweets. The ChatGPT-generated content promotes suspicious crypto and news websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to distinguish them from human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots. The paper has been covered by Tech Policy Press, Business Insider, Wired, and Mashable, among others. And to no one’s surprise, versions of these articles, likely summarized by ChatGPT, already appear on plagiarized “news websites.”
Two of our latest works have been accepted to the 15th ACM Web Science Conference (WebSci’23)! Web Science is an interdisciplinary field that studies socio-technical systems, particularly on the web, and ACM WebSci is the premier conference for Web Science research.
Today the Observatory on Social Media and CNetS launched a revamped research tool to give journalists, other researchers, and the public a broad view of what’s happening on social media. The tool helps overcome some of the biggest challenges of interpreting information flow online, which is often difficult to understand because it’s so fast-paced and experienced from the perspective of an individual account’s newsfeed.
Our latest paper “Neutral bots probe political bias on social media” by Wen Chen, Diogo Pacheco, Kai-Cheng Yang, and Fil Menczer has just come out in Nature Communications. We find strong evidence of political bias on Twitter, but not of the kind many assume: (1) it is conservative rather than liberal bias, and (2) it results from user interactions (and abuse) rather than platform algorithms. We tracked neutral “drifter” bots to probe political biases. In the figure, the drifters appear in yellow, with a sample of their friends and followers colored according to political alignment. Large nodes are accounts sharing many low-credibility links.
Our 2011 paper Political Polarization on Twitter was recognized at the 2021 AAAI International Conference on Web and Social Media (ICWSM) with the Test of Time Award. First author Mike Conover, who was then a PhD student and is now Director of Machine Learning Engineering at Workday, accepted the award at a ceremony at the end of the ICWSM conference. Other authors are Jacob Ratkiewicz (now a Tech Lead at Google), Bruno Gonçalves (now VP at JPMorgan Chase), Matt Francisco (now Lecturer at IU Luddy School), Alessandro Flammini (Professor of Informatics at IU Luddy), and Filippo Menczer (Distinguished Professor and Director of the Observatory on Social Media at IU).
CNetS alumnus Mihai Avram is the recipient of the 2020 Indiana University Distinguished Master’s Thesis Award for his work on Hoaxy and Fakey: Tools to Analyze and Mitigate the Spread of Misinformation in Social Media. This award recognizes a “truly outstanding” Master’s thesis based on criteria such as originality, documentation, significance, accuracy, organization, and style. Some of the findings in Mihai’s thesis have recently been published in the paper Exposure to social engagement metrics increases vulnerability to misinformation, in The Harvard Kennedy School Misinformation Review. Congratulations Mihai!
We are excited to announce version 1.3 of BotSlayer, our OSoMe cloud tool that lets journalists, researchers, citizens, and civil society organizations track narratives and detect potentially coordinated inauthentic information networks on Twitter in real time. Improvements and new features include better stability, a new alert system, a Mac installer, and many additions to the interface. This version is released in time for those who would like to use BotSlayer to monitor #Election2020 manipulation.
In September 2020, we introduced a major upgrade to Botometer. This post explains the changes and the motivations behind them.
In the groundbreaking new PBS series “NetWorld,” Niall Ferguson visits network theorists, social scientists, and data analysts (including at CNetS!) to explore the intersection of social media, technology, and the spread of cultural movements. Reviewing classic experiments and cutting-edge research, NetWorld demonstrates how human behavior, disruptive technology, and profit can energize ideas and communication, ultimately changing the world.