A preprint of the paper titled “Anatomy of an AI-powered malicious social botnet” by Yang and Menczer was posted on arXiv. Concerns have been raised that large language models (LLMs) could be used to produce fake content with deceptive intent, although evidence thus far remains anecdotal. This paper presents a case study of a coordinated inauthentic network of over a thousand fake Twitter accounts that employ ChatGPT to post machine-generated content and stolen images, and to engage with each other through replies and retweets. The ChatGPT-generated content promotes suspicious crypto and news websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to discriminate between them and human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots and have been covered by Tech Policy Press, Business Insider, Wired, and Mashable, among others. And to no one’s surprise, versions of these articles likely summarized by ChatGPT already appear on plagiarized “news websites.”
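One simple coordination signal of the kind alluded to above is many accounts posting near-identical text. The toy sketch below (hypothetical data and threshold, not the authors’ actual detection pipeline) groups accounts by normalized message text and flags clusters above a minimum size:

```python
from collections import defaultdict

def find_coordinated_clusters(posts, min_cluster_size=3):
    """Group accounts that published (near-)identical text.

    posts: iterable of (account, text) pairs.
    Returns a list of account sets that shared the same message.
    Toy illustration only; real coordination detection uses many
    more signals (timing, retweet networks, shared URLs, etc.).
    """
    by_text = defaultdict(set)
    for account, text in posts:
        # Light normalization so case and whitespace changes still match.
        key = " ".join(text.lower().split())
        by_text[key].add(account)
    return [accounts for accounts in by_text.values()
            if len(accounts) >= min_cluster_size]

# Hypothetical example: three accounts pushing the same crypto message.
posts = [
    ("@acct1", "Check out this amazing crypto site!"),
    ("@acct2", "check out this amazing  crypto site!"),
    ("@acct3", "Check out this amazing crypto site!"),
    ("@human", "Lovely weather in Bloomington today."),
]
clusters = find_coordinated_clusters(posts)
```

Because the flagged signal is behavioral rather than textual, this kind of check still works even when each individual post is fluent machine-generated text that content classifiers cannot distinguish from human writing.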
Today the Observatory on Social Media and CNetS launched a revamped research tool to give journalists, other researchers, and the public a broad view of what’s happening on social media. The tool helps overcome some of the biggest challenges of interpreting information flow online, which is often difficult to understand because it’s so fast-paced and experienced from the perspective of an individual account’s newsfeed. Continue reading New network visualization tool maps information spread
Our latest paper “Neutral bots probe political bias on social media” by Wen Chen, Diogo Pacheco, Kai-Cheng Yang & Fil Menczer just came out in Nature Communications. We find strong evidence of political bias on Twitter, but not the kind many assume: (1) it is conservative rather than liberal bias, and (2) it results from user interactions (and abuse) rather than platform algorithms. We tracked neutral “drifter” bots to probe political biases. In the figure, we see the drifters in yellow and a sample of their friends and followers colored according to political alignment. Large nodes are accounts sharing a lot of low-credibility links. Continue reading Probing political bias on Twitter with drifter bots
Our 2011 paper Political Polarization on Twitter was recognized at the 2021 AAAI International Conference on Web and Social Media (ICWSM) with the Test of Time Award. First author Mike Conover, who was then a PhD student and is now Director of Machine Learning Engineering at Workday, accepted the award at a ceremony at the end of the ICWSM conference. Other authors are Jacob Ratkiewicz (now a Tech Lead at Google), Bruno Gonçalves (now VP at JPMorgan Chase), Matt Francisco (now Lecturer at IU Luddy School), Alessandro Flammini (Professor of Informatics at IU Luddy), and Filippo Menczer (Distinguished Professor and Director of the Observatory on Social Media at IU). Continue reading ICWSM Test of Time Award
On 15 September 2020, The Washington Post published an article by Isaac Stanley-Becker titled “Pro-Trump youth group enlists teens in secretive campaign likened to a ‘troll farm,’ prompting rebuke by Facebook and Twitter.” The article reported on a network of accounts run by teenagers in Phoenix, who were coordinated and paid by an affiliate of conservative youth organization Turning Point USA. These accounts posted identical messages amplifying political narratives, including false claims about COVID-19 and electoral fraud. The same campaign was run on Twitter and Facebook, and both platforms suspended some of the accounts following queries from Stanley-Becker. The report was based in part on a preliminary analysis we conducted at the request of The Post. In this brief we provide further details about our analysis. Continue reading Evidence of a coordinated network amplifying inauthentic narratives in the 2020 US election
In September 2020, we introduced a major upgrade to Botometer. This post explains the changes and the motivations behind them. Continue reading Botometer V4
Researchers at CNetS, IUNI, and the Indiana University Observatory on Social Media have launched upgrades to two tools playing a major role in countering the spread of misinformation online: Hoaxy and Botometer. A third tool, Fakey, an educational game designed to make people smarter news consumers, also launches with the upgrades. Continue reading 3 new tools to study and counter online disinformation
The first global analysis of human birth-rate cycles reveals that the post-holiday “baby boom” persists across cultures and hemispheres. CNetS PhD student Ian Wood and Professors Luis Rocha and Johan Bollen, in collaboration with Joana Sá, used data science and computational social science methods to demonstrate that “Human Sexual Cycles are Driven by Culture and Match Collective Moods.” See the full article at IU News and media coverage in many venues such as The Independent, Time, Newsweek, Publico, ScienceDaily, Phys.org, The National Post, DailyMail, The Hindustan Times, Men’s Fitness, Mother Jones, Drive with Yasmeen Khan (audio interview at 17:30), etc. Discussion of the paper was a top trending topic on Reddit. Watch a short video about the research.
Thanks to support from the Indiana University Network Science Institute (IUNI) and Digital Science Center (DSC), the full content of the Twitter data repository from the Observatory on Social Media (OSoMe) is now available to all IU researchers. Tools to detect social bots, study the spread of fake news, and visualize meme diffusion networks, trends, and maps, along with APIs to access these data, have been available to the general public since mid-2016. Now, however, the IU research community can access enhanced data and content from the large collection, based on a 10% sample of all public tweets. A dedicated portal allows IU faculty and students to submit queries to the OSoMe cluster based on hashtags, URLs, keywords, geo-coordinates, and other criteria. At any time the system can search and retrieve data from the previous 18 months. We hope this resource will spur and support new research projects in all areas of computing, natural, and social sciences. Click here to read how to get access and learn more about the data, or attend our Open Science Forum!
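The kinds of queries the portal supports can be illustrated with a small local sketch. The field names and matching logic below are hypothetical, not the actual OSoMe schema or query API; the point is simply how hashtag and keyword criteria select tweet records from a sample:

```python
def match_tweet(tweet, hashtags=None, keywords=None):
    """Return True if a tweet record matches any of the given criteria.

    tweet: dict with 'text' and 'hashtags' fields (illustrative schema,
    not the real OSoMe data format). Matching is case-insensitive.
    """
    if hashtags:
        tweet_tags = {h.lower() for h in tweet.get("hashtags", [])}
        if tweet_tags & {h.lower() for h in hashtags}:
            return True
    if keywords:
        text = tweet.get("text", "").lower()
        if any(k.lower() in text for k in keywords):
            return True
    return False

# Hypothetical sample records.
sample = [
    {"text": "Counting down to the debate", "hashtags": ["Election2016"]},
    {"text": "New paper on meme diffusion", "hashtags": ["research"]},
]
hits = [t for t in sample if match_tweet(t, hashtags=["election2016"])]
```

A real query against the OSoMe cluster would of course run server-side over the 10% tweet sample, but the selection criteria (hashtags, keywords, and so on) compose in the same way.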
Among the millions of real people tweeting about the presidential race, there are also a lot of accounts operated by fake personas, or “bots.” Politicians and regular users alike use these accounts to increase their follower bases and push messages. PBS NewsHour science correspondent Miles O’Brien reports on how CNetS computer scientists can analyze Twitter handles to determine whether or not they are bots.
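To give a flavor of what analyzing a handle for bot-likeness involves, here is a deliberately simplistic heuristic over a few profile features. The feature names, thresholds, and weights are invented for illustration; real classifiers such as Botometer use over a thousand features and supervised machine learning rather than hand-tuned rules:

```python
def toy_bot_score(account):
    """Combine a few profile features into a rough 0-1 suspicion score.

    account: dict with 'followers', 'friends', 'tweets_per_day', and
    'has_default_avatar' keys (hypothetical schema). Higher = more
    bot-like. Illustration only, not a real bot detector.
    """
    score = 0.0
    friends = max(account.get("friends", 0), 1)
    if account.get("followers", 0) / friends < 0.1:
        score += 0.4  # follows many accounts, few follow back
    if account.get("tweets_per_day", 0) > 100:
        score += 0.4  # implausibly high posting rate for a human
    if account.get("has_default_avatar", False):
        score += 0.2  # never personalized the profile
    return score

# Hypothetical account exhibiting all three signals.
suspect = {"followers": 12, "friends": 2000,
           "tweets_per_day": 250, "has_default_avatar": True}
```

The value of a trained classifier over rules like these is that it learns which feature combinations actually separate labeled bot and human accounts, instead of relying on guessed thresholds.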