Tag Archives: Twitter

Anatomy of an AI-powered malicious social botnet

A preprint of the paper “Anatomy of an AI-powered malicious social botnet” by Yang and Menczer was posted on arXiv. Concerns have been raised that large language models (LLMs) could be used to produce fake content with deceptive intent, although evidence so far remains anecdotal. This paper presents a case study of a coordinated inauthentic network of over a thousand fake Twitter accounts that employ ChatGPT to post machine-generated content and stolen images, and to engage with each other through replies and retweets. The ChatGPT-generated content promotes suspicious crypto and news websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to distinguish them from human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots and have been covered by Tech Policy Press, Business Insider, Wired, and Mashable, among others. And to no one’s surprise, versions of these articles, likely summarized by ChatGPT, already appear on plagiarized “news websites.”
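The botnet was flagged by its coordination patterns rather than by its content. As a rough illustration only (not the paper’s actual method), one of the simplest coordination signals is many accounts publishing identical or near-identical text; the function name, data format, and threshold below are hypothetical:

```python
from collections import defaultdict

def find_coordinated_groups(posts, min_accounts=3):
    """Group accounts that published identical text.

    posts: iterable of (account_id, text) pairs.
    Returns a list of account-ID sets that shared the same message,
    keeping only groups of at least `min_accounts` accounts.
    """
    by_text = defaultdict(set)
    for account, text in posts:
        # Normalize lightly so trivial case/whitespace changes still match.
        key = " ".join(text.lower().split())
        by_text[key].add(account)
    return [accounts for accounts in by_text.values()
            if len(accounts) >= min_accounts]

posts = [
    ("a1", "Buy $COIN now!"), ("a2", "Buy $COIN  now!"),
    ("a3", "buy $coin now!"), ("a4", "Lovely weather today"),
]
for group in find_coordinated_groups(posts):
    print(sorted(group))  # ['a1', 'a2', 'a3']
```

Real coordination detection is far richer (shared images, retweet timing, reply networks), but exact-duplicate clustering conveys the basic idea: the signal lives in behavior across accounts, not in any single post.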

Networks tool visualization

New network visualization tool maps information spread

Today the Observatory on Social Media and CNetS launched a revamped research tool to give journalists, other researchers, and the public a broad view of what’s happening on social media. The tool helps overcome one of the biggest challenges of interpreting information flow online: it is fast-paced and usually experienced only from the perspective of an individual account’s newsfeed, which makes the bigger picture hard to see.

Continue reading New network visualization tool maps information spread
drifter bots

Probing political bias on Twitter with drifter bots

Our latest paper “Neutral bots probe political bias on social media” by Wen Chen, Diogo Pacheco, Kai-Cheng Yang & Fil Menczer just came out in Nature Communications. We find strong evidence of political bias on Twitter, but not of the kind many assume: (1) the bias is conservative rather than liberal, and (2) it results from user interactions (and abuse) rather than from platform algorithms. We tracked neutral “drifter” bots to probe these biases. In the figure, the drifters appear in yellow, with a sample of their friends and followers colored according to political alignment; large nodes are accounts sharing many low-credibility links.

Continue reading Probing political bias on Twitter with drifter bots

ICWSM Test of Time Award

Twitter echo chambers

Our 2011 paper Political Polarization on Twitter was recognized at the 2021 AAAI International Conference on Web and Social Media (ICWSM) with the Test of Time Award. First author Mike Conover, who was then a PhD student and is now Director of Machine Learning Engineering at Workday, accepted the award at a ceremony at the end of the ICWSM conference. Other authors are Jacob Ratkiewicz (now a Tech Lead at Google), Bruno Gonçalves (now VP at JPMorgan Chase), Matt Francisco (now Lecturer at IU Luddy School), Alessandro Flammini (Professor of Informatics at IU Luddy), and Filippo Menczer (Distinguished Professor and Director of the Observatory on Social Media at IU).

Continue reading ICWSM Test of Time Award

Evidence of a coordinated network amplifying inauthentic narratives in the 2020 US election


On 15 September 2020, The Washington Post published an article by Isaac Stanley-Becker titled “Pro-Trump youth group enlists teens in secretive campaign likened to a ‘troll farm,’ prompting rebuke by Facebook and Twitter.” The article reported on a network of accounts run by teenagers in Phoenix, who were coordinated and paid by an affiliate of conservative youth organization Turning Point USA. These accounts posted identical messages amplifying political narratives, including false claims about COVID-19 and electoral fraud. The same campaign was run on Twitter and Facebook, and both platforms suspended some of the accounts following queries from Stanley-Becker. The report was based in part on a preliminary analysis we conducted at the request of The Post. In this brief we provide further details about our analysis.

Continue reading Evidence of a coordinated network amplifying inauthentic narratives in the 2020 US election

3 new tools to study and counter online disinformation

Researchers at CNetS, IUNI, and the Indiana University Observatory on Social Media have launched upgrades to two tools playing a major role in countering the spread of misinformation online: Hoaxy and Botometer. A third tool, Fakey, an educational game designed to make people smarter news consumers, also launches with the upgrades. Continue reading 3 new tools to study and counter online disinformation

Cracking the stealth political influence of bots

Among the millions of real people tweeting about the presidential race, there are also a lot of automated accounts, or “bots.” Politicians and regular users alike use these accounts to inflate their follower bases and push messages. PBS NewsHour science correspondent Miles O’Brien reports on how CNetS computer scientists can analyze Twitter accounts to determine whether or not they are bots.
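Tools like Botometer score accounts with supervised machine learning over many behavioral and profile features. The toy heuristic below is NOT the actual model; it is only a sketch of the idea that bot likelihood can be scored from account features, with hypothetical field names and made-up thresholds:

```python
def toy_bot_score(account):
    """Toy heuristic bot score in [0, 1].

    NOT Botometer's real method, which trains classifiers on many
    features; this just illustrates feature-based scoring.
    account: dict with hypothetical fields `followers`, `friends`,
    `tweets_per_day`, and `default_profile_image`.
    """
    score = 0.0
    # Following far more accounts than follow back is a common bot trait.
    if account["friends"] > 10 * max(account["followers"], 1):
        score += 0.4
    # Extremely high posting volume suggests automation.
    if account["tweets_per_day"] > 100:
        score += 0.4
    # A default profile image hints at a hastily created account.
    if account["default_profile_image"]:
        score += 0.2
    return min(score, 1.0)

suspect = {"followers": 12, "friends": 4800,
           "tweets_per_day": 250, "default_profile_image": True}
print(toy_bot_score(suspect))  # 1.0
```

In practice no single feature is decisive, which is why real detectors combine hundreds of signals and learn their weights from labeled examples rather than hard-coding thresholds.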