A preprint of the paper “Anatomy of an AI-powered malicious social botnet” by Yang and Menczer was posted on arXiv. Concerns have been raised that large language models (LLMs) could be used to produce fake content with deceptive intent, although evidence so far remains anecdotal. This paper presents a case study of a coordinated inauthentic network of over a thousand fake Twitter accounts that employ ChatGPT to post machine-generated content and stolen images, and to engage with each other through replies and retweets. The ChatGPT-generated content promotes suspicious crypto and news websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to discriminate between them and human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots. The work has been covered by Tech Policy Press, Business Insider, Wired, and Mashable, among others. And to no one’s surprise, versions of these articles likely summarized by ChatGPT already appear on plagiarized “news websites.”
The CNetS poster “The Rise of Social Bots in Online Social Networks” by Emilio Ferrara, Onur Varol, Prashant Shiralkar, Clayton Davis, Filippo Menczer, and Alessandro Flammini won a Best Poster Award at CCS 2015. The poster was presented by Clayton Davis. The results will also appear in the paper “The Rise of Social Bots” to be published in Comm. ACM (in press, preprint).
The paper “Modularity and the Spread of Perturbations in Complex Dynamical Systems” by Artemy Kolchinsky, Alexander J. Gates and Luis M. Rocha, and the poster “Information Theoretic Structures of the French Revolution” by Alexander Barron, Simon DeDeo and Rebecca Spang won additional awards.
Finally, our former postdoctoral scientist Bruno Gonçalves (now tenured faculty member at Aix-Marseille Université) received a Junior Scientist Award from the Complex Systems Society for his contributions to the study of human social behavior from large-scale online attention and behavioral data. This is the second Junior Scientist Award for CNetS (the first was won by Filippo Radicchi).
Congratulations to the CNetS team!
Predicting popularity and success in cultural markets is hard due to strong inequalities and inherent unpredictability. A good example comes from the world of fashion, where industry professionals face every season the difficult challenge of guessing who will be the next season’s top models. A recent study by CNetS graduate student Jaehyuk Park, research scientist Giovanni Luca Ciampaglia (also at the IU Network Science Institute), and research scientist Emilio Ferrara (now at the University of Southern California) shows that early success in modeling can be predicted from the digital traces left by the buzz on social media such as Instagram. The study has been accepted for presentation at the 19th ACM conference on Computer-Supported Cooperative Work and Social Computing (CSCW’16). The work has been covered in the media by the MIT Technology Review, Die Welt, Fusion, and iTNews.
LinkedIn announced that YY Ahn and his team of Ph.D. students from the Center for Complex Networks and Systems Research, including Yizhi Jing, Azadeh Nematzadeh, Jaehyuk Park, and Ian Wood, is one of the 11 winners of the LinkedIn Economic Graph Challenge.
Their project, “Forecasting large-scale industrial evolution,” aims to understand the macro-evolution of industries to track businesses and emerging skills. This data would be used to forecast economic trends and guide professionals toward promising career paths.
“This is a fascinating opportunity to study the network of industries and people with unprecedented details and size. All of us are very excited to collaborate with LinkedIn and our LinkedIn mentor, Mike Conover, who is a recent Informatics PhD alumnus, on this topic,” said Ahn.
For the past four years, researchers at the Center for Complex Networks and Systems Research at the Indiana University School of Informatics and Computing have been studying the ways in which information spreads on social media networks such as Twitter. This basic research project is federally funded, like a large percentage of university research across the country.
The project, informally dubbed “Truthy,” makes use of complex computer models to analyze the sharing of information on social media to determine how popular sentiment, user influence, attention, social network structure, and other factors affect the manner in which information is disseminated. Additionally, an important goal of the Truthy project is to better understand how social media can be abused.
Since 25 Aug 2014, when a misleading article first appeared on a conservative blog, the Truthy project has come under criticism from several outlets that have misrepresented its goals, including The Kelly File and Fox and Friends broadcasts by Fox News on 26 and 28 Aug 2014. Contrary to these claims, the project targets the study of the structural patterns of information diffusion. For example, an email sent simultaneously to a million addresses is likely spam, even if we have no automatic way to determine whether its content is true or false. The assumption behind the Truthy effort is that an understanding of spreading patterns may facilitate the identification of abuse, independent of the nature or political color of the communication.
While the Truthy platform supports the study of the evolution of communication across all portions of the political spectrum, it is not informed by political partisanship. The machine learning algorithms used to identify suspicious patterns of information diffusion are entirely oblivious to the possible political partisanship of the messages.
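The content-agnostic idea described above — that abuse can show up in spreading patterns regardless of what a message says — can be sketched in code. This is a minimal illustration, not the project’s actual algorithms: the function name, features, and thresholds are all hypothetical.

```python
# Illustrative sketch: flag likely abuse from diffusion patterns alone,
# without inspecting message content. A message blasted to a huge number
# of recipients in a very short window looks like spam, whether or not
# its content is true. Thresholds here are hypothetical.
def looks_like_spam(send_events, window_seconds=60, fanout_threshold=10000):
    """send_events: list of (timestamp, recipient) pairs for one message."""
    if not send_events:
        return False
    times = sorted(t for t, _ in send_events)
    recipients = {r for _, r in send_events}
    # A burst: all sends fall within a short time window.
    burst = times[-1] - times[0] <= window_seconds
    # High fan-out: many distinct recipients.
    return burst and len(recipients) >= fanout_threshold
```

Note that the function never looks at message text — only at the shape of the diffusion, which is exactly why such methods are oblivious to the political leaning of the content.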
Timeline and updates:
8/28/2014: Despite the clarifications in this post, Fox News and others continued their attacks on our research project and on the PI personally. Their accusations are based on false claims, supported by bits of text and figures selectively extracted from our writings and presented completely out of context, in misleading ways. None of the researchers were contacted for comment before these outlandish conspiracy theories were aired and published. There is a good dose of irony in a research project that studies the diffusion of misinformation becoming the target of such a powerful disinformation machine. (The video of the first segment on “The Kelly File” with misinformation about our project was later removed from the Fox News website.)
9/3/2014: David Uberti wrote an accurate account of recent events in Columbia Journalism Review.
10/18/2014: Unfortunately, the smear campaign against our research project continues, with unsupported allegations echoed in a misleading op-ed by FCC Commissioner Ajit Pai, who did not contact any of the researchers with questions about the accuracy of his allegations.
10/22/2014: Amid news reports that the chairman of the House Science, Space and Technology Committee initiated an investigation into the NSF grant supporting our project, read our interview in the Washington Post’s Monkey Cage setting the record straight about our research.
10/24/2014: Fox News and FCC Commissioner Pai continue to spread disinformation about our research. (The video of the interview about our project, to which we were not invited, was later removed from the Fox News website.)
11/3/2014: Jeffrey Mervis covers the controversy about this project in Science. We also provided additional information about our research in a slide deck embedded at the bottom of this post.
11/4/2014: Five leading computing societies and associations (CRA, ACM, AAAI, USENIX, and SIAM) wrote a joint letter to the chairman and the committee ranking member of the House Committee on Science, Space, and Technology expressing their concern over mischaracterizations of our research.
11/7/2014: Over the past few days we have seen more coverage in Computer World, The Hill, Information Week, and Science about the reactions of the computing and science communities to the Truthy controversy.
11/11/2014: The House Science Committee Chairman sent a letter to the director of the NSF on November 10, stating that our grant “was intended to create standards for online political discussion” and that a web service developed under the grant “targeted conservative social media messages.” These allegations are false, as we have explained in this post, in the slides embedded below, and in our publications — including the one quoted in the Chairman’s letter. On the same day, the Association of American Universities released a statement on the grant inquiries by the House Science Committee.
11/21/2014: False rumors about our research continue to be spread. Some of the questions we have received suggested that our two separate project and demo websites were generating confusion, so we merged them into a redesigned research website with information and highlights about the research project, publications, demos, data, etc.
11/25/2014: Rep. Johnson and Rep. Lofgren, respectively ranking member and member of the House Committee on Science, wrote a letter to the committee chairman, Rep. Smith, in response to his accusations.
Facts about Truthy:
- Truthy is an informal nickname associated with a research project of the Center for Complex Networks and Systems Research at the IU School of Informatics and Computing. The project aims to study how information spreads on social media, such as Twitter.
- The project has focused on domains such as news, politics, social movements, scientific results, and trending social media topics. Researchers develop theoretical computer models and validate them by analyzing public data, mainly from the Twitter streaming API.
- Social media posts available through public APIs are processed without human intervention or judgment to visualize and study the spread of millions of memes. We aim to build a platform to make these analytic tools easily accessible to social scientists, reporters, and the general public.
- An important goal of the project is to help mitigate misuse and abuse of social media by better understanding how it can be abused — for example, when social bots are used to create the appearance of human-generated communication (hence the name “truthy”). We study whether it is possible to automatically differentiate between organic content and so-called “astroturf.”
- Examples of research to date include analyses of geographic and temporal patterns in movements like Occupy Wall Street, societal unrest in Turkey, the polarization of online political discourse, the use of social media data to predict election outcomes and stock market movements, and the geographic diffusion of trending topics.
- On the more theoretical side, we have studied how individuals’ limited attention span affects what information we propagate and what social connections we make, and how the structure of social networks can help predict which memes are likely to become viral.
- Hundreds of researchers across the U.S. and the world are studying similar issues based on the same data and with analogous goals — these topics were studied well before the advent of social media. In the U.S., these research efforts are supported not only by the NSF but also by other federal funding agencies such as DoD, DARPA, and IARPA.
- The results of our research have been covered widely in the press, published in top peer-reviewed journals, and presented at top conferences worldwide. All papers are publicly available.
Finally, the Truthy research project is not and never was:
- a political watchdog
- a database to be used by the federal government to monitor the activities of those who oppose its policies
- a government probe of social media
- an attempt to suppress free speech or limit political speech or develop standards for online political speech
- a way to define “misinformation”
- a partisan political effort
- a system targeting political messages and commentary connected to conservative groups
- a mechanism to terminate any social media accounts
- a database tracking hate speech
The DESPIC team at the Center for Complex Networks and Systems Research (CNetS) presented a demo of a new tool named BotOrNot at a DoD meeting held in Arlington, Virginia on April 23-25, 2014. BotOrNot (truthy.indiana.edu/botornot) is a tool to automatically detect whether a given Twitter user is a social bot or a human. Trained on Twitter bots collected by our lab and the Infolab at Texas A&M University, BotOrNot analyzes over a thousand features from the user’s friendship network, content, and temporal information in real time and estimates the degree to which the account may be a bot. In addition to the demo, the DESPIC team (including colleagues at the University of Michigan) presented several posters on Scalable Architecture for Social Media Observatory, Meme Clustering in Streaming Data, Persuasion Detection in Social Streams, High-Resolution Anomaly Detection in Social Streams, and Early Detection and Analysis of Rumors. See more coverage of BotOrNot on PCWorld, IDS, BBC, Politico, and MIT Technology Review.
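The feature-based scoring idea behind BotOrNot can be sketched in a few lines. The real system extracts over a thousand network, content, and temporal features and feeds them to a trained supervised classifier; the toy function below uses three hand-picked features with hypothetical weights purely to illustrate the approach.

```python
import math

# Illustrative bot-likelihood score in the spirit of BotOrNot's
# friendship-network and temporal feature classes. The features,
# weights, and logistic combination below are hypothetical; the
# actual system learns its model from labeled bot/human accounts.
def bot_score(followers, friends, tweets_per_day, inter_tweet_std):
    """Return a score in (0, 1); higher means more bot-like."""
    # Bots often follow many accounts but attract few followers.
    ratio = friends / max(followers, 1)
    # Bots often tweet at very regular intervals (low timing variance).
    regularity = 1.0 / (1.0 + inter_tweet_std)
    # High posting volume, capped at 1.
    volume = min(tweets_per_day / 100.0, 1.0)
    # Logistic link keeps the combined score in (0, 1).
    z = 1.5 * math.log1p(ratio) + 2.0 * regularity + volume - 2.0
    return 1.0 / (1.0 + math.exp(-z))
```

An account following 5,000 users with 50 followers and clockwork posting scores near 1, while a typical human profile scores well below 0.5 — the same qualitative behavior a trained classifier exhibits on its strongest features.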
Congratulations to Lilian Weng, who successfully defended her Informatics PhD dissertation titled Information diffusion on online social networks. The thesis provides insights into information diffusion on online social networks from three aspects: the people who share information, the features of transmissible content, and the mutual effects between network structure and the diffusion process. The first part examines the role of limited human attention. The second part of Dr. Weng’s dissertation investigates properties of transmissible content, particularly in the topic space. Finally, the thesis presents studies of how network structure, particularly community structure, influences the propagation of Internet memes, and how the information flow in turn affects social link formation. Dr. Weng’s work contributes to a better and more comprehensive understanding of information diffusion in online socio-technical systems and yields applications to viral marketing, advertisement, and social media analytics. Congratulations from her colleagues and committee members: Alessandro Flammini, YY Ahn, Steve Myers, and Fil Menczer!
On August 11, 2013, the New York Times published an article by Ian Urbina with the headline: I Flirt and Tweet. Follow Me at #Socialbot. The article reports on how socialbots (software simulating people on social media) are being designed to sway elections, to influence the stock market, even to flirt with people and one another. Fil Menczer is quoted: “Bots are getting smarter and easier to create, and people are more susceptible to being fooled by them because we’re more inundated with information.” The article also mentions the Truthy project and some of our 2010 findings on political astroturf.
Inspired by this, the writers of The Good Wife consulted with us on an episode in which the main character finds that a social news site is using a socialbot to bring traffic to the site, defaming her client. The episode aired on November 24, 2013, on CBS (Season 5 Episode 9, “Whack-a-Mole”). Good show!
Findings by CNetS researchers on social media indicators of election results received significant coverage in the national press. The paper More Tweets, More Votes: Social Media as a Quantitative Indicator of Political Behavior by Joseph Digrazia, Karissa McKelvey, Johan Bollen, and Fabio Rojas was presented at the 2013 Meeting of the American Sociological Association in NYC. It was covered by NPR, The Wall Street Journal, MSNBC, C-SPAN, The Washington Post, The Atlantic, and many other media.
Read our latest paper titled Social Dynamics of Science in Nature Scientific Reports. Authors Xiaoling Sun, Jasleen Kaur, Staša Milojević, Alessandro Flammini & Filippo Menczer ask: How do scientific disciplines emerge? No quantitative model to date allows us to validate competing theories on the different roles of endogenous processes, such as social collaborations, and exogenous events, such as scientific discoveries. Here we propose an agent-based model in which the evolution of disciplines is guided mainly by social interactions among agents representing scientists. Disciplines emerge from the splitting and merging of social communities in a collaboration network. We find that this social model can account for a number of stylized facts about the relationships between disciplines, scholars, and publications. These results provide strong quantitative support for the key role of social interactions in shaping the dynamics of science. While several “science of science” theories exist, this is the first account of the emergence of disciplines that is validated against empirical data.
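The core mechanism described above — disciplines emerging from the splitting and merging of social communities — can be caricatured in a toy simulation. This is a minimal sketch loosely inspired by the paper’s idea, not its actual model: the split/merge thresholds, migration rate, and update rules are all illustrative.

```python
import random

# Toy sketch of a social model of discipline formation: agents belong to
# communities ("disciplines") that split when they grow too large and
# merge when they shrink too small, while occasional cross-community
# moves mimic new collaborations. All parameters are illustrative.
def simulate(n_agents=100, steps=200, split_size=30, merge_size=4, seed=1):
    rng = random.Random(seed)
    disciplines = [list(range(n_agents))]  # start as one big community
    for _ in range(steps):
        # Collaboration: an agent may move to another discipline.
        if len(disciplines) > 1 and rng.random() < 0.3:
            src, dst = rng.sample(range(len(disciplines)), 2)
            if disciplines[src]:
                disciplines[dst].append(disciplines[src].pop())
        # Split: a large community fragments into two disciplines.
        for i, d in enumerate(disciplines):
            if len(d) >= split_size:
                rng.shuffle(d)
                half = len(d) // 2
                disciplines.append(d[half:])
                disciplines[i] = d[:half]
                break
        # Merge: a small discipline is absorbed by another one.
        small = [i for i, d in enumerate(disciplines)
                 if 0 < len(d) < merge_size]
        if small and len(disciplines) > 1:
            i = rng.choice(small)
            j = rng.choice([k for k in range(len(disciplines)) if k != i])
            disciplines[j].extend(disciplines[i])
            disciplines[i] = []
        disciplines = [d for d in disciplines if d]  # drop empty ones
    return disciplines
```

Running the simulation yields several stable mid-sized communities out of a single initial one — a qualitative analogue of disciplines differentiating over time; the paper’s model, by contrast, is grounded in an actual collaboration network and validated against bibliographic data.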