A preprint of the paper titled “Anatomy of an AI-powered malicious social botnet” by Yang and Menczer was posted on arXiv. Concerns have been raised that large language models (LLMs) could be used to produce fake content with deceptive intent, although evidence thus far has remained anecdotal. This paper presents a case study of a coordinated inauthentic network of over a thousand fake Twitter accounts that employ ChatGPT to post machine-generated content and stolen images, and to engage with each other through replies and retweets. The ChatGPT-generated content promotes suspicious crypto and news websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to discriminate between them and human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots and have been covered by Tech Policy Press, Business Insider, Wired, and Mashable, among others. And to no one’s surprise, versions of these articles, likely summarized by ChatGPT, already appear on plagiarized “news websites.”
On March 7, 2019, CNETS Professor Luis Rocha will participate in a panel organized by Nova SBE’s Executive Education, Instituto Gulbenkian da Ciência, and the ISI Foundation on the theme “AI, society and organisations: experiences from applied projects in governments, companies and NGOs,” where the role of data science in today’s world will be discussed.
Other guest speakers include Rayid Ghani, director of the Center for Data Science and Public Policy at the University of Chicago, founder of the Data Science for Social Good fellowship, and Chief Scientist of Obama for America 2012, as well as Daniela Paolotti, Ciro Cattuto, Joana Gonçalves-Sá, and Leid Zejnilovic.