Researchers at CNetS, IUNI, and the Indiana University Observatory on Social Media have launched upgrades to two tools playing a major role in countering the spread of misinformation online: Hoaxy and Botometer. A third tool, Fakey, an educational game designed to make people smarter news consumers, also launches with the upgrades. Continue reading 3 new tools to study and counter online disinformation
A project from NaN and IUNI was among 20 selected (out of over 800 applications) to address the spread of misinformation with support from the Knight Prototype Fund. Led by Fil Menczer, Giovanni Ciampaglia, Alessandro Flammini and Val Pentchev, the project will integrate the Hoaxy and Botometer tools and uncover attempts to use Internet bots to boost the spread of misinformation and shape public opinion. The tool aims to reveal how this information is generated and broadcast, how it becomes viral, its overall reach, and how it competes with accurate information for placement on user feeds. The project will be supported by the Democracy Fund, which in March, along with partners Knight Foundation and Rita Allen Foundation, launched an open call for ideas around the question: How might we improve the flow of accurate information? The call sought projects that could be quickly built to respond to the challenges affecting the health of our news ecosystem and ultimately our democracy. The winning projects will receive a share of $1 million through the Knight Prototype Fund, a program focused on human-centered approaches to solving difficult problems.
Congratulations to Onur Varol for successfully defending his dissertation, entitled “Analyzing Social Big Data to Study Online Discourse and its Manipulation,” on April 25, 2017, supervised by Filippo Menczer. Onur completed a PhD degree in the Complex Systems track of the Informatics PhD Program. Onur has accepted a postdoctoral position at the Center for Complex Network Research at Northeastern University.
Update: workshop report available (AI Magazine Spring 2018 | DOI:10.1609/aimag.v39i1.2783 | Preprint)
The deluge of online and offline misinformation is overloading the exchange of ideas upon which democracies depend. Many have argued that echo chambers are increasingly constricting the ability of alternative perspectives to provide a check on one’s viewpoints. Suffering fragmentation and declining public trust, the Fourth Estate struggles to carry out its traditional editorial role distinguishing facts from fiction. Without those safeguards, fake news, conspiracy theories, and deceptive social bots proliferate, facilitating the manipulation of public opinion. Countering misinformation while protecting freedom of speech will require collaboration between stakeholders across industry, journalism, and academia. To foster such collaboration, the Workshop on Digital Misinformation will be held in conjunction with the 2017 International Conference on Web and Social Media (ICWSM) in Montreal, on May 15, 2017. Continue reading ICWSM 2017 Workshop on Digital Misinformation
If you get your news from social media, as most Americans do, you are exposed to a daily dose of hoaxes, rumors, conspiracy theories and misleading news. When it’s all mixed in with reliable information from honest sources, the truth can be very hard to discern.
Among the millions of real people tweeting about the presidential race, there are also a lot of accounts operated by fake people, or “bots.” Politicians and regular users alike use these accounts to increase their follower bases and push messages. PBS NewsHour science correspondent Miles O’Brien reports on how CNetS computer scientists can analyze Twitter handles to determine whether or not they are bots.
Research on detection of social bots by CNetS faculty members Alessandro Flammini and Filippo Menczer, former IUNI research scientist Emilio Ferrara, and graduate students Clayton Davis, Onur Varol, and Prashant Shiralkar was featured on the covers of the two top computing venues: the June issue of Computer (flagship magazine of the IEEE Computer Society) and the July issue of Communications of the ACM (flagship publication of the ACM). Continue reading Social bot research featured on CACM, IEEE Computer covers
Congratulations to Clayton Davis, who won the best presenter prize at WWW 2016 Developers Day! Clayton presented BotOrNot: A system to evaluate social bots, a paper coauthored with Onur Varol, Emilio Ferrara, Alessandro Flammini and Filippo Menczer, which describes our latest API developments with the BotOrNot system.
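As a rough illustration of how a client might consume such a bot-scoring API, here is a minimal sketch. The response fields (`score`, `categories`) and the threshold are illustrative placeholders, not the actual BotOrNot schema; consult the paper and the service documentation for the real interface.

```python
# Hypothetical client-side handling of a bot-scoring API response, in the
# style of BotOrNot. The field names ("score", "categories") and the 0.5
# threshold are invented placeholders, not the service's actual schema.

def interpret_bot_score(response, threshold=0.5):
    """Summarize a (hypothetical) decoded JSON bot-score response.

    Assumes an overall "score" in [0, 1] plus optional per-category
    sub-scores; higher means more bot-like.
    """
    score = response["score"]
    verdict = "likely bot" if score >= threshold else "likely human"
    categories = response.get("categories", {"overall": score})
    strongest = max(categories.items(), key=lambda kv: kv[1])[0]
    return {"score": score, "verdict": verdict, "strongest_signal": strongest}

# Example with a mocked response (no network call):
sample = {"score": 0.83,
          "categories": {"temporal": 0.9, "network": 0.7, "content": 0.6}}
result = interpret_bot_score(sample)
print(result["verdict"])  # likely bot
```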
The CNetS poster “The Rise of Social Bots in Online Social Networks” by Emilio Ferrara, Onur Varol, Prashant Shiralkar, Clayton Davis, Filippo Menczer, and Alessandro Flammini won a Best Poster Award at CCS 2015. The poster was presented by Clayton Davis. The results will also appear in the paper “The Rise of Social Bots” to be published in Comm. ACM (in press, preprint).
The paper “Modularity and the Spread of Perturbations in Complex Dynamical Systems” by Artemy Kolchinsky, Alexander J. Gates and Luis M. Rocha, and the poster “Information Theoretic Structures of the French Revolution” by Alexander Barron, Simon DeDeo and Rebecca Spang won additional awards.
Finally, our former postdoctoral scientist Bruno Gonçalves (now tenured faculty member at Aix-Marseille Université) received a Junior Scientist Award from the Complex Systems Society for his contributions to the study of human social behavior from large-scale online attention and behavioral data. This is the second Junior Scientist Award for CNetS (the first was won by Filippo Radicchi).
Congratulations to the CNetS team!
For the past four years, researchers at the Center for Complex Networks and Systems Research at the Indiana University School of Informatics and Computing have been studying the ways in which information spreads on social media networks such as Twitter. This basic research project is federally funded, like a large percentage of university research across the country.
The project, informally dubbed “Truthy,” makes use of complex computer models to analyze the sharing of information on social media to determine how popular sentiment, user influence, attention, social network structure, and other factors affect the manner in which information is disseminated. Additionally, an important goal of the Truthy project is to better understand how social media can be abused.
Since 25 Aug 2014, when a first misleading article was posted on a conservative blog, the Truthy project has come under criticism from some who have misrepresented its goals, including The Kelly File and Fox and Friends broadcasts by Fox News on 26 and 28 Aug 2014. Contrary to these claims, the project's target is the study of the structural patterns of information diffusion. For example, an email sent simultaneously to a million addresses is likely spam, even if we have no automatic way to determine whether its content is true or false. The assumption behind the Truthy effort is that an understanding of these spreading patterns may facilitate the identification of abuse, independent of the nature or political color of the communication.
While the Truthy platform provides support to study the evolution of communication in all portions of the political spectrum, it is not informed by political partisanship. The machine learning algorithms used to identify suspicious patterns of information diffusion are entirely oblivious to any political partisanship of the messages.
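The content-blindness is the key point: such features describe how a meme spreads, not what it says. The toy sketch below illustrates the idea with structural features computed from a list of retweet edges; the feature names and the example cascade are invented for illustration and are not the project's actual feature set, which is described in its publications.

```python
# Toy illustration of content-agnostic diffusion features. Every input is
# an edge (retweeter, source); the text of the messages is never consulted.
# Feature names and the example are invented for illustration only.
from collections import Counter

def structural_features(edges):
    """Compute simple structural features of a diffusion cascade.

    `edges` is a list of (retweeter, source) pairs.
    """
    sources = Counter(src for _, src in edges)
    users = {u for edge in edges for u in edge}
    n_edges = len(edges)
    return {
        "n_users": len(users),
        "n_edges": n_edges,
        # Share of all retweets pointing at the single most-retweeted
        # account; values near 1.0 indicate a hub-and-spoke burst.
        "top_source_share": max(sources.values()) / n_edges,
    }

# A burst in which one account is retweeted by everyone looks structurally
# suspicious regardless of what the tweets say:
burst = [(f"acct{i}", "hub") for i in range(50)]
print(structural_features(burst)["top_source_share"])  # 1.0
```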
Timeline and updates:
8/28/2014: Despite the clarifications in this post, Fox News and others continued their attacks on our research project and on the PI personally. Their accusations are based on false claims, supported by bits of text and figures selectively extracted from our writings and presented completely out of context, in misleading ways. None of the researchers were contacted for comments before these outlandish conspiracy theories were aired and published. There is a good dose of irony in a research project that studies the diffusion of misinformation becoming the target of such a powerful disinformation machine. (The video of the first segment on “The Kelly File” with misinformation about our project was later removed from the Fox News website.)
9/3/2014: David Uberti wrote an accurate account of recent events in Columbia Journalism Review.
10/18/2014: Unfortunately, the smear campaign against our research project continues, with unsupported allegations echoed in a misleading op-ed by FCC Commissioner Ajit Pai, who did not contact any of the researchers with questions about the accuracy of his allegations.
10/22/2014: Amid news reports that the chairman of the House Science, Space and Technology Committee initiated an investigation into the NSF grant supporting our project, read our interview in the Washington Post’s Monkey Cage setting the record straight about our research.
10/24/2014: Fox News and FCC Commissioner Pai continue to spread disinformation about our research. (The video of the interview about our project, to which we were not invited, was later removed from the Fox News website.)
11/3/2014: Jeffrey Mervis covers the controversy about this project in Science. We also provided additional information about our research in a slide deck embedded at the bottom of this post.
11/4/2014: Five leading computing societies and associations (CRA, ACM, AAAI, USENIX, and SIAM) wrote a joint letter to the chairman and the committee ranking member of the House Committee on Science, Space, and Technology expressing their concern over mischaracterizations of our research.
11/7/2014: Over the past few days we have seen more coverage in Computer World, The Hill, Information Week, and Science about the reactions of the computing and science communities to the Truthy controversy.
11/11/2014: The House Science Committee Chairman sent a letter to the director of the NSF on November 10, stating that our grant “was intended to create standards for online political discussion” and that a web service developed under the grant “targeted conservative social media messages.” These allegations are false, as we have explained in this post, in the slides embedded below, and in our publications — including the one quoted in the Chairman’s letter. On the same day, the Association of American Universities released a statement on the grant inquiries by the House Science Committee.
11/21/2014: False rumors about our research continue to be spread. Some of the questions we have received suggested that our two separate project and demo websites were generating confusion, so we merged them into a redesigned research website with information and highlights about the research project, publications, demos, data, etc.
11/25/2014: Rep. Johnson and Rep. Lofgren, respectively ranking member and member of the House Committee on Science, Space, and Technology, wrote a letter to the committee chairman, Rep. Smith, in response to his accusations.
Facts about Truthy:
- Truthy is an informal nickname associated with a research project of the Center for Complex Networks and Systems Research at the IU School of Informatics and Computing. The project aims to study how information spreads on social media, such as Twitter.
- The project has focused on domains such as news, politics, social movements, scientific results, and trending social media topics. Researchers develop theoretical computer models and validate them by analyzing public data, mainly from the Twitter streaming API.
- Social media posts available through public APIs are processed without human intervention or judgment to visualize and study the spread of millions of memes. We aim to build a platform to make these analytic tools easily accessible to social scientists, reporters, and the general public.
- An important goal of the project is to help mitigate misuse and abuse of social media by better understanding how social media can be abused, for example when social bots are used to create the appearance of human-generated communication (hence the name “truthy”). We study whether it is possible to automatically differentiate between organic content and so-called “astroturf.”
- Examples of research to date include analyses of geographic and temporal patterns in movements like Occupy Wall Street, societal unrest in Turkey, the polarization of online political discourse, the use of social media data to predict election outcomes and stock market movements, and the geographic diffusion of trending topics.
- On the more theoretical side, we have studied how individuals’ limited attention span affects what information we propagate and what social connections we make, and how the structure of social networks can help predict which memes are likely to become viral.
- Hundreds of researchers across the U.S. and the world are studying similar issues based on the same data and with analogous goals — these topics were studied well before the advent of social media. In the US these research efforts are supported not only by the NSF but also by other federal funding agencies such as DoD, DARPA, and IARPA.
- The results of our research have been covered widely in the press, published in top peer-reviewed journals, and presented at top conferences worldwide. All papers are publicly available.
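As a concrete sketch of the automated, judgment-free processing described in the points above, the snippet below groups public posts by hashtag and measures how many distinct users each meme reaches. The tweet dicts are simplified stand-ins for real Twitter streaming API payloads, not the project's actual data pipeline.

```python
# Minimal sketch of meme tracking: group posts by hashtag and measure how
# widely each meme spreads, with no human judgment of content. The tweet
# dicts are simplified stand-ins for real Twitter API payloads.
from collections import defaultdict

def meme_cascades(tweets):
    """Map each hashtag to the set of distinct users who posted it."""
    cascades = defaultdict(set)
    for tweet in tweets:
        for tag in tweet["hashtags"]:
            cascades[tag].add(tweet["user"])
    return cascades

tweets = [
    {"user": "alice", "hashtags": ["#science"]},
    {"user": "bob",   "hashtags": ["#science", "#news"]},
    {"user": "carol", "hashtags": ["#news"]},
    {"user": "alice", "hashtags": ["#news"]},
]
sizes = {tag: len(users) for tag, users in meme_cascades(tweets).items()}
print(sizes)  # {'#science': 2, '#news': 3}
```

Cascade size by distinct users is only one of many possible measures; the project's publications describe the actual metrics and visualizations used.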
Finally, the Truthy research project is not and never was:
- a political watchdog
- a database to be used by the federal government to monitor the activities of those who oppose its policies
- a government probe of social media
- an attempt to suppress free speech or limit political speech or develop standards for online political speech
- a way to define “misinformation”
- a partisan political effort
- a system targeting political messages and commentary connected to conservative groups
- a mechanism to terminate any social media accounts
- a database tracking hate speech