How much misinformation is spread online

Research against disinformation on the Internet

Particularly during the current coronavirus crisis, a great deal of information and many headlines are circulating on the Internet that are designed to influence people. The federal government, for example, is currently warning of an especially large amount of false and misleading information about vaccinations and vaccines. Conspiracy theories are also spreading further and further. Disinformation on the Internet affects all channels that citizens use to inform themselves and form an opinion online: from messenger services and social networks to video platforms, blogs and news sites. The manipulated content gives the impression of being real, verifiable news. Lies and propaganda are spread deliberately, and money is earned with clicks. Because such content is so easy to forward, disinformation can also reach a large number of people very quickly. As a result, many people are increasingly uncertain: Which news is true? Which sources are trustworthy?

Briefly explained: What do researchers mean by disinformation?

“Disinformation refers to reports that are distributed with misleading or manipulative intent and that do not stand up to fact-checking. This demonstrably dishonest intention distinguishes disinformation from false information, which is usually the result of research errors or haste. The term fake news, by contrast, is used in the media with a wide range of meanings. Some people call certain reports fake news for one simple reason: they do not like them. But if these reports are based on facts and are presented according to journalistic standards, they are not disinformation.”

Dr. Michael Kreutzer, coordinator of the BMBF-funded project "DORIAN - Discovering and combating disinformation on the Internet"

How should we deal with the "infodemic" during the pandemic?

In view of the massive flood of information about Covid-19, the World Health Organization speaks of an "infodemic" rife with myths. It is countering this with a "myth-busting" campaign and is working on solutions together with service providers.

Large platform providers such as Facebook, Google, Microsoft, Twitter and TikTok are now taking their own countermeasures: as reports submitted by the companies to the EU Commission at the end of January show, they are blocking hundreds of thousands of user accounts, advertisements and offers containing false information about the coronavirus, vaccines and vaccinations. They check and flag false reports that are spread on their platforms. Facebook, for example, works with around 50 external organizations around the world - so-called "fact checkers" - who check content in more than 25 languages for truthfulness. These teams include news agencies, media companies and non-profit organizations. All this shows that the problem of disinformation on the Internet goes far beyond the corona pandemic and is of great social significance - especially when it comes to forming political opinion, for example during election campaigns, as recent events in the USA have shown. The arguments being spread are not always based on facts; often, unsubstantiated or even disproven claims are asserted for propaganda purposes.

While the problem has reached public consciousness, the origin of disinformation can rarely be traced clearly. False and misleading information can be generated and disseminated via different paths and methods: one way of automatically producing disinformation and distributing it in large quantities is through specially created computer programs, so-called malicious social bots.

Understanding the mechanisms behind the generation and dissemination of disinformation is an important focus of the research funded by the BMBF. The goal: to develop solutions for a more trustworthy digital world.

Deepfakes - a downside of artificial intelligence

Whether new social media networks, apps for fast image editing or ever better, freely available translation programs: the technical possibilities for generating digital content (on a large scale) are growing every day. This also opens up new ways of manipulating how information is received. Artificial intelligence (AI) is not just a remedy for recognizing and counteracting disinformation on the Internet; at the same time, it threatens to exacerbate the problem, because AI techniques can also be used to produce deceptively realistic forgeries of images, audio and video material - for example through so-called deepfakes. These forgeries can be generated largely autonomously with the help of machine learning methods, more precisely artificial neural networks. Programs based on such deep learning technologies are becoming ever more readily available on the Internet and can be used even without technical expertise.
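To make the underlying principle more tangible: many face-swap deepfakes rest on a simple autoencoder idea with one shared encoder and one decoder per identity. The following Python/PyTorch sketch illustrates only this basic idea under simplifying assumptions (a tiny network and random tensors instead of real face images); it is not the method of any specific tool and would not produce usable forgeries.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder idea behind many
# face-swap deepfakes. Purely illustrative: network sizes, variable names and the
# random "face" tensors are assumptions, not the pipeline of any real tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 3x64x64 face image into a small latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image from the latent vector (one decoder per identity)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()  # learns to render faces of person A
decoder_b = Decoder()  # learns to render faces of person B

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

# Stand-in training data: random tensors instead of real, aligned face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # real training would need far more data and iterations
    optimizer.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimizer.step()

# The "swap": encode a face of person A, but render it with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

The point of the sketch is the division of labour: the shared encoder learns a common representation of faces, while each decoder learns to render exactly one identity, so that swapping decoders swaps the identity.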

In the interdisciplinary research association Forum Privatheit and at the International Center for Ethics in the Sciences in Tübingen, Dr. Thilo Hagendorff investigates such developments. He says: "We are experiencing a democratization of AI tools and data sets, so that applications can increasingly be used by less qualified users." Suitable, largely freely available tools now exist for the entire spectrum of disinformation: text generators for the automated creation of fake news, web platforms for generating fake video and audio files, and online services that produce almost photo-realistic facial images in a matter of seconds - for example to run fake profiles on social media platforms.

"Disinformation endangers democratic decision-making"

The National Cyber Security Council (Cyber-SR) is also concerned with the issue of disinformation. The Cyber-SR acts as a strategic advisor to the federal government and has been supported by a permanent scientific working group since October 2018. The experts advise the Cyber-SR from a research perspective on developments and challenges regarding secure, trustworthy and sustainable digitization. At the end of 2019, the group published the paper "Disinformation - a danger to democratic decision-making". Its bottom line: targeted disinformation increasingly endangers democratic decision-making and therefore requires greater attention and targeted countermeasures. The lawyer Professor Alexander Roßnagel is the spokesman of the research association Forum Privatheit, a member of the scientific working group of the Cyber-SR and the main author of the impulse paper. He says: "What needs to be researched is how disinformation, deepfakes, malicious social bots and their distribution channels can be identified, marked, blocked and deleted. The characteristics of disinformation and their effects on individuals and society are to be examined, as are political and legal countermeasures that enable disinformation to be combated effectively without restricting freedom of expression." In order to be prepared for the future, the researchers write in their paper, "technical and interdisciplinary research projects that bring together the findings of various disciplines in order to develop effective and implementable overall solutions" are particularly important.

Cross-disciplinary research projects against disinformation

One such interdisciplinary project has emerged from the research association Forum Privatheit: DORIAN - Discovering and combating disinformation on the Internet. To get to the bottom of the mechanisms by which digital disinformation spreads, the project partners from computer science, media psychology, technology, law and communication science analyzed the phenomenon in close cooperation and developed interdisciplinary approaches against disinformation campaigns and for plurality of opinion. To provide practical benefit, the researchers drew up specific recommendations: for the further development of the legal framework, for media didactics and for journalistic research tools. In addition, the scientists developed concepts to make it easier for citizens to identify false news. They also took into account that disinformation on the Internet often comes with false images - images taken from a different context or manipulated. The project has also advanced disinformation research from a technical point of view: using machine learning, the DORIAN scientists were able, for example, to identify properties of malicious social bots and to automatically categorize web content according to factors such as emotionality or writing style. The researchers also developed first strategies to expose deepfakes. Many of the research results were published in the open access publication "Discovering and Combating Disinformation".
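As an illustration of what such automatic categorization can look like in principle, the following Python sketch trains a tiny style classifier on hand-labelled example headlines. The labels, example texts and model choice are assumptions made purely for demonstration; this is not the actual DORIAN pipeline or its data.

```python
# Illustrative sketch: a minimal text classifier that sorts short texts into an
# "emotional" or "neutral" writing style. Hypothetical toy data, not DORIAN's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: headlines with a hand-assigned style label.
texts = [
    "SHOCKING truth they do not want you to see!!!",
    "You will not believe this outrageous scandal",
    "Ministry publishes annual report on vaccination coverage",
    "Study examines information behaviour during the pandemic",
]
labels = ["emotional", "emotional", "neutral", "neutral"]

# Character n-grams pick up stylistic signals such as exclamation marks,
# capitalisation and punctuation patterns, independent of the topic.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), lowercase=False),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["Unbelievable!!! Share this before it gets deleted!"]))
# e.g. ['emotional'] - real research would rely on large, carefully annotated corpora.
```

In practice, such a classifier is only one building block: its output (for example an "emotionality" score) would be combined with other signals, such as account behaviour, before any content is flagged.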

The start of new research projects is imminent

The field of disinformation research has a long "analog" history. To keep pace with the dynamics of the phenomenon in the digital space, new approaches are needed today and in the future. Approaches such as those developed in the Forum Privatheit and in the DORIAN project are therefore becoming ever more relevant. In view of the growing range of possibilities for manipulating digital media content and the increase in disinformation on the Internet, the BMBF will continue to focus on this area, and agile, interdisciplinary research is particularly relevant here. That is why, in 2020, the BMBF invited proposals for research projects devoted to the systematic study of disinformation in the digital age. Several projects are expected to start in the middle of this year, in which researchers will investigate the social and legal framework as well as methods and technologies in order to better understand the mass dissemination of disinformation and to counteract it in a targeted manner.