Study shows AI-generated fake reports fool experts

It doesn’t take a human mind to produce misinformation convincing enough to fool experts in such critical fields as cybersecurity. iLexx/iStock via Getty Images


Priyanka Ranade, University of Maryland, Baltimore County; Anupam Joshi, University of Maryland, Baltimore County, and Tim Finin, University of Maryland, Baltimore County

Takeaways

· AIs can generate fake reports that are convincing enough to trick cybersecurity experts.

· If widely used, these AIs could hinder efforts to defend against cyberattacks.

· These systems could set off an AI arms race between misinformation generators and detectors.

If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine.

There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it’s possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.

General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies, and we presented the fabricated cybersecurity reports to experts in the field for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.
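
To give a concrete sense of how little effort this takes, the short Python sketch below prompts a publicly available transformer language model to continue a cybersecurity-themed seed sentence. It uses the GPT-2 model through the Hugging Face transformers library; the model choice, seed text and sampling settings here are illustrative assumptions, not the configuration used in our study.

    # Illustrative only: prompt a public transformer model (GPT-2) to
    # continue a seed sentence. The seed text is hypothetical, and this
    # is not the setup used in the study described above.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    seed = "A new critical vulnerability was discovered in widely used VPN software,"
    samples = generator(
        seed,
        max_new_tokens=60,       # length of each generated continuation
        do_sample=True,          # sample rather than decode greedily
        num_return_sequences=3,  # produce several candidate "reports"
    )

    for i, sample in enumerate(samples, 1):
        print(f"--- sample {i} ---")
        print(sample["generated_text"])

Each run yields fluent, plausible-sounding continuations of the seed, which is exactly the property that makes this class of model useful to a misinformation campaign.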
