Newswise — As artificial intelligence programs continue to develop and become more accessible than ever, it’s getting harder to separate fact from fiction. Just this week, an AI-generated image of an explosion near the Pentagon made headlines online and briefly rattled the stock market before it was exposed as a hoax.
Casey Myers, a professor of communications at Virginia Tech, studies this evolving technology and shares his thoughts on the future of deepfakes and how to spot them.
“It is becoming increasingly difficult to identify disinformation, especially deepfakes generated by advanced AI,” says Myers. “The cost barrier for generative AI is also so low that AI is now accessible to almost anyone with a computer and the internet.”
Because of this, Myers believes we will see more and more disinformation, both visual and written, in the next few years. “Detecting this disinformation will require users to be media literate and to scrutinize the truth of every claim.”
Photoshop has been around for years, but Myers says the difference between it and AI-generated disinformation is one of sophistication and scope. “Photoshop can produce fake images, but AI can create highly convincing altered video, and if that content goes viral, it can reach a much wider audience.”
When it comes to fighting disinformation, Myers says there are two main lines of defense: individual users and the companies behind AI and social media platforms.
“Examining sources, understanding the red flags of misinformation, and being diligent about what you share online are among the personal ways to combat the spread of misinformation,” he says. “But that alone is not enough. The companies that produce AI content and the social media companies whose platforms spread it will need to put guardrails in place to prevent disinformation from spreading widely.”
The problem, Myers explains, is that AI technology is developing so rapidly that the mechanisms meant to stop the spread of AI-generated disinformation may never be fully foolproof.
Attempts to regulate AI are underway in the United States at the federal, state, and even local levels. Lawmakers are considering a range of issues, including disinformation, discrimination, intellectual property theft and privacy.
“The problem is that lawmakers don’t want to enact new laws regulating AI before they know where the technology is headed, yet legislating too late creates its own problems, so striking that balance will be difficult,” says Myers.
About Myers
Casey Myers is a professor of public relations and director of graduate studies in the Department of Communications at Virginia Tech. His research focuses on media history and the laws that affect political communication and public relations. He is the author of A History of Public Relations: Theory-Practice and Careers and Money in Politics: Campaign Fundraising in the 2020 Presidential Election. Myers’ remarks have been cited by multiple media outlets, including Time, Bloomberg, Fox News, the Los Angeles Times, The Hill, and The Associated Press.