Generative AI (tools such as DALL-E 2, Midjourney, Stable Diffusion, and ChatGPT) is getting a lot of attention for its potential in both business and creative functions.
However, it is equally important to recognize the dramatic changes these tools could bring to the world of misinformation and disinformation.
The new dynamics created by generative AI stand to lower the barriers to entry for bad actors while increasing the sophistication of their efforts, changing the reputation management game in the process. Communications professionals need to take a close look at the implications of, and solutions to, the growing threat of disinformation.
Subverting the disinformation paradigm
A common framework in the anti-disinformation space is ABC: Actors, Behaviors, and Content. At a recent conference on “Combatting Disinformation,” Jack Stubbs, Graphika’s vice president of intelligence, discussed the impact of generative AI on each of these areas.
Lowering barriers for bad actors: Generative AI lowers the barriers to entry, making more complex operations accessible and viable for less sophisticated actors. Individuals and groups with limited resources can use these technologies to create and spread disinformation, potentially increasing the number of malicious actors in this space.
Economies of scale encourage bad behavior: Generative AI enables bad actors to create novel content at scale, dramatically increasing the volume of deceptive content online at minimal additional production cost. The result could be a flood of disinformation that makes it harder for people to separate fact from fiction.
Content that passes the sniff test: Historically, bad actors, especially those operating at the nation-state level, have struggled to create content that is culturally and linguistically convincing. AI tools that generate content reading as linguistically native could quickly overcome those challenges.
Image and video creation tools also raise the specter of rich media that is more convincingly manipulated, or outright fake. Sam Gregory, executive director of the human rights group Witness, said in a recent Washington Post article on deepfakes, “There have been major strides in our ability to mass-produce fake but believable images.” This was seen recently in an influence operation by a pro-China actor promoting AI-generated video footage of fictional personas.
Heads up – Midjourney’s AI can now do hands correctly. Be extra critical of any political imagery (especially photos) you see online that is trying to incite a reaction. pic.twitter.com/ebEagrQAQq
— Dell Walker (@TheCartelDel) March 16, 2023
The CEO of OpenAI (the creator of ChatGPT) recently acknowledged his own concerns on this front in an interview with ABC News, stating, “I’m particularly worried that these models could be used for large-scale disinformation.”
Changes to the reputation management game
The central role of digital channels (especially search) as sources of information has made search engine optimization a core aspect of reputation management. Generative AI, in the form of ChatGPT and similar platforms, looks set to upend that.
I was recently on a phone call with the CEO of a company, who cited ChatGPT’s portrayal of the company as a measure of whether a crisis had affected its reputation. It’s worth noting that ChatGPT is only trained on data through September 2021, but the question it raises still stands: how do you manage your company’s online reputation when that reputation is shaped not by a list of search results, but by an AI trained across the internet?
These tools are trained on an information landscape polluted by a worsening infodemic. Witness, for example, the misinformation recently inserted into an article about the new Navalny documentary thanks to the author’s use of AI.
Responding to this landscape
Given the significant challenges generative AI poses to the fight against misinformation and to reputation management, here are five thoughts on how organizations can respond.
Nailing the basics: As many organizations have already discovered, the best way to combat disinformation is to get ahead of it. That remains true today. The unglamorous side of risk management (building misinformation into vulnerability and risk assessments, scenario planning, preparedness, and training) is more important than ever. So is strengthening those vulnerable areas through proactive communication, whether to mitigate risks before they materialize or, in some cases, to prepare for the most serious threats.
Increasing investment in AI detection tools: The positive potential of AI should not be overlooked. Adopt AI-powered tools to monitor, analyze, and manage your company’s online reputation, and invest in tools that help identify and flag manipulated content, including output from generative AI such as deepfakes. Adobe, for example, launched its Content Authenticity Initiative in 2019 and recently partnered with Microsoft on a new feature aimed at verifying the authenticity of photos and videos. Meanwhile, it didn’t take long after ChatGPT’s launch for tools like GPTZero, which aims to detect content created with the platform, to appear. (A minimal sketch of what this kind of automated screening can look like follows this list.)
Enhancing media literacy: Media literacy has become an essential skill in the age of disinformation. Encourage critical thinking and teach people to recognize the telltale signs of AI-generated content. Governments in countries like Finland have taken the lead here, but in a world where people trust their employer more than most other sources of information, companies can play a key role in building employees’ media literacy and resilience through in-house training programs.
Partnerships: The fight against disinformation cannot be won by business, government, or civil society alone. These institutions must work together to create effective solutions and drive progress at a societal level. Examples include Google’s partnership with the University of Cambridge to test the impact of “prebunking” as a means of combating disinformation, and Canada’s Digital Citizen Initiative, which has funded 23 projects aimed at building the resilience of Canadian citizens.
Legal and regulatory frameworks: Finally, regulatory frameworks that effectively govern the use of AI need to be developed to combat disinformation. These could include rules promoting transparency and accountability in the development and use of AI, and rules preventing the misuse of AI-generated content, with severe penalties for non-compliance.
The EU is at the forefront in this area. The proposed EU Artificial Intelligence Act, for example, contains transparency obligations regarding deepfakes. In the short term, communicators should be aware that the legal landscape surrounding AI-generated content remains fluid, especially as it relates to evolving areas such as copyright.
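For teams experimenting with the kind of detection tooling described in the second point above, here is a minimal sketch of automated screening in Python, using Hugging Face’s transformers library and the open-source roberta-base-openai-detector model. The model choice and the flag_if_synthetic helper are illustrative assumptions, not tools named in this article, and classifiers like this are error-prone: their output should inform human review, never replace it.

```python
# Sketch: flag text that a classifier judges likely machine-generated.
# Assumes `pip install transformers torch` and the illustrative
# openai-community/roberta-base-openai-detector model, which labels
# text as "Real" (human) or "Fake" (machine).
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

def flag_if_synthetic(text: str, threshold: float = 0.9) -> bool:
    """Return True when the classifier confidently labels the text machine-written."""
    result = detector(text)[0]  # e.g. {'label': 'Fake', 'score': 0.97}
    return result["label"] == "Fake" and result["score"] >= threshold

# Example: screen an inbound statement before amplifying or responding to it.
if flag_if_synthetic("Sample statement circulating about our company."):
    print("Possible AI-generated content - route to a human analyst.")
```

In practice, a high confidence threshold keeps false positives down; flagged items go into a review queue rather than being auto-rejected, since these detectors are known to mislabel both human and machine text.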
The rise of generative AI tools poses significant challenges to the fight against disinformation and to reputation management, and threatens to drive step changes across multiple dimensions of the disinformation landscape. Without greater focus at both the organizational and the societal level, we may be headed for a proverbial zero-trust environment in which no one can trust what they see or hear.
Dave Fleet is Edelman’s Head of Global Digital Crisis.
[Disclosure: Adobe, Microsoft, and Google are clients of the author’s employer, DJE Holdings.]