Fake news, propaganda, and the battle to shape public opinion for a specific agenda are nothing new. With AI, the tactics have become more sophisticated, making it harder than ever to separate truth from falsehood. With the capability to create extremely realistic images, among other things, everyone will need greater awareness to avoid inadvertently relaying fake news.
In March 2023, former US President Donald Trump wrote that he expected to be arrested. A few days later, AI-generated images envisioning the former president's arrest circulated widely, accrued millions of views, and went viral on social media platforms. The images even spread to news websites, titillating Trump's opponents while enraging many of his supporters.
Worldwide, numerous examples of politically motivated bad actors embracing AI tools to manipulate audiences and erode public trust have already been documented. The next generation of AI could dramatically expedite the creation of on-demand false evidence with minimal effort, producing manipulations that are ready for immediate distribution and designed to incite reactions.
How is AI changing the game?
Producing viable fake evidence fast enough to sway a given news cycle was, until recently, challenging. In the absence of on-demand AI-conjured content, high-quality fakes required both time and manual effort, limiting the number of bad actors willing and able to toil over synthetic media to sway public opinion. Indeed, the technical complexity of AI tools and the processes required to use them were a considerable barrier to creating fakes for propaganda purposes.
Seemingly overnight, however, entities with access to immense amounts of capital, petabytes of storage space, teams of highly skilled engineers, and unfathomable computational power have made it possible for anyone with an internet connection to manifest near-photorealistic images. AI is getting better year after year at generating strikingly human-like content. Language models such as GPT-4 can write entire articles on their own from a single-line prompt. Deep neural networks are routinely used to create fake images and videos known as deepfakes. Open-source software such as FaceSwap and DeepFaceLab has made the technology even more accessible. Today, anyone with limited expertise can easily create deepfakes using a computer or a mobile phone.
Indeed, the Internet has become the most important channel through which billions of people consume information. What we read and see on the web shapes our opinions and worldview. Access to reliable information is vital for democracy. By incessantly hacking away at the truth, fake news is bleeding democracy through a thousand cuts.
News outlets have a crucial role to play in mitigating this misinformation: not only providing verified information, but also fact-checking stories. Yet people have slowly lost trust in traditional media and institutions. Beyond spreading awareness, therefore, a paradigm shift in how people interface with and perceive media is crucial to building resilience against potentially harmful forms of technological innovation. Photos and videos, which have held the mantle of the highest authority for substantiating truth since their inception, are now in question because of AI spoofing. In other words, the challenge for journalists and other fact-checkers these days may not be convincing people that a certain event happened, but countering bad actors whose real goal is to convince people that they cannot trust anything, undermining trust in all images. Not an easy task, is it?
Nevertheless, there is still hope. Paradoxically, AI can also serve as a tool for stopping the spread of misinformation. Fact-checking, content-independent detection (a form of content analysis that examines how something is written rather than what it claims), analysis of social-media sharing patterns, and so forth all rely on strong AI algorithms to be efficient and effective. As the machinery behind fake news is complex and involves multiple actors with diverse interests, the response must involve many stakeholders. Media outlets, academics and researchers, public and private companies, and charities must all work to improve public understanding of how AI is being used to shape public opinion. A gentle reminder: generally speaking, images do not have to be all that realistic to produce an emotional response.
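To make the "content-independent" idea concrete, here is a deliberately simple sketch that scores a snippet of text on surface style cues (exclamation density, all-caps words, clickbait phrases) rather than on whether its claims are true. The cue list, the weights, and the function name `style_score` are invented for this illustration; real detectors learn such signals from large labelled corpora rather than hand-coding them.

```python
import re

# Hypothetical clickbait phrases, chosen for illustration only.
SENSATIONAL_PHRASES = [
    "you won't believe",
    "shocking",
    "the truth about",
    "they don't want you to know",
]

def style_score(text: str) -> float:
    """Return a rough 0..1 sensationalism score based on style cues alone."""
    lower = text.lower()
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Cue 1: density of exclamation marks relative to word count.
    exclaim = min(text.count("!") / len(words), 1.0)
    # Cue 2: share of fully capitalised words of three letters or more.
    caps = sum(1 for w in words if len(w) >= 3 and w.isupper()) / len(words)
    # Cue 3: presence of known clickbait phrases (capped at two hits).
    phrase_hit = min(sum(1 for p in SENSATIONAL_PHRASES if p in lower) / 2, 1.0)
    # Equal-weight average of the three cues (an arbitrary design choice).
    return (exclaim + caps + phrase_hit) / 3

calm = "The committee published its quarterly report on Tuesday."
loud = "SHOCKING! You won't believe what THEY did! The truth about it all!"
assert style_score(loud) > style_score(calm)
```

Note what this sketch does not do: it never checks a single fact. That is precisely the appeal of content-independent detection as a first-pass filter, and also its limit, since well-written fabrications in a neutral register sail straight through.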