Eliot Higgins, Marilín Gonzalo, Felix Simon and Valentina de Marval discuss the challenges posed by software such as Midjourney and DALL-E
Over the past few weeks, a number of improbable images went viral: former US President Donald Trump getting arrested; Pope Francis wearing a stylish white puffer coat; Elon Musk walking hand in hand with General Motors CEO Mary Barra.
These pictures are not that improbable, though: Trump was indeed facing arrest; popes are known to wear ostentatious outfits; and Elon Musk has been one half of an unconventional pairing before. What is peculiar is that they are all fake images created by generative artificial intelligence software.
AI image generators like DALL-E and Midjourney are popular and easy to use. Anyone can create new images through text prompts. Both applications are getting a lot of attention. DALL-E claims more than 3 million users. Midjourney has not published numbers, but they recently halted free trials citing a massive influx of new users.
While the most popular uses of generative AI so far are for satire and entertainment purposes, the sophistication of the technology is growing fast. A number of prominent researchers, technologists and public figures have signed an open letter asking for a moratorium of at least six months on the training and research of AI systems more powerful than GPT-4, a large language model created by US company OpenAI. “Should we let machines flood our information channels with propaganda and untruth?” they ask.
I spoke to several journalists, experts, and fact-checkers to assess the dangers posed by visual generative AI. When seeing is no longer believing, what implications does this technology have for misinformation? How will it impact the journalists and fact-checkers who debunk hoaxes? Will our information channels be flooded with “propaganda and untruth”?
A fake Trump gets out of jail
On 20 March, journalist Eliot Higgins, founder of Bellingcat, tweeted a series of images he made using Midjourney. The pictures depicted a fictional narrative around former US President Donald Trump: from his arrest to his escape from prison. The pictures quickly went viral and Higgins was subsequently locked out of the AI image generator’s server.
“The thread I posted proves how quickly images that appeal to individuals’ interests and biases can become viral,” Higgins says. “Fact-checking is something that takes a lot more time than a retweet.”
For those who work to debunk disinformation, the rise of AI-generated images is indeed a growing concern, since a large proportion of the fact-checking they do is image- or video-based. Marilín Gonzalo writes a technology column at Newtral, an independent Spanish fact-checking organisation. She says that visual disinformation is a particular concern because images are especially compelling and can have a strong emotive impact on audiences’ perceptions.
“You can talk to a person for an hour and give him 20 arguments for one thing, but if you show him an image that makes sense to him, it is going to be very difficult to convince him that’s not true,” Gonzalo says.
Is a tsunami on its way?
Chilean journalist Valentina de Marval, a professor of journalism at Universidad Diego Portales with previous fact-checking experience for organisations like AFP, Chicas Poderosas and LaBot Chequea, is also worried about the rise of AI-generated images. While these images still contain telltale flaws that show they are fake, such as poorly rendered hands, teeth or ears, De Marval is concerned that the rapid improvement of these models will render such indicators obsolete.
“Maybe in a couple of months or days artificial intelligence will have learned, for example, to draw hands well, to outline the eyes well, to put teeth or ears, to make the skin less smooth and make it more real with imperfections,” she says.
Despite concerns that AI-generated imagery might lead to a truth crisis, experts like Felix Simon, a communication researcher and PhD student at the Oxford Internet Institute, warn against taking an alarmist view of these new technologies, saying that their proliferation does not necessarily mean more people will believe in those images.
“The relationship between image and truth has always been unstable,” says Simon. “One could say that what we see with generative AI is just a continuation of that. Many people will get used to it. They will develop defence mechanisms both on a personal level but also on an institutional level, where news organisations will probably go to greater lengths to check if images show what they claim to show.”
Simon says that concerns about image-based information warfare and the proliferation of fake news date back to the days when photography was first introduced to newsrooms. Concerns about the impact of deepfakes have likewise been around for years, and similar worries emerged when Photoshop became accessible to the public. Just a few days ago, a suggestive Playboy magazine cover of French government minister Marlène Schiappa went viral. The image was quickly proven to be a photomontage combining the politician’s face with the body of another woman.
The problem of speed
Bellingcat’s Higgins believes that AI-generated images are a phenomenon that will most likely remain contained to social media platforms rather than reach anywhere near the mainstream media. He also thinks that fake images will be debunked as they go viral.
“The kind of people who are trying for a certain degree of mainstream legitimacy aren’t going to let themselves be called out constantly by sharing fake images,” he says. “I really think it is going to be something that is more about kind of gut reactions and memes, rather than anyone serious campaigning around fake images.”
However, what concerns fact-checkers is not necessarily what these tools produce, but the speed at which fakes are produced. News organisations will not only have to properly verify information but do so in a timely manner to avoid an information vacuum.
Unlike Photoshop or deepfake software, DALL-E and Midjourney can generate media within seconds from just a few text prompts. Gonzalo calls this phenomenon ‘a digital fire’: the rapid distribution of a fake image or video through social media platforms. “This is a constant concern for fact-checkers because we can’t see what is moving at the level of WhatsApp groups or other messaging groups and this runs very fast because it is a viral type of distribution,” she says.
De Marval thinks fact-checkers will have to adapt their methodology and rhythms to be able to catch up with the potential influx of synthetic images. “Verification methods have to be adapted and streamlined in all newsrooms so they can process videos and images before showing them,” she says.
A more sceptical public
De Marval says the issue of disinformation goes beyond emerging tech and is related to the erosion of institutional trust. “We are never going to have enough journalists,” says De Marval. “There is a loss of prestige in the profession of journalism and a loss of prestige of institutions and politics in general. The more the media and state institutions are discredited, the more disinformation will circulate.”
While generative AI certainly contributes to an increase in the scale of production of mis- and disinformation, Simon thinks that claims that this technology might lead to the end of truth are problematic. “It is not necessarily that people will be more easily fooled, but rather that people will become slightly more sceptical of information in general, including trustworthy information,” he says.
This has problematic implications for a media environment where trust in news is already eroded. Our own Digital News Report 2022 showed that trust in news is on the decline, with only 42% of people from our global sample saying they trust most news most of the time. The most recent report from our Trust in News Project found that trust in news on social media, search engines, and messaging apps is consistently lower than audience trust in information in the news media more generally. The study also details how a large proportion of people believe that false and misleading information, and platforms using data irresponsibly, are ‘big problems’ for many of these platforms in their countries.
“[What we’ve seen recently] has led to a much broader awareness of what you can do with these generation systems,” says Higgins. “While that leads to people being a bit more cynical about what they’re seeing, it might go too far the other way where people just refuse to believe any image.”
What should tech companies do?
This raises the question of what responsibilities these AI startups have in setting their content apart from real images and videos. Those I spoke to advocate for more transparency from these companies, through measures such as watermarks, to make it easier for users to tell whether an image was generated by AI.
Some news organisations have also been working to develop tools to let audiences know that their content is real. For example, Project Origin is a collaboration between media organisations like the BBC, CBC/Radio-Canada and the New York Times and tech organisations like Microsoft that is developing signals, like cryptographic verification marks, which would be tied to a piece of media content, such as an image or a video, to prove its authenticity and source. Adobe’s recently introduced image-generating tool Firefly will include ‘content credentials’ in each image, a label that tells users whether an image was created by AI, according to the company’s Chief Trust Officer Dana Rao. Rao cited the fight against misinformation, and the need to sort what is real from what is fake going forward, as one of the reasons the company is introducing this feature.
The sources in this piece are also concerned about other ethical questions, particularly about the data these models are trained on: all the viral examples I’ve mentioned portray real people. Midjourney has already limited which public figures users can generate images of. It doesn’t generate images of China’s president, Xi Jinping, for instance. However, this was not done out of privacy concerns but to “minimise drama,” according to the company’s founder and CEO David Holz, who wrote this in a post on the chat service Discord, as reported by the Washington Post.
“What they’re doing is clearly training on real people,” says Higgins. “There is the ethical consideration of, do they have the right to train these things on real people who haven’t given their consent?”
A number of these AI-generators such as DALL-E are trained using millions of public text-image pairs from the internet. “Donald Trump is a person who also has his personal data rights,” says Gonzalo. “Now people say ‘Well but if you put your data on the internet…’ No! I can put my data on the internet and that does not mean that I have to give up my right to data protection.”
Averting an information crisis
Experts say we can diminish the impact of AI-based misinformation by fostering media literacy and educating citizens in personal fact-checking techniques.
“It’s not a runaway situation where this technology arises, then everything’s going to change overnight and there’s no way we can stop that in any way,” says Simon. “There’s always ways to sort of hem that in and rein it in.”
Journalists and fact-checkers are already working on increasing the media literacy of their audiences so that they don’t fall for misinformation. “What we’re trying to do at Bellingcat is take a more education-driven approach, where we’re working with schools and universities to train students and teach them about these skills, ideas, and concepts,” says Higgins. These workshops aim to increase media literacy among students and teachers, covering everything from fact-checking and verification techniques to what is now possible with fake images.
De Marval, who teaches a fact-checking course for her university students, says that the most important thing is to look at the context around the image and question who is distributing this ‘news’: the more politically incendiary an image is, the more hesitant we should be about its veracity. “No matter how much fact-checking we do or if all newsrooms are verifying all content, it will be of little use if people are not educated,” she says.
Source: https://reutersinstitute.politics.ox.ac.uk/news/will-ai-generated-images-create-new-crisis-fact-checkers-experts-are-not-so-sure