On March 19, on the fringes of a fourth day of demonstrations against pension reform, fake photographs of Emmanuel Macron in the ransacked streets of Paris circulated online. On March 29, the day after another day of protest marked by clashes with the police, an equally artificial image of a nonagenarian with a face swollen from blows went viral in turn. Yet despite a level of mobilization unseen in twenty years and scenes of urban chaos, no image generated by artificial intelligence (AI) has yet appeared on social media in connection with May Day.
Should this be read as a sign of an early lull in the explosion of fake images? Since its rapid spread to a mass audience in March 2023, with the release of the new version of the Midjourney software, image-generating AI has raised fears of a new era of visual disinformation. David Holz, the founder of Midjourney, which gained 14 million users in a single month, admitted to The Verge that moderating the platform was “difficult”.
The shift to a paid model
Yet its use has changed considerably in just a few weeks. The turning point came on April 6, when the most popular AI image-generation software switched to a paid model. The introduction of a minimum subscription of ten euros a month has largely redirected its use toward a professional audience. An examination of the prompts – a large share of which are public – shows that the software is mainly used to design concept images for animated films, advertisements or video games quickly and cheaply, rather than to imitate press photographs.
As a potential disinformation tool, Midjourney is also paying the price of its success: the countless fakes that circulated in the spring prompted a flurry of news articles explaining how to spot them. The weak points of AI-generated images are now fairly well known, such as the difficulty of rendering realistic hands, coherent text or even credible furniture. So much so that for mischievous Internet users, it is already a little harder to fool anyone today than it was a few weeks ago.
There is, moreover, a recognizable Midjourney aesthetic: images that are always perfectly composed and skillfully lit. Fed on the press photos and cinematic shots on which its models are “trained”, the AI excels at imitating the work of a professional image-maker, but it cannot mimic the framing errors, motion blur and washed-out light of more amateur work. Yet in the very particular world of disinformation, an image’s power often rests on its homemade look – like the hidden-camera interviews of the anti-vax outlet Project Veritas, with their stolen-photo aesthetic, as if a flaw in the framing were proof of truth. At this stage, Midjourney is unable to create such images.
Finally, while image-generation software quickly found its audience, it has paradoxically gone largely unnoticed by the usual purveyors of disinformation. Ironically, one conspiracy influencer even refused to believe that AI-generated images exist. At the margins, some have fun with the technology’s possibilities but confine themselves to a cathartic use, staging despised public figures in degrading postures – for example, blending a journalist’s head with a pig’s – without these images attempting to deceive anyone.
The risk of more convincing fake photos
This does not mean, however, that the threat to public debate has been definitively ruled out. The rise of Midjourney and similar software, such as the even more powerful Stable Diffusion, has already considerably changed our relationship to images. In mid-April, the now-iconic photo of police blocking access to the Constitutional Council, captured by Reuters photographer Stéphane Mahé, misled many Internet users, who mistook it for a fake.
What is more, the professionalization of these tools has been accompanied by a rapid rise in expertise. Images designed under a paid subscription are more polished, more accomplished, and often free of the gross defects that give away an artificial photo. So while fake news images have temporarily become scarce, the next ones are likely to be more convincing, less detectable, and therefore far more misleading.
Finally, the use of these forgeries raises questions that go well beyond their exploitation by professional purveyors of disinformation. On April 28, in a series of tweets marking the second anniversary of the police violence in Colombia, Amnesty International’s Norwegian account chose to illustrate its message with AI-generated images. It was a way of “not endangering” real protesters, the NGO explained to the American site Gizmodo, before ultimately deleting the posts. Many observers felt the move discredited Amnesty International’s cause, given that the police violence is widely documented and authentic images are not in short supply.