AI has had an enormous impact on the news landscape in 2023, with much of the noise serving to erode trust in the nascent technology. Between fake news about AI and AI-generated fake news, media organisations worldwide - and their readers - have shown a huge appetite for dubious content.
To assess the scale of the issue, Netskope has ranked the most widespread fake AI news stories of 2023 so far and compared the findings with research showing the false confidence that the British and American public have in their ability to spot fake AI stories.
Alongside growing concerns around ChatGPT, and the increasingly asked question “Is ChatGPT secure?”, another emerging worry is the effect AI has on fake news. The graph below shows how ‘AI fake news’ is a frequently trending subject on X (Twitter).
At Netskope, we’ve researched how far the most viral fake AI stories reached across news publications and social media, and how long it took for these stories to be revealed as fake.
AI fake news mentions on X (Twitter)
We scored each story on social views, engagement, article coverage, reach and authority to determine its overall impact, along the lines sketched below.
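To make the idea of a composite impact score concrete, here is a minimal Python sketch of how such a ranking could be computed. The weights are placeholders, and all figures other than the social views and article counts quoted in this article are invented purely to make the example run; the actual methodology is described at the end of this page.

```python
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    social_views: int   # total views across social platforms
    engagement: int     # likes, shares and comments
    articles: int       # number of publications covering the story
    reach: int          # combined audience of those publications
    authority: float    # average domain authority of covering sites

# Placeholder weights: the actual weighting behind the ranking is
# not published, so these values are illustrative assumptions only.
WEIGHTS = {"social_views": 0.3, "engagement": 0.2, "articles": 0.2,
           "reach": 0.2, "authority": 0.1}

def impact_score(story: Story, maxima: dict) -> float:
    """Normalise each metric against the largest value in the data
    set, then combine the metrics into a single 0-1 impact score."""
    return sum(
        weight * getattr(story, metric) / maxima[metric]
        for metric, weight in WEIGHTS.items()
        if maxima[metric]
    )

# Views and article counts are taken from this article; engagement,
# reach and authority figures are invented for illustration.
stories = [
    Story("Pope in puffer coat", 20_000_000, 500_000, 312, 1_000_000, 70.0),
    Story("Trump arrest images", 10_000_000, 400_000, 671, 2_000_000, 65.0),
]
maxima = {m: max(getattr(s, m) for s in stories) for m in WEIGHTS}
for s in sorted(stories, key=lambda s: impact_score(s, maxima), reverse=True):
    print(f"{s.name}: {impact_score(s, maxima):.2f}")
```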
Taking the top spot is an image of Pope Francis wearing an oversized white puffer coat, one of the more light-hearted fake news stories we’ve encountered. This story of the Pope dressing in trendy streetwear was so believable and lovable that it racked up over 20 million social views and was covered by 312 media publications.
In second place are the fake AI images of Donald Trump being arrested in downtown Washington, DC in March this year. Although Donald Trump was later arrested in real life, these convincing images predated the arrest and were created with the AI image generator Midjourney. They went viral on X (Twitter) with over 10 million views and were covered by 671 media publications - more than double the number that covered the Pope story.
Artificial intelligence (AI) image generators like DALL-E and Midjourney are growing in popularity, attracting millions of new users. These easy-to-use programs allow anyone to create new images from text prompts, and the results are often so realistic that even news publishers struggle to differentiate the fictional images from reality. Such falsified images dominate our chart, driving 14 of the 15 stories.
In third place is the only story not led by an AI-generated image: the reported simulation of an AI drone killing its human operator. This story was covered by the highest number of publications, with 1,689 pieces of coverage, but the average authority of the sites covering it was lower than for the top two stories, so the total combined publication reach was 1.3 million.
Social networks allow AI-generated misinformation to spread rapidly. Stories can be shared instantly, making it difficult to determine if they come from a trustworthy source. The biggest fake AI news story to date on X (Twitter) is the Pope wearing an oversized white puffer jacket, which had over 20 million views on the social network.
There is only one news story that reached more people over social media. This wasn’t achieved through X (Twitter) but via the video platform TikTok. An AI-faked video of Democrat Hillary Clinton endorsing Republican Ron DeSantis for President generated almost 840 million views on TikTok and a further 877,000 on X (Twitter).
It’s difficult to detect what is real and what is AI-generated, so believing a made-up story is an increasingly common and relatable mistake. But which fake stories have convinced publishers the longest?
By analysing the original publication date from each story’s top source and how long it took for the story to be taken down or corrected, we found that the average time for a fake AI story to be corrected is six days. An AI-edited video of an Australian news interview featuring Bill Gates had publishers convinced the longest, at 15 days. The tampered video falsely suggests that Gates abruptly ended an interview with Australian Broadcasting Corporation (ABC) News journalist Sarah Ferguson after facing questions about his involvement in COVID-19 vaccine distribution.
The story that took the next longest to correct was the simulation of an AI drone killing its human operator. At a summit, the US Air Force’s chief of AI testing, Colonel Tucker Hamilton, stated that the drone had turned on its operator to stop the operator from interfering with its mission. This story wasn’t a faked image or video but misinformation about AI capabilities, and the correction was drawn out by the Air Force’s delayed response stating that the incident was hypothetical.
We surveyed over 1,000 members of the US public, and a further 500 in the UK, all aged 18 or over and all of whom specified they used social media and had an interest in the news, to learn more about the public’s confidence in dealing with fake AI-generated stories. When asked which platform they trust most for news stories, respondents voted newspapers and tabloids top. Surprisingly, video-based social platforms like TikTok and Snapchat came second, demonstrating their huge influence and the importance of regulating misinformation on these platforms.
Traditional social media platforms such as Instagram, Twitter and Facebook came in last place, demonstrating the public’s awareness of the unreliability of information found there.
Over 80% of UK participants claimed they are confident at spotting a fake news story, and US participants were even more sure of their abilities, at 88%. However, when shown a fake AI news story alongside a real one, 44% of UK respondents believed the fake story to be real. The share was even higher among US respondents, with half answering incorrectly.
We broke down the data by region in the UK and the US to identify whether any regions are more vulnerable to AI-generated fakes.
In the UK, despite its reputation for worldly cynicism, Greater London struggled the most to identify the AI-generated fake news, with 64% answering incorrectly. In the South East of England, scores were almost reversed, with 63% correctly spotting the fake.
Arkansas was crowned the most susceptible US state, with 67% picking the wrong story, followed by North Carolina with 62%. In contrast, only 20% of those in Indiana answered incorrectly, crowning the state as the savviest to AI fake news.
While falsified images have been around for as long as photography, the often nefarious objective behind them is new in the modern era. Claims to have photographed fairies in the garden, or a monster in Loch Ness, harmed no one; today, cyber criminals and political groups are using AI-generated images to influence public opinion and even extort money from victims. Cyber criminals are already using fake AI-generated images and content to trick people into handing over personal information and company secrets, and even to convince people that their loved ones have been kidnapped, prompting ransom payments.
With AI advancing quickly, it can seem an impossible task to differentiate between truth and AI fakery; a zero trust policy will help ensure you verify before you trust anything. Here are our top tips to avoid being fooled.
If you see an AI-related headline claiming outlandish advancements, or an image or video that seems unlikely, first check the source. If it’s on social media, the comments could hold information about where it originated. For images, you could conduct a reverse image search using Google Reverse Image Search, TinEye or Yandex. Searching for the origin can reveal further context and existing fact checks by trustworthy publishers.
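As a practical illustration, the snippet below opens a suspicious image in several reverse image search engines at once. It is a minimal sketch: the query URL patterns are assumptions based on each service’s public search pages and may change without notice, and the image URL is hypothetical.

```python
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open reverse image search results for a publicly hosted image."""
    encoded = quote(image_url, safe="")
    # These URL patterns are assumptions based on each service's
    # public search pages; they may change without notice.
    engines = {
        "Google Lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "TinEye": f"https://tineye.com/search?url={encoded}",
        "Yandex": f"https://yandex.com/images/search?rpt=imageview&url={encoded}",
    }
    for name, url in engines.items():
        print(f"Opening {name} ...")
        webbrowser.open(url)

# Hypothetical image URL for illustration only.
reverse_image_search("https://example.com/suspicious-image.jpg")
```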
Enlarge the image and check for errors. Making the image bigger will reveal poor quality or incorrect details - both telltale signs of an AI-generated image. (A simple scripted way to do this follows these tips.)
Check the image’s proportions. A frequent mistake in AI images is the proportions and quantities of body parts and other objects. Hands, fingers, teeth, ears and glasses are often deformed or appear in the wrong numbers.
Is there anything odd about the background? If an image has been altered or fabricated, objects in the background are often deformed, repeated or lacking in detail.
Does the image look smooth or lack imperfections? In AI images, aspects that would normally be highly detailed, like skin, hair and teeth, are often smoothed and perfected: faces are flawless and fabrics look unnaturally uniform.
Do the details look correct? Inconsistencies in the logic of the image can be a telltale sign of AI generation: the colouring of something as small as the eyes may not match across images, or a pattern may change subtly partway through.
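For readers comfortable with a little scripting, this minimal sketch uses the Pillow library to enlarge an image and crop a region of interest, making the artefacts described above easier to inspect. The file names are hypothetical.

```python
from PIL import Image  # Pillow: pip install Pillow

def inspect_image(path: str, zoom: int = 4) -> None:
    """Enlarge an image so AI artefacts (warped hands, smeared
    backgrounds, over-smoothed skin) are easier to spot by eye."""
    img = Image.open(path)
    # Nearest-neighbour resampling preserves the raw pixels, so
    # smoothing and blending artefacts are not hidden by the resize.
    big = img.resize((img.width * zoom, img.height * zoom),
                     Image.Resampling.NEAREST)
    big.save("enlarged.png")
    # Crop a region of interest, e.g. where hands or faces appear,
    # to examine proportions and fine detail up close.
    big.crop((0, 0, big.width // 2, big.height // 2)).save("region.png")

inspect_image("suspect.jpg")  # hypothetical file name
```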
Is the video image small? As with images, a small video frame can indicate that ‘deepfake’ AI software has been used, since such tools often only deliver a convincing fake at very low resolution. (A quick scripted check follows these tips.)
Are the subtitles oddly placed? In fake videos, subtitles are often strategically placed to cover the face, making it harder for viewers to notice that the audio doesn’t match the lip movements.
Are the lip shapes misaligned? Misaligned lip shapes across frames are a telltale sign of manipulation, with the boundary of the spliced region often visible around the middle of the mouth.
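The resolution check above can be automated with OpenCV, as in the sketch below. Low resolution alone does not prove a video is fake, the 1,280-pixel threshold is an arbitrary assumption, and the file name is hypothetical.

```python
import cv2  # OpenCV: pip install opencv-python

def check_video(path: str, min_width: int = 1280) -> None:
    """Print a video's resolution and flag suspiciously low values,
    one telltale sign of older deepfake tooling."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"Could not open {path}")
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()
    print(f"{path}: {width}x{height} at {fps:.1f} fps")
    if width < min_width:
        # The threshold is an arbitrary assumption, not a standard.
        print("Low resolution: step through the mouth region frame "
              "by frame and compare the audio with the lip movements.")

check_video("clip.mp4")  # hypothetical file name
```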
Following these tips will help you identify falsified stories. However, as AI advances, newer versions of programs like Midjourney are becoming better at generating higher-quality images. This means users may not be able to rely on spotting these kinds of mistakes for much longer.
We have discussed the impact of fake AI-generated images and stories, but do you know what AI is capable of? See how savvy you are at spotting fake AI development news with this quiz: it contains 12 real and fake AI development headlines, and your job is to spot which are true and which are false.
Netskope SkopeAI for GenAI is a stand-alone solution that can be added to any security stack, allowing IT teams to safely enable the use of generative AI apps such as ChatGPT and Bard, with application access control, real-time user coaching, and best-in-class data protection.
Our SASE architecture combines market-leading Intelligent Security Service Edge with next-generation Borderless SD-WAN to provide a cloud-native, fully-converged solution that supports zero trust principles. Netskope Zero Trust Network Access (ZTNA) offers fast, secure, and direct access to private applications hosted anywhere, to reduce risk, simplify IT and optimize the user experience.
The stories analysed were gathered and corroborated from a series of fact-checking sites such as Snopes, Reuters and Channel 4.
BuzzSumo and Brandwatch were used to identify the sources of misinformation flagged by fact checkers and to measure the reach of content consistently.
Google News was used to gather indexed coverage from the web to extrapolate viewing estimates.
CoverageBook was used to determine estimated audiences, social engagement and linked coverage relating to the selected fake AI news stories.
Facebook, X (Twitter) & Archive.is were used to gather AI-fake news content.
The opinion poll was conducted with 501 UK respondents and 1,005 US respondents, all aged 18+, who specified they used social media and had an interest in the news.