In an era of ever-evolving technology, the rise of artificial intelligence (AI) has been met with both excitement and exploitation. While AI tools offer numerous benefits to small and medium-sized enterprises (SMEs), they have also opened the door to cybercriminals seeking to exploit this technology for their nefarious purposes. It is essential to discern AI-generated content from authentic material to safeguard businesses and individuals. In this article, we will explore various techniques for identifying AI-generated content.
Identifying AI images
The increasing accessibility of AI tools has given rise to a boom in AI-generated imagery across the internet, especially on social media platforms. However, recognising it isn’t always straightforward. Here are a few strategies for distinguishing artificially generated images from the real thing.
Have a closer look
When perusing social media, it’s common to scroll past images without giving them a second thought. However, if something appears a bit odd, it’s worth having a second glance. AI-generated images often exhibit characteristics that distinguish them from genuine photographs: a somewhat two-dimensional quality resembling a painting rather than a photograph, unnaturally smooth skin in the case of portraits, objects overlapping strangely, unnatural-looking facial features, and irregular proportions. While these details might go unnoticed at first glance, they become more apparent upon closer examination.
Scrutinise the background
Another red flag for AI-generated content lies in the background. Mistakes often creep into generated images, including cloned people in crowds, warped trees, and depth-of-field anomalies that create discrepancies with the foreground. Keep in mind that as AI technology advances, these errors may become less conspicuous.
Trace the image’s origin
Conducting research to determine an image’s origin can be an effective way to identify AI-generated content. For instance, if you come across an intriguing image, a quick search may reveal a number of fact-checking articles telling you that it’s a fake posted by a news satire website. Utilising reverse image search tools like Google can also help you trace the image source, but it’s important to note that not all AI images can be easily identified using this method.
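If you prefer a shortcut for reverse image searches, it is possible to construct a search link directly. The sketch below builds a Google reverse-image-search URL for an image that is already hosted online; note that the `searchbyimage` endpoint is a long-standing but unofficial Google URL pattern, so it may change without notice, and the image address used here is purely illustrative.

```python
from urllib.parse import quote

def reverse_search_url(image_url: str) -> str:
    """Build a Google reverse-image-search link for an image URL.

    Uses the unofficial `searchbyimage` endpoint; Google may alter
    or retire this URL pattern at any time.
    """
    # Percent-encode the image address so it survives as a query value.
    return ("https://www.google.com/searchbyimage?image_url="
            + quote(image_url, safe=""))

# Hypothetical example: open the printed link in a browser to see
# where else the image appears online.
print(reverse_search_url("https://example.com/photos/suspicious.jpg"))
```

Opening the generated link in a browser shows other pages where the same or similar images appear, which can help reveal whether a striking photo originated from a satire site or a known stock library.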
A case in point: the Martin Lewis fraud
Prominent personalities are frequently targeted by fraudsters aiming to exploit their image to con people out of money. A notable example is Martin Lewis, founder of MoneySavingExpert.com, who has had his image stolen and used by fraudsters on multiple occasions. In a disturbing development in July 2023, a new scam ad surfaced showcasing an AI-generated likeness of Martin endorsing an investment app allegedly linked to Tesla. This is a deeply troubling phenomenon that demands enhanced countermeasures as AI technology continues to advance.
Identifying fake videos
Spotting fake videos, particularly those featuring well-known public figures endorsing investments or cryptocurrency, is paramount. Here are some valuable tips for recognising deep fakes:
Verify from credible sources
When you come across a video featuring a public figure endorsing a product or service, visit the verified sources associated with that individual, such as their official website or verified social media profiles. If there is no mention of any such promotion, or if they have warned their followers about scams, it’s a strong indicator that the video you have seen is fake.
Pay close attention
Fake videos often exhibit peculiarities such as discrepancies in sound and lip synchronisation, distorted facial features (particularly the mouth), and a lack of the natural expressions and gestures typical of genuine individuals.
As AI-generated content becomes increasingly prevalent, it’s imperative to equip yourself with the knowledge and tools needed to detect it (as far as that’s possible). By scrutinising online imagery, investigating its origins, and assessing videos critically, individuals can better safeguard themselves and their businesses against AI-generated content used for the purposes of deception.