Staying on Top of Fraud: The Latest AI Scams
As machine learning and artificial intelligence (AI) tools become more easily accessible, scams built on this technology are becoming more prevalent. Some scammers use these tools simply to work faster than any human could; others are putting them to new and nefarious uses for fraudulent gain.
AI Fraud Evolution
In the past, a professional fraudster could take up to a week to manipulate an existing piece of media. Today, technological advances have dramatically reduced the time needed to produce these fakes, with next to no software expertise necessary.
A deepfake is typically a piece of media created or manipulated with AI or machine learning, in which a scammer impersonates someone else to fraudulently gain access to information.
In the realm of AI scams, deepfakes are especially concerning as the end results can be very believable despite being based on an entirely fraudulent premise. Deepfakes are often used in socially engineered scams to create fraudulent texts, voice messages and videos and lull victims into a sense of false security by posing as a known person or entity.
These falsified pieces of media typically fall into two categories: partially synthetic (also called shallow fakes) and entirely synthetic (also called deepfakes). Both can be difficult to spot in the wild.
Shallow Fakes versus Deepfakes
A shallow fake (also called a cheap fake) is a piece of existing media that is manipulated using technology with no basis in machine learning or AI.
The end result of a cheap fake can still be very sophisticated, but the key difference is that the editing process is more labor intensive: it relies entirely on human-driven editing rather than machine learning.
Common examples of a cheap fake include manually editing a photo to remove a certain element or person, or replacing the audio track of an existing video with different audio to change the meaning of the final result.
A deepfake is a piece of media, either generated entirely from an AI prompt or based on an existing video or image, that is manipulated using AI tools or machine learning to falsify the narrative associated with that media.
Deepfake images, videos and voice messages are often used in socially engineered scams to convince victims that the information is real and comes from a reputable source or known person. Like other social engineering scammers, deepfake-focused fraudsters thrive on making victims act quickly out of panic (by sending convincing messages that are often urgent in nature) or on lulling them into a false sense of security (by convincing them that they are talking to a family member or friend).
While both forms of fake are equally nefarious, the rapid advancement of AI and machine learning has brought deepfakes to the attention of the US Department of Defense, as the technology to easily create a well-produced deepfake becomes more available to fraudsters.
How to protect yourself against deepfake scams
- Trust your gut! If anything about a situation or request feels suspicious, double-check with a verified contact before moving forward with that request.
- Double-check the source of any media you receive before jumping to conclusions.
- Run a reverse image search on any picture that feels questionable, using a reputable service such as TinEye or Google, to verify the image's origin.
- Use AI detection tools to determine the authenticity of photos or videos. Companies like Google, Intel, Microsoft and Adobe have introduced tools specifically developed to identify machine learning interference in videos, pictures and voicemails. Using these tools when in doubt can help you avoid falling victim to deepfake scams.
Additional tips and resources are available from the National Cybersecurity Alliance.