Artificial intelligence (AI) is a technology that allows machines to learn from experience, adjust to new inputs, and perform human-like tasks. Scammers now use AI technologies like voice cloning and deepfake videos to create more convincing and manipulative scams.
These AI-driven scams are particularly dangerous because they can mimic real people's voices and images so closely that it becomes challenging to distinguish between what's real and what's fake. This high level of realism in scams can trick even the most cautious individuals.
That's why it's more important than ever to be aware of the latest scams.
1. Hi Mum 2.0
Initially, the Hi Mum scam involved simple text messages where fraudsters pretended to be a relative urgently needing money. Using AI, scammers have taken their tactics to a new level. Now, they gather brief audio clips from social media, such as birthday wishes or video chats, and splice them together into new voice messages that sound strikingly real.
This fake audio often catches victims off guard, tapping into a deep emotional response that makes them highly vulnerable to the scam.
How to identify Hi Mum 2.0 scams
Be wary of unexpected requests for money or sensitive information, especially if the mode of communication is unusual for the person allegedly making the request. Before acting, contact the person through another method, such as a direct phone call or in person.
If you can't reach them, listen carefully to the message. AI-generated audio will likely have minor imperfections. These can include slight distortions in the voice or unnatural pauses and inflections that don't quite match how the person usually speaks.
2. Deepfake videos
Think you can trust what you see? Think again. Thanks to deepfake technology, seeing is no longer believing.
Scammers have mastered the art of creating videos so realistic that they're hard to distinguish from actual footage. These fakes can feature anyone, from celebrities to your own family members; all scammers need is a few photos and short clips of their voices. The production process is quick and easy, and the videos can be used for anything from blackmail to spreading false information.
How to identify deepfake videos
Deepfakes might look real at first, but they often have small flaws that give them away. Look out for anything odd, such as unnatural eye or mouth movements or a voice that doesn't quite sync with the lips. Even the appearance of teeth or hair might be slightly off.
Sometimes, the content itself is a clue. If you see a video of someone you know doing or saying something out of character, it might well be a deepfake.
3. ChatGPT phishing
With the rise of AI tools like ChatGPT, scammers' phishing tactics have changed—and they're more sophisticated than ever.
Within minutes, cybercriminals can whip up emails that sound just like the ones you'd get from your bank, a tech company, or a government office. And with generative AI becoming more advanced, these messages are seriously convincing. They use social engineering techniques to trick you into sharing personal details or clicking harmful links.
What makes this especially problematic is that the old red flags, such as awkward grammar or spelling errors, are no longer reliable.
How to identify ChatGPT phishing scams
AI-crafted emails aren't perfect. Pay close attention to any details that don't quite match up with what you'd expect from the real sender. Be wary of messages pushing you to act fast, asking for sensitive information, or directing you to click on links that look odd.
If something feels off, it probably is. Trust your instincts and double-check before taking any action.
4. Verification fraud
In these schemes, scammers create false identities using AI-generated images or videos. The counterfeit identities are sophisticated enough to pass various identity checks, leading to illegal access to bank accounts, unauthorised money transfers, and even fraudulent loan applications.
The fake documents and profiles look so real they can fool traditional security measures, from password-protected accounts to biometric scans.
How to identify verification fraud
Approach any verification request that relies solely on digital evidence with caution, particularly if it comes from an unfamiliar source. Keep an eye out for anything odd or inconsistent in the documents or images you're shown.
If something feels wrong, feel free to ask for more proof or a different way to verify, such as through a live video chat.
3 quick tips to protect yourself against AI scams
Here are three actionable steps you can take today to protect yourself and your loved ones:
1. Audit your online privacy
Your online presence can reveal a lot about you. Take a moment to check and update your privacy settings on social media and other accounts. Be careful about what you make public, and think twice before sharing sensitive details like your birthday or home address.
2. Always verify first
Got a surprise request for money or personal information? Stop and think. Reach out to the person or company through another channel to check if the request is real. A quick phone call to a friend or your bank's official line can make all the difference.
3. Educate yourself and others
Knowledge is your best defence against scams. Stay informed about the latest tricks scammers use, and don't keep it to yourself. Talk about it with your family and friends. The more people know, the less likely they are to be fooled.
What should I do if I've been scammed by AI?
If you've been scammed, act quickly. Notify your bank, especially if you've sent money. Then report the scam to the Australian Cyber Security Centre and Scamwatch to help stop it from happening to others.
This article is intended to provide general information of an educational nature only. This information has been prepared without taking into account your objectives, financial situation or needs. Therefore, before acting on this information, you should consider its appropriateness having regard to these matters and the product terms and conditions. Terms, conditions, fees, charges and credit criteria apply. We do not recommend any third party products or services and we are not liable in relation to them. Any links to third party websites are for your information only and we do not endorse their content. Information in this article is current as at the date of publication.