
How AI Is Supercharging Financial Fraud–And Making It Harder To Spot

AI deepfakes are tricking facial recognition programs by imitating people’s ID images. How do you protect yourself?

On at least 20 occasions, AI deepfakes have tricked facial recognition programs by imitating the people pictured on identity cards, according to police. Here’s what you need to do to protect yourself and your business.

In the first week of February, a finance worker at a multinational firm in Hong Kong was tricked into paying out $25 million to fraudsters who used deepfake technology to pose as the company’s chief financial officer on a video conference call. The story, first reported by CNN, centred on an elaborate scam in which the worker attended a video call with what he thought were several other members of staff. All of them were, in fact, deepfake recreations: synthetic media that enables one person’s likeness to be swapped for another. “(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” senior police superintendent Baron Chan Shun-ching told the city’s public broadcaster RTHK. Chan added that the worker had grown suspicious after receiving a message purportedly from the company’s UK-based chief financial officer, initially suspecting it to be a phishing email. His early doubts were cast aside after the call, however, because the other attendees had looked and sounded just like colleagues he recognised.

After transferring some $25 million as instructed on the call, the worker realised it was a scam a week later, when he checked with the company’s headquarters. Police said two or three other employees of the company had been contacted by fraudsters using similar tactics, and that the investigation is ongoing with no arrests made in the case.

The term ‘deepfake’ was coined in late 2017 by a Reddit user of the same name, who had created a space on the platform for sharing pornographic videos made with open-source face-swapping technology. The term has since expanded to cover synthetic media more broadly: realistic imagery of living people doing things they have never done. According to the Hong Kong police working on this case, deepfakes had tricked facial recognition programs by imitating the people pictured on identity cards on at least 20 occasions.

It’s a new and growing problem for CFOs, especially as cyber fraud hit 83% of organisations in some form last year.

Generative AI – a type of artificial intelligence technology that can produce various kinds of content, including text, imagery, audio and synthetic data – now allows users with criminal intentions to write very convincing copy at scale. Where phishing could once be detected by the poor grammar and syntax of the messages, GenAI can write a scam message in the target’s language and in the tone of a financial institution. GenAI is also being used for more convincing social engineering: creating synthetic identities that mix legitimate and artificial data into fake profiles, mimicking human voices, and writing malicious computer code to automate attacks. One lesson companies should take from the rise of AI-infused fraud is not to neglect their own use of AI to bolster defences.
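To make the defensive side of that lesson concrete, here is a minimal, purely illustrative sketch of automated screening for payment requests. The phrase list, thresholds and function names are all assumptions for the example; real fraud-detection systems combine far more signals (sender reputation, device fingerprints, trained models), but the idea of flagging high-risk requests for out-of-band verification is the same.

```python
# Illustrative only: crude heuristics that flag suspicious payment
# requests for manual, out-of-band verification. The phrases and
# thresholds below are made up for the sketch.

URGENCY_PHRASES = [
    "urgent", "immediately", "confidential", "wire transfer",
    "do not tell", "act now",
]

def risk_score(message: str, sender_known: bool, amount: float) -> int:
    """Return a crude risk score; higher means more suspicious."""
    text = message.lower()
    score = sum(1 for phrase in URGENCY_PHRASES if phrase in text)
    if not sender_known:
        score += 2  # unfamiliar sender or a new number
    if amount > 10_000:
        score += 2  # large transfers deserve extra scrutiny
    return score

def requires_verification(message: str, sender_known: bool, amount: float) -> bool:
    """Flag the request for a callback on a known-good channel."""
    return risk_score(message, sender_known, amount) >= 3
```

A request like the Hong Kong one – a huge transfer, urgent and confidential in tone – would score highly here and be routed for human verification before any money moved.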

Here are some things you can do to protect yourself from AI deepfake scams:

  • Do not blindly believe someone who calls from an unknown number claiming to be a friend or family member.
  • More importantly, never send money to someone who claims to be a friend or family member but is calling from a new number.
  • Use questions and shared memories to verify that the caller is genuine, even if the voice matches the person.
  • Avoid publicly sharing images, audio or video of yourself, which can be used to create deepfakes of you.

In the ongoing battle for digital security, it’s important to keep informed and be vigilant in order to protect yourself against the ever-evolving landscape of online fraud.
