Jane Wakefield, Technology reporter
When Clark Hoefnagels’ grandmother was scammed out of $27,000 (£21,000) last year, he felt compelled to do something about it.
“It felt like my family was vulnerable, and I needed to do something to protect them,” he says.
“There was a sense of responsibility to deal with all the things tech related for my family.”
As part of his efforts, Mr Hoefnagels, who lives in Ontario, Canada, ran the scam or “phishing” emails his gran had received through popular AI chatbot ChatGPT.
He was curious to see if it would recognise them as fraudulent, and it immediately did so.
From this, the germ of an idea was born, which has since grown into a business called Catch, an AI system trained to spot scam emails.
Currently compatible with Google’s Gmail, Catch scans incoming emails, and highlights any deemed to be fraudulent, or potentially so.
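Catch's internals aren't public, but the general approach described – scanning an inbox and asking an AI model to judge each message – can be sketched in a few lines. The sketch below is illustrative only: the IMAP polling, the prompt and the model choice are assumptions, not Catch's actual implementation.

```python
# Illustrative sketch only -- Catch's actual implementation is not public.
# Polls a Gmail inbox over IMAP and asks an LLM whether each unread
# message looks like a phishing attempt.
import email
import imaplib

from openai import OpenAI  # assumes the openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_email(subject: str, body: str) -> str:
    """Return 'SCAM' or 'LEGITIMATE' for one email (hypothetical prompt)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You detect phishing emails. Reply with exactly "
                        "one word: SCAM or LEGITIMATE."},
            {"role": "user",
             "content": f"Subject: {subject}\n\n{body[:4000]}"},
        ],
    )
    return response.choices[0].message.content.strip()


def scan_inbox(user: str, app_password: str) -> None:
    """Fetch unread messages and print a verdict for each."""
    with imaplib.IMAP4_SSL("imap.gmail.com") as imap:
        imap.login(user, app_password)  # Gmail requires an app password
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            # Simplified body extraction: take the first part of a
            # multipart message; real email parsing needs more care.
            part = msg.get_payload(0) if msg.is_multipart() else msg
            body = part.get_payload(decode=True) or b""
            subject = msg["Subject"] or ""
            print(subject, "->",
                  classify_email(subject, body.decode(errors="replace")))
```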
AI tools such as ChatGPT, Google Gemini, Claude and Microsoft Copilot are also known as generative AI. This is because they can generate new content.
Initially this was a text reply in response to a question, request, or the start of a conversation. But generative AI apps can now increasingly create photos and paintings, generate voice content, compose music, and produce documents.
People from all walks of life and industries are increasingly using such AI to enhance their work. Unfortunately so are scammers.
In fact, there is a product sold on the dark web called FraudGPT, which allows criminals to make content to facilitate a range of frauds, including creating bank-related phishing emails, or to custom-make scam web pages designed to steal personal information.
More worrying is voice cloning, which can be used to convince a relative that a loved one needs financial help, or even, in some cases, that the individual has been kidnapped and a ransom must be paid.
The statistics on the scale of the growing problem of AI fraud are alarming.
Reports of AI tools being used to try to fool banks' systems increased by 84% in 2022, according to the most recent figures from anti-fraud organisation Cifas.
It is a similar situation in the US, where a report this month said that AI "has led to a significant increase in the sophistication of cyber crime".
Given this increased global threat, you’d imagine that Mr Hoefnagels’ Catch product would be popular with members of the public. Sadly that hasn’t been the case.
“People don’t want it,” he says. “We learned that people are not worried about scams, even after they’ve been scammed.
“We talked to a guy who lost $15,000, and told him we would have caught the email, and he was not interested. People are not interested in any level of protection.”
Mr Hoefnagels adds that this particular man simply didn’t think it would happen to him again.
The one group that is concerned about being scammed, he says, is older people. Yet rather than buying protection, their fears are more often assuaged by a very low-tech tactic – their children telling them simply not to answer or reply to anything.
Mr Hoefnagels says he fully understands this approach. “After what happened to my grandmother, we basically said ‘don’t answer the phone if it’s not in your contacts, and don’t go on email anymore’.”
As a result of the apathy Catch has faced, Mr Hoefnagels says he is now winding down the business, while also looking for a partner that could integrate its technology into their financial products.
While individuals can be blasé about scams, and about scammers' increasing use of AI in particular, banks cannot afford to be.
Two-thirds of finance firms now see AI-powered scams as "a growing threat", according to a global survey from January.
Meanwhile, a separate UK study from last December said that “it was only a matter of time before fraudsters adopt AI for fraud and scams at scale”.
Thankfully, banks are now increasingly using AI to fight back.
AI-powered software made by Norwegian start-up Strise has been helping European banks spot fraudulent transactions and money laundering since 2019. It automatically, and rapidly, trawls through millions of data points per day, unveiling hidden risks.
“There are lots of pieces of the puzzle you need to stick together, and AI software allows checks to be automated,” says Strise co-founder Marit Rødevand.
“It is a very complicated business, and compliance teams have been staffing up drastically in recent years, but AI can help stitch this information together very quickly.”
Ms Rødevand adds that it is all about keeping one step ahead of the criminals. “The criminal doesn’t have to care about legislation or compliance. And they are also good at sharing data, whereas banks can’t share because of regulation, so criminals can jump on new tech more quickly.”
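Strise has not published how its system works, but the kind of automated check Ms Rødevand describes – stitching together signals a compliance officer would otherwise gather by hand – can be illustrated with a toy example. Everything here (the data sources, thresholds and scores) is invented for illustration, not Strise's method.

```python
# Toy sketch of automated compliance checks -- Strise's real system and
# data sources are not public. The idea: combine independent signals that
# a human compliance team would otherwise gather by hand.
from dataclasses import dataclass


@dataclass
class Company:
    name: str
    owners: list[str]


# Hypothetical data sources a bank might hold or subscribe to.
SANCTIONS_LIST = {"Ivan Example"}
HIGH_RISK_SECTORS = {"crypto exchange", "shell services"}


def risk_score(company: Company, sector: str, daily_volume: float) -> int:
    """Combine independent checks into one score; thresholds are made up."""
    score = 0
    if any(owner in SANCTIONS_LIST for owner in company.owners):
        score += 100  # sanctioned beneficial owner: near-certain escalation
    if sector in HIGH_RISK_SECTORS:
        score += 30   # sector flagged as higher risk for money laundering
    if daily_volume > 1_000_000:
        score += 20   # unusually high turnover warrants review
    return score


company = Company("Acme Trading Ltd", owners=["Ivan Example", "Jane Doe"])
print(risk_score(company, "shell services", 2_500_000))  # 150 -> escalate
```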
Featurespace, another tech firm that makes AI software to help banks fight fraud, says its technology spots things that are out of the ordinary.
“We’re not tracking the behaviour of the scammer, instead we are tracking the behaviour of the genuine customer,” says Martina King, the Anglo-American company’s chief executive.
“We build a statistical profile around what good normal looks like. We can see, based on the data the bank has, if something is normal behaviour, or anomalistic and out of kilter.”
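Featurespace's models are proprietary, but the core idea Ms King describes – profiling what "good normal" looks like and flagging departures from it – can be illustrated with a deliberately simple statistical test. The profile and threshold below are toy assumptions, not the company's method.

```python
# Toy illustration of behavioural anomaly detection -- not Featurespace's
# proprietary models. Each customer gets a statistical profile of their
# past transaction amounts; new transactions far from it are flagged.
from statistics import mean, stdev


def build_profile(amounts: list[float]) -> tuple[float, float]:
    """Summarise a customer's history as (mean, standard deviation)."""
    return mean(amounts), stdev(amounts)


def is_anomalous(amount: float, profile: tuple[float, float],
                 threshold: float = 3.0) -> bool:
    """Flag a transaction more than `threshold` standard deviations
    from the customer's usual spending (a simple z-score test)."""
    mu, sigma = profile
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold


# Example: a customer who normally makes small purchases.
history = [12.50, 40.00, 23.10, 55.75, 18.20, 31.00, 27.40]
profile = build_profile(history)

print(is_anomalous(35.00, profile))    # False -- within normal range
print(is_anomalous(4800.00, profile))  # True  -- out of kilter
```

Real systems profile far more than amounts – device, location, timing, merchant – but the principle is the same: model the genuine customer, not the scammer.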
The firm says it is now working with banks such as HSBC, NatWest and TSB, and has contracts in 27 different countries.
Back in Ontario, Mr Hoefnagels says that while he was initially frustrated that more members of the public don’t comprehend the growing risk of scams, he now understands that people just don’t think it will happen to them.
“It’s led me to be more sympathetic to individuals, and [instead] to try to push companies and governments more.”