
Deepfakes: More Than Skin Deep Security

According to the latest report from iProov, the rapid advancement of generative AI tools and their availability to bad actors, most notably for creating deepfakes, have created an urgent, rising threat to governments and security-conscious organisations worldwide.

The report’s statistics are stark and act as a wake-up call for all businesses: deepfake face swap attacks increased by 704% from the first half of 2023 to the second, and almost half (47%) of the deepfake-focused exchange groups that iProov’s analysts identified on dark web forums were created in 2023.

“Generative AI has provided a huge boost to threat actors’ productivity levels: these tools are relatively low cost, easily accessed, and can be used to create highly convincing synthesized media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions. This only serves to heighten the need for highly secure remote identity verification,” says Andrew Newell, Chief Scientific Officer, iProov.

According to Gartner research, presentation attacks remain the most prevalent attack vector, while injection attacks saw a 200% increase in 2023. Mitigating such attacks will require a blend of presentation attack detection (PAD), injection attack detection (IAD), and image inspection.
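
To illustrate how such a layered defence might fit together, the sketch below fuses scores from three hypothetical checks, PAD, IAD, and image inspection, into a single verdict. The detector scores, weights, and threshold are illustrative assumptions rather than any vendor’s actual API; treating any single very low score as a hard fail reflects the idea that a confident attack signal from one layer should not be averaged away.

```python
# Illustrative only: fuses scores from three hypothetical detectors into one
# verdict. The score sources, weights, and thresholds are assumptions for
# demonstration, not a real vendor API.
from dataclasses import dataclass

@dataclass
class VerificationScores:
    pad: float                # presentation attack detection, 0 = attack, 1 = genuine
    iad: float                # injection attack detection
    image_inspection: float   # artefact/forensics score from image inspection

def verdict(scores: VerificationScores,
            weights=(0.4, 0.4, 0.2),
            threshold: float = 0.7) -> str:
    # A confident attack signal from any single layer is a hard fail,
    # so it cannot be diluted by the weighted average below.
    if min(scores.pad, scores.iad, scores.image_inspection) < 0.2:
        return "reject"
    combined = (weights[0] * scores.pad
                + weights[1] * scores.iad
                + weights[2] * scores.image_inspection)
    return "accept" if combined >= threshold else "step-up review"

print(verdict(VerificationScores(pad=0.9, iad=0.85, image_inspection=0.8)))  # accept
print(verdict(VerificationScores(pad=0.9, iad=0.1, image_inspection=0.9)))   # reject
```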

“In the past decade, several inflection points in fields of AI have occurred that allow for the creation of synthetic images. These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,” said Akif Khan, VP Analyst at Gartner. “As a result, organisations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake.”

And deepfakes are just one part of an expanding threat landscape, as CybelAngel reveals in its new report. “Enterprise cybersecurity leaders and decision-makers have been successful in securing their own security perimeter, but critical infrastructure and other modernizing industries have fallen short. This is a major concern in itself,” said Erwan Keraudy, CEO and co-founder of CybelAngel.

“With the majority of detected risks originating from external assets and actors, the threats these industries face today are ultimately the same. This highlights an immediate need for a security mindset overhaul – passive and reactive security measures are no longer enough in today’s security landscape. Cybersecurity teams must take a proactive and comprehensive stance on looking for early indicators of risk, which requires full visibility into the external attack surface (EASM), including known assets, shadow assets, partner, vendor, and supplier assets, and more.”

Defending your business

Speaking to Silicon UK, Jose Luis Riveros, security researcher and deepfake expert at Trustwave, explained how the rising deepfake threat can be defended against: “There are several models being used for the detection of deepfakes, including deep learning, which covers techniques related to Convolutional Neural Networks.

Jose Luis Riveros, security researcher and deepfake expert at Trustwave.

“Another example is the machine learning model, which uses techniques like Generative Adversarial Networks (GANs), Decision Tree, Random Forest, and Multiple Instance Learning, while statistical models use techniques like Total Variation Distance, among others. Thankfully, the industry has taken the challenge of increased deepfake threats and turned it into an opportunity to work on technologies for detection.”
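
To make the convolutional approach Riveros describes concrete, here is a minimal, untrained PyTorch skeleton of a binary real-versus-fake face classifier. The architecture, layer sizes, and input shape are illustrative assumptions; production detectors are far deeper and are trained on large labelled deepfake datasets.

```python
# Minimal illustrative CNN for real-vs-fake face classification (PyTorch).
# Architecture and sizes are assumptions for demonstration only.
import torch
import torch.nn as nn

class DeepfakeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse spatial dims to one vector
        )
        self.classifier = nn.Linear(64, 1)  # single logit: >0 leans "fake"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

model = DeepfakeCNN()
batch = torch.randn(4, 3, 224, 224)   # four RGB face crops
probs = torch.sigmoid(model(batch))   # untrained, so outputs are arbitrary
print(probs.shape)                    # torch.Size([4, 1])
```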

Once the strategy is outlined and the minimum requirements are established, CISOs and leaders in risk management must incorporate additional risk factors and recognition indicators, such as device identification and behavioural analytics, to enhance the likelihood of identifying attacks on their identity verification processes.

As a priority, security and risk management leaders accountable for identity and access management should take measures to mitigate the dangers posed by AI-driven deepfake attacks by selecting technology capable of verifying genuine human presence and by integrating supplementary measures to thwart account takeover.

Regulations like the EU AI Act are designed to tackle cybersecurity in the age of AI. “Regulators are playing checkers while fraudsters play chess. With an unprecedented number of elections on the horizon throughout 2024, the EU AI Act is promising; however, it still raises concerns,” says Natália Fritzen, AI Policy and Compliance Specialist at Sumsub. “While we commend its aim to regulate AI risks, doubts loom over its ability to effectively tackle increasingly popular AI-powered deepfakes. Worryingly, recent data reveals that deepfakes grew 10x in 2023 from the previous year. As the threat continues, we are not convinced upcoming measures will sufficiently safeguard businesses and the wider public, as well as electoral integrity.

Natália Fritzen, AI Policy and Compliance Specialist at Sumsub.

“It is crucial that policymakers address concerns and supplement the Act with stringent, proactive measures. Whilst watermarks have been the most recommended remedy against deepfakes, there are several concerns raised around their effectiveness, such as technical implementation, accuracy and robustness. For these to be effective, the European Commission must set certain standardisation requirements for watermarks, as we saw from President Biden’s Executive Order 14110.

“The Act’s emphasis on transparency from providers and deployers of AI systems is a step in the right direction. However, as shown by Margot Robbie in Barbie, lawmakers must acknowledge real-world atrocities and step away from a seemingly utopian regulatory landscape – where an EU AI Act is simply ‘Kenough.’ While deepfake regulation is an evolving field, policymakers and governments must collaborate closely with private technology businesses, acknowledging their frontline role in combating AI-related illicit activities, to establish a robust regulatory framework.”

A fake future?

In today’s digital age, the rise of deepfake technology poses a significant challenge to our understanding of truth and authenticity. As Thea Mannix, Director of Research at Praxis Security Labs, aptly points out, the current methods of detecting deepfakes through deep learning models and biometric analysis may only be effective in the short term. As deepfake technology continues to advance, so too must our strategies for combating its proliferation.

“Deep learning detection models are promising for the immediate future; however, I fear they will soon lose efficacy as the fakes themselves improve, and similarly with biometric analysis (the complex comparison of micro-expressions and manner of speaking between a real person and their generated image). For the long term, I believe cross-modal media verification (verifying the consistency of information across different types of media) and advanced digital watermarking to verify the source of information are better able to withstand advances in deepfake technology.”

However, the implementation of these strategies presents its own set of challenges. Cross-modal media verification requires sophisticated algorithms capable of analysing and comparing information across disparate formats. Likewise, the integration of robust digital watermarking techniques demands careful consideration of privacy concerns and data integrity.
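
Of the two approaches, digital watermarking is the easier to illustrate at toy scale. The sketch below hides a short source tag in the least significant bits of an image array and reads it back, assuming an uncompressed image and a known tag length. It is a fragile scheme chosen purely for illustration; a real provenance watermark must survive compression and editing, which this one does not.

```python
# Toy fragile watermark: hides a source tag in the least significant bits of
# an image array. Illustrative only -- real provenance watermarks survive
# compression and editing, which this scheme does not.
import numpy as np

def embed(image: np.ndarray, tag: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = image.flatten()                      # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("tag too long for this image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite lowest bit
    return flat.reshape(image.shape)

def extract(image: np.ndarray, tag_len: int) -> bytes:
    bits = image.flatten()[:tag_len * 8] & 1
    return np.packbits(bits).tobytes()

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed(img, b"newsroom-cam-01")         # hypothetical source tag
print(extract(marked, len(b"newsroom-cam-01")))  # b'newsroom-cam-01'
```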

The battle against deepfakes extends beyond technological solutions alone. It requires a concerted effort from policymakers, tech companies, and civil society to address the root causes of misinformation and promote media literacy. By fostering a culture of critical thinking and digital hygiene, we can empower individuals to discern fact from fiction in an increasingly complex media landscape.

From a regulatory point of view, Rory Lynch, legal director, Gateley Legal, offers this advice: “The first piece of advice I would give is to be as proactive as possible. This involves, firstly, understanding deepfake technology as best as you can and assessing the risks that it could pose to your business. For example, does your organisation have security procedures to confirm an individual’s identity when speaking on the phone? And is there any content in the public domain that could be doctored and used to harm your business’s reputation?

Rory Lynch, legal director, Gateley Legal.

Lynch continued: “Any content you place online could be used to ‘train’ AI software, so now would be a good time to make sure that your branding, messages, and voice are as clear and as consistent as possible. This will help your customers to better identify when content is legitimate, and when it is faked. Some brands are also exploring the use of blockchain technology to digitally ‘watermark’ their content, although this is still in its infancy and should not be relied upon.”
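
The blockchain-style watermarking Lynch mentions typically amounts to anchoring a content fingerprint somewhere tamper-evident. As a minimal sketch of the idea, assuming a simple in-process append-only hash chain rather than a real public ledger, published assets are hashed into a registry so later copies can be checked against the recorded digest.

```python
# Illustrative content registry: an append-only hash chain of published
# assets. A real deployment would anchor these digests on a public ledger;
# this sketch just shows the tamper-evident structure of the idea.
import hashlib
import json
import time

class ContentRegistry:
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "digest": "genesis", "ts": 0}]

    def register(self, asset: bytes) -> str:
        digest = hashlib.sha256(asset).hexdigest()
        # Each entry commits to the previous one, so tampering is detectable.
        prev = hashlib.sha256(
            json.dumps(self.chain[-1], sort_keys=True).encode()
        ).hexdigest()
        self.chain.append({"prev": prev, "digest": digest, "ts": time.time()})
        return digest

    def is_registered(self, asset: bytes) -> bool:
        digest = hashlib.sha256(asset).hexdigest()
        return any(entry["digest"] == digest for entry in self.chain)

registry = ContentRegistry()
registry.register(b"official brand video v1")
print(registry.is_registered(b"official brand video v1"))  # True
print(registry.is_registered(b"doctored copy"))            # False
```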

Clearly, all businesses must take the threat of deepfakes seriously and build it into the defences they create. Audra Streetman, Security Strategist at Splunk SURGe, outlined how the threat is evolving: “Last year, much of the public discussion revolved around the capabilities of tools such as ChatGPT, which amazed the world with its ability to generate coherent, human-like patterns of speech and analysis. However, this year, the public spotlight has shifted to the proliferation of deepfakes. Cybercriminals are taking advantage of this, utilising deepfake audio and video to perpetrate scams. From individual fraud schemes, such as persuading a ‘customer’ or ‘an employee’ to transfer funds, to large-scale attacks, such as manipulating President Joe Biden’s ‘voice’ to discourage voting ahead of the U.S. primary election, the threat landscape is evolving quickly.

Audra Streetman, Security Strategist at Splunk SURGe.

“With a year of elections ahead of us, the risks associated with deepfakes are poised to escalate. Encouragingly, there’s a movement toward stricter legislation, as seen with the UK’s Online Safety Act criminalising the dissemination of deepfake pornography, and the voluntary EU Code of Practice on Disinformation being signed by several social media platforms with the potential that they may be held liable for failing to take action to combat disinformation and deepfakes.

Streetman concluded: “But this is just the start. While some platforms have improved content moderation and implemented watermarking of deepfake images, greater emphasis must be placed on educating both employees and the broader public about the associated risks. Recognising and dealing with deepfake scams should be a standard part of companies’ annual security training, and platforms have a role to play in keeping people informed and empowering them to spot content that isn’t credible. There is no complete solution or regulatory framework yet, but a multifaceted approach offers promise in combating the deepfake trend.”

“While it is impossible for CISOs to prevent falling victim to a deepfake 100% of the time, there are a few measures they can take to reduce risk,” advises Chaim Mazal, Chief Security Officer at Gigamon. Here are his three tips:

Chaim Mazal, Chief Security Officer at Gigamon.
  1. Prioritise education: CISOs must educate their workforce on different deepfake methods. Although these are constantly evolving, there are plenty of repeated efforts that are easier to detect, such as texts and requests purporting to come from the organisation’s CEO or other C-suite executives. Increasingly realistic vocal deepfakes make it more important than ever that all employees are educated about the risks and encouraged to never trust without verifying.
  2. Remain on the offensive: CISOs should approach these threats the same way malicious actors do, staying on the offensive with AI technologies. Automated security can be a real asset, but businesses need to understand where the cracks may be in implementation to safeguard their organisation properly. Recurring threat intelligence audits, prioritising real-time visibility, will ensure their infrastructure remains free of threats.
  3. Respond quickly: It doesn’t have to be game over once a threat actor has accessed a network. CISOs need to have deep visibility into their networks to stop attackers before they can successfully exfiltrate sensitive data. Non-baseline traffic that could carry potentially nefarious activity is a strong indicator of an attack, especially as 93% of malware hides behind encryption. Today, with real-time visibility across all hybrid cloud infrastructure and North-South and East-West traffic, CISOs can respond quickly, mitigating the impact of a breach on employees, customers, public opinion, profitability, and more.
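
Mazal’s third tip, watching for non-baseline traffic, can be reduced to a toy statistical baseline for illustration: flag any host whose outbound volume sits several standard deviations above its history. The data, threshold, and single-feature model below are assumptions; real network detection and response tooling weighs far more signals than volume alone.

```python
# Toy "non-baseline traffic" check: flag hosts whose outbound byte counts
# sit far above their historical mean. Data and thresholds are illustrative
# assumptions, not a real NDR product.
import statistics

def flag_anomalies(history: dict[str, list[int]], current: dict[str, int],
                   z_threshold: float = 3.0) -> list[str]:
    flagged = []
    for host, observed in current.items():
        baseline = history.get(host, [])
        if len(baseline) < 2:
            continue  # not enough history to judge this host
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0  # avoid divide-by-zero
        if (observed - mean) / stdev > z_threshold:
            flagged.append(host)
    return flagged

history = {"10.0.0.5": [1200, 1350, 1100, 1280], "10.0.0.9": [900, 950, 870]}
current = {"10.0.0.5": 1300, "10.0.0.9": 48_000}  # spike: possible exfiltration
print(flag_anomalies(history, current))  # ['10.0.0.9']
```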

In today’s fast-paced digital landscape, where authenticity is constantly under threat from the insidious rise of deepfake technology, businesses face a formidable challenge in safeguarding their integrity and reputation. As explored throughout this article, the proliferation of deepfakes poses multifaceted risks, from reputational damage to financial loss and legal liabilities. Amidst the uncertainty and complexity, however, there are concrete steps businesses can take to defend themselves against this pervasive threat.

Traditional methods of detection, such as biometric analysis and deep learning models, may soon become outdated in the face of rapidly evolving deepfake technology. Therefore, businesses must stay abreast of the latest developments in deepfake detection and invest in cutting-edge solutions that can effectively identify and mitigate the risks posed by deepfakes.

The threat posed by deepfakes is real and pervasive, but it is not insurmountable. By prioritising education, investing in cybersecurity measures, adopting advanced detection technologies, fostering collaboration, and remaining vigilant, businesses can effectively defend themselves against the risks posed by deepfakes. In doing so, they can protect their integrity, reputation, and bottom line in an increasingly digital world.

Silicon Head-to-Head Interview

Philipp Pointner, Chief of Digital Identity at Jumio.

With over 20 years of industry experience, Philipp is a seasoned cybersecurity expert and currently spearheads Jumio’s strategic vision of enabling multiple digital identity providers in its ecosystem.

How would you define deepfake technology, and what distinguishes it from other forms of manipulation?

“Deepfake technology is powered by advanced artificial intelligence algorithms to create highly humanlike images, video, and audio. Deepfakes use neural networks to analyse and replicate facial expressions, lip movements, and even voice intonations. The sophistication and quality of today’s deepfakes are such that it’s almost impossible for even experts to identify them without their own AI assistance.”

What are some of the most common applications of deepfake technology today?

“Today, deepfakes are often used in marketing campaigns, as well as in education, bringing the past to life. However, cybercriminals also take advantage of the power of deepfakes to spread misinformation, perpetrate scams, and commit fraud. Today’s deepfakes are much more realistic compared with older forms of manipulation and can bypass traditional verification technologies reliant on facial recognition alone.”

What are the technical challenges in detecting and mitigating the spread of deepfakes?

“The last 18 months have seen rapid growth and innovation in deepfakes, and it’s never been cheaper or easier for a fraudster to deploy them. Widespread use reinforces a positive feedback loop where deepfake fidelity is continuously improving, forcing verification platforms to continuously update their detection models. It’s an ongoing cat-and-mouse game between fraudsters and the platforms that verify content or identities as genuine.”

How do you see the landscape of deepfake technology evolving in the future, and what implications might this have for society?

“The future of deepfakes could have profound implications for misinformation, privacy, and trust, challenging the integrity of news, politics, and security. However, effective collaboration between cybersecurity experts, government, and businesses can mitigate these risks. Alongside a vigilant and well-informed general population, cyber defence teams can stay ahead in the digital arms race and reduce the likelihood of this worst-case scenario.

“It’s important to note there are also plenty of positive applications of deepfakes in education, entertainment and elsewhere — to make the most of these, we need to continue developing an effective regulatory regime that enables cyber threat responders to combat malicious uses.”

Are there any emerging technologies or techniques that show promise in combating the spread of deepfakes?

“Deepfakes are often used as a component of AI attacks. Only AI itself can analyse and understand the complex patterns at the speed and scale required for effective detection — we must fight AI with AI to combat the deepfake spread.

“We can do this through AI-powered liveness detection, which uses millions of biometric markers to distinguish forgeries. While deepfakes are hard to detect with the naked eye, the vast datasets in AI-enabled detection models make them highly accurate.

“Banks are already using AI-powered liveness detection to ask for a user selfie, collecting data from the retina/iris coupled with eyebrows, eyelashes, and ears, which taken together tend to be very precise. Deepfakes struggle with generating images and video with these minute details, so firms using this technology can have increased confidence that only legitimate users can initiate transactions.

“Moreover, AI-enabled graph technology sifts through transaction data to flag patterns suggestive of complex AI-powered fraud, for example, several accounts opened with the same ID in disparate locations. Crucially, detection is predictive, generating actionable analytics to pre-empt fraud before it happens.”
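
The graph pattern Pointner describes, one identity document sitting behind several accounts opened in different places, boils down to simple link analysis at its core. The sketch below groups accounts by a document fingerprint and flags any document spanning multiple locations; the field names, data, and threshold are illustrative assumptions, not a production fraud model.

```python
# Illustrative link analysis: flag identity documents that back multiple
# accounts opened in disparate locations. Field names and threshold are
# assumptions for demonstration only.
from collections import defaultdict

accounts = [
    {"account": "A1", "doc_hash": "d41d8", "city": "London"},
    {"account": "A2", "doc_hash": "d41d8", "city": "Madrid"},
    {"account": "A3", "doc_hash": "d41d8", "city": "Warsaw"},
    {"account": "B1", "doc_hash": "98f13", "city": "Leeds"},
]

def suspicious_documents(records, max_locations: int = 1):
    by_doc = defaultdict(lambda: {"accounts": set(), "cities": set()})
    for rec in records:
        by_doc[rec["doc_hash"]]["accounts"].add(rec["account"])
        by_doc[rec["doc_hash"]]["cities"].add(rec["city"])
    # A document reused across more locations than expected is suspect.
    return {doc: info for doc, info in by_doc.items()
            if len(info["cities"]) > max_locations}

for doc, info in suspicious_documents(accounts).items():
    print(f"document {doc}: {len(info['accounts'])} accounts "
          f"across {sorted(info['cities'])}")
# document d41d8: 3 accounts across ['London', 'Madrid', 'Warsaw']
```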

What role should digital authentication and verification processes play in safeguarding businesses against deepfake attacks?

“Digital authentication and verification processes are vital in safeguarding businesses against deepfake attacks. Implementing robust digital identity verification measures, such as biometric checks and liveness detection, can prevent unauthorised access and ensure that actions are performed by legitimate users. These processes are essential for maintaining the integrity of financial transactions, protecting intellectual property, and securing communication channels.”

Are there any legal or regulatory considerations that businesses should be aware of when combating deepfakes?

“Businesses must navigate an evolving regulatory landscape when combating deepfakes. As laws like the EU’s General Data Protection Regulation (GDPR) continue to evolve and firms handle ever greater volumes of data, organisations must work closely with legal teams to identify the types of data they can process and use. While it’s important to have an ethical governance policy, enforcing it is an entirely different challenge.”

 
