New Threat Intelligence Report Exposes the Impact of Generative AI on Remote Identity Verification

iProov’s report examines the remote identity verification threat landscape, providing first-hand insights into the anatomy of a digital injection attack and exposing bad actor methodologies, threat trends, and impacts.

iProov, the leading provider of science-based biometric identity solutions, today launched The iProov Threat Intelligence Report 2024: The Impact of Generative AI on Remote Identity Verification. The report was created using data and expert analysis from the iProov Security Operations Center (iSOC).

Digital ecosystems continue to grow and multiply at record levels as organizations and governments seek to provide remote access and services to meet consumer and workforce demand. An unintended side effect of this growth, however, is an ever-expanding attack surface that, coupled with the availability of easily accessible and criminally weaponized generative artificial intelligence (AI) tools, has increased the need for highly secure remote identity verification. The new threat report from iProov reveals how bad actors are using advanced AI tools, such as convincing face swaps, in tandem with emulators and other metadata manipulation methodologies (traditional cyber attack tools) to create new and largely unmapped threat vectors.

Face swaps are created using generative AI tools and present a huge challenge to identity verification systems due to their ability to manipulate key traits of an image or video. A face swap can easily be generated with off-the-shelf video face-swapping software and is deployed by feeding the manipulated or synthetic output to a virtual camera. Unlike the human eye, advanced biometric systems can be made resilient to this type of attack.

However, in 2023, malicious actors exploited a loophole in some systems by using cyber tools, such as emulators, to conceal the existence of virtual cameras, making it harder for biometric solution providers to detect. This created the perfect storm with attackers making face swaps and emulators their preferred tools to perpetrate identity fraud.

“Generative AI has provided a huge boost to threat actors’ productivity levels: these tools are relatively low cost, easily accessed, and can be used to create highly convincing synthesized media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions. This only serves to heighten the need for highly secure remote identity verification,” says Andrew Newell, Chief Scientific Officer, iProov.

“While the data in our report highlights that face swaps are currently the deepfake of choice for threat actors, we don’t know what’s next. The only way to stay one step ahead is to constantly monitor and identify their attacks, the attack frequency, who they’re targeting, the methods they’re using, and form a set of hypotheses as to what motivates them.”


The Evolution of Digital Injection Attacks

The use of emulators and metadata spoofing by threat actors to launch digital injection attacks across different platforms was first observed by the iSOC in 2022, and it continued to dominate in 2023, growing by 353% from H1 to H2 2023. An emulator is a software tool used to mimic a user's device, such as a mobile phone. These attacks are rapidly evolving and pose significant new threats to mobile platforms: injection attacks against mobile web surged by 255% from H1 to H2 2023.

Advances in Collaboration and Sophistication

Across 2022 and 2023, indiscriminate attack levels ranged from 50,000 to 100,000 times per month. There was also a considerable increase in the number of actors and an improvement in the sophistication of the tools used.

Significant growth was also observed in the number of groups exchanging information on attacks against biometric and remote human identification, or "video identification," systems, evidencing the collaborative approach now being adopted by threat actors. Of the groups identified by iProov's analysts, almost half (47%) were created in 2023.

New Trends for 2023

There are two primary attack types observed by the iSOC: presentation attacks and digital injection attacks. Among the new trends discovered for 2023 are:

  • A significant increase in the deployment of packaged AI imagery tools, which make it far easier and quicker to launch an attack; this trend is only expected to advance.
  • A 672% increase from H1 2023 to H2 2023 in the use of deepfake media, such as face swaps, deployed alongside metadata spoofing tools. Presentation and digital injection attacks may have different levels of impact, but they can pose a significant threat when combined with traditional cyber attack tools like metadata manipulation.

The report also includes a new section outlining case studies on prolific threat actor personas, whose identities have been anonymized. These case studies evaluate the sophistication of each actor's methodologies, efforts, and attack frequency. This analysis provides invaluable intelligence and supports iProov in continually improving its biometric platform's security, helping organizations minimize the risk of exploitation in both present and future remote identity verification transactions.

The iProov Threat Intelligence Report 2024 is informed by data from the iProov Security Operations Center (iSOC) and expert analysis.
