
What does it mean to have an identity when parts of it can be sold, forged, or constructed from scratch?
Identity fraud now extends far beyond data breaches and cloned credit cards. It includes scams that harvest personal information through fake job ads, fabricated profiles that meet onboarding checks, and AI-generated impersonation capable of fooling biometric security. These techniques feel lifted from science fiction, yet they are very much part of reality: regularly deployed, measurably effective.
Drawing on the latest industry reports, Cifas data, and real-world case studies, this article explores how identity fraud has evolved, how synthetic identities and AI-driven impersonation amplify risk, and why detection must shift from point-in-time checks to continuous scrutiny.
Recent data from Cifas reveals more than 118,000 identity fraud reports between January and June 2025 - over half of all fraud filings in that period. The rise in identity fraud has been accompanied by growing use of synthetic identities, many of which are now generated or supported by AI tools capable of mimicking legitimate customer behaviour and documentation.
Not all identity fraud relies on fabrication. Alongside the growth of synthetic profiles, there is an emerging trend of individuals selling access to their own identity data, sometimes willingly and sometimes under financial pressure or coercion. This includes national insurance numbers, passport scans, utility bills, and bank credentials, often handed over via encrypted messaging apps or anonymous online marketplaces in exchange for a quick payment. In some cases, people are recruited through social media or messaging platforms and led to believe they are participating in a legitimate scheme, ‘lending’ their identity to someone who cannot access credit or banking services themselves. Unfortunately, this is rarely where it ends. Once that information is in circulation, it is used to open accounts, apply for loans, or serve as the foundation for synthetic profiles. The individual remains legally and financially liable, even when the resulting fraud is orchestrated by someone else.
This isn’t limited to isolated incidents. Fraud prevention bodies report increasing volumes of these cases, particularly among young adults and those experiencing financial instability. Some individuals may feel they have little to lose; others are unaware of the long-term consequences. Yet the repercussions can be serious, ranging from damaged credit scores and frozen bank accounts to police investigation and exclusion from mainstream financial services. The normalisation of identity commodification risks introducing a new category of fraud: one that blurs the line between victim and enabler.
This kind of identity commodification has a counterpart in another emerging typology: the use of fake job adverts to harvest personal data. In contrast to individuals who knowingly hand over their details, victims of these scams believe they are engaging with legitimate employers.
The fraud is designed to look routine. Adverts are posted on mainstream platforms. Interviews are conducted. Forms are exchanged. Contracts are issued. And then nothing. The job never materialises, but the applicant’s documents - passports, addresses, national insurance numbers - are repurposed to build synthetic identities capable of bypassing standard onboarding and verification checks.
Conversely, businesses are increasingly being targeted by fraudulent job applicants. In one recent case from the United States, a woman was found to have coordinated a remote-work scheme that helped North Korean IT contractors gain access to Fortune 500 companies using stolen identities. The operation involved banks of laptops, falsified CVs, and staged video interviews. Tens of millions of dollars were channelled through the scheme before it collapsed. Even experienced hiring teams were misled by the level of preparation behind the fake candidates.

Synthetic identity fraud remains one of the most complex and poorly understood threats in the financial crime landscape. These profiles are not stolen in the traditional sense. They are constructed: assembled from a blend of real information, such as stolen national insurance numbers, and fictitious information, such as fake names and addresses or fabricated employment histories. The result is a plausible identity that does not belong to any one individual, but can still pass as genuine without the proper anti-fraud processes in place.
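Because a synthetic identity typically anchors several fabricated personas to one real data point, a recurring detection technique is link analysis across applications: looking for attributes that recur under different names. Below is a minimal sketch of the idea in Python; the application records, field names and attribute list are hypothetical stand-ins, and a production system would work over far richer graph data.

```python
from collections import defaultdict

# Each application is a dict of identity attributes. The records and field
# names here are hypothetical stand-ins for a real onboarding feed.
applications = [
    {"id": "A1", "name": "J. Smith", "ni_number": "QQ123456C", "phone": "07700900001"},
    {"id": "A2", "name": "K. Jones", "ni_number": "QQ123456C", "phone": "07700900002"},
    {"id": "A3", "name": "L. Brown", "ni_number": "QQ999999A", "phone": "07700900001"},
]

# Index every application by each attribute value, so applications that
# share an NI number or phone end up grouped together.
index = defaultdict(set)
for app in applications:
    for field in ("ni_number", "phone"):
        index[(field, app[field])].add(app["id"])

# An attribute reused under different names is a classic synthetic-identity
# signal: one real data point (e.g. a stolen NI number) anchoring several
# otherwise unrelated personas.
for (field, value), ids in sorted(index.items()):
    if len(ids) > 1:
        print(f"Shared {field} {value!r} links applications {sorted(ids)}")
```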
What makes synthetic identities especially difficult to detect is their capacity to behave convincingly over time. Unlike traditional identity theft, which is often opportunistic and short-lived, these profiles are maintained deliberately. Fraudsters open low-risk accounts, simulate normal activity and gradually build credit histories. When the profile has acquired enough credibility, it is used to access loans, credit products, or other services before being abandoned in what is often described as a “bust-out”.
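The “bust-out” shape described above lends itself to a simple heuristic, sketched below: a long, quiet credit-building phase followed by a sudden draw on everything available. The window sizes and thresholds are illustrative assumptions, not tuned values; a real system would learn these patterns from labelled data.

```python
def bust_out_flag(utilisation_by_month: list[float]) -> bool:
    """Flag the classic bust-out shape: months of low, tidy credit usage
    followed by a sudden near-total drawdown. Thresholds are illustrative."""
    if len(utilisation_by_month) < 6:
        return False  # too little tenure for the pattern to show
    early, recent = utilisation_by_month[:-2], utilisation_by_month[-2:]
    quiet_build = all(u < 0.2 for u in early)   # long, unremarkable history
    sudden_draw = any(u > 0.9 for u in recent)  # then the line is maxed out
    return quiet_build and sudden_draw

# Example: fifteen quiet months, then the credit line is drained and abandoned.
print(bust_out_flag([0.05] * 15 + [0.95, 0.98]))  # True
```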
Quantifying the scale of synthetic identity fraud is inherently difficult: because these identities do not correspond to a real person, they are frequently excluded from victim-based reporting and rarely surface in conventional metrics. Nonetheless, evidence suggests they are contributing significantly to wider identity-related crime. In 2024, identity fraud accounted for 59% of all reports to Cifas’s National Fraud Database (almost 250,000 incidents). Facility takeover rose by 76%, and unauthorised SIM-swap cases increased by more than 1,000%. By mid-2025, over 217,000 further cases had been filed to the National Fraud Database, more than half of them identity fraud. While not all of these can be traced to synthetic profiles, the growth in typologies that depend on long-term infiltration and misdirection is consistent with the way synthetic identities operate, particularly in sectors such as telecoms, public services and insurance.
Despite these developments, institutional responses have been slow to adapt. In many organisations, identity fraud continues to be treated as a problem to be solved at the point of entry. Onboarding procedures remain the dominant line of defence, focusing on document checks, biometric scans and one-off identity verification. These controls may be effective in screening out obvious attempts at impersonation, but they were never designed to detect synthetic identities that evolve gradually and mirror the behavioural patterns of legitimate customers. Without continuous monitoring across transactions, usage behaviour and digital touchpoints, there is little to prevent these identities from persisting undetected for months or even years.
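As a rough illustration of what continuous scrutiny means in practice, the sketch below scores every post-onboarding event against a customer’s own profile, rather than verifying identity once at account opening. The event fields, rules and weights are all hypothetical; the point is the shape of the pipeline, not the specific rules.

```python
from typing import Callable, Iterable

Event = dict  # a post-onboarding interaction; the keys below are hypothetical

def new_device(event: Event, profile: dict) -> float:
    """Activity from a device the customer has never used before adds risk."""
    return 0.4 if event.get("device_id") not in profile["known_devices"] else 0.0

def spend_spike(event: Event, profile: dict) -> float:
    """Transactions far above the customer's own baseline add risk."""
    baseline = max(profile.get("avg_transaction", 0.0), 1.0)
    return 0.6 if event.get("amount", 0.0) > 10 * baseline else 0.0

RULES: list[Callable[[Event, dict], float]] = [new_device, spend_spike]

def monitor(events: Iterable[Event], profile: dict, threshold: float = 0.7):
    """Re-assess identity on every interaction, not once at onboarding."""
    for event in events:
        score = sum(rule(event, profile) for rule in RULES)
        if score >= threshold:
            yield event, score  # escalate for human or model review

profile = {"known_devices": {"dev-1"}, "avg_transaction": 40.0}
events = [{"device_id": "dev-9", "amount": 2500.0}]
for event, score in monitor(events, profile):
    print(f"Escalate: {event} (risk {score:.1f})")
```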

Advances in generative AI and synthetic media have created new opportunities for identity fraud, making once-sophisticated methods accessible to a much wider pool of actors. Voice cloning, facial synthesis and document fabrication are no longer the preserve of advanced threat actors. They are now achievable with minimal technical skill and little financial outlay.
Recent data suggests that AI-enabled impersonation is already reshaping the fraud landscape. In 2025, reported impersonation scams rose by 148%, with analysts estimating that over 80% of current fraud attempts now involve some form of AI-generated content. This includes synthetic voices, altered facial imagery and forged video avatars used to bypass biometric and visual verification systems.
To demonstrate how easily these tools can be weaponised, technology journalist Amanda Hoover used a low-cost voice synthesiser to replicate her own speech and successfully navigate her bank’s authentication process, exposing the vulnerability of systems that rely on voice verification alone. The same techniques are now being applied at a far more sophisticated level. In a now infamous case from 2024, a finance employee at a multinational company in Hong Kong was deceived into transferring over 200 million Hong Kong dollars (approximately 25.6 million US dollars) after attending a video call with what appeared to be the company’s chief financial officer and other senior colleagues. In reality, the meeting participants were AI-generated deepfakes, constructed using publicly available images, voice recordings and visual data.
The attackers had previously targeted the employee with a phishing email and lured them into the call under the pretext of a confidential transaction. The quality of the deepfakes was convincing enough to pass unnoticed throughout the meeting. By the time the fraud was uncovered, the funds had already been dispersed across multiple international accounts.
These methods are also being deployed in job recruitment scams. Small businesses have reported a rise in job candidates who appear credible on paper and during initial interviews, but who turn out to be AI-generated personas with fraudsters operating behind the scenes. In one instance, a recruiter asked a candidate to switch to typing responses mid-interview after the voice feed began to break down - a subtle but revealing sign of automation.
These cases demonstrate just how easily AI-generated profiles can meet existing verification standards. Most systems continue to rely on surface-level checks: matching a face to a photo, a voice to a sample, or a CV to expected formatting. When synthetic content is built to satisfy these requirements, there is little to prevent it from passing as legitimate.
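One commonly discussed mitigation is to stop letting any single biometric signal carry the decision. The sketch below combines several independent verification signals and requires each to clear a minimum floor, so that one convincingly spoofed channel (such as a cloned voice) is not enough on its own. The signal names, weights and thresholds are illustrative assumptions, not any particular vendor’s scoring model.

```python
def verify(signals: dict[str, float], weights: dict[str, float],
           floor: float = 0.5) -> bool:
    """Combine independent verification signals (face match, voice match,
    device history, behavioural consistency). Requiring both a weighted
    aggregate AND a per-signal floor means a single spoofed channel
    cannot carry the decision on its own. All values are illustrative."""
    aggregate = sum(weights[name] * score for name, score in signals.items())
    return aggregate >= 0.8 and all(score >= floor for score in signals.values())

# Example: a deepfake that aces the face and voice checks but has no
# device history still fails the per-signal floor.
print(verify(
    {"face_match": 0.98, "voice_match": 0.95, "device_history": 0.1},
    {"face_match": 0.4, "voice_match": 0.3, "device_history": 0.3},
))  # False
```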
Identity fraud continues to evolve, but so too do the tools designed to counter it. A growing number of projects and systems are pushing detection beyond static checks into behavioural, contextual and real-time defence. One notable example is dAIsy, developed by O2 in the UK. This voicebot, programmed as a chatty elderly woman, engages scammers on the phone, keeping them occupied for as long as 40 minutes and diverting them from real victims. Its conversations have also provided insight into scammer tactics used across call centres.
Stay on top of the ever-changing financial crime landscape by accessing the latest information on emerging criminal techniques and the risks associated with carrying out business with particular industries or in particular jurisdictions.