Fake it till you make it? The deepfake job candidates flooding the market


The face of recruitment is changing (literally, at times). Not only are we contending with a flood of AI in job applications, position descriptions, resumes and cover letters, but now there's a new challenge: the rise of deepfake job candidates. With advances in, and the democratisation of, artificial intelligence (AI), it's now quite easy for individuals to fabricate an entire persona. We're talking about AI-generated resumes, LinkedIn profiles, and even convincingly altered video interviews.
This emerging threat not only undermines the integrity of hiring processes but also poses significant cybersecurity risks to organisations.

What’s happening?

According to Gartner research, this increase in AI-generated profiles means that by 2028, one in four job candidates globally will be fake. The recent case that sparked this conversation involved US tech company Pindrop Security, where an applicant calling himself 'Ivan' was interviewed for a senior engineering role. The recruiter noticed his facial expressions were slightly out of sync with the audio, and it turned out that 'Ivan' was using deepfake software in a bid to be hired remotely.

Why are they doing it?

Apparently, individuals are using the technology for nefarious purposes such as executing cyber attacks, stealing customer data, gaining access to install malware, trading insider information or stealing funds. Christine Aldrich, Chief People Officer at Pindrop, said, "deepfake employees could lock critical files and demand ransom payments, resulting in millions in losses from system downtime, recovery efforts, and legal fees."

How are deepfakes being created?

A recent report by Palo Alto Networks showed just how easy it can be to create a fake job applicant. “It can take as little as 70 minutes for someone with no prior experience in image manipulation to craft a deepfake candidate capable of passing a video interview.”

According to KPMG, the proliferation of generative AI tools has made it possible for almost anyone to create convincing deepfakes using just a few seconds of recorded audio. This accessibility has led to the emergence of "deepfake-as-a-service" offerings on the dark web, further facilitating fraudulent activities.

So, what can be done about it?

Recruiters and internal hiring teams are now grappling with the challenge of distinguishing genuine candidates from sophisticated fakes. But there are ways to help identify deepfakes, or bad actors, in the recruitment process. Firstly, verify candidates' identities as thoroughly as possible from the get-go: researching their backgrounds, checking their work histories and vetting references is a good start. There are also automated forensic tools that check for indicators of tampering and for consistency of information across a candidate's documents, as well as ID verification software.

The most obvious way to verify a candidate is to meet in person where possible, but if you're unable to meet your candidate face to face, there are techniques for detecting deepfakes over video. These include asking applicants to perform specific actions, like moving a hand across their face, watching for unnatural eye movement patterns, and looking for audiovisual or lighting irregularities that can reveal deepfake manipulation. From an organisational standpoint, training internal teams to recognise the signs of deepfake technology and fostering a culture of cybersecurity awareness is crucial.

Whether you’re looking to land your next cybersecurity role or make your move up the career ladder, the Decipher Bureau team is here to help. With offices in Brisbane, Sydney, Melbourne, and Canberra, and a skilled team with global reach, we’re here to support your next career move. Contact us for a confidential chat with one of our expert consultants, and let’s work together to find your perfect role.