An Indian comedy, Dragon, released this year, has a scene in which two men sit in a room for an online job interview. One is the job candidate, dressed in a suit and tie, but the other does all the talking on his behalf. The lip syncing is hilariously off.
The interviewer, suspecting something is amiss, asks the “candidate” to scan the room with his laptop camera. Satisfied that everything is on the up and up, he says, “There are so many fake interviews these days.”
If only art didn’t imitate life.
Sham job interviews are on the rise and growing ever more daring and crafty. Artificial intelligence, and the ease with which it allows people to pass themselves off as others, is partly to blame. It isn’t enough for employers to throw their hands up and conduct interviews solely in person. That won’t fix the problem.
Who can forget the memorable scene from Good Will Hunting in which Ben Affleck’s character pretends to be Matt Damon’s character in a job interview?
No, organizations need to match or exceed the sophistication of the AI tools used by fraudsters.
Fighting fire with fire
AI agents have come on the market to help organizations address the surge in hiring fraud, especially in industries that deal with highly sensitive information. Agents aren’t just scanning resumes for embellishments. They go well beyond that by alerting the recruiter or hiring manager to answers that seem a little too perfect.
Here’s how they work. As a remote interview takes place, AI tools analyze the candidate’s answers and alert the interviewer in real time if ChatGPT or some other technology is likely being used. The tools then recommend follow-up questions the interviewer can ask to draw out more authentic answers.
For example, in response to “Tell me about a complex project you worked on that involved multiple teams in a highly matrixed organization,” a suggested follow-up might be: “How many teams in total were involved, which stakeholders did you work with directly, and what was your specific contribution to the project?”
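The kind of real-time screening described above can be sketched with toy heuristics. Everything here (the signals, the thresholds, and the canned follow-up question) is an illustrative assumption for the sketch, not any vendor’s actual detection logic; real products use trained models rather than word counts.

```python
# Toy sketch of real-time answer screening during a remote interview.
# The signals and thresholds are illustrative assumptions only.

FIRST_PERSON = {"i", "me", "my", "we", "our"}

def screen_answer(answer: str) -> dict:
    """Flag answers that look generic or depersonalized and, if flagged,
    suggest a probing follow-up question for the interviewer."""
    words = answer.lower().split()
    personal = sum(1 for w in words if w.strip(".,;!?") in FIRST_PERSON)
    signals = {
        # A polished, essay-length reply to a conversational question is one cue.
        "very_long": len(words) > 150,
        # Almost no first-person references suggests a generic, non-lived answer.
        "impersonal": personal / max(len(words), 1) < 0.02,
    }
    flagged = any(signals.values())
    follow_up = (
        "Which stakeholders did you work with directly, and what was "
        "your specific contribution?" if flagged else None
    )
    return {"flagged": flagged, "signals": signals, "follow_up": follow_up}
```

In practice the point is the workflow, not the heuristic: the tool scores the answer as it is given and hands the interviewer a concrete follow-up question on the spot.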
For recorded interviews, fraud-fighting technologies highlight the moments where candidates directly addressed job requirements. Recruiters can also compare video and audio across rounds to verify that the person in later interviews is the same one who was initially interviewed.
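The cross-round identity check can be illustrated as a similarity comparison between face or voice embeddings captured in the first interview and in a later one. The embedding vectors and the 0.85 threshold below are assumptions for illustration; production systems rely on dedicated face- and speaker-recognition models to produce the embeddings.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(first_round: list[float], later_round: list[float],
                threshold: float = 0.85) -> bool:
    """Treat the two recordings as the same person if their embeddings
    are similar enough. The threshold value is an assumption."""
    return cosine_similarity(first_round, later_round) >= threshold
```

A mismatch across rounds does not prove fraud on its own (lighting, microphones, and compression all add noise), which is why these scores are surfaced to a recruiter rather than used to auto-reject.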
Counter-punching requires an end-to-end approach. Use tools early in the hiring process, before the actual interview stage, to verify that the person has a skill footprint consistent with the resume and application. Think of it as a seamless, continuous analysis of a candidate’s information. A secure, holistic hiring process should become second nature, much like two-factor authentication when signing in to a bank account.
Next-level fraud is coming
Organizations that once thought they were immune from hiring chicanery need to prepare for the inevitable. Fraud will only get more sophisticated. As it stands today, nearly 62% of hiring managers admit that candidates are now better at faking their identities with AI than hiring teams are at detecting them, according to a survey conducted by a background check company.
In the near future, candidates could show up as avatars: digital representations of real people. A recruiter recently posted about an odd interaction with an “applicant” that offers a taste of what is to come. Alarm bells went off when the recruiter noticed that the subject’s eye and mouth movements were out of sync as he talked. Something clearly wasn’t right.
This incident is a warning sign. AI could also generate fake LinkedIn profiles and create an entire portfolio of work. Someone claiming to be an engineer could show “proof” of sophisticated code-writing skills, or a finance executive could generate realistic-looking profit and loss statements.
The possibilities are endless. We truly don’t know what we don’t know.
When fraud hits home
I’ve interviewed and hired hundreds of people in my career. Red flags for me include an inability to speak specifically to accomplishments and work history. Even simple things, like using “we” instead of “I,” make me pause. Admittedly, I have hired people who could not demonstrate in practice a skill set they claimed on the resume.
We all make mistakes. It happens.
That’s why it’s imperative to slow down, gather as much information as possible, and use tools that present that data so you can make a smart decision.
But don’t go too far. Don’t let fraud protection become so heavy-handed that it hurts the candidate experience. Know the role being filled and act accordingly. Hiring a scientist is different from hiring a frontline hotel worker; one requires more touch points than the other, but both deserve a rich candidate experience.
Be smart and careful
In the movie alluded to earlier, the imposter candidate is elated after receiving a job offer. His little scheme worked. But a friend warns him: “If you don’t perform well, they’ll throw you out. Learn your job well.”
Wise advice for anyone trying to game the system. Eventually most scammers will get caught. Organizations can lessen the incidence of imposters falling through the hiring cracks by:
- Employing a holistic process with AI agents as a warning system.
- Striking a balance. It’s good to be skeptical, but be practical too. Don’t go overboard such that it negatively affects the candidate experience.
As the old saying goes, don’t ride in on horses when going up against tanks. Prepare for what’s to come with the proper tools.