Nearly every company, from tech giants like Amazon to small startups, has first-hand experience with fake IT workers applying for jobs – and sometimes even being hired.
Even so, using a deepfake video to apply for a security researcher role with a company that does threat modeling for AI systems seems incredibly brash.
“It’s one of the most common discussion points that pops up in the CISO groups I’m in,” Expel co-founder and CEO Jason Rebholz told The Register, talking about the North Korean-type job interview scam. “I did not think it was going to happen to me, but here we are.”
Before starting his own AI security shop, Rebholz worked as an incident responder and chief information security officer (CISO). He has researched deepfakes for years, and even used them in his presentations – so he’s not an easy target for this type of scam.
In January, Rebholz posted a few job openings at his firm on LinkedIn. Within a couple of hours, he received a direct message from someone he didn’t know personally saying that they knew someone who would be a good candidate for the security researcher role.
The purported job-seeker’s profile pic wasn’t of a real person. Rebholz says it looked like an anime character, and calls it the “first red flag” in this whole experience. But he still gave the candidate the benefit of the doubt.
“In the security community, people get freaked out about privacy, and so it’s not outside of the norm if somebody has an alias or doesn’t have a real picture,” he said. “This was the first instance of me trying to justify what I was seeing.”
More red flags
Rebholz asked his LinkedIn connection about the job seeker, and the connection sent him a link to a resume hosted on Vercel, a cloud platform for building apps that integrates with AI tools. That also seemed a little strange.
“I’m chatting with my co-founder at the time and he said, ‘that’s kind of weird. He probably used Claude to generate that resume,'” Rebholz said, adding that if you ask Claude Code to create a resume or portfolio, it typically deploys them on Vercel.
Rebholz justified this by telling himself the guy’s a developer, so it makes sense that he’d use a coding tool to create a portfolio. Plus, “it was a good-looking resume. It all looked really professional.”
The LinkedIn contact said he previously worked with the job seeker at his last company, and told Rebholz the would-be security researcher worked overseas. Rebholz said that wouldn’t be a problem. Expel’s a young company, and Rebholz wanted to interview as many people as he could to get a sense of who was right for the job.
“At this point, I didn’t suspect it was a scam,” he said. “It felt a little bit weird, the person being overseas, even though the last job that they had was in San Francisco.”
Still, Rebholz figured he could ask about this in the interview. He gave the LinkedIn person his email address, asked the mutual “friend” to make the connection, “and within five minutes of the email hitting my inbox, I have a LinkedIn message saying: ‘Check your spam folder, he replied to you, make sure you reply to him.'”
Rebholz started to suspect he was getting scammed. “I’ve never had that level of urgency for an introduction before,” he said. Every security sleuth – or anyone who has gone through anti-phishing training – knows that attackers typically try to create a sense of urgency to get their mark to take a certain action.
When we had the interview, that’s when things just went off the rails right away
“But I thought, all right, let me just talk with them. No harm there. And so we got an interview scheduled,” Rebholz said. “I didn’t reach a point yet where the red flags crossed that threshold where I thought this is definitely a scam. We were in the yellow zone. Things were definitely getting a little suspicious. But when we had the interview, that’s when things just went off the rails right away.”
The scammer joined the call with his camera off. Then it took him a good 30 seconds to turn his camera on. “During that 30 seconds, I’m thinking this is going to be a deepfake, all those red flags started popping up from before, this is definitely not going to be real,” Rebholz said.
When the camera turned on, the job seeker was sitting in front of a virtual background – you can see a clip here – his face looked a bit blurry and plastic, there was a greenscreen reflected in his glasses, and at one point dimples appeared on his face.
“At that point, I was latching on to the softness of his face, and as he was moving around you could see it appearing and disappearing,” Rebholz said. “At this point, I know I’m definitely talking to a deepfake. But again, I tried to justify it.”
During our interview, Rebholz repeatedly described an “inner turmoil” he experienced during this process. “What if I’m wrong? Even though I’m 95 percent sure I’m right here, what if I’m wrong and I’m impacting another human’s ability to get a job? That was literally the dialog that was going on in my head, even though I knew it was a deepfake the whole time. It was this weird juxtaposition. I found myself short circuiting to: there could be an explanation for what I’m seeing. It was a very surreal experience.”
What if I’m wrong? Even though I’m 95 percent sure I’m right here, what if I’m wrong and I’m impacting another human’s ability to get a job?
Rebholz says he noticed that the job seeker had a tendency to repeat the interview questions back before answering them, and many of his answers were almost word-for-word quotes of things Rebholz had said or written that were shared online. “It was almost an out-of-body experience where I felt like I was talking to myself,” he said.
Rebholz never ended the interview or asked the candidate to prove his humanness. “This was the inner turmoil I was going through: Do I confront him? But I kept going back to: What if I’m wrong? That was the oddest part of the whole experience because everything in me, everything I know about deepfakes was screaming at me: This is a deepfake. But there was something blocking me, the 1 percent chance that I’m wrong, this is actually a good candidate, and he’s going to think poorly of me if I confront him.”
After the interview, Rebholz sent the video clips to his friend at Moveris to analyze using the company’s deepfake detection tech. It confirmed Rebholz’s suspicions. And one of the lessons learned is that it can happen to anyone, at a company of any size.
“Small companies are also victims,” Rebholz said. “You don’t need to be a massive tech company to be a victim.”
Massive tech companies like Google and Amazon are also being targeted by North Korean IT workers. In fact, most Fortune 500 companies have fallen for the scam.
In December, Amazon said it stopped more than 1,800 suspected scammers from the Democratic People’s Republic of Korea (DPRK) from joining its workforce since April 2024.
“And we’ve detected 27 percent more DPRK-affiliated applications quarter over quarter this year,” Amazon Chief Security Officer Steve Schmidt said.
Rebholz said the issue comes up at least once a month in CISO chat groups he belongs to, and everyone is trying to figure out the best way to solve the problem.
High-tech and low-tech solutions
“It’s got to be a mix of low-tech and high-tech solutions, low-tech being just call it out,” Rebholz said. “The biggest learning for me is: trust your gut. Moving forward, the rule I have is forget about the social awkwardness. It’s more important to just challenge upfront and have that awkward conversation than it is to waste your time.”
It can cost execs and their companies a lot more than just their time. This type of IT worker fraud has cost American businesses tens of millions of dollars.
“If that person gets hired, you have a potential risk to the company, whether it’s a security incident where that individual is stealing information, or you’re paying a criminal,” Rebholz said.
In some cases, the fraudsters use their insider access to steal proprietary source code and other sensitive data, and then extort their employers with threats to leak corporate data unless a ransom is paid.
In addition to trusting your gut, Rebholz suggests mandating that cameras stay on during job interviews, and if the interviewee is using a virtual background, telling them to turn it off. If they refuse, end the interview.
“If you’re still suspicious for whatever reason, go ask them to pick something up in their background and bring it back to the desk,” Rebholz said. “The old school thing of having them wave their hand in front of their face – that’s completely dead.” Modern deepfake software can defeat that trick, he added.
After hiring an applicant for a job, require them to work on-site for the first week, even if it’s a remote position. “It’s all about adding a little bit of friction for this person who is trying to get through the interview process,” Rebholz said, noting that one CISO told him that the person who showed up on day one – the only day they were required to work on-site – wasn’t the same person the executive team had interviewed.
The scammers “had hired somebody to come into the office for the first day before they were able to go remote,” he said.