Your Human Risk Program Has a Blind Spot: Recruiting

North Korean IT workers are getting hired through your careers page. Learn how DPRK operatives use stolen identities, laptop farms, and social engineering to infiltrate companies, and what human risk teams can do to detect and prevent hiring fraud.

February 10, 2026
Ross Lazerowitz
Co-Founder and CEO

In one of the largest DPRK fake-hire cases charged, the DOJ says a single laptop-farm network used 68 stolen U.S. identities to get hired by 309 U.S. businesses and generated more than $17 million.

In a 2025 nationwide crackdown, the FBI conducted searches of 21 suspected laptop farms across 14 states, seizing roughly 137 computers tied to the same playbook.

Even Amazon says it has blocked more than 1,800 suspected DPRK-linked applicants since April 2024, with attempts rising 27% quarter over quarter.

Most security teams will read that and think, "that's HR's problem." It's not. This is social engineering, except the entry point is your careers page.

How North Korean IT workers are infiltrating U.S. companies

It kind of sounds like a spy movie. Except nobody's parachuting in. They're submitting resumes on LinkedIn.

North Korea runs an estimated 3,000 to 10,000 IT operatives whose job is to get hired at Western companies under false identities. The motivations are straightforward: revenue to fund the regime's weapons programs, and in some cases, direct access to steal intellectual property and sensitive data. A UN Security Council Panel of Experts report estimates that DPRK IT workers generate $250 million to $600 million annually for the regime.

Here's how it actually works. An operative obtains a stolen or borrowed U.S. identity. They build resumes, create LinkedIn profiles, and apply for remote roles. During the interview, they use real-time LLM assistance, voice manipulation, or a stand-in on camera, with answers provided off-camera. Once they receive the offer, the company ships a laptop to what appears to be a U.S. address. It's not the operative's home. It's a laptop farm operated by a U.S.-based facilitator who keeps the machines powered on and connected to residential Wi-Fi. The operative then controls the laptop remotely from China, Russia, or elsewhere using common remote access tools or KVM switches. Your SOC sees a login from Arizona. The keystrokes are coming from Dandong.

And that's just the version where the identity is stolen. It gets worse.

In the Minh Phuong Ngoc Vong case, a real U.S. citizen applied using his own face, passed the video interview, cleared ID verification, and was hired by a cleared defense contractor to work on FAA systems, a role with access to national defense information. Then he handed his credentials over to DPRK operatives. Your background check didn't fail. It passed. The person was real. What happened after onboarding is what should keep you up at night.

If you're thinking "we're too small for this," you're wrong. These operatives are quota-driven. They spray applications across everything from 10-person startups to Fortune 500s to government contractors. The common denominator isn't your company size. It's whether you have a remote or hybrid role open.

Recruiting Red Flags: Fast Triage Checklist

Before you get lost in the full playbook, here's the short list your recruiters can actually use. Share it, screenshot it, paste it into your recruiter enablement doc.

  • Camera friction: refuses camera-on, won't do simple unscripted prompts (turn your head, hold up ID, answer a basic local question tied to their listed address)
  • Identity inconsistency: photo/appearance/voice doesn't match across resume, LinkedIn, screening call, and interview (or changes between rounds)
  • Thin or unstable online presence: unusually new profiles, minimal history, inconsistent dates/employers, or profiles that disappear/go dark mid-process (by itself this isn't proof, but it's a useful supporting signal)
  • Reuse across applicants: the same phone numbers, email patterns/domains, or "references" showing up across multiple candidates
  • Templated resumes: identical work-history blocks, recycled project descriptions, or eerily similar education/history across "different" people
  • Shipping weirdness: address changes after the offer, forwarding requests, "send it to my friend," or a shipping destination that doesn't match the application
  • Timezone mismatch: claims U.S.-based but only reliably available during hours that line up with East Asia, or "tech issues" that spike when verification gets harder
  • Post-hire "something's off" reports: manager/coworkers say the person avoids video, looks different than the interviewee, hands off work that doesn't match their demonstrated skill, or uses unusual intermediaries for basic tasks
  • Remote-access on day one: remote desktop tools installed early without a clear business need (flag installs of tools like AnyDesk, RustDesk, Splashtop, and similar on new-hire endpoints)

Important note: you don't want these signals used to automatically disqualify candidates. That's both risky (due to false positives) and can create exposure to hiring discrimination. Use them as escalation triggers that prompt additional verification, documentation, and a consistent review path.
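
If you want to operationalize the checklist beyond a screenshot, the logic is simple enough to sketch. The snippet below is a minimal, illustrative example of counting signals and routing a candidate to human review rather than rejection; the signal names, weights, and threshold are assumptions you'd tune with TA and legal, not a vetted scoring model.

```python
# Minimal sketch: aggregate checklist signals into an escalation decision.
# Signal names, weights, and the threshold are illustrative assumptions,
# not a vetted model -- tune them with your TA and legal partners.
from dataclasses import dataclass, field

SIGNALS = {
    "camera_friction": 2,
    "identity_inconsistency": 3,
    "thin_online_presence": 1,
    "reused_contact_details": 3,
    "templated_resume": 2,
    "shipping_anomaly": 3,
    "timezone_mismatch": 2,
}

ESCALATION_THRESHOLD = 4  # escalate for human review, never auto-reject

@dataclass
class CandidateReview:
    candidate_id: str
    observed_signals: set = field(default_factory=set)

    def score(self) -> int:
        return sum(SIGNALS.get(s, 0) for s in self.observed_signals)

    def needs_escalation(self) -> bool:
        return self.score() >= ESCALATION_THRESHOLD


review = CandidateReview("cand-1042", {"camera_friction", "shipping_anomaly"})
if review.needs_escalation():
    # Route to a documented review path (Slack channel, shared inbox, etc.)
    print(f"{review.candidate_id}: escalate for verification (score={review.score()})")
```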

The DPRK hiring fraud playbook: stolen identities, fake interviews, and laptop farms

Phase 1: The identity

It starts with a persona. The operative either steals a U.S. identity or, increasingly, leases one from a willing American accomplice. In the Chapman case, a single network cycled through 68 stolen identities. But stolen IDs are just the baseline. In the Vong case, a real U.S. citizen lent his own identity, showed his own face on camera, and passed every check your process would throw at him.

The application itself is industrialized. Operatives spray resumes across dozens of open roles simultaneously, churning through personas when one gets flagged. They reuse work histories, education credentials, and references across multiple identities. The same document creation artifacts show up across resumes. VoIP numbers are shared among candidates and their "references," who are often other operatives.

If your recruiter is reviewing these in isolation, each one looks fine. The red flags only emerge when you look across applications.
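
Looking across applications is also something you can do mechanically. Here's a minimal sketch of a cross-candidate sweep for shared phone numbers and near-duplicate resume text; the field names, sample data, and similarity threshold are illustrative assumptions about what an ATS export might contain.

```python
# Minimal sketch: look across applicants for reused contact details and
# near-duplicate resume text. Field names and sample data are assumptions
# about your ATS export; tune the similarity threshold to your data.
from collections import defaultdict
from difflib import SequenceMatcher

applicants = [
    {"id": "a1", "phone": "+1-555-0142", "resume": "Built scalable microservices at Acme..."},
    {"id": "a2", "phone": "+1-555-0142", "resume": "Built scalable microservices at Acme..."},
    {"id": "a3", "phone": "+1-555-0177", "resume": "Led mobile development at Globex..."},
]

# 1. Shared phone numbers across "different" candidates
by_phone = defaultdict(list)
for a in applicants:
    by_phone[a["phone"]].append(a["id"])
shared_phones = {p: ids for p, ids in by_phone.items() if len(ids) > 1}

# 2. Near-duplicate resume text between candidate pairs
def similarity(x: str, y: str) -> float:
    return SequenceMatcher(None, x, y).ratio()

near_duplicates = [
    (a["id"], b["id"], round(similarity(a["resume"], b["resume"]), 2))
    for i, a in enumerate(applicants)
    for b in applicants[i + 1:]
    if similarity(a["resume"], b["resume"]) > 0.9
]

print("Shared phone numbers:", shared_phones)
print("Near-duplicate resumes:", near_duplicates)
```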

Phase 2: The interview

This is where it gets uncomfortable for security awareness teams, because what's happening on that Zoom call is textbook social engineering.

The candidate shows up on camera, but they may not be the person doing the talking. Operatives use stand-ins who sit on camera while answers are fed to them from off the call. Others use real-time LLM assistance and voice manipulation software. Some run dual-call A/V relays, piping the interview audio to a second platform where a handler listens and dictates responses.

When pressed on verification, they manufacture "tech issues." The camera freezes. Audio drops. They disconnect when the interviewer pushes too hard. The tells are there if you know to look for them: scripted-sounding answers, lip-sync mismatches, reluctance to do anything unscripted on camera, and call latency that doesn't match their claimed location.

Phase 3: The last mile

The candidate gets the offer. Now the company ships a laptop. This is the moment the operation goes from social engineering to infrastructure.

The laptop doesn't go to the operative. It goes to a U.S.-based facilitator who hosts it at a laptop farm. Chapman operated as many as 90 laptops from her Arizona home. Knoot operated a farm out of his Nashville residence. The facilitator keeps the device powered on and connected to residential Wi-Fi, so the IP address appears domestic to your IT team.

The operative then connects remotely via tools such as AnyDesk, TeamViewer, or Splashtop and controls the machine from overseas. The facilitator acts as "hands" for anything physical: rebooting the machine, plugging in a 2FA key, or rerouting shipments. Occasionally, the laptops are even shipped onward to Chinese cities along the North Korean border.
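
Those remote-access tools are one of the few parts of this playbook that leave a footprint you can check for directly. Below is a minimal sketch using the psutil library to flag matching process names on an endpoint; the tool list is an illustrative assumption, and in practice this check belongs in your EDR, osquery packs, or software allowlisting policy rather than a standalone script.

```python
# Minimal sketch: flag running processes that look like common remote-access /
# RMM tools on a new-hire endpoint. The name list is an illustrative
# assumption; adapt it to the tools your org actually sanctions.
import psutil

REMOTE_ACCESS_TOOLS = {
    "anydesk", "teamviewer", "splashtop", "rustdesk",
    "screenconnect", "atera", "remotepc",
}

def find_remote_access_processes() -> list[str]:
    hits = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info.get("name") or "").lower()
        if any(tool in name for tool in REMOTE_ACCESS_TOOLS):
            hits.append(name)
    return hits

if __name__ == "__main__":
    for name in find_remote_access_processes():
        # In production, raise an alert tied to the device's new-hire status
        print(f"Remote-access tool detected: {name}")
```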

Some of these facilitators also handle the last physical steps of the hiring process. In at least one case, a U.S. accomplice showed up for the drug test on behalf of the operative.

By the time your new hire's first day rolls around, you've already shipped a corporate asset to a nation-state proxy, granted network access, and onboarded someone who doesn't exist.

How to detect and prevent DPRK IT worker fraud

If you've read this far, you're probably thinking: this touches recruiting, IT, security ops, and legal. So what can I actually own?

More than you think. Human risk teams are already in the business of training people to spot social engineering, building verification culture, and influencing processes. This is the same work, applied to a different pipeline. Here's where to start.

What human risk teams can own vs. influence

Before jumping into tactics, it helps to map what you can drive directly versus what requires cross-functional partnership:

Own:

  • Recruiter enablement training on candidate fraud red flags
  • Interview verification culture and escalation protocols
  • Reporting loops between TA, security, and IT
  • Metrics and measurement of fraud detection effectiveness

Influence:

  • Identity proofing requirements and liveness checks
  • Device shipping controls and address verification
  • IT onboarding gates and endpoint monitoring
  • Escalation paths to legal, compliance, and insider threat teams

Partner with:

  • Legal and compliance on verification requirements and data handling
  • HR/TA on process changes and candidate experience
  • IT and SecOps on telemetry integration and alerting
  • Insider threat programs on post-hire monitoring

You're not going to own every control. Your job is to make sure this doesn't fall between HR, IT, and security.

Detection and prevention controls

1. Educate your recruiters the same way you educate your employees on phishing.

Your security awareness program likely trains employees to spot suspicious emails, pretext calls, and impersonation attempts. Your recruiting team is facing the same tactics in a different channel, and nobody is training them on it.

Teach talent acquisition to recognize the red flags of a fraudulent candidate: scripted-sounding answers, reluctance to go on camera, "tech issues" that conveniently appear when verification becomes more rigorous, VoIP numbers shared by candidates and their references, and resumes with recycled work histories or document creation artifacts. These are the same behavioral indicators you'd flag in a vishing simulation. The delivery mechanism is a Zoom interview instead of a phone call.

2. Push for identity verification, not just background checks.

This is the single biggest gap. Most companies run background checks and assume they've verified a person's identity. They haven't. A background check confirms that a Social Security number belongs to a specific name. It does not confirm that the person on your Zoom call is the SSN owner.

Advocate for liveness checks during the hiring process. That means verifying the person on the screen, not just the data on the form. In the Chapman indictment, operatives abandoned their applications to government agencies the moment they realized fingerprinting was required. Friction works. Your job is to make the case for adding it in the right places.

3. Mandate camera-on policies for interviews and challenge what you see.

Camera-on is table stakes, but it's not enough on its own. Deepfake overlays and stand-ins can pass a casual video call. Train interviewers to introduce unscripted moments: ask the candidate to turn their head, hold up an ID, or answer a localized question about their listed address. These small friction points break the choreography that operatives depend on. If someone's camera "suddenly stops working" when you ask them to do something physical, that's a signal.

4. Bridge the gap between HRIS and security telemetry.

This is where human risk teams can add unique value. You sit between HR systems and security tooling. Start looking for discrepancies that neither team would catch on their own: does the contact information on the original application match what's in HRIS after onboarding? Is the employee using a VoIP number as their primary contact? Does their IP geolocation match their stated location? Has their manager reported that they're consistently camera-off in meetings?

None of these are conclusive on their own. Together, they form a risk profile that your existing human risk scoring models can incorporate.
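
As a concrete illustration of that kind of cross-system join, here's a minimal sketch in pandas. Every column name and sample value is an assumption about what your HRIS export, ATS, and SIEM actually provide, and the VoIP check is omitted because it typically requires a carrier-lookup service.

```python
# Minimal sketch: join HRIS records against application data and telemetry to
# surface discrepancies worth a human look. All column names are assumptions
# about your HRIS export / ATS / SIEM -- adapt to your actual schemas.
import pandas as pd

hris = pd.DataFrame([
    {"employee_id": "e1", "hris_phone": "+1-555-0100", "hris_state": "AZ"},
    {"employee_id": "e2", "hris_phone": "+1-555-0222", "hris_state": "NY"},
])
applications = pd.DataFrame([
    {"employee_id": "e1", "app_phone": "+1-555-0199", "app_state": "TX"},
    {"employee_id": "e2", "app_phone": "+1-555-0222", "app_state": "NY"},
])
logins = pd.DataFrame([
    {"employee_id": "e1", "geoip_state": "AZ", "camera_on_meetings_pct": 0.0},
    {"employee_id": "e2", "geoip_state": "NY", "camera_on_meetings_pct": 0.6},
])

df = hris.merge(applications, on="employee_id").merge(logins, on="employee_id")

df["phone_changed_after_onboarding"] = df["hris_phone"] != df["app_phone"]
df["stated_location_mismatch"] = df["hris_state"] != df["app_state"]
df["geoip_mismatch"] = df["geoip_state"] != df["hris_state"]
df["always_camera_off"] = df["camera_on_meetings_pct"] == 0.0

signal_cols = [
    "phone_changed_after_onboarding", "stated_location_mismatch",
    "geoip_mismatch", "always_camera_off",
]
df["signal_count"] = df[signal_cols].sum(axis=1)

# None of these is conclusive alone; review anyone with multiple signals.
print(df.loc[df["signal_count"] >= 2, ["employee_id", "signal_count"]])
```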

5. Assume you've already been compromised and start vetting backwards.

Don't just look forward at your hiring pipeline. Look at who's already inside. Your roster of current employees deserves the same scrutiny. Check for VoIP numbers in contact records, VPN usage that doesn't match stated locations, HRIS data that conflicts with application data, employees who are never on camera, and banking details that point to institutions far from their listed addresses. Approach this from the position that at least one fraudulent hire has already made it through. If you're wrong, you've tightened your controls. If you're right, you've found something everyone else missed.

6. Own the cross-functional conversation.

You're not going to implement all of this alone. Shipping address verification is an IT ops control. Liveness checks involve legal and compliance. Recruiter training is a talent acquisition initiative. But someone has to connect these dots and make the case that this is a coordinated social engineering threat, not a series of isolated process gaps. That someone should be you. Human risk teams are already the connective tissue between security, people ops, and executive leadership. This is an expansion of scope, not a new function.

What to do Monday morning

If you're a human risk lead, security awareness manager, or insider threat practitioner reading this, here's your starter pack. Pick the ones that fit your org and start moving:

1. Brief TA leadership on the threat and share the red flags checklist

Schedule 30 minutes with your Head of Talent Acquisition or recruiting ops lead. Walk them through the threat, share the checklist from this post, and make it clear that this is a social engineering attack targeting their team. The goal isn't to scare them; it's to make them a partner.

2. Add an escalation path for suspicious candidates

Create a lightweight process that allows recruiters to flag candidates who trigger multiple red flags. This doesn't need to be a formal ticketing system. A Slack channel, a shared inbox, or a standing meeting works. What matters is that there's a clear path from "this feels off" to "someone with context reviews it."

3. Add one verification step for remote IT roles

Start small. Pick your highest-risk category—usually remote IT, DevOps, or finance roles—and add a single verification gate. It could be a liveness check, an unscripted video interaction, or a requirement that shipping addresses match verified identity data. One control, consistently applied, is better than a perfect plan that never ships.

4. Add shipping controls for new-hire laptops

Work with IT Ops to flag any new hires whose laptop shipping address doesn't match the address in their application or HRIS data. This is a high-signal, low-effort control. If IT is already tracking device shipments, this filter applies to existing data.
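
If IT can export shipment records and HRIS addresses, the comparison itself is a few lines. A minimal sketch follows, with file paths and column names as assumptions; the exact-match comparison after normalization is deliberately crude, so treat mismatches as review items, not verdicts.

```python
# Minimal sketch: flag new hires whose laptop shipping address doesn't match
# the address on file. File paths and column names are assumptions -- adapt
# them to your asset-management and HRIS exports.
import pandas as pd

def normalize(addr: str) -> str:
    # Crude normalization: lowercase, drop commas, collapse whitespace
    return " ".join(str(addr).lower().replace(",", " ").split())

shipments = pd.read_csv("laptop_shipments.csv")   # employee_id, ship_address
hris = pd.read_csv("hris_addresses.csv")          # employee_id, home_address

merged = shipments.merge(hris, on="employee_id", how="left")
merged["address_match"] = (
    merged["ship_address"].map(normalize) == merged["home_address"].map(normalize)
)

# Mismatches go to a review queue, not an automatic action
review_queue = merged.loc[~merged["address_match"],
                          ["employee_id", "ship_address", "home_address"]]
print(review_queue.to_string(index=False))
```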

5. Start a retro-review of remote hires from the last 6–12 months

Run the same checks on people already inside. Look for VoIP numbers, VPN usage that doesn't match stated location, HRIS conflicts, employees who are never on camera, or banking details that point to institutions far from their listed address. Assume you've already been compromised. If you find nothing, you've validated your controls. If you find something, you're ahead of the problem.

Pick two. Do them this week. Build from there.

Start the conversation before your next hire.

This isn't something security can solve alone, and it's not something HR knows to look for yet. Tag your Head of Talent Acquisition, your CISO, or your insider threat lead in the comments. If this is the first time they're hearing about it, that's exactly the problem.


Ross Lazerowitz is CEO and Co-founder of Mirage Security, where he builds AI-powered social engineering simulations to help organizations measure and reduce human risk. Previously he led product at Splunk and Observe (acquired by Snowflake). You can find him on LinkedIn and X.