As more companies embrace distributed setups for their engineering teams, they’re tapping into a global talent pool, bringing diverse skills and expertise to their projects. As of 2023, approximately 71% of companies have permanently adopted some form of remote work, and this number is projected to reach 73% by 2028.
While this shift offers access to top talent worldwide, it also brings new challenges—especially in identifying genuine candidates. Recently, many companies have reported an influx of fake candidates, complicating the hiring process.
At Adeva, we process hundreds (if not thousands) of applicants each month. Through a combination of advanced technology and refined screening techniques, we’ve identified ways to help companies protect their teams, save time, and reduce risks associated with hiring deceptive candidates.
Here’s how you can strengthen your hiring process.
How Are Fake Candidates Trying to Fool Organizations?
Scam candidates can vary widely in intent and execution. In some cases, a candidate might genuinely have the skills but fall short of requirements—they could be based in a location not aligned with the role's logistical needs, lack proper visas, or fail to meet specific qualifications for the role.
They still want the job, so they apply and try to slip through screening. While this is a less severe version of the scam, it still carries risks for the employer, from potential compliance issues to problems with work quality or availability.
A more sophisticated version involves overseas developer shops setting up U.S.-based front people to interview for the job. In these cases, the face the company sees may be just a stand-in for an entire offshore team doing the work. Sometimes, these shops even create fake identities to look like legitimate U.S.-based applicants.
This is a common scenario reported by companies, and it can result in serious challenges if the real workers don’t meet expected standards or the arrangement violates company policies.
What Role Does AI Play in Today’s Hiring Scams?
Technology is further complicating the picture. With deepfake tools, fraudulent applicants are going as far as using AI-generated images or even altering their live video feed to hide their true identity.
The FBI recently issued a warning about these cases, highlighting that scammers are using deepfake technology during video interviews to avoid detection. It’s a calculated move designed to make fraudulent candidates appear like ideal hires.
General AI tools, like ChatGPT, are also becoming part of the problem. Candidates are reportedly feeding job descriptions straight into AI, asking it to generate the “perfect” resume. This often results in copy-pasted job duties, bogus statistics, or even details pulled from other applicants’ profiles. With high-volume applications, there’s little effort made to personalize these outputs, meaning companies are often faced with resumes that appear too good to be true—or turn out to be exactly that.
In some cases, the stakes are even higher. A recent incident involved a North Korean hacker who used AI to alter his photo to look more American. He managed to secure a position with a U.S. security vendor. Once hired, the hacker uploaded malware, and with that, he turned the scam into a serious threat to the company’s security.
The blending of AI, fake identities, and social engineering has created a breeding ground for scammers seeking remote work, and it’s leaving companies scrambling to protect their hiring processes.
What's more, there are numerous tools available that allow users to auto-apply to thousands of jobs in minutes, making it even easier for individuals with fake identities to distribute their profiles and applications widely. Often, these applicants have little knowledge of the companies they’re applying to, which further highlights the challenges recruiters face in filtering genuine candidates from fraudulent ones.
How Much Do Fake Candidates Really Cost Companies?
Hiring fake candidates is expensive, even when they don’t stay long. Fake candidates who lack the required skills or experience are often caught quickly, yet even a short tenure can be costly.
Costs can include everything from recruitment expenses to onboarding costs and lost business opportunities. Every hire that doesn’t work out means lost productivity and the need to start the hiring process all over again, which can be a heavy hit, especially for smaller teams.
The risks escalate with fake candidates who target sensitive roles through deceptive means, like deepfake technology. Hiring these scammers brings serious risks, including:
- Theft of Data: Scammers access valuable company information and sell it for profit, often using deepfakes to secure these roles.
- Internal Sabotage: They can introduce ransomware or malware at any time, bypassing security controls because they’re viewed as trusted employees.
- Security Exploits: Scammers find and exploit security gaps, which can lead to costly breaches that average $500,000 each.
- Access to Sensitive Data: With direct access to financial or customer data, scammers can use or sell this information for personal gain.
For scammers, the end goal is often to steal data or install ransomware. The rise of AI-driven tactics means these scams are unlikely to fade anytime soon. As technology and social engineering tactics evolve, this presents a sustained threat to businesses.
What Are the Red Flags for Identifying Fake Candidates?
Here are some common red flags to watch for when screening candidates:
- Resume Clues: Overly generic language may indicate an AI-generated resume.
- Suspicious Experience Claims: Often, these candidates will claim experience with big tech companies or competitors in your industry. They may even go as far as tailoring their experience to match the job description, weaving in specific details to appear like an ideal fit. This can make them seem like the perfect candidate at first glance.
- Profile Photo Issues: No profile photo or one that appears to be a stock image can indicate a fake profile.
- LinkedIn Red Flags: Signs like few connections, connections with questionable profiles, or minimal profile activity are worth noting. Profiles created only recently, especially within the past few months, can be suspicious. Many fake candidates may not have LinkedIn at all, as it's less commonly used among certain developer groups.
- Lack of Credentials: Absence of licenses or certifications mentioned on the resume, as well as no endorsements or recommendations, are additional red flags.
Strategies for Preventing Fake Applicants
Forrester Research predicts that at least one high-profile company will unknowingly hire a nonexistent candidate in the coming year.
According to a Checkster survey, over 77% of applicants admitted to misrepresenting themselves at some level, with nearly 60% claiming skills they don’t have and over 50% extending employment dates to cover up past jobs. Misrepresentation affects all industries, especially fields like software, retail, and manufacturing, leaving companies wondering who they can trust.
However, there are effective steps hiring teams can take to protect their company against such risks.
| Inflated Claims | Percent have done or would do |
| --- | --- |
| Mastery in skills you barely use (e.g., Excel, language) | 60.0% |
| Worked at some jobs longer to omit an employer | 50.25% |
| GPA is higher by more than half a point | 49.25% |
| A director title instead of a manager title | 41.25% |
| A degree from a prestigious university with credits short | 39.5% |
| A degree from a prestigious university instead of your own | 39.25% |
| Prestigious university degree with only one class taken | 39.25% |
| Achievements that aren't mine | 32.5% |
| Significantly inflated role on a key project | 49.5% |
| False reason for leaving (e.g., left vs. being fired) | 45.75% |
| Made-up relevant experiences | 42.25% |
| Salary inflation by more than 25% | 39.5% |
| Current residence location is different | 34.5% |
| Inflated job outcomes (e.g., increased sales 150% vs. 50%) | 34.5% |
| False references (friend vs. real, I pretend to be a reference) | 43.75% |
| No criminal record when I have one | 26.5% |
| An achievement I did not really get (e.g., award, press coverage) | 26.0% |
Use AI to Enhance Screening
AI tools can be a powerful ally in detecting fake candidates. In a recent IBM survey, 42% of companies reported using AI to improve recruitment processes, especially for screening out fraudulent applicants. AI can surface red flags such as keyword stuffing and formatting anomalies, and can even detect whether a resume was AI-generated.
For interviews, AI tools can analyze a candidate's voice and expressions, flagging signs of deception such as reading off-screen or exhibiting scripted responses. AI can handle initial screenings and data analysis, while human recruiters can focus on nuanced interactions that require personal judgment.
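To make the keyword-stuffing check concrete, here is a minimal sketch of one possible heuristic: measure what share of a resume's words is accounted for by its few most-repeated terms. The threshold, minimum word length, and function name are illustrative assumptions, not a production detector.

```python
from collections import Counter
import re

def keyword_stuffing_score(resume_text: str, top_n: int = 5) -> float:
    """Rough heuristic: fraction of all words accounted for by the
    top-N most repeated terms. Unusually high values can indicate a
    resume stuffed with job-description keywords."""
    words = [w.lower() for w in re.findall(r"[a-zA-Z]{4,}", resume_text)]
    if not words:
        return 0.0
    counts = Counter(words)
    top = sum(count for _, count in counts.most_common(top_n))
    return top / len(words)

# A resume that repeats one buzzword scores far higher than varied prose.
stuffed = "Kubernetes " * 30 + "I deployed services on Kubernetes clusters."
normal = ("Led migration of billing services to a managed cluster, "
          "mentored two engineers, and reduced deploy times by a third.")
```

In practice a score like this would be one weak signal among many, combined with formatting checks and human review rather than used as a pass/fail gate.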
Check Digital Footprint
Examining a candidate’s online presence is crucial, especially for tech roles. Verify LinkedIn activity, connections, and recommendations. Look for a genuine LinkedIn bio, regular posts, and meaningful interactions that showcase their experience.
Platforms like GitHub or Stack Overflow can further validate their skills—most legitimate candidates in tech will have active, original work on these platforms. Additionally, LinkedIn’s verification tool can confirm identity through government ID checks, marked by a grey checkmark. This should make it easier to identify authentic profiles.
Conduct Rigorous Reference Checks
Reference checks can also help verify a candidate’s claims. Ask for connections within their listed companies, and request team-specific referrals. It’s helpful to approach references directly rather than accepting any names the candidate provides, as this can reveal discrepancies in their employment claims. When candidates are unwilling to provide connections from current or recent teams, it’s often a red flag.
Incorporate ID Verification
Directly verifying ID helps confirm that the candidate you’re speaking with is legitimate. Requesting passports, driver’s licenses, or professional IDs during video interviews can add an extra layer of security. Many companies also verify that the individual on-screen matches the ID presented. This ensures that no stand-ins are participating in the interview.
Set Unique and Detailed Screening Questions
Tailor questions to require specific, detailed responses that can’t easily be answered with general knowledge. Ask about their direct involvement in projects, technical challenges they’ve faced, or niche skills. Specific questions, such as details about system integrations or unusual project requirements, often reveal if a candidate truly understands the role.
Train Recruiters in Behavioral Analysis
Trained recruiters can spot subtle signs of inauthenticity, such as unusual pauses, off-screen glances, or overly scripted responses. By monitoring for these behaviors and asking informal, location-specific questions, recruiters can catch inconsistencies that indicate coaching or deceptive setups. Recruiters trained in behavioral cues can add a critical layer of protection against fake applicants.
Use Location Verification Tools
Many tech roles require applicants to be based in certain locations. Location verification tools or collaboration with IT can help verify where a candidate is based, which makes it more difficult for offshore scammers to fake U.S.-based locations. Including location-specific terms like “background check” or “local residency” in job postings can also dissuade fake candidates from applying.
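One lightweight consistency check in this spirit is comparing the UTC offset a candidate's tooling reveals (for example, in email `Date:` headers or calendar invites) against the offset of their claimed location. The sketch below uses standard IANA time zone names via Python's `zoneinfo`; the half-hour tolerance is an illustrative assumption.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def offset_matches_claim(observed_utc_offset_hours: float,
                         claimed_zone: str,
                         when: datetime) -> bool:
    """Check whether an observed UTC offset is consistent with the
    candidate's claimed IANA time zone at a given moment."""
    claimed = when.astimezone(ZoneInfo(claimed_zone)).utcoffset()
    return abs(claimed.total_seconds() / 3600 - observed_utc_offset_hours) < 0.5

# A candidate claiming New York whose mail client stamps UTC+5:30
# (e.g., India Standard Time) fails the check in mid-January:
when = datetime(2025, 1, 15, 12, 0, tzinfo=ZoneInfo("UTC"))
offset_matches_claim(5.5, "America/New_York", when)   # False
offset_matches_claim(-5.0, "America/New_York", when)  # True
```

Because daylight saving shifts offsets during the year, the check should always be evaluated at the timestamp of the observed message, as the `when` parameter does here.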
Work with Reputable Talent Platforms
Partnering with established talent platforms like Adeva can add an extra layer of security to your hiring process. Adeva employs a rigorous, three-tier vetting system to ensure candidates have the technical skills, experience, and credibility required for the role.
Adeva’s recruiters are also trained to recognize red flags associated with fake applicants, such as inconsistencies or suspicious employment gaps. Additionally, each candidate undergoes thorough interviews and technical assessments to verify their skillset and authenticity. Working with a trusted platform like Adeva saves companies time and reduces the risk of hiring unqualified or deceptive applicants.
Conclusion
Hiring for remote tech roles comes with unique challenges, including a rise in sophisticated fake applicants. By combining technology-driven tools like AI and location verification, conducting rigorous digital footprint checks, and working with trusted talent platforms, hiring managers can build stronger defenses against fraudulent candidates.
With careful screening processes in place, companies can focus on finding genuine talent that meets their standards and contributes meaningfully to their teams.