Research reveals how employer-funded job platforms use AI to sort candidates for recruiter benefit, not career advancement.
The recruitment industry is racing to deploy AI-powered job matching, with every major platform promising algorithms that connect candidates with their perfect role.
“Most job boards optimize for employer satisfaction because that is who pays them. We optimize for job seeker success. That changes everything about how our technology works.”
— Jan Hendrik von Ahlen, Co-founder & Managing Director, JobLeads
But according to JobLeads, the global career platform serving over 12 million professionals, the business model behind most of that AI tells a very different story: every AI matching system optimizes for something, and on most major platforms, it is not the job seeker.
You are the inventory, not the customer
Over 60% of job boards depend on employer payments as their primary revenue source. Companies pay to post listings, pay for featured placement, or pay to access candidate databases. That revenue structure creates a fundamental misalignment.
When employers are the customers, the AI is built to satisfy employers. A candidate uploading a resume is not interacting with a personal career tool; they are feeding data into an inventory management system. Skills become search filters. Work history becomes a screening criterion. Salary expectations become budget constraints. All of it is processed by algorithms designed to answer one question: is this candidate what our paying customers want right now?
Bias is not a bug; it is the output of a misaligned objective
As AI matching grows more sophisticated, this conflict becomes harder to ignore. Early keyword matching was crude but relatively neutral. Modern AI analyzes complete career trajectories, identifies unlisted skills, and models growth potential. The more it learns, the more consequential the question of whose interests it serves.
Research from the University of Washington found that AI models trained on employer preferences favored white-associated names 85% of the time and male-associated names 89% of the time. The models were not programmed to discriminate. They learned what a good match looked like from the employers who paid for the platform and replicated those preferences at scale.
Amazon scrapped its own AI recruiting tool after the system, trained on historical hiring data, learned to systematically disadvantage female applicants.
This is statistical discrimination packaged as personalization. The more capable the AI becomes, the more precisely it can filter candidates out of opportunities they never knew existed.
What changes when the job seeker is the customer
JobLeads pioneered the candidate-first revenue model in 2007, building one of the first major platforms to earn revenue from job seekers rather than employers. Eliminating the employer-as-customer dynamic was the only way to remove the fundamental conflict from the platform’s incentives.
When candidates pay, the optimization flips. Instead of efficiently sorting applicants to satisfy recruiter needs, the platform’s goal becomes helping a specific person land a better job. That means surfacing stretch opportunities where transferable skills make a candidate stronger than a keyword filter would suggest. It means showing a role paying 20% more, or a senior title instead of another lateral move.
That alignment also drives different infrastructure decisions. While employer-funded platforms focus on publicly advertised listings, JobLeads built a network of 40,000+ headhunters specifically to surface the hidden job market: the estimated 70% of positions that are never publicly posted. When job seeker success is the business model, connecting users to unadvertised opportunities is not a feature. It is the core product.
The questions every job seeker should be asking
Job seekers do not need to wait for AI regulation to make better decisions about which platforms to trust.
Before relying on any platform’s matching algorithm, the relevant questions are: who pays for this service? Does the AI surface stretch opportunities, or only safe matches? Can users access unadvertised roles, or only what employers have chosen to post publicly? Does the platform provide transparency about market value and application status?
The answers reveal whose interests the technology actually serves.
AI matching is not inherently biased or misaligned. The technology is neutral. What determines the output is the objective function, and the objective function is determined by the business model.
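The point about objective functions can be made concrete with a toy sketch. The functions, fields, and weights below are entirely hypothetical illustrations, not any platform's actual algorithm: the same candidate data, ranked under an employer-side score versus a seeker-side score, produces different top results.

```python
# Toy illustration: identical matching data ranked under two different
# objective functions. All names, fields, and weights are hypothetical.

def employer_fit(job, candidate):
    # Employer-funded objective: reward only exact overlap with the
    # posted requirements.
    overlap = len(set(job["required_skills"]) & set(candidate["skills"]))
    return overlap / len(job["required_skills"])

def seeker_success(job, candidate):
    # Candidate-funded objective: still value skill fit, but also credit
    # transferable skills and reward salary uplift for the candidate.
    fit = employer_fit(job, candidate)
    transfer = len(set(job["required_skills"]) & set(candidate["transferable"]))
    uplift = max(0.0, (job["salary"] - candidate["current_salary"])
                 / candidate["current_salary"])
    return 0.3 * fit + 0.4 * (transfer / len(job["required_skills"])) + 0.3 * uplift

candidate = {
    "skills": ["sql", "excel"],
    "transferable": ["python", "reporting"],
    "current_salary": 60_000,
}
jobs = [
    {"title": "Lateral analyst role",
     "required_skills": ["sql", "excel"], "salary": 61_000},
    {"title": "Stretch data role",
     "required_skills": ["sql", "python", "reporting"], "salary": 75_000},
]

# The same inventory, two different No. 1 results.
for score in (employer_fit, seeker_success):
    ranked = sorted(jobs, key=lambda j: score(j, candidate), reverse=True)
    print(score.__name__, "->", [j["title"] for j in ranked])
```

Under the employer-side score the safe lateral role ranks first; under the seeker-side score the higher-paying stretch role does. Nothing about the data changed, only what the algorithm was told to maximize.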