2024 is already halfway over, and there's no sign of slowing in the advancement of generative artificial intelligence or in the AI regulations rushing to keep pace.
Google and Microsoft have embedded generative AI into their core services, enhancing search capabilities and office productivity tools. On the regulation front, the European Union has made significant progress with the EU AI Act, a crucial step toward ensuring the ethical and safe deployment of AI technologies. And at the intersection of hiring and AI, hiring itself isn't slowing down significantly despite economic headwinds: the Bureau of Labor Statistics reported that the American economy added 209,000 jobs and the unemployment rate ticked down to 3.6 percent, continuing the longest stretch of sub-4 percent unemployment since the 1960s.
Amidst all of this, HireVue just released our first-ever Global Guide to AI in Hiring, which made me think that it was a good time for a mid-year check-in. Here’s what our research results and current events tell us about the state of AI.
Comfort with AI is increasing because of improved transparency
There was a time when using machine learning and artificial intelligence in hiring made vendors like HireVue an outlier, but that's no longer the case. Further, the proliferation of ChatGPT and other generative AI tools has increased many people's comfort with AI systems in everyday life.
In our recent survey, 49% of candidates said they believe AI could help address bias and unfair treatment in hiring, and 66% of hiring leaders reported a more positive attitude toward AI in the workplace compared to one year ago.
The increased comfort with AI in hiring is a direct result of taking candidate and company concerns seriously and working diligently to answer questions and address critiques. More transparency in how we build technology is a positive for anyone and everyone who is affected by hiring (read: all of us).
Industrial-organizational psychology tells us time and again that the more candidates understand how they're being evaluated, the more comfortable they are with selection tools.
Creating auditable, explainable AI is the only viable path forward in this space, which is why we’re currently updating and re-releasing our industry-first AI Explainability Statement. New updates include additional information about how our bias mitigation has improved. A living document with this level of transparency should be a gold standard for anyone doing algorithmic work.
AI regulations are here and they’re getting more robust
We've been longtime proponents of greater regulation in AI. We believe that ongoing engagement with a multitude of stakeholders is the key to creating sensible legislation that protects candidates, companies, and innovation. This type of engagement is especially crucial for businesses, as our survey respondents signal they're wary of whether their current tech can stand up to changing compliance standards:
- Nearly 40% of HR professionals say they have set up an internal team to assess the compliance of current products.
- 16% have hired external resources to assess the compliance of their current products.
- 52% have little confidence that their current vendors will meet the new AI standards being proposed.
We’ve always understood the need to be a strategic business partner for compliance, which is why we have an entire cross-functional regulatory compliance team dedicated to staying at the cutting edge of emerging regulation.
Our general counsel, executives, and science team members meet regularly with legislators and regulatory bodies to educate them about hiring and the existing rigor of our particular use cases. We continue to propose that legislation for AI in hiring should be built on the following foundations:
- Uniform audit criteria
- Static and deterministic algorithms
- Comprehensive audits of all Automated Employment Decision Technologies (AEDTs)
- Notice and transparency to empower candidates
- Mandated provision of demographic data by employers
- Audit delivery by vendors
I'm incredibly proud of the work we're engaged in. Here are two of our most recent announcements:
- Signing a joint letter calling on Congress to prioritize funding for the National Institute of Standards and Technology (NIST) fiscal year 2025 budget request.
- Joining the U.S. Artificial Intelligence Safety Institute Consortium, established by NIST, as part of our efforts to advance the creation of ethical artificial intelligence.
My optimism for the future of work alongside AI
I'm incredibly optimistic about the way AI can improve both hiring and the types of jobs available to people. Since generative AI's release, I've been particularly interested in how it can help job seekers transition between industries by helping them better understand how their skills apply to new roles.
History has proven time and again that the advent of new technology (e.g., the printing press) brings job loss, but that loss is offset by job creation. We will see jobs eliminated, and governments need to figure out the best ways to support affected people and transition them to newer, better roles. But the roles that emerge have the potential to be much more meaningful and creative.
In hiring, we're going to see a complete and rapid shift from a requisition-based approach to a multidimensional space of job discovery and opportunity matching, where candidates own more of their profile and application data. One day we'll all look back in disbelief at the current approach, which relies so heavily on job descriptions, resumes, and one-to-one applications. The revolution toward this new future is already underway with tools like Find My Fit that measure skills instead of focusing on past experience alone.