Over the past year we have heard a lot about the potential impact of AI. Commentary has ranged from doom-laden posts warning of mass redundancies as skills become obsolete to skeptics who believe AI's capabilities have been vastly overblown. Like most people, I take a more nuanced view. Although its development has been rapid, AI is still a long way from fully replacing jobs. Nor is it a hyped-up novelty. The reality is that AI is already being used to optimize and enhance a range of professions, with HR a prime example. Companies big and small use AI for everything from recruitment screening through to analyzing job performance. Nevertheless, we are very much at the start of this journey, with only a handful of organizations fully leveraging AI's capabilities in their HR departments. This provides a golden opportunity to determine how AI can best be applied, what safeguards and ethical frameworks need to be implemented, and how HR departments can upskill themselves to avoid pitfalls and maximize impact.
Diversity and inclusion
One of the most promising applications of AI is tackling the diversity gap. A lot of attention has been paid to underrepresentation, and while company cultures have, for the most part, clearly changed for the better, this has not translated into a marked improvement in diversity figures. Women are still vastly underrepresented in industries such as technology and finance, ethnic minorities are struggling to make a dent in senior management and leadership roles, and too many businesses have workforces drawn from the same social and educational class.
We know that one of the main drivers of underrepresentation is how companies recruit. Even with the most rigorous policies in place and the best will in the world, recruitment processes are still subject to human nature, and human nature can manifest as unconscious bias. People, for the most part, prefer candidates who are like themselves. If a company is already dominated by one particular group, it is very hard to break this cycle.
AI could be the answer. A recruitment process that uses AI's consistent, criteria-based analysis can help to remove unconscious bias. New job specs and adverts can be created, CVs can be screened without prejudice, neutral interview and assessment policies can be developed, and interview performance can be scored on an equal footing. AI can also be used to identify where candidates would best fit into an organization beyond the role they applied for.
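To make the idea concrete, here is a minimal sketch of one common "blind screening" step: redacting identity-linked fields from a candidate record before any scoring model sees it. The Candidate structure and field names below are hypothetical, purely for illustration, not a reference to any particular product.

```python
# Illustrative only: strip identity-linked fields from a candidate record
# so downstream screening sees just job-relevant information.
from dataclasses import dataclass, asdict

# Fields that can reveal, or act as proxies for, protected characteristics.
REDACTED_FIELDS = {"name", "gender", "date_of_birth", "address", "photo_url"}

@dataclass
class Candidate:          # hypothetical record structure
    name: str
    gender: str
    date_of_birth: str
    address: str
    photo_url: str
    skills: list
    years_experience: int

def redact(candidate: Candidate) -> dict:
    """Return only the job-relevant fields for downstream screening."""
    return {k: v for k, v in asdict(candidate).items() if k not in REDACTED_FIELDS}

applicant = Candidate(
    name="Jane Doe", gender="F", date_of_birth="1990-01-01",
    address="London", photo_url="https://example.com/photo.jpg",
    skills=["python", "sql"], years_experience=7,
)
print(redact(applicant))  # {'skills': ['python', 'sql'], 'years_experience': 7}
```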
Beyond recruitment, AI can be used to assess the performance of every team member more fairly. Research has shown that many organizations gear their pay and promotion criteria toward behaviors that suit a particular type or group of individuals. This can perpetuate inequalities, as people from different backgrounds struggle to fit the mold. Managers are also subject to the same unconscious biases that affect recruitment: they tend to reward team members who behave and look most like themselves. Using AI not only mitigates these risks, it also opens the door to a huge number of new data points that can be included in an assessment. This is particularly relevant with hybrid working models. As it stands, most HR departments still assess their teams based on in-office working culture – contributions in meetings, office relationships and so on – which naturally disadvantages people who work from home. Through AI we can now combine structured data points such as productivity with unstructured data such as idea generation, quality of writing, and accuracy of delivery. This goes far beyond the superficial to create a much more rounded picture of what an individual actually achieves for the business.
Of course, as wonderful as all of this sounds, AI is not a silver bullet. In fact, without a well-thought-out strategy and the right policies in place, it has the capacity to do more harm than good.
Keep the human in HR
AI tools are not infallible; they are only as good as the people who create the algorithms and the data used to train them. If, for example, you use previously successful CV applications to tell an algorithm what to look for in a candidate, it will, over time, magnify existing biases. This happened to Amazon a few years back: a CV-screening tool trained on a decade of historical applications learned to penalize CVs that indicated the candidate was a woman – it actively discriminated against them.
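To see how this happens, here is a toy demonstration on synthetic data (all feature names are invented): a model trained to mimic historically biased hiring decisions reproduces the bias through a proxy feature, even when the protected attribute itself is excluded from training.

```python
# Toy illustration of bias amplification with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                # genuinely job-relevant signal
group = rng.integers(0, 2, size=n)        # 0/1 protected-group membership
# A proxy correlated with group (think hobby keywords or school name).
proxy = group + rng.normal(scale=0.5, size=n)

# Historical decisions: skill mattered, but group 1 was penalized by humans.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# Train WITHOUT the protected attribute - the proxy leaks it anyway.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")
# Group 1's predicted rate stays well below group 0's: the model has
# learned the historical bias via the proxy, not removed it.
```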
Similarly, if the data scientists who build your AI solution all come from one group or share the same experiences, they are much less likely to catch flaws in their designs. A prime example was the UK Government's use of an algorithm to assign exam grades to students during the pandemic. The algorithm ended up unfairly penalizing students from poorer backgrounds because, among other problems, it favored students from smaller schools – which tended to be in wealthy areas or the private sector. I would say, with some confidence, that had someone on that team been educated in a large inner-city school, they would have spotted this flaw during development.
So HR professionals need to be fully aware of the risks associated with any AI model and mitigate them accordingly. This can only be achieved through education and human oversight. It's essential to upskill on how data and AI work: without a basic knowledge of statistics and an appreciation of AI's limitations, it's impossible to identify problems or to apply AI-generated insights in a meaningful way. Put simply, if you don't know how it works, you shouldn't be using it. Keeping rigorous control means constantly testing and verifying results – in the case of CV screening, for example, analyzing discarded applications for patterns and submitting dummy CVs to check for accuracy. Regular testing alongside good data hygiene can go a long way to keeping your AI in a healthy state.
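As one example of such a pattern check, the sketch below applies the "four-fifths" rule of thumb long used in US adverse-impact analysis: flag any group whose screening pass rate falls below 80% of the best-performing group's rate. It assumes you log each applicant's group and screening outcome; the numbers here are made up.

```python
# Minimal adverse-impact audit over logged screening outcomes.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group_label, passed_screen: bool)."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        passed[group] += ok
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate is under `threshold` of the best group's."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)
rates = selection_rates(outcomes)
print(rates)                        # {'A': 0.4, 'B': 0.25}
print(adverse_impact_flags(rates))  # {'A': False, 'B': True} -> investigate B
```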
One of the great fears of AI is that its complex and dispassionate nature will lead to Kafkaesque scenarios in which decisions are made with no clarity on why and no way to appeal. AI must be applied transparently, and it cannot be the last word on any decision. Employees must be able to review decisions and understand what was taken into account, which means they too must be empowered through data education and upskilling. AI cannot be a black box, and those who design (or, if it is purchased from a third party, implement), monitor, and maintain the solutions must be accountable for any decision that is made.
AI for me but not for you?
A final point to finish on: the big decision many organizations will need to take very soon is whether or not to allow AI to be used by their potential recruits and existing teams. We've already seen people use generative AI to mass-produce job applications, and people are undoubtedly using it to craft ideal responses to common interview questions and to help with assessments. As it gets more powerful, we need to ask ourselves at what point a candidate stops being themselves and starts being a machine. Similarly, if a team member finds a way to automate much of their job using AI, when should their output stop being treated as their own work?
If your HR department is vigorously applying AI, can it with any legitimacy tell candidates not to use it, or penalize team members who do? There really is no straightforward answer to these questions. Each company will need to decide where it draws the line and update its HR policies accordingly. The crucial part is to start these conversations now. We've already seen how businesses struggled to grasp tech innovations like social media quickly, or to adapt to changing working practices such as the rise of remote global working. AI has the capacity for far more radical change, and without getting stuck into figuring out how to both maximize the opportunity and mitigate the risk, you could soon find your business dealing with a lot of AI-induced HR headaches.