Business intelligence software has long been a staple of recruiters’ and HR managers’ work. A variety of enterprise solutions and tools exist to streamline workflows, provide insights, and automate various aspects of their jobs. Deploying such solutions is often expected and, for the most part, raises no questions.
Yet there’s a new player in the HR tooling town, picking up steam and creating plenty of polarizing buzz: artificial intelligence. While it is only a logical evolution of previously established and widely accepted systems, hiring AI is both credited with and blamed for the changes it is bringing to the HR table.
The promise of AI in HR
On the one hand, the benefits of machine learning algorithms can be a game changer. High levels of automation, adaptiveness, deep analytics and insight, personalization, and continuous self-improvement sound like all-around winners, no matter the industry.
Companies developing AI-enhanced software for human resources stress the numerous advantages that machine learning brings to recruiting and workforce management, such as:
- Expanding existing sources for candidate search and discovering new ones.
- Streamlining workflows and speeding up the recruitment pipeline.
- Effectively detecting and reducing human error.
- Personalizing every step of recruitment, onboarding, staff management, and reviewing.
- Better judging and forecasting based on in-depth, comprehensive analysis.
- Better detecting and nurturing employees with leadership potential.
- Eliminating explicit bias by automatically widening the recruiting pool, tapping into candidates’ unrealized potential, and ignoring specific socio-economic and background characteristics when instructed to do so.
The last one sounds especially alluring: logically, when a machine is told to ignore certain parameters, it will simply shut them off. If told to specifically seek out and focus on the best diverse candidates, it will do just that, with no biases or preconceived notions factored in. One powerful concept, indeed.
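In practice, the “shut them off” idea usually amounts to removing protected attributes from the model’s inputs before training. Below is a minimal sketch of that naive approach in Python; the dataset, file name, and column names are hypothetical, and real hiring systems are considerably more involved.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical candidate data; the file and column names are illustrative only.
candidates = pd.read_csv("candidates.csv")

# Attributes the system is instructed to "shut off."
PROTECTED = ["gender", "race", "name", "age"]

# Naive approach: drop the protected columns, then train on what remains.
features = pd.get_dummies(candidates.drop(columns=PROTECTED + ["hired"]))
labels = candidates["hired"]

model = LogisticRegression(max_iter=1000).fit(features, labels)
```

As the rest of this article argues, simply dropping these columns does not by itself guarantee unbiased recommendations.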
So, does it mean AI software for hiring is instrumental to solving the issues around diversity and inclusion in the workplace? As always, the answer isn’t that simple.
The issues to be aware of
When it comes to the territory of people assessment, smart technology’s track record is not exactly spotless. AI’s history in the public eye has been a rollercoaster so far, with both impressive highs and astonishing lows.
In banking and judicial systems, algorithms used for assessment and evaluation have been known to qualify minorities as high-risk, with real-life implications ranging from higher interest rates and loan rejection to denied bail and extended detention.
In the area of HR and talent acquisition, there are the examples of Amazon’s scrapped in-house recruitment project that proved biased against women, and the fresh controversy and resulting scrutiny around HireVue and its AI tooling.
Among the concerns every AI-enabled hiring and diversity system must address, and every HR manager should take into account, are these:
Transparency and interpretability of AI for hiring — It is important to know what data and factors these AI systems rely on, and why they produce the results they do. Only transparent and interpretable systems, in which users can understand how and why the AI reached a specific outcome, can be properly assessed and deemed fair, ethical, and diverse.
Data privacy and security — The algorithms and data not only need to be accessible and transparent, but also protected against misuse, violations, and unsanctioned access. Potential weak points include reliance on third-party vendors, siloed data, and the lack of global privacy and security standards. Another lies in the tension between the push for transparency and what vendors consider protected technology and proprietary commercial secrets.
Mitigating bias with better data quality and diverse human supervision — A particular sticking point in the idea of AI implementation in HR is the concern around bias. Artificial intelligence looks for patterns in data to reach its predictions. Just like humans, machines tend to gravitate toward shortcuts when searching for those patterns. When the data is incomplete or biased to begin with, the results are faulty.
In data science, a famous example of this is an AI trained to determine whether images depicted wolves or huskies. It performed reasonably well, but it turned out that instead of zeroing in on the animals and their differences, it simply searched for snow, since the original training dataset included a disproportionate number of images of wolves in winter.
To put it in the HR context, even excluding certain parameters from the algorithm when assessing candidates may not be enough. Even when race, gender, name, and age are explicitly removed from the equation, AI can produce biased results based on historical hiring data and applicants’ resumes. Take a company whose history shows it predominantly hires male, white alumni of prestigious universities, aged between 22 and 35: the AI will find strong correlations with features that stand in for those attributes, and it will apply that pattern to its candidate recommendations despite the filters.
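To make the proxy problem concrete, here is a hedged sketch of a simple audit: train a model with the protected columns removed, then check whether its scores still differ across a withheld attribute. The data, file name, and column names are hypothetical, and real fairness audits rely on more rigorous metrics.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hiring data; file and column names are illustrative only.
history = pd.read_csv("historical_candidates.csv")

PROTECTED = ["gender", "race", "name", "age"]

# Train on everything except the protected attributes and the outcome label.
X = pd.get_dummies(history.drop(columns=PROTECTED + ["hired"]))
y = history["hired"]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Audit: compare average predicted "hire" scores across a withheld attribute.
scores = model.predict_proba(X)[:, 1]
audit = pd.DataFrame({"gender": history["gender"], "score": scores})
print(audit.groupby("gender")["score"].mean())

# A large gap between groups signals proxy bias: correlated features such as
# university or career gaps let the historical pattern leak back in, even
# though gender was never an input to the model.
```

Dedicated fairness toolkits such as Fairlearn or AIF360 offer more principled versions of this kind of check.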
Right now, to bring efficiency and change instead of just “same old things, but faster,” AI needs ongoing supervision and more time to work out its current issues.
Is AI ready for diverse hiring?
The short answer is to wait, or to proceed with caution.
Right now, the main objectives of enterprise-grade AI in this sphere are to streamline and invigorate processes, and also to ensure that existing problems are solved instead of being amplified.
Like any single tool (and AI is, in fact, just another technological tool), it should not be solely responsible for final decisions. When applied to talent search and employee evaluation, it has to be used with an acknowledgement of its potential flaws and limitations.