For all of its much-lauded potential to be a game-changer, the fact remains that there is also a growing sense of trepidation when it comes to adopting AI in the workplace – especially in recruitment.
There are widespread cries for it to be better regulated – and many of these cries come from HR practitioners themselves.
However, it’s also the case that many of the articles and polls out there on aspects such as bias perpetuate what are often nothing more than anecdotal perspectives rather than research-based facts. The problem is that these articles merely stoke more fear that bringing AI tools into the workplace will lead to hiring bias or other negative impacts.
So I believe we need to take a pause.
Amid all the fear, uncertainty, and doubt, we must ask ourselves whether proposed regulations actually serve to protect citizens and employees, or if fear blinds us to potential solutions that can mitigate the well-studied reality that humans are already quite biased in hiring and other HR processes.
First thought: Is AI regulation driven by the right data and motives?
While doomsday prophecies tend to grab headlines and public attention, concrete instances of AI-induced discrimination on a mass scale are yet to materialize.
The reality is that large companies in the AI space, like Microsoft, Google, and OpenAI, are a big driving force behind a host of new local and state regulations on the use of AI in the workplace.
These companies are working with the federal government to invest in responsible AI and develop guidelines for its ethical use.
What is perhaps more pertinent is the fact that some are raising valid concerns around these efforts, pointing out that well-established tech giants can afford to meet arduous compliance and testing standards while small startups cannot, effectively stifling competitors.
Commentators say it raises an important question: Do these companies truly have the best interests of citizens and job seekers in mind, or are they advocating for AI regulation because they’re looking to make it harder for smaller companies to enter the space?
It’s certainly food for thought – but is it also a distraction?
I think a closer look is needed at AI bias vs human bias.
Second thought: AI bias vs. human bias
The threats and opportunities around AI regulation come into sharper focus in the context of recruiting.
Here, the core concern driving AI regulation is that AI systems will perpetuate historical data that is skewed toward Western ideals and demographics, which may result in inherent bias.
For example, New York City recently enacted a law requiring that automated employment decision tools (AEDTs) be audited for bias before employers use them.
However, most of the alleged bias these systems contain is a statistical bias that stems from systemic issues in our society rather than the technology itself.
To illustrate this, AI hiring tools may pick male software engineer candidates more often than female candidates not because they favor men, but because there are simply more men in the candidate pool due to larger societal trends at play.
So, the remedy to this is NOT to regulate AI, but to provide equitable access to learning and job training opportunities long before people even enter the workforce.
Furthermore, not all AI tools provide ratings or recommendations on whom to hire.
Some intelligent solutions may provide objective data on candidates or even detect bias in interview questions.
Yet, the regulation is often defined so broadly that it discourages the use of solutions that present less risk and can help mitigate known human bias.
Remember, humans are inherently biased too
Countless studies show that despite our best efforts and the increasing importance of fostering diverse workplaces, humans are extremely biased – whether intentionally or not.
When humans screen resumes, they give 50% more interviews to people with white-sounding names than Black-sounding names. That’s just a fact.
Also a fact is that women are 30% less likely to receive the opportunity to interview than men.
Older applicants, especially those above 64, get 68% fewer responses than younger applicants.
Muslim women who wear headscarves are less likely to receive an interview and get hired than those who do not.
Even height can influence humans’ perception of competence, with 58% of Fortune 500 CEOs being 6 feet tall or more despite only 14.5% of all men being over 6 feet tall.
Overall, it’s been shown that people have a bias in favor of preserving the status quo; as a result, the hiring process is typically approached with a preconceived notion of what qualities the ideal candidate will embody.
We must be careful what we regulate
Given the reality of known human bias and the lack of data-driven hiring, we must be careful not to regulate ourselves out of the potential for AI solutions to help mitigate discrimination that is already pervasive.
Regulation that is too broad, overly restrictive, and not well thought out will merely delay the adoption of less risky AI hiring tools and perpetuate human bias.
Without applying data to hiring decisions, HR teams report that 85% to 97% of hiring is based on gut instinct.
If the ultimate goal of regulation is to protect job seekers from hiring bias, we’re in a catch-22 because the alternative to using AI solutions to augment human decisions and mitigate bias in hiring is to maintain the status quo.
Threading the needle: regulation and innovation
The applications of AI and its impact on the workplace are still unknown.
Some people believe it could usher in a great new era of productivity, job growth, and more equitable access to the education and training needed to advance in certain career paths, while others see it as potentially devastating to humanity.
But there’s no way to know until we daringly move forward.
On one end of the spectrum, waiting to apply AI to HR processes until it’s regulated could stifle innovation and postpone meaningful advancements that could foster more equitable hiring practices, especially since regulation can take a long time to be broadly applied and enforced.
For example, it’s been nearly two decades since Facebook launched in 2004 and there’s still no clear regulation designed to prevent misinformation on social media.
Meanwhile, social media has connected millions of people and fueled activism that would have otherwise been impossible.
While the downsides of little regulation of social media are clear, it’s hard to imagine where we’d be today and what advancements wouldn’t exist had the legislative process prevented it from being used at all.
What we ultimately need to remember is that every generation experiences a technological advancement in their lifetime that stokes fear and prompts a debate on regulation.
Cars have been around since 1886, yet the US didn’t pass legislation requiring drivers and passengers to buckle their seatbelts until nearly a century later.
But that didn’t stop people from driving, nor did it dissuade automobile manufacturers from continuously making other safety improvements.
All of this is to say, the government doesn’t have the best track record of regulating emerging technology and there will always be backlash and fear of the unknown.
AI tech is still nascent and rapidly evolving, so it’s not necessarily reasonable to expect fair and equitable AI regulation to emerge quickly and scale elegantly as the technology evolves.
A worthy goal: AI augmented human decisions
It’s my belief that we must strike a careful balance: using AI to improve the hiring process, conduct fair interviews, and evaluate candidates faster, while also avoiding the introduction of new biases.
For starters, AI must augment human decision-making rather than replace it.
In the HR space, we should avoid the use of AI applications to replace human decisions on whom to hire, whom to promote, or whom to lay off.
Instead, humans should leverage AI solutions to surface insights they could not gather efficiently on their own, and use that information to bring as much objective data as possible to HR decisions.
HR departments should be leaders in this
HR departments and recruiters can be leaders in preparing their organizations for responsible AI adoption before formal laws are passed.
One way to do this is by conducting adverse impact studies or running AI and human processes in parallel to see if the outcomes are the same.
Then, organizations can come to their own conclusions about where AI solutions may help or hurt the hiring process.
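One common yardstick for such a parallel study is the EEOC’s “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, that is a conventional red flag for adverse impact. As a minimal sketch (the group labels, counts, and function names below are hypothetical, not drawn from any specific audit standard), an HR team could compare AI and human screening outcomes like this:

```python
# Minimal sketch of an adverse-impact check using the four-fifths rule:
# a group whose selection rate is below 80% of the top group's rate is flagged.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total). Returns rate per group."""
    return {group: sel / tot for group, (sel, tot) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Flag each group whose selection rate is below threshold * best rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top < threshold for group, rate in rates.items()}

# Hypothetical parallel run: the same applicant pool screened by an AI tool
# and, separately, by human recruiters.
ai_outcomes = {"group_a": (45, 100), "group_b": (40, 100)}
human_outcomes = {"group_a": (50, 100), "group_b": (30, 100)}

print(adverse_impact(ai_outcomes))     # {'group_a': False, 'group_b': False}
print(adverse_impact(human_outcomes))  # {'group_a': False, 'group_b': True}
```

In this illustrative data, the AI screen passes the four-fifths test while the human screen flags group_b – exactly the kind of side-by-side evidence an organization could use to decide where an AI solution helps or hurts.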
AI can help to inform important personnel decisions with more objective data and automate repetitive HR tasks so humans can have more time to engage with each other and make more informed, thoughtful decisions.
We owe it to ourselves and each other to explore this balance of humans and AI to its full potential.