It’s fair to say that Artificial Intelligence (AI), with its almost limitless creativity, is here and it’s here to stay in a big way.
The global artificial intelligence market size was valued at $93.5 billion in 2021 and is projected to expand at a compound annual growth rate of 38.1% from 2022 to 2030.
It is disrupting every single industry – from supply chain logistics to marketing – and yes, you guessed it, the world of HR (and specifically talent management) is no exception.
Background checks, keyword searches in CVs and AI-powered video interviews are helping HR teams automate routine tasks and focus on selecting the best candidates from a scarce talent pool.
However, as HRDs will already be starting to experience, the use of AI and automation also creates anxiety and uncertainty amongst the workforce – many of whom perceive this technology as lacking a human-centric balance and fear it has the potential to forever supplant human functions.
In addition, in hiring processes AI raises numerous concerns around ethics and equity, and whether it can unwittingly (or even purposefully) discriminate against certain groups of people.
Naysayers often point to the fact that it can also fail to identify valuable talent that doesn’t fit into a specific skills bracket – for example, neurodiverse candidates.
Finally, the advent of AI raises privacy concerns too. This is particularly salient in an era when giant technology companies are experiencing such low levels of public trust.
So, where does all this leave AI? And what should organizations keep in mind if they want to use AI without disrupting recruitment processes or triggering bias and privacy concerns?
AI through the US lens
We recently undertook research to understand this in more detail, and found that different countries were taking different approaches to AI.
For some time, for instance, the United States has been the leading country in AI readiness, investment and research activities. With more than 58,000 AI-related patents registered between 2016 and 2021, America has been cementing its position as the innovation hub, spearheading AI research and implementation through the world’s biggest companies such as Apple, Google and Meta.
But, despite its overall supremacy, we found that the level of trust in using AI in HR is lower in the US than in some developing countries and underrepresented communities.
We found there were three reasons for this:
- Data privacy: Our research showed that data privacy is widely believed to be compromised in the United States. People don’t feel their data is protected, pointing to weaker safeguards in the US compared to Europe. There is a feeling that data is harvested and then sold to the highest bidder by multinational companies, which leaves people jaded about AI and tech.
- Cultural differences: Pre-existing distrust is further driven by cultural differences. With all the technological advancement that’s taken place, there is a perception that AI has disproportionately benefited certain populations whilst disadvantaging others. For example, the state of New York assessed, based on Equal Employment Opportunity Commission (EEOC) research, that some AI sourcing tools were creating high-tech discrimination against certain groups, and has banned these tools until they are reformulated and reprogrammed to be more inclusive.
- Government distrust: Finally, evidence suggests there can be distrust in government generally, especially when AI and technology are used to manipulate people’s opinions through electoral polling.
What all of this suggests is that, overall, there is a breakdown of trust between people and their institutions, and that breakdown is also reflected in attitudes towards technology.
How does trust in AI compare elsewhere?
What’s interesting is that this distrust (or at least the lower than expected take-up) of AI by US HR departments is not consistent globally. In developing countries, for example, technology is seen as a tool to reach a better place in society or gain more opportunities, and workers there haven’t been “over-exposed” to it. People in first-world nations also want better access to opportunities, but their situations put them in a better position to reject the elements they don’t like.
What about the UK perspective?
The UK is somewhere in the middle, but what is happening in the EU is something we should all take notice of – even in the UK.
The EU’s AI Act, for example, aspires to establish the first comprehensive regulatory scheme for AI, one that will influence countries and communities around the globe.
At the same time, it seems the UK has its own agenda to overtake the US as the global AI leader.
For instance, the UK’s Office for AI and its National AI Strategy have been keen to promote the fact that adopting AI brings big economic benefits – most notably that organizations that use AI outperform those that don’t through cost savings and efficiency gains.
So, what approach should HRDs take?
If there is one thing HRDs really need to think about, it’s this: there is an absolute need to assess the ethical standards around this technology – how it is used and how it affects people.
Secondly, despite the fact that the UK is no longer part of the EU, the EU’s AI Act will have a huge impact on organizations all around the world, especially US firms that have staff working in European countries. It’s an enormous challenge to get to a place where organizations are progressive and understand which line to take.
Finally, education is an important element as well. Most fears are driven by the unknown, and research has found that employees can experience burnout and insecurity because of the threat of robots taking their jobs. PwC has emphasized that organizations need to invest in people skills development, while encouraging their workforce to work in collaboration with AI. The more people are aware of AI’s benefits (and learn to collaborate with the technology), the less likely it is to drive fear and misunderstanding.
One last thing: AI in recruitment
When it comes to recruiting, so-called ‘glass box’ AI enables transparency, so employers and candidates can see how assessment results are used to make hiring decisions.
But AI doesn’t have to replace human recruiters – instead, it can make them better, for instance by counterbalancing some of our less-evolved human characteristics.
If we infuse AI with a human perspective that encompasses cultural diversity, artistic creativity and the eccentricity that makes humans unique, this technology absolutely has the power to enhance the fields of talent management and executive search.
So as AI continues to develop in the US and UK, building trust in companies that deploy this technology will be paramount.
Without the confidence that AI will reflect humanity’s better sides by emphasizing ethics, valuing equity and respecting privacy, we may see AI become demonized by significant segments of the population that feel threatened by its advancement into our professional and social realms.
The better we, as humanity, can uphold the ideals and values we hold dear within our respective cultures and societal fabric, the better AI will be harnessed to supplement and improve human functions.
Indeed, AI is but a reflection of the humans who develop it.