This may sound obvious, but if artificial intelligence is to become a powerful agent in organizations, then we will all need to be sure that the AI being used is aligned with human values.
Determining this isn’t easy though.
One group researching this topic, however, is the Meaning Alignment Institute, based in Europe.
What interests me as an HR pro is that the methods it is developing to support AI alignment may be relevant to HR, particularly for employee listening.
A new approach to employee listening?
Most employee listening approaches rely heavily on employee surveys.
We all know these have weaknesses but to date, they have been the most practical tool for gathering input from a large number of employees.
The Meaning Alignment Institute is interested in gaining a deep understanding of human values and it was apparent to them that a multiple-choice survey wouldn’t capture the nuance they need.
Instead, its GenAI system poses a question to participants and gets a free-form written answer.
But that’s not the end of it.
The system goes on to ask probing questions to uncover what underlies the participant’s original comment.
They are – in effect – interviewing participants at scale, and this is possible now thanks to the abilities of GenAI.
Gathering a set of insights from each participant creates a huge amount of text, and in days past this would have been unmanageable.
Now, again thanks to GenAI, they can automatically find themes and summarize the information.
Perhaps the future of employee listening is not to rely on a question like “Do you have the tools you need? Answer on a scale of 1 to 5,” but instead to use GenAI to “interview” employees about the tools they need, probing to understand why they feel that way and what should be done.
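The ask-probe-summarize flow described above can be sketched in a few lines of Python. Everything here is hypothetical: the `ask_model` helper stands in for whatever LLM API you use, and the canned replies and simulated participant answers stand in for a real model and real employees.

```python
# Sketch of an AI-led "interview at scale" loop. `ask_model` is a
# placeholder for a real LLM call; its canned replies simulate a model.

def ask_model(prompt: str) -> str:
    """Placeholder for a real chat-model API call."""
    if "follow-up" in prompt:
        return "What makes those tools feel inadequate day to day?"
    return "Theme: outdated tooling slows routine work."

def interview(opening_question: str, get_answer, max_probes: int = 2) -> list[str]:
    """Run one interview: an opening question plus model-generated probes."""
    transcript = [f"Q: {opening_question}", f"A: {get_answer(opening_question)}"]
    for _ in range(max_probes):
        probe = ask_model(
            "Write one follow-up question that uncovers why the "
            f"participant feels this way:\n{transcript[-1]}"
        )
        transcript += [f"Q: {probe}", f"A: {get_answer(probe)}"]
    return transcript

def summarize(transcripts: list[list[str]]) -> str:
    """Ask the model to find common themes across all transcripts."""
    joined = "\n\n".join("\n".join(t) for t in transcripts)
    return ask_model(f"Summarize the themes in these interviews:\n{joined}")

# Simulated free-form answers in place of a real participant.
answers = iter([
    "Our laptops are five years old.",
    "Builds take an hour.",
    "A faster machine would save me a day a week.",
])
transcript = interview("Do you have the tools you need?", lambda q: next(answers))
print(summarize([transcript]))
```

The key design point is the loop: instead of one fixed survey item, the model generates each follow-up from the participant's last answer, and a second model call turns the pile of transcripts into themes.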
Another tantalizing tool
In their analysis of values, the Meaning Alignment Institute doesn’t just stop with a list of values.
They get users to identify “wiser” versions of a value.
It is much like saying: “One way of framing the situation is this, but if you think about it, then it would be better to frame it this way.”
The point is not to change someone’s values, but to help them see their values in the clearest and most helpful form.
The idea that we can take abstract and emotional concepts like values, and nudge people towards clearer and wiser framing of these concepts is tantalizing.
Could we do something similar with employee or manager behaviors where they start with “I would likely do this because…” and then lead them to a more effective behavior?
To be clear, this is not the problem the Institute is working on; however, its methods may have applications in other domains.
Closing observations
The world of AI research has attracted many great, innovative thinkers, all of them deeply immersed in this new technology.
By following what different groups are doing, we may find their approaches are relevant to HR even if this is not the field their work is aimed at.
So, we may be in for an explosion of great new tools that let us do things at a scale that was impossible before.
If you are interested in the work of the Meaning Alignment Institute then check out their website: https://www.meaningalignment.org/