Artificial intelligence isn’t perfect—but it has the potential to make interviews fairer.
There has been much talk about machines replacing jobs, but far less about machines helping people find the right jobs. Artificial intelligence and machine learning are finding firm footing in the recruiting functions of companies of all sizes, helping them filter large volumes of applications and reducing both time and cost.
But it raises the question: can an algorithm do more harm than good in the hiring process, a process that is critical for driving competitive advantage?
A Broken Interview Process
It’s worth a quick look at the current state of affairs. Today, unconscious human bias in traditional interviews runs rampant and unchecked. It’s likely that you’ve unwittingly been on the receiving end of this bias — and, if you’ve hired anyone, that you have unknowingly applied your own bias to people decisions as well, from favoring someone from your alma mater to unconsciously bypassing a candidate with an ethnic-sounding name. Most of us think we’re impervious to such biased decision making, prompting Harvard University to sponsor an effort that allows anyone to put their biases to the test: Project Implicit.
Pair the pervasiveness of unconscious human bias with the belief that we are more effective interviewers than we truly are, and it becomes clear why the interview process is broken. As interviewers we are affected by our mood, our hunger, and any number of other extraneous factors. It turns out we harbor the same misperception of ourselves in other core life skills, such as driving, as a recent Journal of Applied Social Psychology study revealed. Not surprisingly, we’re not as good at driving as we think we are. On the road, this means we overestimate our parallel-parking skills. In hiring, it means we introduce an uncontrollable level of inconsistency into a process so critical to an applicant’s livelihood and future that fairness and structure should be prized above all else.
The majority of job candidates face the harsh realities of today’s hiring process: no response for weeks, or ever, from companies for roles that seemed like the perfect fit; online job applications disappearing into a “black hole”; and lengthy, mind-numbing employment assessment tests. And that is the best-case scenario. At the further extreme, which is also alarmingly commonplace, are outcomes such as the outright dismissal of candidates with ethnic-sounding names and years of struggle for worthy candidates penalized for time spent caring for their families or serving their countries. When we let the depth and breadth of the brokenness of our current system sink in, the urgency to stop perpetuating it, as quickly as possible, becomes clear.
Diligence and Due Process Needed
Enter artificial intelligence. “Freaky” and “frightening” have been used to describe the role of AI in interviewing job candidates. Although these fears are exaggerated in clickbait headlines, it is true that, without rigorous testing and validation of the data sets and the resulting algorithms themselves, AI can recreate the worst of the traditional hiring process with unfortunate consequences for both candidates and companies.
Does that mean we should shun the application of AI and machine learning to hiring altogether? On the contrary: it demands that any team developing AI technology for hiring approach it with diligence and due process, ensuring a fair shot for all qualified candidates while unlocking the benefits that only AI can bring. The best HR technology available today is the result of data scientists working in collaboration with industrial-organizational psychologists to build proven methodologies grounded in decades of assessment science. Such tools can provide impartial decision support to human recruiting teams.
So we would posit a different and perhaps more accurate single-word description of properly developed and rigorously tested AI applications in hiring: “objective.”
Knowledge from decades of research, combined with modern real-world use cases, tells us in clear terms that machines can be the impartial interviewers that you and I may never be. Artificial intelligence can help correct the failings of our current hiring practices, and in several cases is already doing so, by providing human decision-makers with objective, quantitative, and standardized data from which to make unbiased decisions.
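To make the idea of “standardized data” concrete, here is a minimal, purely hypothetical sketch of structured interview scoring, in which every candidate is rated on the same competencies with the same rubric so that reviewers compare like with like. The competency names, scores, and helper function are assumptions made up for illustration, not a description of any vendor’s product.

```python
# Minimal, hypothetical sketch of structured interview scoring.
# Every candidate is rated on the same competencies with the same 1-5 rubric,
# so reviewers compare like with like instead of relying on gut impressions.
# Competency names and scores are invented for illustration.
COMPETENCIES = ["problem_solving", "communication", "teamwork"]

def standardized_score(ratings):
    """Average the rubric ratings (1-5) across the fixed competency set."""
    missing = [c for c in COMPETENCIES if c not in ratings]
    if missing:
        raise ValueError(f"candidate is missing ratings for: {missing}")
    return sum(ratings[c] for c in COMPETENCIES) / len(COMPETENCIES)

# Two hypothetical candidates rated against identical criteria
candidate_a = {"problem_solving": 4.0, "communication": 3.5, "teamwork": 4.5}
candidate_b = {"problem_solving": 3.0, "communication": 4.5, "teamwork": 4.0}

print(standardized_score(candidate_a))  # 4.0
print(standardized_score(candidate_b))  # ~3.83
```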
We can see the benefit of AI at work in other industries. You don’t have to search long to find headlines such as “Study: AI faster, more accurate than humans at analyzing heart scans,” “Algorithm outperforms radiologists at diagnosing pneumonia,” and “Computer learns to detect skin cancer more accurately than doctors.”
The Peril of Waiting for Perfection
So we know that AI can make highly accurate predictions. When it comes to hiring, however, it must also deliver objectivity. Is AI technology for hiring perfect at what it does? Perfect, no; provably fair and effective, yes.
There are academics, policy experts, and industrial-organizational psychologists who, working in concert with data scientists committed to ethical practice, have dedicated themselves to the accurate and objective application of AI in hiring.
Rather than asking “Is the practice perfect yet?” we should instead ask “Are our AI-facilitated interviews better than our traditional hiring practices?” Unequivocally, yes. We know AI and machine learning technology create tangible hiring improvements here and now. Take the case of a multinational consumer-goods giant that increased diversity by 16 percent after implementing an AI-based recruiting approach. Done properly, the technology can already not only mitigate bias in hiring but also help companies identify biased patterns in their hiring to date, and then work to correct them.
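As a simplified illustration of what identifying a biased pattern can look like in practice, the sketch below applies the widely cited “four-fifths rule” for adverse impact to hypothetical historical hiring outcomes. The groups, numbers, and variable names are invented for this example; they are not drawn from any specific company’s data or any vendor’s methodology.

```python
# Illustrative only: a simple adverse-impact check (the "four-fifths rule")
# over hypothetical historical hiring outcomes. All data here is made up.
from collections import Counter

# (applicant_group, was_hired) pairs for past applicants
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
hires = Counter(group for group, hired in outcomes if hired)

# Selection rate per group: hires divided by applicants
rates = {group: hires.get(group, 0) / applicants[group] for group in applicants}

# Impact ratio: lowest selection rate relative to the highest.
# A value below 0.8 is a common signal that the pattern deserves an audit.
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)                   # {'group_a': 0.75, 'group_b': 0.25}
print(round(impact_ratio, 2))  # 0.33 -> below 0.8, so investigate further
```

A check like this is only a starting point; a meaningful audit would look across roles, time periods, and intersecting groups, and would pair the numbers with changes to the hiring process itself.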
While it’s true that the technology behind AI’s application in recruiting and hiring isn’t perfect, it’s also true that waiting for perfection to arrive will only continue to harm job seekers every day. Candidates deserve to be judged on their merits alone, especially now, when they are under increasing pressure to showcase their skills in new ways as the workplace becomes ever more dynamic.
Given the critical nature of the hiring process to both the candidate and company, a carefully crafted and tested algorithm can reserve judgment and put the focus where it belongs — on performance and potential.
Dr. Nathan Mondragon is chief industrial-organizational psychologist at HireVue, bringing more than 20 years of experience blending talent management and technology solutions. He is recognized as a pioneer in creating web-based assessment tools for professional hiring and was the lead I-O psychologist on the team that delivered the first-ever integrated online selection assessment in 1996.