Companies may not be ready to outsource vetting candidates for C-Suite and executive positions to algorithms, but the stakes are lower for entry-level roles and internships. That means some of today’s college students are effectively the guinea pigs for a largely unproven mechanism for evaluating applicants.

Metz, R. (2019). There’s a new obstacle to getting a job after college: Getting approved by AI. CNN Business.
I share the concern that we don’t yet know how well these algorithms will work at narrowing the field of potential interviewees for a post. However, I doubt they can be any worse than what currently happens.
We already know that unstructured interviews by human beings are unreliable predictors of future performance (structured interviews seem to work better, but the improvement in validity is marginal: better than chance, but not by much). What if we find out that AI is at least reliable? At first glance, the idea that an AI-based system will screen candidates to narrow the pool of applicants seems unfair, but we already know that being screened and interviewed by human beings is also unfair. So a human interview panel is likely to be both invalid and unreliable, whereas a computer might at least be reliable, consistently applying the same criteria to every candidate. I also suspect the AI will be a better predictor of performance than human beings, because it will probably be less influenced by irrelevant factors.
For me, this seems to be another example of holding different expectations for outcomes, where an AI has to be perfect but a human being gets a pass. Self-driving cars face the same double standard: they have to demonstrate near-perfect reliability, whereas human drivers are responsible for the preventable deaths of tens of thousands of people every year.