AI Is Calling: Can Automated Phone Screens Redefine Global Hiring?
CodeSignal's new AI interviews candidates by phone in 20+ languages, promising a revolution in efficiency. But can it overcome the hurdles of algorithmic bias and the absence of human empathy?
SAN FRANCISCO, CA – December 10, 2025 – The initial phone screen, long a staple of the hiring process, is getting a dramatic, AI-powered makeover. CodeSignal, a skills assessment platform known for its work with tech giants like Netflix and Meta, has announced its AI Interviewer can now conduct live, conversational phone interviews in more than 20 languages. The move signals a bold push into a future where the first conversation a candidate has with a potential employer might not be with a human at all.
This isn't a simple chatbot reciting a script. The company promises a natural, human-like interaction, where the AI can understand context, handle clarifications, and even respond to candidate questions. The goal is to create a screening process that is available anytime, anywhere, and in a candidate's native tongue, effectively dissolving the barriers of time zones and language that have long complicated global recruitment.
"Teams shouldn't lose great candidates because the screening process only works in one format or at one time of day," said Tigran Sloyan, CEO and Co-Founder of CodeSignal, in the announcement. "Phone interviews allow candidates to move forward anytime, anywhere, in the language they're most comfortable with, while giving companies the insights they need to advance qualified talent at scale."
CodeSignal's data already shows a clear demand for this flexibility, with nearly a third of its web-based AI interviews taking place outside the traditional 9-to-5 workday. By extending this capability to the ubiquitous phone call, the company is betting that on-demand, skills-focused screening is the next frontier in the war for talent. But as this technology scales, it raises profound questions about efficiency, equity, and the very nature of the first impression.
The Unrelenting Drive for Efficiency
CodeSignal's innovation doesn't exist in a vacuum. It arrives as the AI recruitment market, valued at over $630 million in 2022, is on a steep upward trajectory, projected to exceed $839 million by 2028. This growth is fueled by a corporate world grappling with the realities of remote work, global talent pools, and an urgent need to hire faster and more cost-effectively. For many talent acquisition leaders, the allure of AI is undeniable.
Industry data suggests that recruiters can spend up to 50% of their time on administrative tasks, including the high-volume, often repetitive work of initial candidate screening. AI platforms promise to automate this crucial first step, providing structured, consistent evaluations that allow human recruiters to focus on more strategic activities like engaging high-potential candidates and building relationships. The potential ROI is significant, with some studies suggesting AI can slash recruitment time and costs by half.
This move also represents a doubling down on the principles of skills-based hiring. By standardizing the questions and evaluation criteria, the AI interviewer ensures that every candidate is assessed against the same core competencies, moving the focus away from resume prestige and towards demonstrable ability. The output is a unified, competency-focused report, whether the interview was conducted via web or phone, creating a consistent data stream for hiring managers. This data-driven approach is a stark contrast to the variability of human-led phone screens, where unconscious bias, fatigue, or simple conversational drift can lead to wildly different candidate experiences and evaluations.
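To make the idea of a unified, channel-agnostic report concrete, here is a minimal, hypothetical sketch of what such a competency-focused record might look like. The field names and scoring scheme are illustrative assumptions for this article, not CodeSignal's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CompetencyScore:
    competency: str   # e.g. "problem solving", "communication"
    score: float      # normalized 0.0-1.0 rating for this competency
    evidence: str     # short excerpt from the interview supporting the rating

@dataclass
class InterviewReport:
    candidate_id: str
    channel: str      # "web" or "phone" -- same schema regardless of format
    language: str     # interview language, e.g. "es", "de"
    scores: list[CompetencyScore] = field(default_factory=list)

    def overall(self) -> float:
        """Simple average across competencies; a real system may weight them."""
        if not self.scores:
            return 0.0
        return sum(s.score for s in self.scores) / len(self.scores)
```

The point of a structure like this is that a hiring manager sees the same fields whether the screen happened over the web or on a phone call, which is what makes cross-channel comparison possible.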
Balancing Automation and the Human Element
While competitors like HireVue have pioneered AI in video interviews and others like Pymetrics use gamified assessments, CodeSignal's focus on live, conversational phone calls is a strategic choice. It targets the earliest, most scalable part of the funnel with a universally accessible technology. However, the central challenge remains: can an algorithm replicate the nuance of human conversation?
CodeSignal claims its AI delivers a "natural conversational flow," complete with pauses and clarifications. This feature is critical, as it directly confronts a primary concern for both candidates and critics: the fear of a cold, rigid, and unforgiving robotic interaction. For a candidate, especially one for whom the interview language is not their first, the ability to ask for clarification or pause to formulate a thought is essential for a fair evaluation. An AI that can handle these subtleties could significantly improve the candidate experience over less sophisticated systems.
Even so, the line between efficiency and depersonalization is a fine one. Candidates increasingly report a sense of 'application fatigue,' feeling like they are submitting their qualifications into a void. The introduction of an AI gatekeeper, no matter how sophisticated, could exacerbate this feeling. The success of such tools will hinge on their ability to feel less like an automated test and more like a genuine, albeit structured, conversation. For many, the value of a first-round interview lies in the subtle cues and rapport-building that only a human-to-human connection can provide.
The Specter of Algorithmic Bias
Beyond the user experience, the rise of AI in hiring brings significant ethical and legal challenges to the forefront. The most pressing of these is algorithmic bias. AI models learn from the data they are trained on, and if historical hiring data reflects societal biases related to gender, race, or socioeconomic background, the AI can learn, replicate, and even amplify those same prejudices.
This is not a theoretical risk. High-profile cases have emerged where AI recruitment tools were found to penalize applicants based on their names or favor one gender over another. When applied to multilingual voice analysis, the risks multiply. An AI trained predominantly on one accent may unfairly score a candidate with a different regional dialect. It could misinterpret cultural communication styles or penalize neurodiverse individuals whose speech patterns don't conform to the algorithm's learned norms.
Regulators are taking notice. The European Union's AI Act classifies hiring systems as "high-risk," imposing strict requirements for human oversight, transparency, and the use of high-quality, unbiased training data. In the United States, jurisdictions like New York City now mandate that companies using automated employment decision tools conduct annual bias audits and notify candidates that they are being assessed by AI. For companies like CodeSignal and their global clients, navigating this patchwork of regulations is becoming as critical as the technology itself.
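What a bias audit actually measures can be made concrete. New York City's rule, like longstanding EEOC guidance, centers on impact ratios: each group's selection rate divided by the highest group's selection rate, with ratios below roughly 0.8 (the "four-fifths rule") flagging potential adverse impact. The sketch below is an illustrative calculation of that metric, not CodeSignal's or any auditor's actual methodology; the sample data is invented.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Compute selection-rate impact ratios per group.

    outcomes: iterable of (group, passed) pairs, where `passed` is True
    if the candidate advanced past the automated screen.
    Returns {group: impact_ratio}, each group's selection rate divided
    by the highest group's selection rate.
    """
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        passes[group] += int(passed)

    rates = {g: passes[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Invented example data: flag groups below the four-fifths threshold.
results = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", True), ("group_b", False), ("group_b", False)]
for group, ratio in impact_ratios(results).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

A ratio well below 0.8 does not prove discrimination on its own, but it is the kind of signal that triggers the deeper review regulators now expect.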
Ensuring fairness requires a relentless commitment to monitoring, auditing, and refining the AI models to root out bias. It also demands transparency, giving candidates insight into how they are being evaluated. As these tools become more embedded in the hiring process, the ability of companies to prove their AI is fair and compliant will be a key competitive differentiator, determining whether the technology ultimately promotes equity or simply automates discrimination at an unprecedented scale.