Hiring strong talent is critical to the success of any business organization, and even more so in a digitally transforming world where legacy organizations differentiate themselves and innovate largely on the strength of their employees' skills. Yet the recruitment process continues to pose significant challenges for employers. The volume of applicants per position has risen rapidly over the past few years, while staffing in human resources (“HR”) departments has remained relatively flat, driving up the cost of sourcing and screening applicants. A further challenge for employers is the consistent research demonstrating patterns of implicit cognitive bias that lead to unintentional discrimination in hiring. In layman's terms, implicit bias boils down to a “gut” preference for candidates who are most similar or intuitively relatable to decision-makers.
Bias that leads to homogeneity in hiring undermines the health of a company in several ways. First, a robust body of research shows that companies with more diverse executive and management teams are more profitable than their less diverse counterparts. Second, investors and customers alike increasingly scrutinize companies and hold them to higher standards of diversity as a condition of engagement.
In the hopes of addressing these challenges and securing an edge in finding capable candidates, companies are turning to artificial intelligence (“AI”) to assist in hiring and recruitment. The promise of AI in this field is that it will counteract human bias and streamline the hiring process by identifying talent more efficiently. Rising to meet the demand, providers now offer AI hiring services ranging from natural language processing that parses resumes to social-profile mining and facial recognition technology that reads candidate emotions during video interviews. The market for these technologies places a premium on speed of hiring and increased retention rates, with an emphasis on identifying aptitude and personality to help establish fit for the role.
In this case, AI works by analyzing data gathered from an applicant's digital footprint against historical data that the company or its contracted technology partners have used to train the AI system to make certain cognitive determinations about human behavior. Yet if we are looking to AI to improve accuracy and mitigate bias in hiring, it becomes important to evaluate the quality and character of the training data being used; neither is guaranteed in the deployment of AI systems in any context.
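As a toy illustration of this training dynamic, the sketch below scores a new applicant by similarity to an averaged profile built from past hires. All feature names and values here are invented; real systems use far richer models, but the core mechanic, rewarding resemblance to historical data, is the same.

```python
# Toy sketch: score an applicant against a profile averaged from
# historical hires. Features and values are hypothetical.

HISTORICAL_HIRES = [
    {"years_experience": 5, "skills_matched": 8, "interview_score": 0.9},
    {"years_experience": 3, "skills_matched": 6, "interview_score": 0.8},
    {"years_experience": 7, "skills_matched": 9, "interview_score": 0.85},
]

def centroid(records):
    """Average each feature across the historical hires."""
    keys = records[0].keys()
    n = len(records)
    return {k: sum(r[k] for r in records) / n for k in keys}

def similarity_score(applicant, profile):
    """Inverse distance to the 'successful hire' profile. Higher means
    more similar to past hires, which is exactly how past bias can be
    replicated by an automated system."""
    dist = sum(abs(applicant[k] - profile[k]) for k in profile)
    return 1.0 / (1.0 + dist)

profile = centroid(HISTORICAL_HIRES)
applicant = {"years_experience": 4, "skills_matched": 7, "interview_score": 0.8}
print(round(similarity_score(applicant, profile), 3))
```

Notice that the model never sees a protected attribute; it simply learns what past hires looked like, which is where the quality of the training data becomes decisive.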
One risk is that data shaped by existing hiring practices will privilege certain qualities and end up entrenching discriminatory hiring patterns through automated decision-making. It is widely known that Amazon had to abandon its AI hiring tool after the revelation that the system exhibited a gender bias in evaluating candidates for technical roles. This occurred because the AI was trained to vet applicants against resumes submitted over the preceding decade, most of which came from men. As a result, the algorithm learned, for example, to favor verbs more commonly found on resumes from male engineers.
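This failure mode can be reproduced in miniature: train a scorer on word frequencies from a skewed resume corpus, and it will reward the vocabulary of the overrepresented group without any explicit rule telling it to. The corpus and verbs below are invented for illustration.

```python
# Toy illustration of the Amazon-style failure mode: a scorer built on
# word frequencies from a skewed resume corpus. Corpus is hypothetical.

from collections import Counter

# Hypothetical training corpus, dominated by one group's phrasing.
training_resumes = [
    "executed captured led engineered",
    "executed led built deployed",
    "collaborated executed shipped",
    "mentored collaborated organized",
]

word_counts = Counter(w for resume in training_resumes for w in resume.split())
total = sum(word_counts.values())

def score(resume_text):
    """Average training-corpus frequency of the resume's words. Words
    common in the skewed corpus are rewarded; everything else is
    penalized. The bias is learned from the data, not programmed."""
    words = resume_text.split()
    return sum(word_counts[w] / total for w in words) / len(words)

print(score("executed led deployed") > score("organized mentored taught"))
```

The scorer prefers the first resume purely because its verbs were frequent in the training set, mirroring how Amazon's tool came to favor phrasing common on male engineers' resumes.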
The paradox here is that the cognitive biases in hiring that AI is meant to correct are also built into the ecosystem of performance and retention data currently available to train these systems. Where a data set is biased or inaccurate because of the limitations of legacy hiring practice, one solution may be for management to require systems that deliberately introduce corrective data engineered to counteract biased patterns, producing datasets better suited to supporting equal consideration for a diverse applicant pool.
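One simple form such corrective engineering can take is reweighting: giving each training example a weight inversely proportional to its group's frequency, so that underrepresented groups carry equal aggregate weight. The group labels below are hypothetical; the formula is the standard "balanced" class-weight calculation.

```python
# Sketch of corrective reweighting: make every group contribute equal
# aggregate weight to training despite a skewed data set.

from collections import Counter

training_labels = ["A"] * 80 + ["B"] * 20  # skewed historical data

counts = Counter(training_labels)
n_groups = len(counts)
n_total = len(training_labels)

# Balanced weight: n_total / (n_groups * count_of_this_group).
weights = [n_total / (n_groups * counts[g]) for g in training_labels]

# After reweighting, each group carries the same total weight.
weight_by_group = {
    g: sum(w for w, lbl in zip(weights, training_labels) if lbl == g)
    for g in counts
}
print(weight_by_group)
```

Reweighting does not fix a data set that is missing information, but it prevents sheer volume from one group dominating what the model learns.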
Another data-based challenge for companies to be aware of is the uncertainty around using existing indicators of high performance to predict future success. A survey by IBM of Chief Human Resources Officers notes that as jobs are being redefined, “leading organizations are rethinking the employee construct at its most elemental level.” As organizations devise new business models to support emerging technologies, fairness and efficacy in hiring will require data-fluent HR professionals who are trained to understand the context and limitations of the algorithmic recommendations they receive.
Such mandates on data practice are best implemented as part of a broader company-wide strategy for the ethical use of AI. Treating hiring as consistent with the company's overall approach to AI helps create a legally and financially sustainable network of AI applications, thoughtfully designed from the outset to create long-term value. The hope remains that, if properly deployed, AI systems can counteract the intractable problem of implicit bias, reversing the trend toward discrimination in hiring while supporting the health of the organization. This coming spring, the World Economic Forum will release guidelines to help companies choose AI systems that support diversity in hiring rather than stifle it.