
Qimin Liu

VP of Data Science @ nugget.ai

Data Scientist and advocate for diversity and inclusion initiatives.

AI series on promoting fairness and diversity in talent recruitment via ethically and socially responsible AI



In people analytics, recruitment services often aim to select candidates with the potential for high performance. In the process, HR agents may emphasize aspects they have come to value through their hiring experience, or attempt to optimize employment costs and corporate gains. Modern computer-assisted hiring likewise tends to focus on tools that secure the required talent alone, sometimes unwittingly at the cost of diversity and fairness. Diversity across a wide range of dimensions, such as gender, sexuality, race and ethnicity, and cultural and educational backgrounds, can bring great value to a corporation by sparking creativity and innovation. Similarly, fairness, especially in workforce practices, defines the “Geist” of a corporation: it is fairness that motivates and inspires employees to engage proactively.


To illustrate the potential bias that compromises diversity and fairness in hiring practices, consider the following scenario. Assume that Ajay and Bob have equivalent quantitative skills essential to a quantitative analyst job. Ajay and Bob, however, each gave a different response to the same interview question, one that aims to examine their statistical knowledge. The question involves an elaborate scenario described in idioms and terms particular to local residents, because the interviewer is from the area. Ajay, coming from Asia, has only recently relocated there, despite years of work experience in an English-speaking country. Bob has been a local resident all his life. Although the position requires no knowledge of the local area, Ajay’s response to this question leaves the interviewer with the impression that Bob may be the better candidate. Admittedly, the decision to hire Bob is no less valid than the decision to hire Ajay, given their equivalent quantitative skills. It is nonetheless undesirable that an interview question intended to assess the same set of skills evokes different responses from candidates of different backgrounds because of the specifics of the question or the experiences of the interviewer. This phenomenon, in which a talent screening item functions differentially for or against members of certain groups even when the measured attribute is matched across groups, is termed differential item functioning (DIF) in psychometrics. DIF can undermine the validity of the recruitment process in various ways and deprive the hiring decision of respect for diversity and fairness.
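
To make the idea concrete, here is a minimal sketch of one common way to flag DIF: the logistic regression test of Swaminathan and Rogers (1990), which asks whether group membership still predicts an item response after controlling for the matched attribute. The simulated data, variable names, and effect sizes below are illustrative assumptions, not Nugget AI’s actual pipeline.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)        # 0 = reference group, 1 = focal group
ability = rng.normal(size=n)         # matched attribute (e.g., quantitative skill)

# Simulate an item with uniform DIF: harder for the focal group at equal ability.
logit = 1.2 * ability - 0.8 * group
response = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def loglik(X):
    """Log-likelihood of a logistic regression of the item response on X."""
    return sm.Logit(response, sm.add_constant(X)).fit(disp=0).llf

ll_base = loglik(ability)                                             # ability only
ll_uniform = loglik(np.column_stack([ability, group]))                # + group
ll_full = loglik(np.column_stack([ability, group, ability * group]))  # + interaction

# Likelihood-ratio tests: the group main effect flags uniform DIF;
# the ability-by-group interaction flags non-uniform DIF.
p_uniform = stats.chi2.sf(2 * (ll_uniform - ll_base), df=1)
p_nonuniform = stats.chi2.sf(2 * (ll_full - ll_uniform), df=1)
print(f"uniform DIF p = {p_uniform:.3g}, non-uniform DIF p = {p_nonuniform:.3g}")
```

In the Ajay-and-Bob scenario, the idiom-laden interview question would show exactly this signature: at the same quantitative skill level, one group is systematically less likely to answer it well.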


Computer-assisted recruitment processes can suffer from DIF as well: machine learning algorithms and conventional statistical models are only as good as the input features fed into them. If the input features carry bias for or against particular groups, the resulting inferences or predictions may unavoidably produce unfair advantages or disadvantages for certain groups of candidates. Even if the inferred insights or recommendations are not without value, acting on them puts fairness and diversity at risk.
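
A toy illustration of this propagation, with simulated data and made-up feature names rather than any real hiring dataset: suppose historical hiring labels were partly shaped by an idiom-laden screen like the one Ajay faced. A model trained on those labels learns to reward the locality-tracking feature, and two equally skilled candidates then receive different scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                            # 1 = local, 0 = newcomer
skill = rng.normal(size=n)                               # the attribute the job needs
idiom = rng.normal(loc=group.astype(float), scale=0.5)   # tracks locality, not skill

# Historical "hired" labels were partly driven by the idiom-laden screen.
hired = (skill + 0.8 * idiom + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, idiom]), hired)

# Two candidates with identical skill but different backgrounds:
ajay, bob = [1.5, 0.0], [1.5, 1.0]
p_ajay, p_bob = model.predict_proba([ajay, bob])[:, 1]
print(f"P(recommend Ajay) = {p_ajay:.2f}, P(recommend Bob) = {p_bob:.2f}")
```

The model is "accurate" with respect to its biased training labels, which is precisely why auditing the features themselves matters.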


At Nugget AI, we care not only about finding talent with the potential for high performance but also about finding such talent in a socially and ethically responsible way. Unlike our competitors, who highlight performance on either ability or personality assessments, Nugget AI recognizes the importance of finding candidates whose problem solving processes resemble those of a corporation’s top-performing employees. Unlike performance, which can be improved through practice, evaluation of candidates’ problem solving processes can show stability, consistency, and reliability. By gathering process data on how candidates solve role-specific problems across scenarios, we find candidates who can readily fit the role with the potential to grow. Given honest data sources, we further refine our features with rigor because we recognize the high stakes of workforce assessment. We want to eliminate potential compromises to diversity or fairness when informing talent recruitment decisions with Nugget AI. Thus, prior to building our AI models, Nugget AI filters features using models derived from modern psychometric theories, such as item response theory. The goal is to ensure that the input features influence the model in the same manner, given the same potential for high performance, regardless of candidates’ demographic backgrounds. In other words, a Nugget AI model only includes features that function similarly across candidates’ backgrounds given the same performance potential.
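
Conceptually, the screening step reduces to dropping any feature whose DIF test raises a flag before the machine learning model ever sees it. The sketch below uses assumed names throughout: dif_pvalues stands in for the output of per-feature DIF tests like the one sketched earlier, and the feature names and threshold are purely illustrative.

```python
def screen_features(features, dif_pvalues, alpha=0.01):
    """Keep only features whose DIF tests come back non-significant."""
    return [f for f in features if dif_pvalues[f] > alpha]

# Hypothetical process features and p-values from per-feature DIF tests:
candidate_features = ["time_to_first_action", "revision_count", "idiom_recall"]
dif_pvalues = {"time_to_first_action": 0.62,
               "revision_count": 0.31,
               "idiom_recall": 0.0002}

print(screen_features(candidate_features, dif_pvalues))
# ['time_to_first_action', 'revision_count']; 'idiom_recall' is dropped
```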


Nugget AI is among the first to combine psychometric theories with machine learning algorithms. Bringing psychometric theories to bear on the intricacy of human data is our acknowledgement of the human being behind each candidate profile: we want to celebrate candidates’ uniqueness and ensure fair hiring practices. Filtering out features that could compromise diversity and fairness is one of the many steps Nugget AI promises to take in building ethically and socially responsible AI tools for modern workforce management.




To learn more about our research, contact Qimin here.


