The science behind nugget.ai challenges

How we use I/O Science and Psychometric Data Science to create winning soft skills profiles.

This paper was written by students in the Masters of I/O Science program at the University of Guelph in Ontario, Canada. Special thanks to the students for their generous contributions: Alexandria Elms, Craig Leonard, and Melissa Pike. Supervised by Marian Pitel, VP of Research at nugget.ai.

If you're not familiar with us yet, let me give you a quick rundown of what we do. nugget.ai is a soft skills screening tool that helps companies hire and develop top talent. We use I/O Psychology and AI to evaluate performance on personalized problem-solving challenges that reflect day-to-day work tasks.

What are nugget.ai challenges?

To meet current demands for assessing employee and job applicant performance, nugget.ai has designed challenges (i.e., work samples) that can help companies hire or train personnel. These challenges consist of job-related exercises that candidates and employees complete. Each challenge is specific and relevant to the position the company is hiring or training for. Some of the challenges are based on previously established work samples currently used in global organizations. With the help of in-house PhD candidates and experts in Industrial-Organizational Psychology and Psychometrics, nugget.ai has developed unique challenges using a proprietary process that is kept consistent across all challenges.

A key aspect of the nugget.ai challenge is its state-of-the-art evaluation method. Performance on each challenge is evaluated using exploratory and predictive machine learning models. Algorithms track everything from the pace of work to the tone of language, as well as the keywords and grammar used by the candidates or employees undertaking the challenges. The information collected through the challenges is then used to create individual profiles for each candidate or employee, as well as an aggregate profile for the pool of candidates or employees. These profiles provide summarized data to empower human capital decision making in the data-driven era.
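The exact models and features behind this pipeline are proprietary, but the general idea can be sketched. The following Python snippet is a minimal illustration, not nugget.ai's actual system: it assumes a toy feature set (pace, vocabulary richness, average word length) and a simple logistic regression trained on a handful of invented past responses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(text: str, seconds: float) -> list:
    """Derive simple behavioural features from one challenge response."""
    words = text.split()
    n = max(len(words), 1)
    return [
        len(words) / max(seconds, 1.0),  # pace: words per second
        len(set(words)) / n,             # vocabulary richness
        sum(len(w) for w in words) / n,  # average word length (crude style proxy)
    ]

# Invented training data: past responses labelled 1 if written by a top performer.
past = [
    ("prioritize the client escalation then document the root cause", 95, 1),
    ("i would ask my manager what to do", 300, 0),
    ("triage tickets by impact, communicate a timeline, follow up", 110, 1),
    ("not sure, maybe wait and see", 280, 0),
]
X = np.array([extract_features(text, secs) for text, secs, _ in past])
y = np.array([label for _, _, label in past])

model = LogisticRegression().fit(X, y)

# Score a new response: probability that it resembles top-performer responses.
new_x = np.array([extract_features("escalate, document, and set expectations", 120)])
print(model.predict_proba(new_x)[0, 1])
```

In practice the feature space would be far richer (tone, grammar, keywords) and the models would be validated against held-out data, but the shape of the pipeline, featurize then predict, is the same.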

The benchmarks for performance on the nugget.ai challenges are based on top-performing employees. When the challenges are used for hiring, these benchmarks help nugget.ai assess how job candidates perform compared to existing top performers. When the challenges are used for training, the benchmarks can help determine the degree to which employees need further training, and which specific skills or abilities can be developed to close the gap between their current performance and a top-performing level.
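Comparing a single score to a benchmark is conventionally expressed as a standardized distance. The snippet below is illustrative only, with invented scores, and assumes the benchmark distribution is roughly normal:

```python
from statistics import NormalDist, mean, stdev

top_performer_scores = [78, 82, 85, 88, 90, 91]  # hypothetical benchmark pool
candidate_score = 84

mu, sigma = mean(top_performer_scores), stdev(top_performer_scores)
z = (candidate_score - mu) / sigma  # standardized distance from the benchmark
percentile = NormalDist().cdf(z)    # assumes roughly normal benchmark scores

print(f"z = {z:.2f}; candidate exceeds ~{percentile:.0%} of the benchmark group")

# For training, the gap to the benchmark mean suggests how much development
# is needed to reach a top-performing level.
print(f"gap to benchmark mean: {mu - candidate_score:.1f} points")
```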

What is the hybrid approach?

The nugget.ai challenges use a hybrid approach to assessment. Specifically, the challenges combine elements of two psychometrically robust assessment tools: work samples and assessment centres. This combination allows nugget.ai challenges to assess challenge-takers accurately and thoroughly, while minimizing the limitations of using a work sample or assessment centre on its own.

What are work sample tests?

A work sample is a test in which an applicant's performance is assessed on a set of tasks that are comparable and relevant to tasks performed on the job (Robertson & Kandola, 1982). A key tenet of work samples is their high correspondence with the position of interest (Guion, 1998). There are four categories of work samples: (1) psychomotor, (2) individual, situational decision making, (3) job-related information, and (4) group discussion or decision making (Robertson & Kandola, 1982). nugget.ai's challenges are analogous to individual, situational decision-making work samples, in which candidates make decisions similar to those an employee would need to make in the job of interest (Robertson & Kandola, 1982).

Work samples have some of the highest validity rates among selection tools, meaning they can be strong predictors of who will perform well in a given role (Hunter & Hunter, 1984; Reilly & Warech, 1993).
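To make "validity" concrete: a selection tool's criterion validity is typically reported as the correlation between tool scores and later job performance. A tiny worked example with invented numbers:

```python
from statistics import correlation  # available in Python 3.10+

work_sample_scores = [62, 70, 74, 80, 85, 91]         # scores at hiring time
job_performance    = [2.9, 3.1, 3.4, 3.6, 4.0, 4.3]  # later supervisor ratings

r = correlation(work_sample_scores, job_performance)
print(f"validity coefficient r = {r:.2f}")  # higher r = stronger prediction
```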

What are assessment centres?

An assessment centre is a combination of dynamic tests and exercises, often done in person, that are used to assess candidates and employees on job-relevant competencies and skills (Thornton & Rupp, 2006). Assessment centre exercises often include interviews and questionnaires, but the more notable exercises include role-play scenarios, group activities, and written exercises. This method is primarily used for observing the behavioural processes of candidates, that is, the "how" of candidate performance. Assessment centres are built on eight main components: exercises must (1) be based on a job analysis, (2) use multiple assessment techniques, (3) use simulation exercises, (4) elicit behavioural responses, (5) use behavioural questions, (6) use multiple assessors, (7) ensure those assessors are trained, and (8) integrate the observations in a formal manner (Thornton & Rupp, 2006).

Assessment centres have excellent predictive validity and are among the most valid selection tools (Schmidt & Hunter, 1998; Meriac, Hoffman, Woehr, & Fleisher, 2008). They are used for external and internal hiring, promotion decisions, and high-potential identification (Thornton & Gibbons, 2009). Assessment centres can also be used for developmental purposes, as they can assess how employees may perform on competencies they might not otherwise get to exhibit (Thornton & Gibbons, 2009). However, assessment centres can strain a client's resources, as they are generally a more expensive and time-consuming assessment tool.

The Hybrid Approach

Assessment centres and work samples evidently have intersecting components, as both employ practices to ensure the tests are job relevant and rigorously assessed. nugget.ai seeks to combine the best elements of both methods while maintaining psychometric integrity and rigour. nugget.ai challenges are based on information gathered from examining the job in question, and the challenges are therefore job specific. The challenges require challenge-takers to exercise their decision-making and problem-solving skills as they complete tasks similar to those they may encounter on the job. The design of the challenges elicits behavioural responses from challenge-takers, and these behaviours are then assessed using a statistical approach that amalgamates the data in a systematic manner. nugget.ai challenges go beyond current work samples in that the method of performance evaluation provides a detailed depiction of performance metrics, while also reducing the cost and time constraints associated with assessment centres. The nugget.ai challenges are built on this hybrid approach to provide the most effective and valuable assessment tool for each client.

What is the advantage of work sample tests over questionnaires?

When hiring, companies often use questionnaires to gain an understanding of the personality or cognitive ability of job candidates. In fact, questionnaires are one of the most commonly used hiring tools (Spector, 2012). Questionnaires are useful because they can be created to measure many different traits and behaviours. However, companies can be wary of using questionnaires to measure applicants' knowledge, skills, or abilities because of the concern that applicants may misrepresent themselves, or 'fake', on these kinds of tests (Spector, 2012). When researchers have asked applicants whether they engage in faking or some degree of deception in the application process, the majority state that they do (Donovan, Dwight & Hurtz, 2003). Even when applicants are honest, there are limits to their self-awareness, which affects their ability to report their own abilities (McDonald, 2008; Spector, 2012). It is therefore important to use assessment methods that demonstrate an applicant's skills, rather than having them report what their skills are. Beyond faking, questionnaires are also prone to response biases (e.g., the tendency to respond at the extremes), and responses can even be affected by an applicant's mood (Paulhus & Vazire, 2007; Spector, 2012). Some researchers also argue that questionnaires are not as objectively accurate as behavioural measures (Kagan, 2007), so a case can be made that better measures should be used in selection.

nugget.ai challenges circumvent the issues with questionnaires while reaping many of their benefits. nugget.ai challenges are objective measures of an applicant's performance, which reduces the subjectivity issues present in questionnaires. In other words, by using an evaluation process that does not require applicants to judge their own knowledge, skills, abilities, or assessment performance, the nugget.ai challenges help minimize biases as well as faking. Minimizing the issues associated with questionnaires is important, but it is ideal to do so while retaining the benefits questionnaires offer, and nugget.ai challenges do just that. Three major benefits of questionnaires are that the results are easy to interpret (Paulhus & Vazire, 2007), the scoring is straightforward (McDonald, 2008), and the data are easy to collect. With the use of intelligent prediction algorithms, the challenges retain all of these benefits while avoiding the issues above, giving them an upper hand over questionnaires.

Another important aspect of nugget.ai challenges is their ability to measure some degree of cognitive ability. Why is cognitive ability important, and how does it relate to questionnaires? Cognitive ability is one of the most accurate predictors of job performance (Schmidt & Hunter, 1998), and questionnaires are a very common method of assessing it. The issue with cognitive ability questionnaires, however, is that they are high on adverse impact: they tend to disproportionately screen out members of protected classes (e.g., African Americans). Methods such as work samples and assessment centres (i.e., the components present in nugget.ai challenges) are able to capture some degree of cognitive ability, often with less adverse impact than questionnaires (Callinan & Robertson, 2000; Schmitt & Mills, 2001), making them preferable for applicant and employee assessments.
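Adverse impact has a standard operational check, the EEOC "four-fifths rule": if one group's selection rate falls below 80% of the highest group's rate, potential adverse impact is flagged. A small worked example with invented hiring numbers:

```python
def selection_rate(hired: int, applied: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return hired / applied

rate_group_a = selection_rate(hired=30, applied=100)  # 0.30
rate_group_b = selection_rate(hired=20, applied=100)  # 0.20

impact_ratio = rate_group_b / rate_group_a            # 0.67
print(f"impact ratio = {impact_ratio:.2f}; "
      f"{'adverse impact flagged' if impact_ratio < 0.8 else 'within guideline'}")
```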

When the challenges are presented to candidates, it is immediately evident that they are job-related, and job-related measures are perceived as fair and viewed positively by applicants (Steiner & Gilliland, 1996). Because the challenges involve the applicant completing job-specific tasks, using them in a hiring process provides a realistic preview of what the job will entail, helping applicants decide whether the job is suitable for them and whether to continue pursuing it (Downs, Farr, & Colbeck, 1978). This can help prevent turnover down the road, as the applicant has a better idea of what to expect and whether the job is right for them.

How do we maximize the benefits and minimize the issues of nugget.ai challenges?

nugget.ai challenges are analogous to work sample tests and assessment centre exercises. When designing nugget.ai challenges, careful consideration is given to ensuring that the challenges meet industry standards. The Society for Industrial and Organizational Psychology (SIOP), the leading body for the scientific study of psychology at work, has published guidelines on the effective use of job simulations in selection. According to these guidelines, work sample tests are most effective when they:

  1. are based on thorough and accurate job information,
  2. are developed with attention to quality test development practices,
  3. have a high degree of structure, such that all individuals are given the same opportunities and are evaluated on the same basis,
  4. are scored in a standardized manner, and
  5. include multiple raters where appropriate and possible.

When developing nugget.ai challenges, the first four of these guidelines are prioritized. When customizing nugget.ai challenges, clients are encouraged to answer a set of questions that target specific and relevant job information (guideline 1). The expertise of our in-house industrial-organizational psychology researchers and psychometric specialists is leveraged to ensure that quality test development practices are adopted (guideline 2). The automated display and distribution of the challenges maximizes the likelihood that challenge-takers undergo a similar and consistent process (guideline 3). Lastly, our machine learning scoring model keeps the rating standardized (guideline 4). The fifth guideline, on multiple raters, is less applicable to nugget.ai's work, as discussed below.

As nugget.ai challenges are designed to emulate aspects of assessment centres, best practices for developing assessment centres are also considered when developing nugget.ai challenges. Assessment centre raters should use a "situational analysis" approach, which takes into account the level of performance required and the work context (Thornton & Gibbons, 2009). Because nugget.ai challenges are tailored to the specific position and job context, this practice is built in.

Furthermore, high-quality assessment centres use multiple assessors to rate each participant's performance (Thornton & Rupp, 2006), and ratings of different tasks and behaviours are typically combined into a single overall assessment rating (Thornton & Gibbons, 2009). Rating assessment centre performance is a difficult procedure that can be influenced by individual raters' biases, leading to an inaccurate picture of a candidate's true performance. The rating of work sample data is subject to the same issues. nugget.ai's proprietary method for scoring challenges side-steps this problem.

The use of algorithmic scoring helps ensure that responses are scored consistently. In fact, some psychologists strongly support this statistical approach to integrating work sample results, arguing that it produces a more accurate overall assessment rating than combining the individual judgements of different assessors (Feltham, 1988; Sackett & Wilson, 1982; Thornton & Rupp, 2006).
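To illustrate why mechanical combination is consistent, here is a minimal sketch, not nugget.ai's actual model, in which fixed weights turn exercise sub-scores into one overall rating in exactly the same way for every candidate:

```python
# Illustrative fixed weights; a real system might derive these from
# a regression of sub-scores on past job performance.
weights = {"problem_solving": 0.5, "communication": 0.3, "pace": 0.2}

def overall_rating(sub_scores: dict) -> float:
    """Weighted sum applied identically to every candidate."""
    return sum(weights[k] * sub_scores[k] for k in weights)

candidate = {"problem_solving": 0.82, "communication": 0.74, "pace": 0.61}
print(f"overall = {overall_rating(candidate):.2f}")

# Unlike a panel of human assessors, the formula cannot drift between
# candidates, which is the consistency argument made in the literature.
```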

At nugget.ai, we continuously adopt best practices in developing our challenges and evaluation methods. By combining elements of selection methods that are widely used and respected by selection researchers and practitioners, these challenges help us identify and quantify behavioural patterns and levels of skill acquisition.

To learn more about our research, contact Marian here.