
The pieces of a nugget challenge

In this blog post, we outline what inspired the nugget.ai challenge and the types of questions it is composed of.

Considering nugget.ai’s challenges to measure the skills of your employees or candidates, but want more information about each of the challenge components? Completed a nugget.ai challenge and wondering why you were asked those questions?

Look no further! In this blog post, we will outline what inspired the nugget.ai challenge and the types of questions that allow us to assess people’s soft skills. After reading this post, if you are looking for more information on how the challenge is created, click here; if you want more information about how we generate scores from the nugget.ai challenge, stay tuned for a blog post coming soon!

What are nugget.ai challenges?

At nugget.ai we use our assessments (what we call ‘challenges’) to assess people’s soft skills. Soft skills are interpersonal skills and personal attributes that allow people to effectively perform their work duties (Lievens & Sackett, 2012); examples include critical thinking and communication. These are distinct from hard skills, which are technical skills that are task-focused and specific to the duties of a job (Hendarman & Cantner, 2018); examples include performing a needs analysis (for a management consultant) or pitching to a client (for a sales representative). At nugget.ai we assess nine main soft skill categories: Leadership, Collaboration, Interpersonal Sensitivity, Communication, Work Management, Systems Fluency, Self-Management, Improvement Focus and Critical Thinking. Each of these soft skills is assessed by multiple challenges that measure its different facets. By assessing each skill through multiple challenges, we can more thoroughly understand why users made the choices they did and better account for differences between people.

What are the two types of questions in the nugget.ai challenge?

nugget.ai challenges are broken into two types of questions, each of which is inspired by an established assessment method: rank-based questions derived from situational judgement tests (SJTs) and open format questions derived from work samples.

SJTs are a type of selection assessment that uses case-based scenarios to assess an individual’s judgement (Lievens et al., 2008). Individuals are presented with a scenario and asked to select (choose one), rate (the overall likelihood of) or rank (from most to least likely) scenario-specific response options to illustrate what they would do, or think they should do, in the scenario (Lievens et al., 2008). In our challenges we ask users to rank each of the provided response options to outline what they would do if they were placed in the specific workplace scenario. Our goal is to simulate a person’s on-the-job behaviour and decisions as closely as possible, and therefore we focus on ‘what they would do’ rather than ‘what they believe they should do’. We ask users to rank order the response options, instead of selecting just one option, to obtain more information about the user and more accurately assess their soft skills. The rank ordering a user produces is then assessed to determine how heavily the user relies on the skill being measured (e.g., does the user rely heavily on critical thinking in work situations?).

Below is an example of what one of our rank-based questions would look like:

“You are a team lead running a meeting with Avery, the junior lead recently assigned to your team. The meeting is with your company’s most important client, Hayden. The stakes are high with this meeting, so you have been dreading the thought of it for weeks. It is the halfway point now and you call for a 15-minute break.

You exit the meeting room, with Avery following behind you.

Avery says, “Is it just me, or are we doing just absolutely horribly right now?”

You reply, “Yeah, the client hates pretty much everything. It seems to me that Hayden’s just not interested in changing any part of this process and that the company just wants to stick to their old ways.”

Avery nods in agreement and says, “Yeah, I definitely agree with you. It doesn’t seem like Hayden is really listening anymore. They are just taking every opportunity to shoot down what we say. So, what do we do?”

Order these options based on how likely you are to engage in the behaviour, from most likely (top) to least likely (bottom):

  1. Plan to go back into the room and ask the client, Hayden, for feedback to see if you should change directions.
  2. Suggest that you change directions to try to appease Hayden based on what you believe would be more interesting to them.
  3. Suggest you stick to your current ideas because you believe in your approach and have spent a lot of time on your meeting prep.
  4. Plan to go into the meeting and ask Hayden to pay attention if they seem distracted as they might just not be fully understanding your ideas.”
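
This post doesn’t describe how a user’s ranking is converted into a skill score (that is the topic of the upcoming scoring post), but to make the idea concrete, here is a minimal sketch of one common way SJT rankings can be scored: comparing the user’s ordering against an expert-keyed ordering using Spearman’s rank correlation. The expert key below is hypothetical, and this illustrates the general technique only, not nugget.ai’s actual method.

```python
# Illustrative sketch only -- not nugget.ai's actual scoring method.
# One common way to score an SJT ranking: compare the user's ordering
# against a hypothetical expert-keyed ordering using Spearman's rank
# correlation (this formula assumes no tied ranks).

def spearman_rho(user_rank: list[int], key_rank: list[int]) -> float:
    """Rank correlation between two rankings of the same options.

    Each list gives the rank position (1 = most likely) assigned to
    each response option. Returns a value in [-1, 1]; +1 means the
    user's ordering matches the key exactly, -1 means it is reversed.
    """
    n = len(user_rank)
    d_squared = sum((u - k) ** 2 for u, k in zip(user_rank, key_rank))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical key for the example above: asking the client for
# feedback (option 1) keyed as most likely, demanding the client's
# attention (option 4) keyed as least likely.
expert_key = [1, 2, 3, 4]    # rank assigned to options 1-4 by the key
user_ranking = [1, 3, 2, 4]  # a user who swapped options 2 and 3

print(spearman_rho(user_ranking, expert_key))  # 0.8, close to the key
```

Under this kind of scheme, a score near +1 would suggest the user’s judgement closely tracks the keyed ordering for the skill in question, while a score near -1 would suggest the opposite.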

Work samples, the assessment method that inspires the open format questions within our challenges, are assessments that measure an individual’s skills or abilities by examining their behaviour when completing a task (or set of tasks) that is comparable and relevant to the tasks they would perform on the job (Robertson & Kandola, 1982). At nugget.ai, our open format questions ask users to provide a sample of what they would produce for a specific work task or activity, such as crafting an email response, drafting a memo or writing a script of what they would say in the moment. The scenarios are work relevant but generic enough to allow us to assess the soft skills of any user, as opposed to needing challenges specific to a job or company (we do, however, allow our clients to customize their challenges to better fit a job or company context; more information coming soon). Users’ responses to the open format questions are assessed by a proprietary natural language processing (NLP) algorithm that reviews each sentence for language indicative of each of our nugget soft skills, allowing us to produce a score on each skill (more information on open format question scoring and our soft skill NLP algorithm coming soon).

Below is an example of what one of our open format questions would look like:

“After communicating your plan to Avery, you finish filling up your water bottle and look down at your watch. You wince at the realization that the break is coming to an end.

You nudge Avery to start walking towards the meeting room. You both enter the room and see that Hayden is just getting off what sounds like an important call.

Hayden lets out a sigh and says, “What’s next?”

You take a deep breath and say…

Write a script of what you would say to Hayden.”
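
Our soft skill NLP algorithm itself is proprietary and will be covered in the upcoming scoring post, but the high-level description above (reviewing each sentence for language indicative of each skill) can be illustrated with a deliberately simplified toy sketch. The indicator terms below are invented for illustration; a real system would use far richer language models than keyword matching.

```python
# Toy illustration only -- nugget.ai's NLP algorithm is proprietary and
# far more sophisticated. This sketch shows the general shape of
# sentence-level skill scoring: split a response into sentences, flag
# language indicative of each skill, and aggregate a per-skill score.
import re

# Hypothetical indicator terms per soft skill, invented for illustration.
SKILL_INDICATORS = {
    "Communication": {"explain", "clarify", "listen", "summarize"},
    "Critical Thinking": {"evidence", "evaluate", "because", "compare"},
    "Interpersonal Sensitivity": {"appreciate", "understand", "concern"},
}

def score_response(text: str) -> dict[str, float]:
    """For each skill, return the fraction of sentences containing at
    least one of that skill's indicator terms."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    if not sentences:
        return {skill: 0.0 for skill in SKILL_INDICATORS}
    return {
        skill: sum(
            any(term in sentence.lower() for term in terms)
            for sentence in sentences
        ) / len(sentences)
        for skill, terms in SKILL_INDICATORS.items()
    }

reply = ("Hayden, I appreciate your patience so far. "
         "Let me summarize where we are and clarify the next steps. "
         "If part of the plan does not fit, tell us why so we can "
         "evaluate alternatives together.")
print(score_response(reply))
# {'Communication': 0.33..., 'Critical Thinking': 0.33...,
#  'Interpersonal Sensitivity': 0.33...}
```

The point of the sketch is only the sentence-by-sentence, per-skill structure of the scoring; in practice, per-sentence classification would be handled by a trained language model rather than keyword lists.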

Conclusion

By using these two question formats, we’re able to understand not only what decisions a person is likely to make in work situations (the rank-based questions) but also what they would produce when trying to accomplish a work task or activity, whether that is sending an email, speaking to a co-worker, writing a memo to their team, or something else (the open format questions). With this information, we can infer which soft skills they rely on when addressing work situations without depending on self-reports, which can only provide accurate skill assessments if the person is self-aware and honest about their strengths and areas of development.

Interested in seeing more examples of our challenges? Click here!

References

Hendarman, A. F., & Cantner, U. (2018). Soft skills, hard skills, and individual innovativeness. Eurasian Business Review, 8(2), 139–169. https://doi.org/10.1007/s40821-017-0076-6

Lievens, F., Peeters, H., & Schollaert, E. (2008). Situational judgment tests: A review of recent research. Personnel Review, 37(4), 426–441. https://doi.org/10.1108/00483480810877598

Lievens, F., & Sackett, P. R. (2012). The validity of interpersonal skills assessment via situational judgment tests for predicting academic success and job performance. Journal of Applied Psychology, 97(2), 460–468. https://doi.org/10.1037/a0025741

Robertson, I. T., & Kandola, R. S. (1982). Work sample tests: Validity, adverse impact and applicant reaction. Journal of Occupational Psychology, 55(3), 171–183. https://doi.org/10.1111/j.2044-8325.1982.tb00091.x

Nicholas Tessier 🧠

Product Manager