
Talent Cloud Results Report

Skills Instead of Experience, and the Significance of this Choice

The Problem

Selection criteria (or job requirements) are at the centre of the merit-based system that is the talent pipeline for government. It’s a big role for what can otherwise be a small section of the job advertisement. Nonetheless, this small section is what primarily decides who will be hired by the Government of Canada and who won’t be.

In the Government of Canada, it’s standard practice to craft criteria that are several sentences long, often describing the experience that the manager and HR advisor think will ensure applicants have the skills needed to do the work. The problem with this approach is that it constrains the paths by which people could have acquired the skills. If those involved in creating the job advertisement are unaware that similar work is being performed in other sectors, they might even limit the experience to be “in government”.

At scale, this type of experience-based, or biography-based, staffing can have a huge impact on the life experiences of those entering government. In other words, the type of selection criteria we use can directly shape the diversity of the talent we hire.

To address this limitation of experience criteria, Talent Cloud decided to shift to exclusively allowing skills (and occasionally knowledge) as criteria on our job advertisements. Experience still plays a key role in the application process. Applicants are encouraged to submit their experience as evidence that they have the skills requested on the advertisement. But using skills, rather than experience, as criteria means we’re not limiting which experiences could have developed and demonstrated those skills.

A Problem of Our Own

Because of Talent Cloud’s lack of access to a Protected B server, the team was unable to collect Employment Equity data on the platform, which seriously undercut our ability to statistically prove (or disprove) the validity of the model.

To address this, we took extra care to conduct qualitative analysis, and to interview applicants and managers wherever possible. We were also able to do a more in-depth analysis of a small handful of positions on the platform. But a larger statistical study will be required before any definitive conclusions can be drawn.

The Hypotheses

The overall idea: Changing the definition of the requirements in the selection criteria for the job advertisement will change who gets hired in the end. By replacing a list of detailed required experiences with a list of skills, and broadening what is considered evidence of competence, more people are likely to have life paths that will allow them to claim and demonstrate that skill, which will lead to more diversity in the applicant pool.

The new model will be able to work within the current government hiring ecosystem to translate value and influence decision making at critical points in the staffing process:

  1. Creating the job advertisement: Managers will be able to articulate their hiring needs in terms of skills, rather than previous experience, when they are developing the job advertisement.
  2. Decision to apply: Using skills on the job advertisement, rather than experience, will attract a more diverse group of applicants, as more people will see themselves as potentially qualified for the role.
  3. Application: Applicants will be able to put forward sufficient evidence in the job application to demonstrate that they have the required skills.
  4. Initial screening: Managers and HR advisors will be able to understand the value of this evidence so they can decide who to invite to further testing and who to screen out.
  5. Final hiring decision: A final result will show increased diversity, leveraging what is hopefully a higher percentage of equity-seeking applicants in the initial applicant pool and following the application screening stage.

The Experiment

This was a multi-year endeavour, with several stages in testing and development.

Initially, Talent Cloud worked on the Impact-Driven staffing model (see Section 2). This helped managers articulate job requirements by first focusing on what the impact of the hire would be, and then what key tasks would be done to deliver the impact, and what skills would be required to do those tasks. This led into testing of numerous different types of skills frameworks and methodologies (which could fill their own small report).

Once a framework for articulating skills requirements was selected, based on user testing with hiring managers, an MS Word document version of the flow was given to managers to use for testing on live job advertisements. Following the success of this early testing, the framework was coded into the platform as part of the tool for building job advertisements.

User testing was also done with applicants to see if the skills design was appealing, and something that they could speak to in an application. This was then tested live through the early version of the job advertisement and application process, where applicants were able to claim skills at certain levels, and then provide both explanations and attached evidence (such as micro-reference contacts or portfolios).

The Talent Cloud team then carefully monitored how managers and HR advisors processed the information in the application forms. The team also tracked, to the extent possible, how applicants were viewed as they moved through the hiring process.

The team also interviewed numerous managers and applicants, and conducted a survey with applicants on their platform experiences.

The Results

While the approach appeared to produce the desired result in terms of diversity in the applicant pool (to the extent that we were able to determine), the early application process proved to be a stumbling block. Some managers and HR advisors had a lot of difficulty assessing applicants who didn’t have more traditional experience, and many applicants didn’t provide the necessary evidence for the required skills when the application gave them the chance to do so. This problem persisted with the initial design, even after several smaller interventions and nudges were added on both the applicant and manager sides of the equation.

Interestingly, despite the challenges at the screening stage, managers still ended up making diverse hires, and many reported that they found talent they would never normally have managed to attract or hire. Applicants who were hired also regularly reported that they had previously not been successful in government applications or had not wanted to apply, and that the skills-based selection criteria had influenced how they perceived their qualifications when they saw the initial job advertisement.

So the problem with the model was clearly at the screening stage, not with the attraction of applicants or the final decision of managers. Talent Cloud determined that a better “value calculator” was needed if managers were going to be able to recognize and accept a wide variety of skills evidence from applicants. And applicants were going to need to be better equipped to give managers what they were looking for when making the decision.

This led to the development of a tagging system where applicants are able to identify experiences in one of five categories.

Once an experience is tagged, applicants are asked to connect skills to the experience, and then explain how the skill was used during that specific experience. The platform then produces a narrative of the skills experience that can be viewed either chronologically or per skill. Applicants are given a window into what managers see when they assess if an applicant possesses the necessary skill or not. And managers are presented with the information they need in a way that showcases skills gained through non-traditional work and education pathways.
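The tagging flow described above can be sketched in code. This is a minimal, hypothetical illustration only: the category names, field names, and view functions below are assumptions for the sketch, not the Talent Cloud platform’s actual schema (the report does not list the five categories).

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    """One tagged experience, linking skills to a justification narrative."""
    title: str
    category: str                 # one of the five experience categories (illustrative)
    start_year: int
    skills: dict[str, str] = field(default_factory=dict)  # skill -> how it was used

def chronological_view(experiences: list[Experience]) -> list[Experience]:
    """Order experiences by start year, as a timeline a manager might read."""
    return sorted(experiences, key=lambda e: e.start_year)

def per_skill_view(experiences: list[Experience]) -> dict[str, list[str]]:
    """Group skill justifications by skill, across all experiences."""
    view: dict[str, list[str]] = {}
    for exp in chronological_view(experiences):
        for skill, justification in exp.skills.items():
            view.setdefault(skill, []).append(f"{exp.title}: {justification}")
    return view

experiences = [
    Experience("Community volunteer coordinator", "community", 2016,
               {"Project management": "Scheduled 30 volunteers across events"}),
    Experience("Web developer, small business", "work", 2018,
               {"JavaScript": "Built the storefront",
                "Project management": "Ran the redesign end to end"}),
]

print(per_skill_view(experiences)["Project management"])
```

The two views correspond to the two ways the report says the narrative can be read: chronologically (a timeline of experiences) or per skill (all the evidence for one skill, wherever it was gained).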

Notably, this design was heavily influenced by user testing with Indigenous users and others from underrepresented groups who emphasized that there needed to be a better way to recognize the value of their lived experience.

In user testing throughout 2020, and in the very limited live testing in 2021, this new design performed spectacularly well. It has been one of our team’s most successful interventions, and one of the hardest pieces of theory and design the team has worked on. We deeply hope that someone will be able to continue this research with a statistically large enough sample size. If proven as a methodology, this could have an impact on the design of many inclusion and diversity approaches in the Government of Canada, far beyond just recruitment.

Insights

Setting skills as the essential criteria (rather than biographically based experience) proved to be a highly valuable intervention in terms of advancing diversity. But it’s not an intervention that works alone.

When it comes to advancing diversity and inclusion, it’s the totality of the ecosystem that matters. That means that diversity and inclusion must be structurally supported at every stage, from the initial appeal of a recruitment platform to the design of the job advertisement, the initial screening, and the final assessment. While setting skills as the heart of the process made a difference, that difference would have been undone if we hadn’t found a methodology that allowed managers to see and translate the value of diverse lived experiences into a recognition of those skills.

Assessment for soft skills

In our user testing, managers wanted to validate soft skills (transferable skills) by testing them directly or through reference checks. Few even considered the information on soft skills that applicants provided in their applications. And evidence of soft skills was among the hardest things for applicants to write in an application.

Because applying to jobs takes a lot of time, and we don’t want to ask for things that aren’t needed, the new Timeline Application Model no longer requires applicants to write descriptions of how they meet soft skill requirements. This saves time and energy for managers and applicants, and places more emphasis on providing strong, clear evidence of occupational skills in the initial application.

Bottom line: rewriting essential criteria won’t make a difference alone. But replacing detailed experience criteria with required skills is a powerful step forward when paired with an application design that connects diverse life paths with the structural representation of applicant abilities.
