Assessments: Can’t Live With ‘em, Can’t Live Without ‘em

If you have been reading ERE over the last few weeks, you have probably been exposed to assessment argument overload. You might have read claims that unstructured interviews alone were sufficient to survive a guarantee period. You might have read selection scientists quoting numbers showing it took more than interviews to reduce turnover, increase training success, and improve on-the-job performance. And, of course, you might have read a few recruiters immodestly claim they knew more than anyone else on the subject. Well, good luck with that.

It reminded me of a scene from an old Monty Python movie where French soldiers high on a castle rampart shouted taunts at the English below. The English, who spoke no French, had no idea what they were saying. The French, who spoke no English, had no idea their taunts were being ignored. In frustration, the French hurled a cow at the English (i.e., a seldom-used medieval weapon of udder destruction).

Well, taunts can also confuse bystanders who don’t know whom to believe: scientists who study the effectiveness of different assessments under controlled circumstances, or someone with strong opinions and a product to sell. So, let’s see if we can clear away the smoke and mirrors.

Recruiting Objectives and Organization Objectives

I don’t direct these articles toward recruiting firms. The ones I know tell me their main quality-of-hire measure is surviving the guarantee period. Organizations, however, are different. They want lower turnover, successful training completions, and higher individual productivity. I have never heard an organization mention guarantee periods. So, if guarantee periods are your main metric, it’s time to stop reading and have some coffee or tea. If, however, you are a typical organization, keep reading.

Assessment Defined

Assessment is just another term for measurement. Every method used to evaluate applicants is an assessment. That includes application blanks, recruiting sources, photographs, interviews, tests, training workshops, video interviews, and so forth. And, unless you hire everyone who applies, the choice is not whether to assess, but how accurate and consistent you want assessments to be.

Rolling the Dice

Hiring is a game of odds. In spite of statements to the contrary, nothing anyone can say, do, or ask will provide 100% certainty that a specific employee will survive a guarantee period, have long tenure, quickly learn, effectively solve job-related problems, or become a top performer. Anyone who maintains otherwise lives in a parallel universe. However, even though achieving hiring perfection is like reaching the carrot at the end of the stick, we can do a great deal to control our odds of success.

In most cases, a common interview has one good purpose: it screens out blatantly unqualified candidates. The more questions you ask, the more opportunity there is for a candidate to say something wrong. Once a candidate passes the interview, however, research shows the odds of success are about 50/50. Starting with a base rate of chance, a smart HR group has potential for improvement, providing, of course, they start with a clear understanding of job requirements and business necessity.

It’s critical to discover the specific competencies associated with job performance or failure. Job descriptions and compensation bands are only a starting point. You need to extract trustworthy competency information from training programs, job holders, job managers, and a visionary manager or two. This is not easy because most people don’t think in competency terms. However, once you have a critical list of job competencies, you can start using assessments to mine three sources: a candidate’s past performance, future intentions, and present-day abilities.

Hiring Competencies: The Candidate’s Tool Box

Many people do not understand hiring competencies. I’ll keep it simple. A hiring competency is not something a candidate accomplishes on the job; that involves too many variables. A hiring competency is a specific skill the candidate uses from time to time to get the job done, and it has to be something we can measure quickly and accurately.

On the simplest level, a hiring competency might include skills like learning ability, technical knowledge, problem-solving ability, organization skills, prioritization, coaching skills, persuasive skills, or the like. It might also include the attitudes, interests, and motivations (AIMs) to apply those skills. Think of hiring competencies and AIMs as the candidate’s “personal toolbox.” They are not a work product left behind at the end of the day.

Measuring Competencies: The Recruiters’ Toolbox

Hiring personnel are responsible for quickly and effectively measuring candidate competencies. They need to master questioning techniques that probe the candidate’s past performance while, at the same time, making it hard for the candidate to fake good. These are usually called behavioral event interviews, or BEIs. BEIs gather complete stories, extract competencies, and compare them to job requirements. For example, if my job requires analytical skills, I might ask a candidate to share a time when they had to solve a problem: what the problem was, what they did, and what the result was. Once I learn how the candidate solves problems, I can use that information to predict performance in the new job. But be cautious …

The high structure of BEI makes it more accurate than garden-variety interviews, but BEI is not perfect. And BEI is not a set of short questions. Candidates are still motivated to hide weaknesses and often give examples that are not even close to the job. That means BEI-trained interviewers must have the skills to dig for data, determine and evaluate hiring competencies, distinguish between hard facts and a good story, and know when to press for details. BEI accuracy requires thinking like a detective, and it usually takes months or even years to develop the skills. And even the best BE interviewer is only as effective as his or her job competency list.

That is why savvy organizations add other validated tools to the hiring process: something called a multi-trait, multi-method process.

Validation means the tool has been tested and proven to predict some aspect of job performance. Validated tools include self-reported tests, knowledge tests, and generic ability tests. I’m intentionally excluding tests such as the MBTI or the DISC, as well as clinical tests like the MMPI. In my professional opinion, broad personality or clinical tests should never be used as hiring tools; there is often little or no proof they predict job performance (you can read about this in some of my earlier articles). Never use any test whose vendor cannot provide documented proof the test was designed to predict job performance. Other assessment tools include simulations that require the candidate to perform critical parts of the job; skill tests that measure cognitive ability or technical knowledge; smart application blanks; and realistic job previews that provide gut-honest descriptions of what it’s like to work the job.

About Correlations

Selection scientists do not trust personal stories or opinions. Because they have learned how easily people can be misled, they only trust tests that measure something necessary for the job and show a strong correlation with some aspect of performance. Knowing the difficulty of being absolutely, positively correct, they report the “strength” of the association between scores and job performance as a correlation ranging from perfect negative (-1.0) through chance (0.0) to perfect positive (+1.0).
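To make that number concrete, here is a small sketch, in Python with entirely hypothetical scores and ratings, of how a correlation between assessment scores and later job-performance ratings is computed:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: ten hires' assessment scores and their later
# performance ratings on a 1-5 scale. These are invented for illustration.
test_scores = [55, 62, 70, 71, 75, 80, 84, 88, 91, 95]
performance = [2.1, 2.8, 2.5, 3.4, 3.0, 3.6, 3.2, 4.1, 3.9, 4.4]

r = pearson_r(test_scores, performance)
print(round(r, 2))  # always falls between -1.0 and +1.0
```

A value near +1.0 means high scorers reliably outperform low scorers; a value near 0.0 means the assessment tells you nothing about later performance.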

Correlations are frustrating for people who insist on certainty, and confusing for people whose last exposure to statistics might have led to periods of prolonged rest and sedation. So let’s put stats on the shelf and examine a few facts that are no-brainers: 100 smart employees will outperform 100 dull ones; 100 motivated employees will outperform 100 unmotivated ones; 100 persuasive salespeople will outperform 100 unpersuasive ones; 100 coaching managers will outperform 100 non-coaching ones; and 100 candidates who demonstrate they can do a job will outperform 100 who can only tell you about it. We may never be 100% accurate on a person-by-person basis (i.e., there are too many unexpected events that can affect our decision), but at the group level we can almost always skew the odds heavily in our favor.
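The group-level point can be sketched the same way. The toy simulation below (again, hypothetical numbers) assumes a test with only modest validity, roughly r = 0.4, and shows that hiring the top scorers out of a large applicant pool still beats hiring at random on average:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Simulate 1,000 applicants. Each has a latent on-the-job performance level,
# and a test score that only partly reflects it (validity of about 0.4).
VALIDITY = 0.4
applicants = []
for _ in range(1000):
    performance = random.gauss(0, 1)
    noise = random.gauss(0, 1)
    score = VALIDITY * performance + (1 - VALIDITY**2) ** 0.5 * noise
    applicants.append((score, performance))

# Hire 100 by top test score vs. 100 picked at random.
top_by_score = sorted(applicants, reverse=True)[:100]
random_pick = random.sample(applicants, 100)

def mean_performance(pairs):
    return sum(p for _, p in pairs) / len(pairs)

print(f"hired by test score: {mean_performance(top_by_score):+.2f}")
print(f"hired at random:     {mean_performance(random_pick):+.2f}")
```

No individual hire is guaranteed, but the test-selected group reliably averages higher performance than the random group, which is exactly the odds-skewing argument above.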

So, which assessment methods do you think will deliver the best-performing workforce? Those that start with job descriptions, or those backed with a detailed list of hiring competencies gathered from job holders, managers, and visionary managers? Those that use a few general questions, or those using validated tools such as structured behavioral or situational interviews; simulations that require the candidate to perform critical parts of the job; attitudes, interests, and motivations tests; skill tests that measure cognitive ability or technical knowledge; smart application blanks; and realistic job previews that provide gut-honest descriptions of what it’s like to work the job?

In all situations, we’ll use the gold-standard definition of quality of hire: collective turnover, training success, and on-the-job performance. Meanwhile, keep a sharp lookout for flying bulls!

The original post: ERE Articles