Research-based and evidence-based have become the new HR buzzwords. They are everywhere. Practitioners start their presentations with a bunch of academic references and the audience goes quiet. Nobody will ever question the validity and applicability of a Schmidt & Hunter (1998) paper, right?
I enjoy this professionalisation of the HR field, because Schmidt & Hunter (among others) contributed significantly to the fact that we no longer make selection decisions based on candidates’ interests or graphology scores. A newer working paper by Schmidt (“The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 100 Years of Research Findings”, instead of 85 years, made available in October 2016) backs up the earlier findings and even provides results for newer selection tools such as situational judgement tests (SJTs).
On the other hand, there is still enormous respect for complex psychological studies in the field. No average HR person needs to understand all the details, that’s for sure. But does this mean we have to apply all the study results at once without judging them critically? Right now it is chic to combine a general mental ability (GMA) test with an integrity test (or, alternatively, a structured interview) because it is research-based. After all, making this effort is better than tossing a coin.
Here are some of my thoughts on this:
(1) As soon as somebody says that something is “research-based”, ask for the exact source this person is referring to. If you can, read the cited research yourself. If you don’t fully grasp it, look for support. There are plenty of resources available, and it will expand your understanding of the interplay between research and practice. You will also be able to spot fallacies in presentations about research-based selection, for example. I saw some practitioners adding “CV” as a selection method to Schmidt & Hunter’s results, suggesting that a recruiter looking at a CV will not make decisions as good as those made with the combination of a GMA test and an integrity test. In the original review, there is no such thing as a CV at all.
(2) I am surprised that new selection procedures are rarely followed up on. There are short-term KPIs you could measure (e.g. the number of interviews needed to fill a position), but the most important question is: do employees selected with the new procedure perform better (and need less time for training) than those selected with the old procedure in your context (NOT: in some research paper)? HR practitioners believe that by applying the best research-based selection tools, they are sure to select the best candidate for the company they work in. But how do you know if you never check?
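Checking this does not require anything sophisticated. As a minimal sketch (with entirely made-up performance ratings; the cohort data and the helper function are illustrative, not a recommendation of any particular statistical package), you could compare first-year performance ratings of hires from the old and new procedures with a simple two-sample comparison:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical first-year performance ratings (1-5 scale) for two cohorts:
# hires selected with the old procedure vs. the new GMA + integrity-test procedure.
old_procedure = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 2.7, 3.2]
new_procedure = [3.6, 3.2, 3.8, 3.4, 3.1, 3.7, 3.5, 3.3]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (unequal variances allowed)."""
    var_a = stdev(a) ** 2 / len(a)
    var_b = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(var_a + var_b)

print(f"old cohort mean rating: {mean(old_procedure):.2f}")
print(f"new cohort mean rating: {mean(new_procedure):.2f}")
print(f"Welch's t (new vs. old): {welch_t(new_procedure, old_procedure):.2f}")
```

With real cohorts you would of course need larger samples and a proper significance test, but even this level of bookkeeping answers more than most rollouts ever ask.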
(3) I am also surprised by how little we dare to experiment with our own talent indicators beyond GMA and integrity tests. If everybody tests for the same things, how are we tapping jagged talent? I recently read Todd Rose’s “The End of Average”, where he describes how companies like Google and IGN experiment with individual selection indicators to unlock talent. I would love to see more of that courage in other companies and can only recommend the book for inspiration.
(4) Finally, I am surprised by how few HR colleagues try the new procedure themselves. Nothing is more eye-opening (if not shocking at times) than your own test scores, unintuitive user interfaces and automated reply e-mails, which you have to experience for yourself.