The Wiley Blackwell Handbook of the Psychology of Recruitment, Selection and Employee Retention,
First Edition. Edited by Harold W. Goldstein, Elaine D. Pulakos, Jonathan Passmore and Carla Semedo.
© 2017 John Wiley & Sons Ltd. Published 2017 by John Wiley & Sons Ltd.
8
Using Personality Questionnaires for Selection
David J. Hughes and Mark Batey
Introduction
Employee selection is the process of choosing which member(s) of an applicant pool is
(are) most likely to behave in a manner that will achieve or surpass organizationally defined
metrics of success, such as selling products direct to consumers, preventing crime, building
and nurturing business‐to‐business relationships, caring for the sick and educating or
inspiring others to perform to the best of their ability. The definition of successful job
performance varies greatly across roles and organizations. Thus, while some elements of
behaviour are important for all jobs (e.g., exertion of effort), it is likely that many other
behavioural patterns will be suited to performance in some roles but not others. This
straightforward observation has led to a broad consensus from industry and academia that
testing personality, which as we discuss below is a fundamental descriptor of human
behaviour, should be useful during selection programmes. However, whether and to what
extent personality is useful remains a contested issue (see Morgeson, Campion, Dipboye,
Hollenbeck, Murphy & Schmitt, 2007a; Ones, Dilchert, Viswesvaran & Judge, 2007).
In this chapter we critically consider the evidence regarding the use of personality assessments
in selection.
We begin by setting the scene of personality test use in selection before defining personality,
considering why it should be of value in selection and briefly considering how we
arrived at the current state of knowledge in personality research generally. We then examine
the predictive validity evidence for personality in selection, considering personality as a
single predictor of job performance and as a part of a broader selection programme.
We then explore debates regarding what level of the personality hierarchy (broad factors
vs. narrow traits) is more useful during selection, whether universal job performance exists
or whether different jobs require different behaviours and thus nuanced personality
assessment, and we consider the potential utility of ‘other ratings’ of personality. We then
move on from predictive validity and discuss how and when personality measures might be
used within a selection programme. Finally, we suggest areas of research that offer great
promise for improving our understanding, and subsequently evidence‐based practice
within selection.
Setting the Scene
There are three key stakeholders in the personality–selection domain: academia, organizations
and test publishers. In principle, these three stakeholders share one objective: to produce
and use assessments that are reliable and valid. However, each constituency possesses
potentially conflicting drives and foci, which have led to some disarray in the development
and use of personality assessments in selection.
Academics have a primary interest in understanding the nature, theory and structure of
personality. Their focus is on what personality is and what it is not, how it is structured,
the processes underlying personality observations, and the nomological net that informs
our understanding of how different aspects of personality and other individual difference
constructs relate. Organizations have a primary interest in using personality assessments
to deliver a return on investment. Their focus is on what predicts both productive and
counterproductive behaviour and performance in organizational contexts. Finally, test
publishers have a primary interest in commercializing personality assessments and thereby
making money from personality measures. Their focus is on what is marketable and
useable by those willing to pay for their assessments.
The result is a marketplace where the tools that organizations use are often at odds
with the theoretical foundations prized by academics. Further, in an effort to present a
product that appears either distinctive or comparable to competitors’, test publishers
produce tools that possess the same trait labels but measure different constructs, or tools
with different trait labels that measure the same constructs. This is often referred to as the ‘Jingle Jangle
Fallacy’ (Kelley, 1927; Thorndike, 1904). Ultimately, these trends stifle scientific
progress and lead to confusion for practitioners and organizations, neither knowing which
personality measures, if any, to use. What we have in the case of personality in selection is
a classic example of a scientist–practitioner divide and often a lack of evidence-based
practice (Rynes, Giluk & Brown, 2007). The most theoretically and empirically valid
measures are often passed over for less-grounded counterparts, with many test publishers
failing to publish their validity studies and others simply not conducting them.
These issues muddy the waters when we attempt to assess the utility of personality
measures in selection. It is far beyond the scope of this chapter to put an end to this
confusion, but we can at least start to address some of the important issues regarding
how useful personality testing really is and, perhaps most importantly, what we can do to
maximize its utility.
What is Personality?
Before discussing personality in selection we must first clarify what we mean by personality.
Personality has been variously defined as: ‘One’s habits and usual style’ (Cronbach, 1984,
p. 6); ‘a dynamic organization, inside the person, of psychophysical systems that create the
person’s characteristic patterns of behaviours, thoughts and feelings’ (Allport, 1961,
p. 11); ‘a person’s unique pattern of traits’ (Guilford, 1959, p. 5); and ‘relatively stable,
internal factors, which produce consistent individual differences at the emotional and
motivational level’ (Pervin & John, 2001, p. 4).
A single definition would never satisfy all stakeholders. However, a review of definitions
reveals that certain features are agreed. Personality is seen as a relatively stable and consistent
set of traits that interact with environmental factors to produce emotional, cognitive
and behavioural responses. Such theoretical views are supported by empirical evidence
showing that there are numerous identifiable personality traits (Cattell, 1954) that display
some cross-situational stability (Funder & Ozer, 1983; Mischel, 1968), develop through
maturation (e.g., conscientiousness and emotional stability increase with age) and
demonstrate relative and rank-order consistency in adulthood (Roberts & DelVecchio,
2000; Roberts & Mroczek, 2008). Importantly, measures of personality traits can be used
to explain and predict a wide range of behaviours and outcomes both cross-sectionally
(Roberts, Kuncel, Shiner, Caspi & Goldberg, 2007) and longitudinally (Chamorro-Premuzic
& Furnham, 2003).
For the purposes of this chapter we suggest that personality be defined as a collection of
traits that influence a person’s typical thought patterns (e.g., how deeply one considers the
elements of a task), feelings (e.g., how anxious one is when faced with deadlines) and
behaviour (e.g., how organized one is). There are three main assumptions with regard to
the nature of personality traits that we adopt: 1) they are relatively stable (we discuss this
further below); 2) each individual has a unique constellation of traits; and 3) they drive
behaviour. Each of these assumptions is vital if personality is to predict behaviour at work.
Why should personality be relevant at work?
Given the broad agreement that personality is in part responsible for emotional, cognitive
and behavioural responses, it must be relevant for the prediction of conduct at work.
Workplace behaviour is not only defined in terms of what we can do (ability) but also how
(style) we do it. Some people work systematically, others more haphazardly; some communicate
empathetically, others in an authoritarian style; some are resilient under pressure,
others appear less so. Is it possible to achieve the same level of performance regardless of
these differences in style? Perhaps. Nevertheless, the manner in which tasks are conducted
is undoubtedly important at work.
No employee or organization can operate in a vacuum. From a single person start‐up,
through to a multinational corporation, people must interact with others. Personality has
a notable role in determining the quality and utility of these interactions. Indeed, ‘personality
clashes’ are an often‐cited cause of workplace conflict and finding like‐minded
colleagues is an often-cited contributor to job satisfaction.
Personality relates to the degree of enjoyment we take from certain elements of work
and thus how much motivation we have to carry out certain tasks (Ackerman, 2000).
For example, if employees are socially anxious and fearful of a negative evaluation, they
will be less motivated to speak publicly. If they are particularly anxious, it might even
reduce the quality of the communication and thus influence their job performance. Even
if they are able to manage anxiety within the presentation effectively, the emotional labour
and additional effects of the task on energy levels, well‐being and subsequent performance
could be considerable. If, however, an employee enjoys being in the limelight and finds
performing a presentation is a fun opportunity to relish, it is likely that job satisfaction and
performance will be higher.
In sum, personality influences how we approach a task, how we interact with others and
how natural or enjoyable we find a task or environment. Different approaches to these
aspects of working life may well influence job performance, yet even if they do not, variations
in these three areas are still pivotal to a wide range of other organizational variables
(organizational commitment, citizenship behaviour, tenure, employee relations, etc.).
Trait, State or Type?
Above, we assumed that personality is the product of a constellation of traits, yet a number
of personality models and measures conceptualize personality through ‘types’ (e.g., the
Myers‐Briggs Type Indicator (MBTI); Myers, 1978). Personality types posit people as members of distinct
and discontinuous categories (Carver & Scheier, 1996); for example, a person is either an
extravert or an introvert. Typologies are suggested to have useful features, most notably
that they are relatively simple to grasp, which can be beneficial when discussing personality
with non‐expert individuals, as we often do within organizations. Type approaches are
often contrasted with the trait view of personality, which suggests that an individual can
fall on a continuum for each trait, so that positioning towards either extreme of the
continuum is indicative of a stronger tendency to think, feel or behave in that manner.
A person is not simply extraverted or introverted, but rather is positioned somewhere
along a scale ranging between the two extremes. A simple consideration of human personality
and behaviour favours a continuum approach over a type approach: people do differ
in their level of extraversion (or indeed any other trait) and are not simply one type or
another. For this reason alone, we can say that trait theories are more valid than typologies.
Typologies come under further scrutiny when we consider the measures designed to assess
them. For example, the MBTI, despite being widely used, lacks internal consistency, test–
retest reliability and predictive validity (Pittenger, 2005). Thus, due to poor reliability and
questionable validity, the current authors recommend that regardless (or perhaps because)
of their simplicity, typologies be treated with caution in all organizational contexts, and
under no circumstances should be used for selection. That this point still needs to be
raised is testament to the gulf between science and practice we raised in the introduction
to this chapter.
A similarly contested yet more nuanced debate of real relevance to the personality in
selection discussion relates to personality stability and the influence of situational variables.
The extreme explanations that all behaviour is a product of the environment (if this were
true no cross‐situational consistency would exist) or that traits alone explain everything
(if this were true any cross‐situational variability would not exist) are inadequate. Indeed,
both situational variables and traits can be of equal relevance to explaining any single
behaviour. Often, traits share only modest correlations (around 0.30) with behaviour
(Mischel, 1968), as do situational variables (Funder & Ozer, 1983).
Thus, behaviour is not simply the product of either traits or the environment. Rather,
most behaviour is the product of complex trait × state interactions, whereby the influence
of the trait tends to be greater than that of the state in circumstances where situational
pressures are weak, and vice versa (e.g., Carver & Scheier, 1996; Judge & Zapata, 2015;
Monson, Hesley & Chernick, 1982). Thus, the influence of personality traits differs across
scenarios. Despite the role of situational variables, what we can conclude is that traits do
predict behavioural patterns across situations and time (e.g., Feist & Barron, 2003); those
who score high on measures of anxiety tend to be more anxious than those who score low
on anxiety across situations. Such consistency is essential; without it, personality would not
be a relevant construct to consider in a selection equation.
Identifying and organizing personality traits
Models of personality as they stand today are largely the result of work in two parallel
traditions: the lexical and the psychometric. The lexical hypothesis (Galton, 1869) suggests
that if a trait is important in influencing how we think, feel and act, it will be enshrined in
language; the more important the trait, the more likely that it will be encoded in language
in a single adjective. Thus, researchers scoured dictionaries and psychological theories and
compiled lists of adjectives that describe personality (see Allport & Odbert, 1936;
Baumgarten, 1933; Cattell, 1943; Galton, 1869). This iterative work eventually culminated
in the development of Cattell’s bipolar personality–descriptor scales. These scales
represent the foundations of many currently held trait measures of personality and served
to generate, through numerous factor analyses, Cattell’s 16PF, which is today widely used
in selection.
Early attempts to replicate Cattell’s work were not wholly successful, with numerous
researchers finding that five broad personality traits consistently emerged from factor
analyses of personality ratings (Borgatta, 1964; Fiske, 1949; Norman, 1963; Tupes &
Christal, 1961). Further work in the area of personality structure continued to point
towards five broad factors and as a result led to the general consensus that ‘analyses of any
reasonably large sample of English trait adjectives in either self‐ or peer descriptions will
elicit a variant of the Big‐Five structure’ (Goldberg, 1990, p. 1223).
Today there are two main variants of these five traits: the lexical Big Five and the
psychometric five-factor model (FFM). The five factors are neuroticism, extraversion,
openness to experience (intellect in the lexical Big Five), agreeableness and conscientiousness
(Costa & McCrae, 1992; for a historical description of the emergence of the five
factors, see Digman, 1990). Despite some rather substantial differences in item content
and structural relations between the two models (e.g., the trait warmth is considered a
facet of extraversion in the FFM but a facet of agreeableness in the Big Five), researchers
and practitioners often conflate the two and use them interchangeably (Pace & Brannick,
2010). These differences are often given only cursory discussion but are potentially critical
in a selection environment. For example, if evidence from a job analysis or research literature
suggests warmth is one of the most important behavioural characteristics of a care
worker, how much emphasis is placed on extraversion or agreeableness in the selection
equation should depend on which inventory is being used. This example also draws on
another important debate that we will note here and consider in detail later. The
bandwidth–fidelity argument concerns the question of whether narrower personality traits
(e.g., warmth) or broader factors (e.g., agreeableness) are more useful in predicting behaviour.
Ultimately, this debate contrasts the measurement specificity one can gain using narrow
traits versus the superior reliability one can get from a broader trait. Equally, it is suggested
that predictors and outcomes should be matched in specificity, so when predicting
complex and aggregate outcome variables such as job performance, complex and aggregate
personality variables would be best.
Despite the differences between the two five‐factor approaches, there is a considerable
amount of evidence in favour of the broad five factors. In particular, the psychometric
FFM, which is argued to be ‘exhaustive of the personality sphere’ (McCrae & Costa,
1985, p. 558), is the most dominant measurement framework in research. The widespread
adoption of the FFM has undoubtedly benefited personality research. The FFM provides
a parsimonious model to guide the accumulation of research findings, allowing for
cross-study comparison and accelerating knowledge production. The ability to empirically
aggregate research findings has ultimately resulted in the generation of meta‐analytically
derived estimates of magnitude of prediction (e.g., Barrick & Mount, 1991, 1996; Barrick,
Mount & Judge, 2001; Judge & Ilies, 2002). Meta‐analyses of the personality–job
performance relationship are very important in understanding the role personality can play
in selection.
Despite the popularity of the FFM, the adequacy of the model, and even the fundamental
notion that five broad orthogonal factors top the personality hierarchy, are frequently contested.
Briefly, there are valid concerns of both a theoretical and methodological nature
with regard to the development of the five‐factor measures (e.g., Block, 1995, 2001,
2010). Further, research has been inconsistent in returning five factors from structural
analyses (Booth & Hughes, 2014), and where five factors have been identified there has
been debate as to whether or not these five factors are consistent (Pace & Brannick, 2010).
In addition, the FFM does not fit in confirmatory factor analyses (Vassend & Skrondal,
2011) or less restrictive exploratory structural equation models (Booth & Hughes, 2014;
Marsh, Lüdtke, Muthén, Asparouhov, Morin, Trautwein & Nagengast, 2010), suggesting
that the models are in need of some revision. These concerns may seem like excessive
academic navel‐gazing, but quite simply if the measures do not offer optimal measurement,
they are unlikely to produce optimal prediction. As a result, concerns of a structural nature
are of utmost importance to personality in the selection debate.
A further consideration relates to claims of the exhaustive nature of the FFM. This is
simply not the case. Many investigations have focused on traits that fall outside the FFM
(Ashton, Lee & Son, 2000; Jackson, Ashton & Tomes, 1996; Jackson, Paunonen, Fraboni &
Goffin, 1996; Lee & Ashton, 2004; Lee, Ashton, Hong & Park, 2000). Some of this
research has led to the development of the HEXACO model, a six‐factor model with more
facets than the FFM. There is also ample evidence of narrow, facet‐level personality traits
being omitted. For example, Paunonen and Jackson (2000) noted that traits of conventionality,
egotism, integrity, femininity, seductiveness, manipulativeness, humour, thriftiness
and religiosity were missing. From a cursory perspective, one can see that these traits
might be of value in explaining some workplace behaviours. Further, these traits offered
incremental predictive validity over and above the FFM in relation to 19 criteria across
samples from Canada, England, Germany and Finland (Paunonen, Haddock, Forsterling &
Keinonen, 2003).
There are three main concerns to be recapped here. First, the FFM was not developed
in a theoretically or methodologically optimal manner. Second, FFM measures often provide
suboptimal measurement. Third, the FFM is not exhaustive and the traits it excludes
might be of value in selection. These limitations do not preclude the use of the FFM in
selection, but we must keep them in mind when evaluating the evidence pertaining to the
predictive validity of personality in selection. It is also important to note that these
concerns are not exclusive to the FFM. Many other broad personality measures offer poor
measurement and miss (or incorrectly model) important aspects of personality.
What Does the Evidence Say about the Utility
of Personality within Selection?
When considering the role of personality in selection one question is of utmost importance:
is the tool a valid predictor of relevant work‐related criteria? Usually, the focus is on
job performance, but it can also span other important related criteria (e.g., training
performance, counterproductive work behaviour, citizenship behaviour). This section
addresses the vexed question: what does empirical research say about the use of personality
measures during personnel selection?
In 2007 a series of well‐respected organizational researchers considered the use of personality
in selection and concluded that ‘Due to the low validity and content of some
items, many published self‐report personality tests should probably not be used for
personnel selection’ (Morgeson et al., 2007a, p. 720). In direct response, Ones and
colleagues (2007, p. 1020) argued, ‘Any selection decision that does not take the key
personality characteristics of job applicants into account would be deficient.’ Clearly, the
jury on the utility of personality measures is still out.
The use of personality measures in selection remains a contested subject and the literature
has been reviewed, in compelling fashion, to support both sides of the debate. There
is evidence to suggest that personality measures can be useful, but the same evidence tends
to suggest that their utility is limited. The current authors believe that the evidence shows
personality can add value to selection decisions, but only if used appropriately. In this
section, we discuss the evidence for and against the predictive validity of personality in
selection, but we also address the perhaps most compelling aspect of this discussion:
how can we maximize the utility of personality measures?
Predictive validity: Meta‐analyses
The interpretation of meta‐analytic correlations between personality ratings and job
performance, often assessed by supervisor ratings, is central to the debate regarding the
use of personality during selection. Before we look at some of those relationships, we must
note that there are two main estimates of the correlation between personality and job
performance: a raw correlation and a corrected correlation. Within a meta‐analysis it is
common practice to adjust or correct correlation coefficients based on estimates of
unreliability. Often, there is an acknowledgement that the criterion variables (e.g., job
performance metrics) and sometimes the predictor variables (in this case personality) lack
reliability, and as a result attenuate the estimated relationship. Corrections increase the
accuracy of population‐level estimates of correlations and are well supported both
theoretically and statistically (Hunter & Schmidt, 2004; Schmidt, Shaffer & Oh, 2008).
When addressing arguments or building theories and models, the more accurate our
empirical estimates are the better. Nevertheless, despite the well‐accepted practice of
correcting correlations in meta‐analyses, practitioners generally do not correct estimates in
selection decisions (Morgeson et al., 2007a). Thus, there is a good argument for considering
the magnitude and pattern of relationships of both the uncorrected and corrected
estimates. In this chapter, we present both estimates where applicable.
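To make the correction concrete, the standard disattenuation formula (Hunter & Schmidt, 2004) divides the observed correlation by the square root of the product of the predictor and criterion reliabilities. The sketch below is illustrative only: the reliability figures are hypothetical stand-ins (0.52 for single supervisor ratings is a commonly cited estimate), not values taken from the meta-analyses discussed here.

```python
import math

def disattenuate(r_observed, rel_predictor, rel_criterion):
    """Correct an observed correlation for unreliability in the predictor
    and criterion: r_corrected = r_observed / sqrt(r_xx * r_yy)."""
    return r_observed / math.sqrt(rel_predictor * rel_criterion)

# Hypothetical example: observed r = 0.13 between a trait scale and
# supervisor-rated performance; scale reliability 0.80, rating reliability 0.52.
corrected = disattenuate(0.13, 0.80, 0.52)
print(round(corrected, 2))  # -> 0.2
```

This is why corrected coefficients in the meta-analyses below are noticeably larger than their uncorrected counterparts whenever criterion reliability is modest.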
In 1991, Barrick and Mount published a seminal paper describing the meta‐analysis of
117 American and Canadian studies (undertaken between 1952 and 1988; N = 23,994).
Conscientiousness proved a reliable and valid correlate of job performance across occupations
(r = 0.13, corrected r = 0.23). The remaining traits (extraversion, neuroticism, agreeableness
and openness to experience) were unrelated to job performance en masse. Barrick
and Mount (1991) examined the correlations between the Big Five and a composite variable
of job performance, training performance and personnel data (e.g., salary, tenure) in
the whole sample, but also provided estimates for different job roles. Once again, conscientiousness
proved a valid and reliable predictor across all roles (r = 0.09–0.13, corrected
r = 0.20–0.23). Extraversion was found to be relevant for those in sales (r = 0.09, corrected
r = 0.15) or managerial roles (r = 0.11, corrected r = 0.18), while the other traits were
generally unrelated. In a European equivalent, Salgado (1997) convergently found conscientiousness
to be a valid and generalized predictor across occupations and performance
criteria with a very similar magnitude of correlation coefficients. Divergently, Salgado reported
a role for neuroticism (r = –0.08, corrected r = –0.12) across all occupational groups,
which corresponds with other meta‐analyses (Hough, Eaton, Dunnette, Kamp & McCloy,
1990). Again, in line with Barrick and Mount (1991), Salgado found that the other personality
traits were not relevant to job performance but were relevant to some other important
organizational criteria (e.g., training performance) in specific occupational groups.
The most compelling study examining the relationship between the Big Five traits and
job performance was a meta‐analysis of meta‐analyses conducted by Barrick, Mount and
Judge (2001). Conscientiousness was found to be important across occupational groups
in terms of job performance (objective rating: r = 0.10, corrected r = 0.23; supervisor
rating: r = 0.15, corrected r = 0.31) and all other job‐relevant criteria examined. Neuroticism
was also shown to be a generalizable predictor of supervisor‐rated job performance
(r = –0.07, corrected r = –0.13) but was lower in magnitude than conscientiousness and
less consistent across the other criteria examined. Thus, a relatively firm conclusion can be
made that conscientiousness is important for performance in all roles, and that in most
instances lower levels of neuroticism are also related to improved performance. The three
remaining traits, while not relevant to job performance across occupations, can be relevant
in certain roles, for example management (extraversion: r = 0.10, corrected r = 0.21),
and are related to specific work‐related behaviours such as training performance (openness
to experience: r = 0.14, corrected r = 0.33) and team working (agreeableness: r = 0.17,
corrected r = 0.34).
Thus, decades of meta‐analyses have now shown that working in an organized, responsible
and industrious manner (conscientiousness), while maintaining a degree of emotional
stability (low neuroticism), is related to successful job performance across the board. Some
researchers (and indeed practitioners) have argued that while evidence of some generally
stable patterns of association between personality traits and job performance are informative,
the magnitude of the relationships raises some serious questions (Guion & Gottier,
1965; Morgeson et al., 2007a). Indeed, with uncorrected rs of around 0.10–0.15 and even
corrected rs of 0.20–0.30 (in absolute magnitude), the predictive validity of personality
measures is roughly equivalent to that of many selection methods broadly considered
unusable within selection (e.g., unstructured interviews; Schmidt & Hunter, 1998).
At a cursory level, we have to agree with Morgeson and colleagues (2007a). These
results do pose serious questions about the utility of personality within selection. The
current authors would certainly feel uncomfortable being selected, or not, based on our
conscientiousness and neuroticism alone. However, we must travel beyond a cursory level
and consider a number of important and substantial nuances within the personality–selection
debate before we reach any firm conclusions. The remainder of this section focuses on five
important nuances within this debate. First, when considering the personality of potential
employees we rarely, if ever, focus on a single trait. Thus, we must look at the combined
explanatory power of multiple personality traits not just univariate correlations. Second,
selection by personality alone (or indeed any single selection method) would be indefensible.
Thus, we must consider the relative and incremental explanatory power of personality
when considered alongside other valid selection tools. Third, broad factors of personality,
such as the FFM/Big Five, currently dominate personality assessment; we consider
whether they offer superior levels of prediction compared to their constituent lower‐order
facets. Fourth, we contest the very premise of universal job performance: that successful
job performance across occupational roles should or would require the same degree and
combination of behaviours seems an odd assumption, one that has perhaps masked the
true potential of personality in the prediction of job performance. Fifth, we consider longstanding
concerns about the measurement error produced by response distortions during
personality assessment, with a special focus on the potential utility of partially ipsative
measures and ‘other ratings’.
Personality is multidimensional
Behaviour is complex. In seeking to explain complex behaviours at work (or anywhere
else), we rarely expect a single trait to be sufficient. Rather, we identify multiple traits that
might contribute and examine their combined ability to explain the behaviour of interest.
Thus, univariate relationships between individual personality traits and job performance
may underestimate the value that personality has to offer. In the same way we would not
calculate the predictive validity of a structured interview or cognitive ability test based on
their constituent parts, we should not judge personality based on single trait associations.
This line of argument has been most convincingly put forward by Ones and colleagues
(2007), who re‐examined the meta‐analytic correlations presented by Barrick and colleagues
(2001; discussed above) and computed the multiple correlations for all of the Big
Five and job performance. The results show that personality predicts objective job
performance with a multiple r of 0.27 (uncorrected r = 0.23) and a composite overall job
performance variable of r = 0.23 (uncorrected r = 0.20). Ones and colleagues (2007) also
demonstrated that personality variables measured at the Big Five level are even more
predictive
of other important elements of workplace behaviour. For example, counterproductive
work behaviours (r = 0.44 and 0.45 for avoiding interpersonal and organizational
deviance, respectively), organizational citizenship behaviours (r = 0.31), leadership
(r = 0.45), teamwork (r = 0.37) and training performance (r = 0.40).
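Multiple correlations such as these are computed from meta-analytic correlation matrices rather than raw data. For the two-predictor case the calculation has a simple closed form, sketched below; all the figures in the example are hypothetical illustrations, not the values reported by Ones and colleagues.

```python
import math

def multiple_r(r1, r2, r12):
    """Multiple correlation of a criterion on two predictors, given each
    predictor's correlation with the criterion (r1, r2) and the
    intercorrelation between the predictors (r12)."""
    r_squared = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return math.sqrt(r_squared)

# Hypothetical: two traits correlating 0.23 and 0.13 with performance,
# intercorrelated at 0.25.
print(round(multiple_r(0.23, 0.13, 0.25), 3))  # -> 0.242
```

When the predictors are uncorrelated (r12 = 0) this reduces to sqrt(r1² + r2²), which is why several weakly intercorrelated traits can jointly achieve a noticeably larger multiple R than any single trait does alone.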
There is no doubt that Ones and colleagues’ (2007) evidence provides a much more
optimistic view of the role of personality in understanding workplace behaviour. Notably,
however, and despite increases from the univariate estimates, the multivariate estimates
relating to job performance – the crucial criterion for selection decisions – are still less than
impressive. Indeed, the multivariate estimate is only slightly greater than that reported for
conscientiousness alone (Barrick et al., 2001) and collectively the Big Five account for
around 5–7% of variance in job performance measures. Thus, some have argued that these
results still provide underwhelming support for the use of personality in selection (Morgeson
et al., 2007b). Again, we generally agree: explaining the same amount of variance in
job performance as unreliable selection methods such as unstructured interviews is hardly
compelling. However, we do not believe that this means that personality tests are not
or cannot be useful. Below, we continue to consider the ways in which personality
assessments can be used effectively within selection.
Incremental predictive validity of personality measures
Selection decisions are never made on the basis of personality assessments alone, nor should
they be. Given that personality assessments are used as a part of a selection programme,
the practical value of any validity debate pertains to the incremental predictive validity that
personality assessments offer over and above other selection methods. Personality tends to
be weakly correlated with other selection tools, and in particular other individual difference
variables such as cognitive ability. For example, in their meta‐analysis, Judge, Jackson,
Shaw, Scott and Rich (2007) show that the correlations between general mental ability
and the Big Five are small, with the largest correlation being just 0.22 with openness.
Thus, personality and cognitive ability measures capture different information about an
employee, and personality measures may therefore offer unique predictive validity beyond that
obtained from cognitive ability measures.
Schmidt and Hunter (1998) estimated the incremental predictive validity of 18 selection
methods beyond general mental ability using data from previous meta‐analyses. Their
analyses suggested that, when combined with general mental ability, the personality measures
of integrity (a compound trait consisting of specific traits selected due to their likely
relevance, e.g., conscientiousness, dependability, honesty) and conscientiousness offered
27% (multiple r = 0.65) and 18% (multiple r = 0.60) increases in prediction respectively.
The only other methods to offer similar levels of incremental predictive validity were structured
interviews and work samples (both multiple r = 0.63), which are typically more
expensive and time‐consuming to construct and administer than personality assessments.
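Schmidt and Hunter's combined figures can be recovered from the standard two-predictor formula for a multiple correlation. The sketch below uses their reported validities and assumes a near-zero GMA–conscientiousness correlation (an assumption on our part, consistent with the multiple r of 0.60 they report):

```python
import math

def multiple_r_two(r1, r2, r12):
    """Multiple correlation of a criterion with two predictors whose
    validities are r1 and r2 and whose intercorrelation is r12."""
    return math.sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

r_gma = 0.51     # GMA validity reported by Schmidt and Hunter (1998)
r_consc = 0.31   # conscientiousness validity from the same source
r12 = 0.0        # assumed near-zero GMA-conscientiousness correlation

combined = multiple_r_two(r_gma, r_consc, r12)
print(round(combined, 2))  # close to the reported multiple r of 0.60
```

Because conscientiousness is almost uncorrelated with GMA, nearly all of its validity is incremental, which is why it adds so much to prediction despite a modest univariate coefficient.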
Given the positivity of these results, it is surprising that little additional empirical study has
followed. In 2006, Rothstein and Goffin, when reviewing personality and selection, found
only two studies examining this question, one demonstrating that personality assessments
offer incremental predictive validity over an assessment centre when predicting managerial
potential (Goffin, Rothstein & Johnston, 1996) and the second showing personality
supplements biodata in predicting job performance (McManus & Kelly, 1999).
In the years following Rothstein and Goffin’s (2006) review a number of researchers
have addressed this issue. In 2014, Oh and colleagues examined the incremental predictive
validity of the Big Five and honesty–humility over and above cognitive ability when
predicting task‐based performance and also contextual performance (the extent to which
employees support non‐performance‐related organizational and psychosocial aspects of
work) of 217 military officer candidates. In relation to task‐based performance, both
cognitive ability (β = 0.25), and conscientiousness (β = 0.34) were significant predictors,
with personality accounting for a 0.22 increase in the multiple correlation. When considering
contextual performance, cognitive ability was not a significant predictor but the personality
traits of conscientiousness (β = 0.32), extraversion (β = 0.16), and honesty‐humility
(β = 0.13) were, and collectively produced a multiple r of 0.37. (The figures presented here
are the uncorrected estimates; the corrected estimates provided by Oh and colleagues
(2014) show no deviation from this pattern but are generally increased in magnitude by
around 0.1–0.15.) Similar results were reported by Colodro, Garcés‐de‐los‐Fayos, López‐
García and Colodro‐Conde (2015), who showed that personality traits (assessed using the
Spanish 16PF) accounted for incremental predictive validity beyond cognitive ability, with
personality explaining three times as much variance in performance.
Despite the positive trend in the incremental validity literature, a recent advance in
meta-analytic corrections suggests that the increment offered might only be, on average, in the
region of 5% variance explained (Schmidt, Shaffer & Oh, 2008). Nevertheless, personality
does offer somewhere in the region of a 5–30% increase in predictive validity. Whether the
validity is towards the upper or lower estimates will depend on the characteristics of the
role and the quality and job relevance of the personality measurement.
Clearly, personality offers novel and useful information that can improve selection
decisions regardless of whether assessments focus on task performance or broader definitions
of performance. The incremental predictive validity offered by personality measures
is particularly valuable to organizations because, in comparison to work samples, role‐plays
or structured interviews, personality scales can be purchased and administered in a
time‐ and cost‐effective manner.
Broad factors or narrow traits
Personality models generally build from large item pools, through facets to higher‐order
factors. For example, each of the Big Five factors as measured by the NEO‐PI‐R subsumes
six narrower facets/traits, each measured by eight items. It has been suggested that there
is little added value in measuring narrow facets, when the five broad factors account for
much of the variance in their lower‐order constituents. For instance, Ones and Viswesvaran
(1996) have argued that the direct measurement of broad personality factors alone is
sufficient, and in the case where the outcome variable is itself broad or complex (e.g., job
performance), preferable, as they suggest predictor and outcome variables of similar
bandwidth give optimal prediction.
However, this approach is also the subject of debate, with suggestions that regardless of
the bandwidth of the outcome, narrow traits still offer important insights. Lower‐order
facets and the broad factors supposed to subsume them are not perfectly correlated; facet
measures possess specific and reliable (non‐random) variance that might offer increased
predictive validity (Paunonen et al., 2003), which is lost when using broad factors.
A number of studies have empirically assessed the predictive validity offered by
broad factors and narrow facets. The conclusion is that narrow facets consistently offer
better and/or incremental predictive validity regardless of the complexity of the
behavioural outcomes (Ashton, Jackson, Paunonen, Helmes & Rothstein, 1995; Jenkins &
Griffith, 2004; Lounsbury, Sundstrom, Loveland & Gibson, 2003; Paunonen & Ashton,
2001; Rothstein, Paunonen, Rush & King, 1994; Tett, Steele & Beauregard, 2003;
Timmerman, 2006).
Perhaps the most compelling evidence for the superiority of narrow facets when predicting
job performance comes from Judge, Rodell, Klinger, Simon and Crawford's (2013)
meta-analysis of 1,176 studies derived from 410 independent samples. Judge and colleagues
examined the relationships between three hierarchical levels of personality and
task performance, contextual performance and an overall composite performance variable.
At the highest level of the trait hierarchy were the Big Five factors. At the next level, each
factor split into two mid-level factors consistent with the framework derived by DeYoung,
Quilty and Peterson (2007). At the lowest level of the hierarchy, each factor split into the
six facets defined by the FFM framework (Costa & McCrae, 1992).
Judge and colleagues' (2013) meta-analysis reveals that optimally weighted composites
of facets resulted in greater criterion-related validity for predicting all performance outcomes
than did the Big Five factors, often accounting for 3 or 4 times more variance in
performance. With the exception of conscientiousness, which showed similar predictive
validity at all three levels, there was a clear pattern of facets outperforming the DeYoung
factors, which were in turn better than the Big Five, providing clear evidence that the
broader the factor, the weaker the prediction. A summary of the correlations is shown in
Table 8.1.
The muddying effect of aggregating personality facets into broad factors is detrimental to
predictive validity. For example, when considering task‐based performance, neuroticism at
the Big Five level and facet level accounted for 0.7% and 6.4% of variance, respectively. One
might conclude that 6.4% is not particularly impressive, but as an incremental addition to
other personality traits and other selection methods such as cognitive ability it might prove
very useful. Another stark example of the obscuring effect of broad factors is observed in
the extraversion to contextual performance link where the facets accounted for 24.1% of
variance compared to just 5.4% for the DeYoung factors and 4.8% for the Big Five factor.
Table 8.1 Correlations between personality at three levels of aggregation and overall, task,
and contextual job performance.
Correlations with Job Performance
Overall Task Contextual
Trait Facet Mid Broad Facet Mid Broad Facet Mid Broad
Emotional stability 0.23 0.12 0.10 0.25 0.10 0.08 0.30 0.21 0.16
Extraversion 0.40 0.21 0.19 0.18 0.14 0.12 0.49 0.23 0.21
Openness 0.30 0.10 0.08 0.17 0.13 0.12 0.18 0.06 0.03
Agreeableness 0.19 0.17 0.17 0.24 0.11 0.10 0.33 0.18 0.18
Conscientiousness 0.26 0.27 0.26 0.24 0.25 0.25 0.33 0.32 0.32
Facet = six NEO facets; Mid = DeYoung et al. (2007) factors; Broad = FFM factors.
Source: Judge et al. (2013).
We noted earlier that conscientiousness did not as obviously follow this pattern and that
the differences across the three levels are so marginal that they would have little practical
significance. Thus, there seems to be something unique about conscientiousness in the
selection context.
Each of the conscientiousness facets – achievement striving, competence, deliberation,
dutifulness, order and self-discipline – is positively related to job performance in the range of 0.11 to
0.28. This is not surprising given that the aspects of personality assessed by the items (e.g.,
pays attention to detail, follows a schedule, always prepared, determined to succeed) are
clearly of importance for performance in a range of jobs. Thus, the common variance
between these facets, represented by broad conscientiousness, is generally useful in every
aspect of work. However, some conscientiousness facets are unrelated, or even negatively
related, to some aspects of job performance in certain roles (e.g., Bunce & West, 1995;
Driskell, Hogan, Salas & Hoskin, 1994; Tett, Jackson, Rothstein & Reddon, 1999).
In such cases, it is likely that facets would outperform aggregated factors. The same is
true, though more common and more pronounced, for facets of the other factors, which
more often show differential and even opposite relationships.
Further, from a measurement perspective the conscientiousness factor tends to fit well
relative to the other four factors in structural examinations, suggesting that the broad
factor accounts for a good proportion of the facet‐level variance (Vassend & Skrondal,
2011). The other factors often do not fit as well in structural examinations (Vassend &
Skrondal, 2011), suggesting that the facets are less closely related, which is also evident in
the differential relationships displayed with performance criteria.
Given that organizations want to maximize predictive validity, narrow facets rather than
broad factors – which lead to underestimates and/or distorted estimates of relationships
– are evidently of greater value (even in some cases for conscientiousness). However,
it is often not practically feasible or sensible to administer tests of all known personality
traits. Thus, a process of identifying the specific traits to assess is needed.
Job analysis and the selection of relevant traits
So far we have reviewed evidence pertaining to the predictive validity of personality in
predicting job performance en masse, whether that be for police officers, managers in an
investment bank, managers in an ethical bank, nurses, teachers, or military personnel. Yet
we contest the very premise of universal job performance and argue that it is not at all surprising
to find that personality is not a simple universal predictor (Tett & Burnett, 2003;
Tett et al., 1999). Cognitive ability is a linear predictor of performance, so the quicker you
can acquire and utilize information the better, regardless of the job. Yet even for cognitive
ability there are quite marked differences in the magnitude of predictive validity coefficients
across roles. Typically, cognitive ability is most valid in cognitively demanding roles,
with corrected correlations ranging from 0.3 for clerical workers and drivers to 0.7 for
professionals and engineers (Bertua, Anderson & Salgado, 2005).
We suggest that most personality traits are differentially related to performance across
job roles and that the Big Five level of aggregation is not specific enough to maximize
prediction. The diversity of the narrow facets subsumed by the Big Five – warmth and
excitement seeking in extraversion or impulsiveness and depression in neuroticism, for
example – is such that knowing exactly why one of the Big Five should or should not
correlate with performance is rather difficult. A ski instructor would probably benefit from
scoring high on both warmth and excitement seeking; a nurse would probably benefit
most from warmth but less so from excitement seeking (excitement seeking might even
be detrimental if the nurse works on a rehabilitation ward where novelty is low); while a
soldier would probably benefit most from excitement seeking but less from warmth. So, is
extraversion relevant for ski instructors, nurses and soldiers? Some aspects are and some
are not; some might even be negatively related (Tett, Jackson & Rothstein, 1991). Put
another way, one would not ask the same structured interview questions or use the same
assessment centre tasks when selecting a ski instructor, nurse or soldier. Nor should we use
and weight personality assessments generically.
So, how do we choose which traits to measure during a selection programme? The
answer is not particularly novel: job analysis. For years, recruiters have undergone a process
of describing the characteristics of jobs in order to identify the knowledge, skills and
abilities that are relevant for performance (Brannick & Levine, 2002). Often, such job
analyses will reveal that broad sociability is important; thus, a measure of extraversion is
used. We suggest that job analysis should go one step further and identify which aspects of
extraversion matter: is it warmth, excitement seeking, or both? By considering the role in
such detail, recruiters can choose facet-level measures that offer greater predictive validity
than the Big Five and are probably also more time-efficient to administer.
Numerous authors have discussed personality-oriented job analysis methods (e.g.,
Costa, McCrae & Kay, 1995; Goffin, Rothstein, Rieder, Poole, Krajewski, Powell &
Mestdagh, 2011; Jenkins & Griffith, 2004; Raymark, Schmit & Guion, 1997; Tett, Jackson,
Rothstein & Reddon, 1999) and this area is now receiving more attention in both academia
and practice. If organizational researchers and practitioners measure narrow facets that
are theoretically and empirically demonstrated (through job analyses and existing research)
to be relevant to performance in a specific role, we can begin to increase the predictive
validity offered by personality within selection. Empirical support for this confirmatory
approach to personality selection is provided by Tett and colleagues (1991, 1999), who
showed that confirmatory strategies (i.e., those based on job analyses) yield predictive
validity coefficients double in magnitude (uncorrected r = 0.20, corrected r = 0.30) compared
to those derived using exploratory strategies (uncorrected r = 0.10, corrected r = 0.16).
Further evidence is reported in a meta‐analysis by Hogan and Holland (2003), who show
corrected correlations between theoretically‐related facets and performance‐based criteria
ranging from 0.3 to 0.4 (uncorrected 0.1–0.3).
The use of personality‐oriented job analysis yields more detailed and precise hypotheses
regarding the associations between personality and job performance. For example, one
might find that being an effective team player is crucial to job performance. The literature
might suggest that gregariousness and assertiveness (facets of extraversion) predict team
functioning in similar industries. One can then begin to model personality appropriately
as X (gregariousness, assertiveness) → M (team effectiveness) → Y (job performance).
We know from years of mediation research that it is possible that X is modestly related to
Y but strongly related to M, while in turn M is strongly related to Y. Such process models
of personality more accurately approximate the real world and as a result are more informative
and predictive (Hampson, 2012; Hughes, 2014). In order to evaluate such models
one would also need robust measures of job performance that include ratings of the
M variables. Such measures would not be particularly difficult to generate as these variables
will be identified during job analysis. Presumably, managers would be interested in these
facets of performance as well as an overall composite. Equally, one could adopt existing,
multifaceted measures of job performance, such as the Universal Competency Framework
(Bartram, 2005; Rojon, McDowall & Saunders, 2015).
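The mediation logic above can be made concrete with a minimal simulation. In the hypothetical sketch below, X (gregariousness) influences Y (job performance) only through M (team effectiveness); the variable names and path coefficients are invented for illustration:

```python
import numpy as np

# A minimal simulation of the X -> M -> Y process model sketched above.
# X = gregariousness, M = team effectiveness, Y = job performance.
# Path coefficients (0.6) are illustrative only.
rng = np.random.default_rng(42)
n = 10_000

x = rng.standard_normal(n)                   # gregariousness
m = 0.6 * x + 0.8 * rng.standard_normal(n)   # team effectiveness
y = 0.6 * m + 0.8 * rng.standard_normal(n)   # job performance

r_xm = np.corrcoef(x, m)[0, 1]
r_my = np.corrcoef(m, y)[0, 1]
r_xy = np.corrcoef(x, y)[0, 1]

# X relates strongly to M and M strongly to Y, yet the direct X-Y
# correlation is roughly the product of the two paths: modest, exactly
# as the mediation account predicts.
print(round(r_xm, 2), round(r_my, 2), round(r_xy, 2))
```

A selection researcher looking only at the X–Y correlation would underestimate the trait's relevance; modelling the intermediate behaviour recovers it.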
Before closing this section, it is worth noting that there is likely one large hurdle to
overcome before targeted, facet‐level programmes are widely adopted in research and
practice, namely, identifying a satisfactory list of facets that prove as marketable as the
Big Five. One initial reaction might be simply to use the facets of the five‐factor model.
This is certainly not a bad starting point. However, as we discussed earlier, neither the
NEO‐PI‐R facet list nor a facet list from any popular omnibus measure is exhaustive of the
personality sphere. Thus, one major goal has to be to develop such a list.
The first author of this chapter and colleagues (Booth, Irwing & Hughes, in preparation;
Irwing & Booth, 2013) are in the process of building on the work of Booth (2011),
who semantically sorted and then factor-analysed 1,772 personality items. The items were
drawn from seven major omnibus personality inventories (NEO-PI-R, CPI, 16PF, MPQ,
JPI-R, HEXACO, 6PFQ) and four narrow inventories chosen to capture specific constructs
(social dominance orientation, right-wing authoritarianism, Machiavellianism,
need for cognition). In total, this analysis identified 78 seemingly unique personality
facets, though it is evident that some important traits appear to be missing from this
list and further validation work is ongoing (for details contact david.hughes@mbs.ac.uk).
Should this research produce a final list of 100 or so reliable and valid traits, it can serve as
a personality trait dictionary to be utilized to measure the important aspects of personality
identified during a job analysis in a manner that reduces the general problems of the Jingle
Jangle Fallacy discussed in the introduction to this chapter.
In the meantime, regardless of whether a universal list of personality facets is available,
researchers and practitioners alike can adopt the targeted facet approach within whichever
personality framework they deem most suitable. All personality measures have facets,
which can be used to measure personality traits closely aligned with crucial aspects of job
performance.
Response Distortions
The final predictive validity discussion concerns the persistent problem of response distortions.
Personality items are designed to measure respondents’ characteristic patterns of
thinking, feeling and behaving. The utility of any measure corresponds to its reliability and
validity, both of which are attenuated by measurement error (i.e., measuring things other
than the intended target). Self‐report questionnaire items are susceptible to a wide variety
of measurement errors, from differences in item and response scale interpretations to
systematic differences in response styles (e.g., acquiescence; extreme responding; Furnham,
1986). Many of these sources of measurement error are well known. However, of more
interest in the selection literature is measurement error arising from response distortions
due to low self‐awareness and deliberate faking.
There are a number of excellent discussions of faking in the literature, covering areas such
as how much people fake, how successful people are at faking and how faking influences reliability
and validity (see Birkeland, Manson, Kisamore, Brannick & Smith, 2006; Morgeson
et al., 2007a; Mueller‐Hanson, Heggestad & Thornton, 2003; Ones & Viswesvaran, 1998;
Tett & Christiansen, 2007), and we do not intend to reproduce these discussions here.
For the current authors, the bottom line is that response distortions such as faking almost
certainly occur and that some individuals fake more than others, which is of course problematic
from measurement, validity and ethical perspectives (Birkeland et al., 2006). Nevertheless,
personality measures retain predictive validity, as discussed above, regardless of response
distortions (Ones & Viswesvaran, 1998). Thus, personality measures are still useful in a
world where response distortions are quite common. This does not mean, however, that
we should ignore response distortion or, as some have suggested, see it as a desirable social
skill (Morgeson et al., 2007a). Rather, we should aim to measure, model and prevent it.
If we can reduce response distortions, then we may be able to improve the predictive
validity of personality tests further and certainly reduce the associated ethical issues.
Numerous solutions to combat response distortions have been suggested. These include
social desirability scales (Feeney & Goffin, 2015), forced-choice or ipsative measures
(presenting candidates with multiple trait statements that are matched for social
desirability and allowing them to indicate only the one that is most like them; Heggestad,
Morrison, Reeve & McCloy, 2006; Johnson, Wood & Blinkhorn, 1988; Meade, 2004)
and the imposition of time limits for candidates (Holden, Wood & Tomashewski, 2001;
Komar, Komar, Robie & Taggar, 2010). While each of these methods shows some promise,
none has received genuinely compelling empirical support.
That forced-choice personality measures have little influence over social desirability
ratings, despite appearing to be more difficult to fake, is surprising (Heggestad et al.,
2006). It is also the prevailing view in the organizational psychology community that,
compared to Likert-type formats, ipsative measures produce lower predictive validity.
Recent studies, however, provide evidence to the contrary (Bartram, 2007; Salgado,
Anderson & Tauriz, 2014; Salgado & Tauriz, 2014).
Forced-choice measures come in two broadly different formats: fully ipsative (e.g.,
rank order four items/traits beginning with the one most like you) and partially ipsative,
which contain a forced-choice element while retaining some flexibility (e.g., choose from
a list of four the item/trait least and most like you; see Hicks, 1970). A recent
meta-analysis by Salgado and colleagues (2014) suggests that fully ipsative measures
perform poorly with regard to predictive validity, but that partially ipsative measures
produce impressive levels of predictive validity. Compared with validity estimates derived
predominantly from Likert-type measures (Barrick et al., 2001), those for partially
ipsative assessments of emotional stability, openness and conscientiousness are considerably
larger, while those for extraversion and agreeableness are equivalent across formats
(see Table 8.2).
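The difference between the two formats can be sketched in code. The block, trait labels and scoring rules below are simplified illustrations, not the scoring algorithm of any published instrument:

```python
# Hypothetical scoring of one forced-choice block of four trait statements.
# Fully ipsative: the candidate ranks all four (scores 4 down to 1).
# Partially ipsative: the candidate picks only "most like me" (+1) and
# "least like me" (-1); the remaining statements score 0.

block = ["conscientiousness", "extraversion", "agreeableness", "openness"]

def score_fully_ipsative(ranking):
    """ranking: traits ordered from most to least like the candidate."""
    return {trait: len(ranking) - i for i, trait in enumerate(ranking)}

def score_partially_ipsative(most, least, traits):
    """Score +1 for the 'most like me' pick, -1 for 'least', 0 otherwise."""
    return {t: (1 if t == most else -1 if t == least else 0) for t in traits}

full = score_fully_ipsative(["conscientiousness", "agreeableness",
                             "openness", "extraversion"])
partial = score_partially_ipsative("conscientiousness", "extraversion", block)
print(full, partial)
```

In both formats, scores within a block sum to a constant, so a high score on one trait necessarily constrains the others; the partially ipsative format simply imposes fewer such constraints.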
Salgado and colleagues (2014) also examined associations within eight job roles: clerical,
customer service, health, managerial, military, sales, skilled and supervisory. The primary
study numbers (k = 2–11) and sample sizes (N = 171–3,007) are small and vary markedly
across job roles. Equally, estimates for each of the Big Five were not available across all
roles (e.g., emotional stability was not reported for customer service roles). We suggest that
these notable limitations preclude firm conclusions regarding which traits best predict
which role; however, the pattern of the results remains very interesting. Particularly
striking is the range of validities reported, which vary from 0 to 0.4 in raw correlations and
from 0 to 0.7 in corrected validities. Table 8.2 includes the highest and lowest predictive
validity reported for each of the Big Five. The difference in variance explained between
the mean and largest validity estimates is substantial, with the largest estimates between 4
and 10 times as large as the mean.
The variation in predictive validity provides compelling support for the arguments put
forward in the ‘Job analysis and the selection of relevant traits’ section, specifically, that
universal job performance does not exist and that the nature of the role moderates the
correlations between personality and job performance. Further research using job‐relevant,
partially ipsative personality measures identified through personality‐oriented job analysis
or theoretical frameworks appears warranted.
The results from partially ipsative measures appear compelling. However, self‐report distortions
remain resilient. One potential avenue for mitigating the problems with self‐ratings
altogether is not to rely on them but instead have ‘others’ rate candidates’ personality. Two
meta‐analyses indicate that other ratings of personality might offer improved predictive
validity over self‐ratings. Connelly and Ones (2010) conducted a meta‐analysis of 44,178
targeted individuals rated across 263 independent samples. Each target participant had
at least one set of other ratings for the Big Five. Similarly, Oh, Wang and Mount (2011)
conducted a meta‐analysis of some 2,000 target individuals from 18 independent samples.
Table 8.3 displays a summary of the main findings of these meta‐analyses.
The predictive validities of other ratings were substantially higher than those for
self-ratings, regardless of whether or what type of correction was utilized. In many instances
the predictive validity of other ratings is 2, 3 or 4 times the magnitude of self-ratings;
in the case of openness, it is 6 times the magnitude. The magnitudes
of these relationships are impressive. If we use the estimates provided by Schmidt and
Hunter (1998) as a guide, the univariate validities are equivalent to some of our most valid
selection methods, the multivariate validity would no doubt surpass many of these other
methods and the potential incremental predictive validity over and above other methods
is substantial.
Oh and colleagues’ (2011) meta‐analysis provides two more particularly interesting
findings for the selection domain. First, it appears that combining self‐ratings with other
ratings is of little value as self‐reports offer little incremental predictive validity over other
reports. Second, while predictive validity increases in line with the number of other ratings,
the increment is generally small. Specifically, the increase from 1 to 3 other ratings ranges
from 0.04 to 0.06 (uncorrected) and 0.05 to 0.09 (corrected), suggesting that while
multiple other ratings are optimal, the value of a single other rating is still substantial
(Oh et al., 2011).
Table 8.2 Mean, lowest and highest predictive validities of partially ipsative measures of the Big Five.

Correlations with Job Performance
                              Partially ipsative         Likert-type
Trait                         r      r1     r2           r      r1     r2
Emotional stability
  Highest: Supervisory        0.37   0.68   46.2
  Lowest: Managerial         –0.01  –0.02    0.0
  Mean                        0.11   0.20    4.0         0.09   0.10    1.0
Extraversion
  Highest: Managerial         0.21   0.34   11.6
  Lowest: Sales               0.05   0.08    0.6
  Mean                        0.07   0.12    1.4         0.06   0.13    1.7
Openness
  Highest: Clerical          –0.27  –0.44   19.4
  Lowest: Sales               0.11   0.17    2.9
  Mean                        0.14   0.22    4.8         0.02   0.03    0.0
Agreeableness
  Highest: Skilled            0.28   0.42   17.6
  Lowest: Managerial         –0.04  –0.07    0.5
  Mean                        0.10   0.16    2.6         0.07   0.17    2.9
Conscientiousness
  Highest: Skilled            0.43   0.71   50.4
  Lowest: Supervisory         0.09   0.18    3.2
  Mean                        0.22   0.38   14.4         0.10   0.23    5.3

Likert-type estimates taken from Barrick et al. (2001); r = uncorrected correlation; r1 = corrected for unreliability
in criterion only and indirect range restriction in the predictor; r2 = percentage of variance explained.
Clearly, other ratings offer a marked improvement over self‐ratings in predicting job
performance. One likely contribution to the increase is that other ratings mitigate the
response distortions commonly associated with self-ratings, which is highly desirable.
Perhaps less desirable is the possibility that observer ratings and job performance ratings
are highly correlated due to an element of common method bias. It is plausible that other
ratings of personality are assessing reputation and likeability, which is arguably what
supervisor ratings of overall job performance are assessing.
good or bad thing remains to be debated. Nevertheless, the results from studies of other
ratings are highly promising.
Predictive validity: Conclusion
Self‐ratings of higher‐order factors of personality modestly relate to supervisor and
objective ratings of overall job performance. The magnitude of the correlations (or
corrected operational validities) typically ranges from 0.0 to 0.3 and this is true whether
personality factors are examined in univariate (single‐factor correlations) or multivariate
(as a group of five factors) fashion. With the exception of conscientiousness, many broad
personality factors appear to be generally unrelated to ratings of overall job performance.
However, broad personality factors do offer much greater levels of prediction of other
crucial elements of workplace performance, such as counterproductive behaviours,
leadership and teamwork.
Table 8.3 Correlations between job performance and personality as assessed by self‐ratings and
other ratings.
Correlations with Job Performance
Connelly and Ones (2010) Oh et al. (2011)
Trait and rating type r r1 r2 r r3
Emotional stability
Other rating 0.14 0.17 0.37 0.17 0.24
Self‐rating 0.06 0.11 0.12 0.09 0.14
Extraversion
Other rating 0.08 0.11 0.18 0.21 0.29
Self‐rating 0.06 0.11 0.12 0.06 0.09
Openness
Other rating 0.18 0.22 0.45 0.20 0.29
Self‐rating 0.03 0.04 0.05 0.03 0.05
Agreeableness
Other rating 0.13 0.17 0.31 0.23 0.34
Self‐rating 0.06 0.11 0.13 0.07 0.10
Conscientiousness
Other rating 0.23 0.29 0.55 0.31 0.41
Self‐rating 0.12 0.20 0.23 0.15 0.22
r = uncorrected correlation; r1 = corrected for unreliability in criterion only; r2 = corrected for unreliability in
the predictor and criterion; r3 = corrected for unreliability in the criterion measure and range restriction in
the predictor; Other ratings for Oh et al. (2011) refer to the mean predictive validity taken from three
observers.
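The unreliability corrections reported in Tables 8.2 and 8.3 rest on the classical disattenuation formula, r_corrected = r / √(r_xx · r_yy); the range-restriction corrections add a further step not shown here. The sketch below illustrates the mechanics with hypothetical reliability values (the meta-analyses use their own artifact distributions, so these numbers are purely illustrative):

```python
import math

def correct_for_unreliability(r, rel_criterion, rel_predictor=1.0):
    """Classical disattenuation: divide the observed correlation by the
    square root of the product of the two reliabilities. With the default
    rel_predictor = 1.0 only the criterion is corrected."""
    return r / math.sqrt(rel_criterion * rel_predictor)

# Hypothetical inputs: an observed validity of 0.23, an assumed criterion
# reliability of 0.60 and, for the doubly corrected estimate, an assumed
# predictor reliability of 0.70.
r_observed = 0.23
r1 = correct_for_unreliability(r_observed, rel_criterion=0.60)
r2 = correct_for_unreliability(r_observed, rel_criterion=0.60,
                               rel_predictor=0.70)
print(round(r_observed, 2), round(r1, 2), round(r2, 2))
```

Each correction can only increase the estimate, which is why corrected ("operational") validities always meet or exceed the raw correlations in the tables.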
At this point, some might conclude that personality is generally not fit for purpose in
the selection context (e.g., Morgeson et al., 2007a). However, the fact that stand‐alone
personality measures do not offer particularly high levels of predictive validity does not mean
that they are useless. Rather, personality measures offer significant and cost‐effective
(in terms of time and money) incremental predictive validity over other selection
methods. Notably, the combination of cognitive ability and personality is among the
most powerful pairings of selection methods. Thus, we can endorse the use of personality
as a component of a rigorous selection programme (Schmidt & Hunter, 1998;
Schmidt et al., 2008).
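The value of combining the two methods can be sketched with the standard formula for the multiple correlation of two predictors. The validities below are the operational estimates reported by Schmidt and Hunter (1998); the near‐zero predictor intercorrelation is an assumption for illustration:

```python
from math import sqrt

def multiple_r(r1, r2, r12):
    """Multiple correlation of a criterion with two predictors, given each
    predictor's validity (r1, r2) and their intercorrelation (r12)."""
    return sqrt((r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * r12) / (1 - r12 ** 2))

# GMA validity = 0.51, conscientiousness = 0.31 (Schmidt & Hunter, 1998);
# predictor intercorrelation assumed to be zero.
print(round(multiple_r(0.51, 0.31, 0.0), 2))  # 0.6
```

With near‐uncorrelated predictors, the combined validity exceeds either predictor alone, which is what makes the pairing attractive.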
Further, when we step away from meta‐analytic correlations of the Big Five, the picture
is much more interesting. Narrow, lower‐order facets offer much greater predictive
validity than do their broad composite factors (with the exception of conscientiousness,
between two and six times more). While facet‐level analyses are clearly superior to broad
factor analyses, it is also likely that our current estimates of this superiority are themselves
underestimates. Currently, the data we have lack nuance, as they pertain to job
performance en masse across numerous industries, organizations and roles. However, as
we suggest, personality is not a universal predictor: different roles require the utilization
of different levels and combinations of behaviours. In addition, no single facet list from
popular measures of personality is exhaustive; each therefore omits potentially important
personality traits (e.g., the dark triad), further deflating estimates of the predictive validity
of personality.
In spite of the current limitations on our estimates, it is clear that matching a few narrow
traits on the basis of existing empirical evidence and theory leads to increased predictive
validity (Judge et al., 2013; Paunonen & Ashton, 2001). Personality‐oriented job analysis
offers an avenue to identify the narrow facets of relevance and, if utilized appropriately, can
further increase the predictive validity of self‐ratings of personality. We know of no studies
that have examined the incremental predictive validity of facet‐level personality ratings,
based on job analysis, over and above cognitive ability. We suggest that such a study is of
great importance in furthering this debate.
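As a purely hypothetical sketch, job‐analysis relevance ratings could be used to weight narrow facet scores into a role‐specific composite; every facet name, score and weight below is invented for illustration:

```python
def weighted_facet_score(facet_z, weights):
    """Composite of candidate facet z-scores, weighted by job-analysis
    relevance ratings; higher weights mean greater relevance to the role."""
    return sum(facet_z[f] * w for f, w in weights.items()) / sum(weights.values())

# Hypothetical candidate facet scores and job-analysis weights.
facet_z = {"achievement": 0.9, "orderliness": -0.2, "sociability": 0.4}
weights = {"achievement": 3, "orderliness": 1, "sociability": 2}
print(round(weighted_facet_score(facet_z, weights), 2))  # 0.55
```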
One of the likely limiting factors in the validity of personality measures is their susceptibility
to response distortions (e.g. low self‐awareness, faking). The evidence reviewed here
suggests that replacing self‐ratings with other ratings might mitigate self‐report response
distortions and offer substantially increased predictive validity. An intriguing question is
just how much predictive validity would increase if job analysis were used to identify
relevant narrow facets, which were then rated by others and combined in a multivariate
model to predict nuanced measures of job performance. The evidence reviewed in this
chapter suggests that this approach could yield substantial gains in predictive validity and
ultimately improve our selection practices.
Equally, recent research suggests that partially ipsative personality measures have
improved predictive validities compared to traditional, Likert‐type measures. The utility of
partially ipsative measures is even more pronounced when the moderating effects of job
role are taken into account, with univariate relationships with performance within specific
roles ranging from 0.3 to 0.7. In addition, recent advances in the scoring and modelling
of ipsative items (Brown & Maydeu‐Olivares, 2013, in press) make the measures more
appealing and practically useful.
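Classical scoring of a forced‐choice questionnaire can be sketched as follows. This is a simplified illustration of how partially ipsative trait scores arise from within‐block rankings; it is not the Thurstonian IRT approach of Brown and Maydeu‐Olivares (2013), which models the rankings to recover normative trait estimates. The block contents and rankings are hypothetical:

```python
def score_forced_choice(blocks, rankings):
    """Classical (partially ipsative) scoring: within each block the candidate
    ranks items from most (1) to least descriptive; each item's keyed trait
    is credited with (block size - rank)."""
    scores = {}
    for traits, ranks in zip(blocks, rankings):
        for trait, rank in zip(traits, ranks):
            scores[trait] = scores.get(trait, 0) + (len(traits) - rank)
    return scores

blocks = [("C", "E", "A"), ("C", "N", "O")]  # trait keyed by each item in a block
rankings = [(1, 3, 2), (2, 1, 3)]            # one candidate's within-block ranks
print(score_forced_choice(blocks, rankings))
```

Because the sum of credits within every block is fixed, such totals are partially ipsative: a high score on one trait necessarily depresses others presented in the same block.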
In sum, we believe that the predictive validity evidence suggests that personality traits are
valuable during selection. Even a simple measure of conscientiousness offers incremental
predictive validity in most selection scenarios. However, more nuanced use of personality
measures leads to even greater levels of predictive validity which, in our view, makes
personality an important component of the selection toolbox.
How and When to Use Personality Assessments in Selection
The previous sections have examined the question of whether or not personality assessments
are useful for selection. Having concluded that they can be, we now provide a more
practitioner‐focused discussion of how and when personality assessments can be useful.
Employees are often the single largest cost and most complex ‘resource’ to manage;
they are also the source of the knowledge, skills and abilities needed for organizations to
thrive. Therefore, effective employee selection is a crucial component of organizational
functioning and there is an indisputable need for organizations to ensure that they manage
the flow of talented people within their organizations. The activities and processes often
identified as vital for talent management include recruitment, selection, development,
reward, performance management and succession planning. Personality data can be useful
in all these areas. Equally, selection does not refer exclusively to the selection of new
employees. Personality data can be useful for the selection of redeployed staff, short‐term
secondments, expatriate workers and future talent.
In order to elucidate how and when personality assessment can be useful in selection
and talent management more broadly, a selection paradigm is presented in Figure 8.1.
There is no established framework for the selection paradigm, but authors have agreed on
some key elements (Guion & Highhouse, 2006; Smith & Smith, 2005), which range from
identifying the needs of the organization through to the evaluation of the selected
candidate(s). As discussed above, personality‐oriented job analysis offers a very useful
framework; rather than repeat that discussion, in this section we focus on considerations
when choosing selection methods (beyond predictive validity), administering selection
methods (initial and additional) and how to use personality data after the initial
selection decision is made.
Choosing selection methods
Selection methods should be chosen based on consideration of seven key criteria in four
main areas: 1) reliability and predictive validity; 2) legality and fairness; 3) cost and practicality;
and 4) candidate reactions. Personality assessments generally perform well against
these key criteria (Hough, Oswald & Ployhart, 2001; Mount & Barrick, 1995).
The first and most important consideration, and the main focus of this chapter, is reliability
and predictive validity. Put simply, if the method does not predict job performance,
then it is of no interest in a selection context. It is important to note here that the reliability
and predictive validity of each personality assessment will vary, and often free research
scales perform better than for‐pay scales (e.g. Hamby, Taylor, Snowden & Peterson, 2016;
Salgado, 2003). Given the detailed exploration of predictive validity above, little remains
to be said beyond this: check the research evidence pertaining to reliability and validity,
and choose the measure with the best predictive validity. There are, however, four caveats.
First, make sure the test measures traits shown to be of relevance during the job analysis.
Second, ensure the measure has been tested and validated on an appropriate sample that
approximates the likely candidate pool. Third, be wary of wild claims about predictive
validity – if a self‐report measure of a construct closely approximating conscientiousness
claims predictive validities much greater than 0.3, carefully examine this evidence. Fourth,
gains in predictive validity should be considered alongside testing time and format. Choosing
a more complex, demanding, lengthy or expensive measure for a marginal increase in
validity (e.g., 0.33 vs. 0.35) makes little sense.
Once predictive validity has been assessed, selection method choice moves on to concerns
of fairness and practicality. Turning to legality, there is a requirement to check
the relevant legislation in each geographical area of usage. That said, if a valid personality
assessment was chosen based on a thorough job analysis and the measures are appropriate
(e.g., are not clinical in nature), administered and interpreted by qualified personnel and
are used as part of a fuller selection programme, then there is little to suggest that using
them will be indefensible. This claim is further supported by the comprehensive body of
evidence that shows personality assessments to be less prone to adverse impact than many
other selection methods such as interviews, assessment centres or references (Hough
et al., 2001; Sackett & Lievens, 2008). In general, there are minimal racial group and age
group differences observed in personality assessments, and certainly these are much smaller
than those observed in measures of cognitive ability and situational judgement tests.
Figure 8.1 Overview of a typical selection paradigm: organisational needs analysis → job
analysis → job description → person specification → competency framework → identify
selection criteria → choose selection methods → attract candidates → administer selection
methods (initial) → administer selection methods (additional) → hiring decision →
placement → evaluation.
However, there are some notable differences between men and women on personality
measures, which may not be a result of measurement error but rather reflect actual group
differences (Costa, Terracciano & McCrae, 2001; Del Giudice, Booth & Irwing, 2012).
These should be considered during selection (Hough et al., 2001). It is also currently
unknown whether partially ipsative measures and other ratings of personality influence
adverse impact. Nevertheless, we can conclude that self‐ratings of personality perform
relatively well in terms of fairness and legality.
The next considerations when choosing selection methods (or perhaps first in practice)
are cost and practicality. Personality assessments are an extremely cost‐effective selection
method. They are largely inexpensive to procure and can be administered digitally without
any negative effects on response patterns (Ployhart, Weekley, Holtz & Kemp, 2003),
allowing
time‐ and cost‐efficient assessment of many candidates in multiple geographic
locations. Personality measures are among the best selection methods when considering
cost and practicality.
The final consideration is candidate reactions. Organizations do not want new employees’
first interaction with the company to be unpleasant. Equally, organizations do not want
talented but unsuccessful applicants to be deterred from applying for future vacancies. So
it is important to consider how the candidate might feel during the selection process
(however, this is much less important than predictive validity and fairness). Despite some
concerns regarding intrusiveness and a lack of perceived job relevance (Rosse, Miller &
Stecher, 1994), personality assessments are ‘favourably evaluated’ by selection candidates
(Anderson, Salgado & Hülsheger, 2010, p. 291), but less so than interviews, work samples
and cognitive ability tests (Anderson et al., 2010). Nevertheless, personality assessments
are a ‘scientific method of selection’ (Steiner & Gilliland, 1996) and, when used, as we
suggest throughout this chapter, in conjunction with cognitive ability, they receive positive
candidate reactions (Anderson et al., 2010; Rosse et al., 1994). Indeed, organizations that
employ rigorous selection procedures and use scientific selection methods are deemed to
be more attractive to potential employees (Steiner & Gilliland, 1996), which is hugely
important in recruiting a larger applicant pool. This is of course important, as selection will
be poor, regardless of the quality of the selection methods employed, if the candidate
pool does not consist of individuals with the knowledge, skills and abilities necessary to
perform well.
Administering selection methods – Initial
An early stage of the selection process often involves the filtration of potential applicants
(Cook, 2009). Traditionally, this first sift or filter is achieved through a number of methods
which may include examination of application forms or curricula vitae, situational judgement
tests, job knowledge or skills, minimum experience or qualifications, criminal record
check or cognitive ability assessments. Personality assessment, if based on a job analysis
and used in conjunction with other selection methods (e.g. cognitive ability), can be used
to sift the initial candidate pool. This approach allows more expensive and labour‐intensive
methods to be applied to a reduced candidate pool. In general, initial assessments can be
conducted online and thus, as discussed above, are very time‐ and cost‐effective, with no
reduction in response quality and an improvement in candidate reactions
(Salgado & Moscoso, 2003).
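A minimal sketch of such a sift, applying cut‐offs on standardized cognitive ability and conscientiousness scores; all applicant data and cut‐off values are hypothetical, not recommendations:

```python
# Hypothetical standardized (z) scores for a small applicant pool.
applicants = [
    {"id": "A", "cognitive": 0.8, "conscientiousness": 0.5},
    {"id": "B", "cognitive": -0.4, "conscientiousness": 1.2},
    {"id": "C", "cognitive": 1.1, "conscientiousness": -1.5},
]

def sift(pool, min_cognitive=-0.5, min_conscientiousness=-1.0):
    """Retain applicants who clear minimum cut-offs on both measures,
    so later, costlier methods are applied to a reduced pool."""
    return [a["id"] for a in pool
            if a["cognitive"] >= min_cognitive
            and a["conscientiousness"] >= min_conscientiousness]

print(sift(applicants))  # ['A', 'B']
```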
A recent trend concerns the ‘selecting out’ (removing from the candidate pool) of
candidates with specific traits. Identifying the personality traits of the ‘dark triad’ – psychopathy,
Machiavellianism and narcissism (Paulhus & Williams, 2002) – is popular in
this area given their negative influence in the workplace (e.g., Moscoso & Salgado, 2004;
O’Boyle, Forsyth, Banks & McDaniel, 2012). However, further research is needed
regarding selecting out in general and the potential adverse impact such a practice might
induce in the selection process before any firm practitioner points can be made. For
example, some research examining the dark triad suggests that higher scores are not
universal indicators of poor performance or delinquency (Hogan, 2007).
Administering selection methods – Additional
Following initial sifting, most organizations employ additional selection methods before
making a final selection decision. Personality assessments can be used at this stage to
improve the efficacy of the selection process. First, as we have discussed throughout this
chapter, personality assessments can be used to identify the extent to which candidates
may possess the characteristics that will help them excel against competences or duties
essential for the role. Second, the analysis of candidates’ personality profiles can be used to
identify specific interview questions which can be used alongside traditional structured
interview questions (e.g., Morgeson et al., 2007a; Schmitt & Kunce, 2002). For example,
candidates who report an extreme tendency to be introverted might be asked to explain
how they tend to collaborate, while candidates who report an extreme tendency towards
extraversion might be asked how they work independently. Third, personality assessments
can be used to identify values and motives in order to ascertain potential cultural fit
between the candidate and the recruiting organization (Blackman, 2002).
Using personality assessment data post‐selection
Selection processes can be expensive and the data gathered can be of use beyond a final
selection decision. The re‐utilization of personality data after selection makes for a better
return on investment and ensures that the data collected has continuing benefits. If personality
data are to be used after selection, it is important that candidates be informed of
this prior to completing the measures. If this is done, we see personality data as useful in
four ways after selection.
First, when selecting multiple candidates, personality data can inform initial placement
by matching the candidates with mentors (Wanberg, Kammeyer‐Mueller & Marchese,
2006), teams (Morgeson, Reider & Campion, 2005) or leaders (Monzani, Ripoll & Peiró,
2015). In this approach, the personality profile of the selected employee is compared
with those of existing team members or managers. It is important to note that this does
not represent the ‘cloning’ of existing team members or leaders, which is generally to be
avoided, but ensuring a complementary fit of typical tendencies for thinking, feeling and
behaving.
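One simple way to quantify profile fit when matching a new employee to mentors, teams or leaders is a Euclidean distance over trait z‐scores; the profiles below are invented, and whether similarity or complementarity is preferable depends on the traits and roles in question:

```python
from math import sqrt

def profile_distance(candidate, reference):
    """Euclidean distance between two trait profiles (z-scores);
    smaller values indicate more similar typical tendencies."""
    return sqrt(sum((candidate[t] - reference[t]) ** 2 for t in candidate))

candidate = {"E": 0.5, "A": 1.0, "C": 0.2}   # hypothetical new employee
team_mean = {"E": -0.5, "A": 0.8, "C": 0.1}  # hypothetical team average
print(round(profile_distance(candidate, team_mean), 2))  # 1.02
```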
Second, personality assessment data can inform initial employee coaching and development
(Batey, Walker & Hughes, 2012). Here, the new employee can discuss their likely strengths
and development areas on starting in their new role, and the same information can be
shared with team colleagues as part of the induction process.
Third, the personality data might indicate that new employees possess managerial
potential (Goffin et al., 1996) and are well suited to leadership positions (Judge,
Bono, Ilies & Gerhardt, 2002) or expatriate roles (Caligiuri, 2000), and thus could be
considered for 'high potential' or 'rising talent' programmes (Silzer & Church, 2010).
Fourth, if the role, team or department that new employees have joined is subsequently
subject to redesign, restructuring or redeployment, the personality data could partially
inform what new roles they could perform.
The key issue stressed here is that personality data collected during selection can be
effectively used later, provided the candidate is informed of these potential uses during the
selection process. Using selection data to inform placement and development offers other
advantages. Framing personality assessment during selection as the first step in a developmental
trajectory and explaining to candidates how the data are to be used increases the
face validity and job relevance of the measure, thus improving candidate reactions during
selection. It is also possible, though at this point speculative, that candidates may be more
engaged in the selection process and ‘fake’ less if they understand that the personality
assessment will influence with whom they will work and the training they will receive.
One final note is that while self‐ratings are useful, other reports appear to offer greater
predictive validity. Thus, we suggest that once a candidate has been in role for a year or so,
the company ceases to use self‐ratings and instead uses other ratings gleaned from
colleagues, managers and subordinates as part of ongoing 360‐degree development
(Batey et al., 2012).
Future Research
To our mind, the study of personality is deeply fascinating. That we can measure the very
essence of human character is marvellous. That those measurements predict workplace
behaviour is hugely useful. The focused review and analysis in this chapter demonstrate
that research regarding personality and selection has made incredible progress over the
past 30 years and that the area remains vibrant with novel studies frequently challenging
assumptions, improving knowledge and creating a very solid platform for evidence‐based
practice. In keeping with the dynamic nature of the personality–selection field, we finish
this chapter by presenting a number of exciting avenues that are ripe for future research.
Throughout the chapter, we have noted areas of research with promising findings that
need further exploration. We will briefly recapitulate these and discuss some other areas we
believe deserve research attention.
Our first suggestion for future research – the further development of personality measures
– is unlikely to prove universally popular. Many researchers and practitioners have a
preferred tool to which they are strongly committed. However, as discussed in the early
sections of this chapter, there are limitations with currently popular measures based on the
Big Five (e.g., NEO‐PI‐R, HEXACO, HPI). Specifically, the models were developed without
a guiding theoretical framework and use suboptimal psychometric procedures, meaning
that debate remains regarding how many factors exist at each level of the personality
hierarchy and most omnibus personality measures have less than spectacular psychometric
properties (Block, 2010; Booth & Hughes, 2014). In addition, and of great importance
to the selection domain, all omnibus personality measures omit a large number of potentially
important traits (Booth, 2011; Paunonen & Jackson, 2000). Further research that
improves personality theory and measurement along these lines is very welcome.
We believe it is time to move away from producing meta‐analyses of correlations between
broad personality factors and broad measures of performance. Instead, we wish to
see more theory‐driven model testing approaches to personality–job performance
research. Particularly, process models examining the effects of mediators (e.g., teamwork,
communication, motivation) and moderators (e.g., organizational culture, team composition,
leader behaviour) within the personality–job performance link appear to be a fruitful
avenue of exploration.
In line with the argument that universal job performance does not exist and that job
roles moderate predictive validity, we call for researchers to begin building a picture of
role‐specific associations (e.g., leadership roles, clerical roles, sales roles, policing, teaching).
Within this call, we see a crucial role for personality‐oriented job analysis and narrow facets
of personality. In order to facilitate such research, we believe that the production of a
single, exhaustive list of narrow facets would reduce the common jingle‐jangle fallacy
problem and allow for the systematic exploration of the relations between narrow traits
and job performance. It is also important within this research that we move away from
unidimensional measures of performance and towards more realistic multidimensional
models such as that proposed by Bartram (2005). Such research would be much more
theory‐laden and have great practical value. In time, we will be able to aggregate these
studies to provide meta‐analytic estimates while retaining useful, role‐specific information.
Similar efforts have been successful in cognitive ability research (e.g., Bertua et al., 2005).
Traits do not exist or act in isolation; as discussed above, personality is multidimensional.
Currently, most multidimensional personality research adopts a simple, cumulative
regression or aggregation approach. However, we believe that the value of traits is not
simply additive. Rather, traits interact to drive motivation and behaviour. A number of
studies show that trait interactions are of value in understanding performance at work
(e.g., Blickle et al., 2013; Judge & Erez, 2007; Oh, Lee, Ashton & De Vries, 2011).
Accordingly, we call for further research in this promising area.
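The multiplicative idea can be sketched as a moderated regression equation, performance = b0 + b1·C + b2·PS + b3·(C × PS); the coefficients below are hypothetical, loosely echoing the conscientiousness × political skill interaction reported by Blickle et al. (2013):

```python
def predicted_performance(c, ps, b0=0.0, b1=0.30, b2=0.20, b3=0.15):
    """Moderated model: the payoff of conscientiousness (c) depends on
    political skill (ps) through the interaction term b3 * (c * ps).
    All coefficients are hypothetical, for illustration only."""
    return b0 + b1 * c + b2 * ps + b3 * (c * ps)

# Conscientiousness is worth more when political skill is also high:
print(round(predicted_performance(c=1.0, ps=1.0), 2))   # 0.65
print(round(predicted_performance(c=1.0, ps=-1.0), 2))  # -0.05
```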
Similarly, the relationship between personality and job performance in some roles might
be curvilinear. It is possible that too much conscientiousness or too much extraversion
will be counterproductive in some roles (e.g., Bunce & West, 1995; Driskell et al., 1994;
Tett et al., 1999). Examinations of curvilinear relationships might increase understanding
regarding when and where traits are most relevant and potentially indicate optimal trait
levels for specific workplace tasks. Studies examining curvilinear effects have been
undertaken, but to date the results are generally inconclusive (e.g., Le, Oh, Robbins,
Ilies, Holland & Westrick, 2011).
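An inverted‐U of this kind can be written as a quadratic with a negative squared term, whose vertex −b1/(2·b2) gives the (hypothetical) optimal trait level. The coefficients are invented for illustration:

```python
def quadratic_performance(x, b1=0.25, b2=-0.10):
    """Curvilinear model: performance = b1*x + b2*x**2, with b2 < 0
    producing an inverted U. Coefficients are hypothetical."""
    return b1 * x + b2 * x ** 2

optimum = -0.25 / (2 * -0.10)  # vertex of the parabola: -b1 / (2 * b2)
print(round(optimum, 2))  # 1.25
```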
One area of work we have not discussed in any detail is teamwork. People often work
in teams, at least to some degree, with truly solitary work virtually unheard of in most
roles. Despite the fact that workplace interdependence is the norm, we measure only
individual traits and individual performance. While conscientiousness is the single most
important predictor for individual task performance, it is possible that other traits are very
important because they have an impact on the performance and well‐being of others.
Examining how personality enhances or suppresses group performance is a much‐needed
avenue of exploration.
Response distortions remain a problem for self‐ratings of personality. Further research
is required to understand these distortions and generate useful methods to overcome
them. Forced‐choice measures have generally offered limited utility in combating social
desirability. However, recent research suggests that some of this underwhelming
performance might be the result of suboptimal test construction, variable scoring and
analytical procedures (Brown & Maydeu‐Olivares, 2013, in press; Meade, 2004).
Regardless of effects on social desirability, partially ipsative forced‐choice measures offer
impressive levels of predictive validity, outperforming those achieved with Likert‐type
measures. Further examinations of the predictive validity of partially ipsative measures are
warranted, as are explorations of how these rating formats influence adverse impact.
In related fashion, the results from other ratings are so promising that we must continue
to examine them as a plausible measurement approach. Research must consider how other
ratings perform when using facet measures and compare these to broad measures. If the
increment in predictive validity offered by narrow traits in self‐ratings applies equally to
other ratings, then other ratings become even more attractive. We also need to explore
more thoroughly how other ratings differ from self‐ratings: do other ratings still perform
fairly across different groups (e.g., are there sex or racial differences in ratings), does the
rank order of applicants change between self‐ratings and other ratings, and to what extent
does common method bias account for the increased correlations with job performance metrics?
Equally, pragmatic research regarding how to source other ratings reliably is required.
Finally, we call for a tighter integration between academia, selection practitioners and
test publishers. Practitioners have the ability to accelerate progress by adopting some of
the approaches outlined in this chapter and collecting real‐life, real‐time data which can
only serve to enhance our understanding of the personality–job performance link. Bridging
the science–practitioner divide discussed in the introduction is paramount to the fruitfulness
of our field.
Conclusion
In this chapter, we have reviewed the evidence for the utility of personality trait assessments
within selection. We conclude that personality assessments can be a very useful
component of the selection toolbox. We have also considered that the latest evidence suggests
that the use of narrow facets based on theoretical and empirical reasoning offers
superior predictive validity to broad factors and currently represents the most effective
method of utilizing self‐ratings of personality within selection. In addition, we discussed
the potential of partially ipsative measures and other ratings to bypass response distortions
and greatly increase predictive validity. Further, we have outlined a defensible and robust
paradigm for integrating personality assessments into the selection process.
What we believe this review demonstrates for practitioners is this. The utilization of personality
assessments in selection must operate concurrently with a broader selection
programme involving cognitive ability or similar selection tools. Further, if personality is
used in an off‐the‐shelf and uncritical fashion it will almost certainly yield modest values
(correlations with job performance in the region of 0.1–0.3). In fact, using personality
measures in this way is questionable, given that the traits measured have not been linked
to the performance requirements of the role. However, practitioners prepared to employ more nuanced job analyses and
trait selections, within a rigorous selection paradigm, will maximize the value of personality
assessment, thus increasing the likelihood that the chosen candidate(s) will think, feel
and behave in a manner that will contribute to organizationally defined metrics of success.
References
Ackerman, P. L. (2000). Domain‐specific knowledge as the ‘dark matter’ of adult intelligence: gf/gc,
personality and interest correlates. Journal of Gerontology: Psychological Sciences, 55B(2), 69–84.
Allport, G. W. (1961). Pattern and Growth in Personality. New York: Holt, Rinehart & Winston.
Allport, G.W., & Odbert, H. S. (1936). Trait‐names: A psycho‐lexical study. Psychological Monographs,
47(1, Whole No. 211).
Anderson, N., Salgado, J. F., & Hülsheger, U. R. (2010). Applicant reactions in selection:
Comprehensive meta‐analysis into reaction generalization versus situational specificity.
International Journal of Selection and Assessment, 18(3), 291–304.
Ashton, M. C., Jackson, D. N., Paunonen, S. V., Helmes, E., & Rothstein, M. G. (1995). The
criterion validity of broad factor scales versus specific facet scales. Journal of Research in
Personality, 29, 432–442.
Ashton, M. C., Lee, K., & Son, C. (2000). Honesty as the sixth factor of personality: Correlations
with Machiavellianism, primary psychopathy, and social adroitness. European Journal of
Personality, 14, 359–368.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance:
A meta‐analysis. Personnel Psychology, 44, 1–26.
Barrick, M. R., & Mount, M. K. (1996). Effects of impression management and self‐deception on
the predictive validity of personality constructs. Journal of Applied Psychology, 81(3), 261–272.
Barrick, M. R., Mount, M. K., & Judge, T. A. (2001). The FFM personality dimensions and job
performance: Meta‐analysis of meta‐analyses. International Journal of Selection and Assessment,
9(1–2), 9–30.
Bartram, D. (2005). The Great Eight competencies: a criterion‐centric approach to validation.
Journal of Applied Psychology, 90(6), 1185–1203.
Bartram, D. (2007). Increasing validity with forced‐choice criterion measurement formats.
International Journal of Selection and Assessment, 15, 263–272.
Batey, M., Walker, A., & Hughes, D. J. (2012). Psychometric tools in development – Do they work
and how? In J. Passmore (Ed.), Psychometrics in Coaching. Using Psychological and Psychometric
Tools for Development (pp. 49–58). London: Kogan Page.
Baumgarten, F. (1933). Die Charaktereigenschaften [The character traits]. Beiträge zur
Charakter‐ und Persönlichkeitsforschung (Whole No. 1). Bern, Switzerland: A. Francke.
Bertua, C., Anderson, N., & Salgado, J. F. (2005). The predictive validity of cognitive ability tests:
A UK meta‐analysis. Journal of Occupational and Organizational Psychology, 78(3), 387–409.
Birkeland, S. A., Manson, T. M., Kisamore, J. L., Brannick, M. T., & Smith, M. A. (2006). A
meta‐analytic investigation of job applicant faking on personality measures. International
Journal of Selection and Assessment, 14(4), 317–335.
Blackman, M. C. (2002). Personality judgment and the utility of the unstructured employment
interview. Basic and Applied Social Psychology, 24(3), 241–250.
Blickle, G., Meurs, J. A., Wihler, A., Ewen, C., Plies, A., & Günther, S. (2013). The interactive
effects of conscientiousness, openness to experience, and political skill on job performance
in complex jobs: The importance of context. Journal of Organizational Behavior, 34(8),
1145–1164.
Block, J. (1995). A contrarian view of the five‐factor approach to personality description. Psychological
Bulletin, 117, 187–215.
Block, J. (2001). Millennial contrarianism: The five‐factor approach to personality description 5
years later. Journal of Research in Personality, 35, 98–107.
Block, J. (2010). The five‐factor framing of personality and beyond: Some ruminations. Psychological
Inquiry, 21, 2–25.
Booth, T. (2011). A review of the structure of normal range personality. Unpublished doctoral
thesis. University of Manchester, UK.
Booth, T., Irwing, P., & Hughes, D. J. (in preparation). The 11+ factor model: A structural analysis
of 1,176 personality items.
Booth, T., & Hughes, D. J. (2014). Exploratory structural equation modeling of personality data.
Assessment, 21(3), 260–271.
Borgatta, E. F. (1964). The structure of personality characteristics. Behavioral Science, 9, 8–17.
Brannick, M. T., & Levine, E. L. (2002). Job Analysis: Methods, Research and Applications for Human
Resource Management in the New Millennium. Thousand Oaks, CA: Sage.
Brown, A., & Maydeu‐Olivares, A. (2013). How IRT can solve problems of ipsative data in
forced‐choice questionnaires. Psychological Methods, 18, 36–52.
Brown, A., & Maydeu‐Olivares, A. (in press). Modelling forced‐choice response formats. In
P. Irwing, T. Booth, & D. J. Hughes (Eds.), The Wiley–Blackwell Handbook of Psychometric
Testing. Oxford: Wiley–Blackwell.
Bunce, D., & West, M. A. (1995). Self‐perceptions and perceptions of group climate as predictors of
individual innovation at work. Applied Psychology, 44(3), 199–215.
Caligiuri, P. M. (2000). The Big‐Five personality characteristics as predictors of expatriates’ desire to
terminate the assignment and supervisor‐rated performance. Personnel Psychology, 53, 67–88.
Carver, C. S., & Scheier, M. F. (1996). Perspectives on Personality (3rd ed.). Needham Heights, MA:
Allyn & Bacon.
Cattell, R. B. (1943). The description of personality: I. Foundations of trait measurement.
Psychological Review, 50, 559–594.
Cattell, R. B. (1954). The personality and motivation of the research scientist. Wennergren Prize
Essay. New York Academy of Science.
Chamorro‐Premuzic, T., & Furnham, A. (2003). Personality predicts academic performance:
Evidence from two longitudinal university samples. Journal of Research in Personality, 37(4),
319–338.
Colodro, J., Garcés‐de‐los‐Fayos, E. J., López‐García, J. J., & Colodro‐Conde, L. (2015).
Incremental validity of personality measures in predicting underwater performance and
adaptation. Spanish Journal of Psychology, 18, E15, 1–10.
Connelly, B. S., & Ones, D. S. (2010). An other perspective on personality: Meta‐analytic integration
of observers’ accuracy and predictive validity. Psychological Bulletin, 136(6), 1092–1122.
Cook, M. (2009). Personnel Selection: Adding Value through People (5th ed.). Chichester: John Wiley
& Sons.
Costa, P. T., & McCrae, R. R. (1992). Revised NEO Personality Inventory (NEO‐PI‐R) and NEO
Five‐Factor Inventory (NEO‐FFI) Professional Manual. Odessa, FL: Psychological Assessment
Resources.
Costa, P. T., McCrae, R. R., & Kay, G. G. (1995). Persons, places, and personality: Career assessment
using the Revised NEO Personality Inventory. Journal of Career Assessment, 3(2), 123–139.
Costa, P. T., Terracciano, A., & McCrae, R. R. (2001). Gender differences in personality traits across
cultures: Robust and surprising findings. Journal of Personality and Social Psychology, 81,
322–331.
Cronbach, L. J. (1984). Essentials of Psychological Testing (4th ed.). New York: Harper & Row.
Del Giudice, M., Booth, T., & Irwing, P. (2012). The distance between Mars and Venus: Measuring
global sex differences in personality. PLoS ONE, 7, e29265.
DeYoung, C. G., Quilty, L. C., & Peterson, J. B. (2007). Between facets and domains: 10 aspects of
the Big Five. Journal of Personality and Social Psychology, 93(5), 880–896.
Digman, J. M. (1990). Personality structure: Emergence of the five‐factor model. Annual Review of
Psychology, 41, 417–440.
Driskell, J. E., Hogan, J., Salas, E., & Hoskin, B. (1994). Cognitive and personality predictors of
training performance. Military Psychology, 6(1), 31–46.
Feeney, J. R., & Goffin, R. D. (2015). The overclaiming questionnaire: A good way to measure
faking? Personality and Individual Differences, 82, 248–252.
Feist, G. J., & Barron, F. X. (2003). Predicting creativity from early to late adulthood: Intellect,
potential, and personality. Journal of Research in Personality, 37(2), 62–88.
Fiske, D. W. (1949). Consistency of the factorial structures of personality ratings from different
sources. Journal of Abnormal and Social Psychology, 44, 329–344.
Funder, D. C., & Ozer, D. J. (1983). Behavior as a function of the situation. Journal of Personality
and Social Psychology, 44, 107–112.
Furnham, A. (1986). Response bias, social desirability and dissimulation. Personality and Individual
Differences, 7, 385–400.
Galton, F. (1869). Hereditary Genius. London: Macmillan.
Goffin, R. D., Rothstein, M. G., & Johnston, N. G. (1996). Personality testing and the assessment
center: Incremental validity for managerial selection. Journal of Applied Psychology, 81, 746–756.
Goffin, R. D., Rothstein, M. G., Rieder, M. J., Poole, A., Krajewski, H. T., Powell, D. M., &
Mestdagh, T. (2011). Choosing job‐related personality traits: Developing valid personality‐oriented
job analysis. Personality and Individual Differences, 51(5), 646–651.
Goldberg, L. R. (1990). An alternative ‘description of personality’: The Big Five factor structure.
Journal of Personality and Social Psychology, 59, 1216–1229.
Guilford, J. P. (1959). Personality. New York: McGraw‐Hill.
Guion, R. M., & Gottier, R. F. (1965). Validity of personality measures in personnel selection.
Personnel Psychology, 18(2), 135–164.
Guion, R. M., & Highhouse, S. (2006). Essentials of Personnel Selection: Personnel Assessment and
Selection. Mahwah, NJ: Lawrence Erlbaum.
Hamby, T., Taylor, W., Snowden, A. K., & Peterson, R. A. (2016). A meta‐analysis of the reliability
of free and for‐pay Big Five scales. The Journal of Psychology: Interdisciplinary and Applied,
150(4), 422–430.
Hampson, S. E. (2012). Personality processes: Mechanisms by which personality traits ‘get outside
the skin’. Annual Review of Psychology, 63, 315–339.
Heggestad, E. D., Morrison, M., Reeve, C. L., & McCloy, R. A. (2006). Forced‐choice assessments
of personality for selection: Evaluating issues of normative assessment and faking resistance.
Journal of Applied Psychology, 91(1), 9–24.
Hicks, L. E. (1970). Some properties of ipsative, normative, and forced‐choice normative measures.
Psychological Bulletin, 74(3), 167–184.
Hogan, J., & Holland, B. (2003). Using theory to evaluate personality and job–performance
relations: A socioanalytic perspective. Journal of Applied Psychology, 88, 100–112.
Hogan, R. (2007). Personality and the Fate of Organizations. Mahwah, NJ: Lawrence
Erlbaum.
Holden, R. R., Wood, L. L., & Tomashewski, L. (2001). Do response time limitations counteract
the effect of faking on personality inventory validity? Journal of Personality and Social Psychology,
81(1), 160–169.
Hough, L. M., Eaton, N. K., Dunnette, M. D., Kamp, J. D., & McCloy, R. A. (1990). Criterion‐related
validities of personality constructs and the effect of response distortion on those validities.
Journal of Applied Psychology, 75(5), 581.
Hough, L. M., Oswald, F. L., & Ployhart, R. E. (2001). Determinants, detection and amelioration
of adverse impact in personnel selection procedures: Issues, evidence and lessons learned.
International Journal of Selection and Assessment, 9(1–2), 152–194.
Hughes, D. J. (2014). Accounting for individual differences in financial behaviour: The role of personality
in insurance claims and credit behaviour. Unpublished doctoral thesis. University of
Manchester, UK.
Hunter, J. E., & Schmidt, F. L. (2004). Methods of Meta‐Analysis: Correcting Error and Bias in
Research Findings. Thousand Oaks, CA: Sage.
Irwing, P., & Booth, T. (2013). An item level exploratory factor analysis of the sphere of personality:
An eleven‐factor model. Paper presented at the 1st World Conference on Personality, Stellenbosch,
South Africa.
Jackson, D. N., Ashton, M. C., & Tomes, J. L. (1996). The six‐factor model of personality: Facets
from the Big Five. Personality and Individual Differences, 21, 391–402.
Jackson, D. N., Paunonen, S. V., Fraboni, M., & Goffin, R. G. (1996). A five‐factor versus six‐factor
model of personality structure. Personality and Individual Differences, 20, 33–45.
Jenkins, M., & Griffith, R. (2004). Using personality constructs to predict performance: Narrow or
broad bandwidth. Journal of Business and Psychology, 19(2), 255–269.
Johnson, C. E., Wood, R., & Blinkhorn, S. F. (1988). Spuriouser and spuriouser: The use of ipsative
personality tests. Journal of Occupational Psychology, 61(2), 153–162.
Judge, T. A., Bono, J. E., Ilies, R., & Gerhardt, M. W. (2002). Personality and leadership: A
qualitative and quantitative review. Journal of Applied Psychology, 87, 765–780.
Judge, T. A., & Erez, A. (2007). Interaction and intersection: The constellation of emotional stability
and extraversion in predicting performance. Personnel Psychology, 60(3), 573–596.
Judge, T. A., & Ilies, R. (2002). Relationship of personality to performance motivation: A meta‐analytic
review. Journal of Applied Psychology, 87, 797–807.
Judge, T. A., Jackson, C. L., Shaw, J. C., Scott, B. A., & Rich, B. L. (2007). Self‐efficacy and work‐related
performance: The integral role of individual differences. Journal of Applied Psychology,
92(1), 107–127.
Judge, T. A., Rodell, J. B., Klinger, R. L., Simon, L. S., & Crawford, E. R. (2013). Hierarchical
representations of the five‐factor model of personality in predicting job performance: Integrating
three organizing frameworks with two theoretical perspectives. Journal of Applied Psychology,
98(6), 875–925.
Judge, T. A., & Zapata, C. P. (2015). The person–situation debate revisited: Effect of situation
strength and trait activation on the validity of the Big Five traits in predicting job performance.
Academy of Management Journal, 58, 1–31.
Kelley, T. L. (1927). Interpretation of Educational Measurements. Yonkers, NY: World Book.
Komar, S., Komar, J. A., Robie, C., & Taggar, S. (2010). Speeding personality measures to reduce
faking. A self‐regulatory model. Journal of Personnel Psychology, 9(3), 126–137.
Le, H., Oh, I. S., Robbins, S. B., Ilies, R., Holland, E., & Westrick, P. (2011). Too much of a good
thing: Curvilinear relationships between personality traits and job performance. Journal of
Applied Psychology, 96(1), 113–133.
Lee, K., & Ashton, M. C. (2004). Psychometric properties of the HEXACO Personality Inventory.
Multivariate Behavioral Research, 39, 329–358.
Lee, K., Ashton, M. C., Hong, S., & Park, K. B. (2000). Psychometric properties of the Nonverbal
Personality Questionnaire in Korea. Educational and Psychological Measurement, 60, 131–141.
Lounsbury, J. W., Sundstrom, E., Loveland, J. L., & Gibson, L. W. (2003). Broad versus narrow
personality traits in predicting academic performance of adolescents. Learning and Individual
Differences, 14(1), 65–75.
Marsh, H. W., Lüdtke, O., Muthén, B., Asparouhov, T., Morin, A. J. S., Trautwein, U., & Nagengast,
B. (2010). A new look at the Big Five factor structure through exploratory structural equation
modeling. Psychological Assessment, 22, 471–491.
McCrae, R. R., & Costa, P. T. (1985). Updating Norman’s ‘Adequate Taxonomy’: Intelligence and
personality dimensions in natural language and in questionnaires. Journal of Personality and
Social Psychology, 49, 710–721.
McManus, M. A., & Kelly, M. L. (1999). Personality measures and biodata: Evidence regarding
their incremental predictive value in the life insurance industry. Personnel Psychology, 52,
137–148.
Meade, A. W. (2004). Psychometric problems and issues involved with creating and using ipsative
measures for selection. Journal of Occupational and Organizational Psychology, 77, 531–551.
Mischel, W. (1968). Personality and Assessment. New York: John Wiley & Sons.
Monson, T. C., Hesley, J. W., & Chernick, L. (1982). Specifying when personality traits can and
cannot predict behavior: An alternative to abandoning the attempt to predict single act criteria.
Journal of Personality and Social Psychology, 43, 385–399.
Monzani, L., Ripoll, P., & Peiró, J. M. (2015). The moderator role of followers’ personality traits in
the relations between leadership styles, two types of task performance and work result satisfaction.
European Journal of Work and Organizational Psychology, 24(3), 444–461.
Morgeson, F. P., Campion, M. A., Dipboye, R. L., Hollenbeck, J. R., Murphy, K., & Schmitt, N.
(2007a). Reconsidering the use of personality tests in personnel selection contexts. Personnel
Psychology, 60(3), 683–729.
Morgeson, F. P., Campion, M. A., Dipboye, R. L., Hollenbeck, J. R., Murphy, K., & Schmitt, N.
(2007b). Are we getting fooled again? Coming to terms with limitations in the use of personality
tests for personnel selection. Personnel Psychology, 60(4), 1029–1049.
Morgeson, F. P., Reider, M. H., & Campion, M. A. (2005). Selecting individuals in team settings:
The importance of social skills, personality characteristics, and teamwork knowledge. Personnel
Psychology, 58(3), 583–611.
Moscoso, S., & Salgado, J. F. (2004). ‘Dark side’ personality styles as predictors of task, contextual,
and job performance. International Journal of Selection and Assessment, 12, 356–362.
Mount, M. K., & Barrick, M. R. (1995). The Big Five personality dimensions: Implications for
research and practice in human resources management. Research in Personnel and Human
Resources Management, 13(3), 153–200.
Mueller‐Hanson, R., Heggestad, E. D., & Thornton III, G. C. (2003). Faking and selection:
Considering the use of personality from select‐in and select‐out perspectives. Journal of Applied
Psychology, 88(2), 348–355.
Myers, I. (1978). Myers–Briggs Type Indicator. Palo Alto, CA: Consulting Psychologists Press.
Norman, W. T. (1963). Toward an adequate taxonomy of personality attributes: Replicated factor
structure in peer nomination personality ratings. The Journal of Abnormal and Social Psychology,
66, 574–583.
O’Boyle, E. H., Forsyth, D. R., Banks, G. C., & McDaniel, M. A. (2012). A meta‐analysis of the
dark triad and work behavior: A social exchange perspective. Journal of Applied Psychology,
97(3), 557–579.
Oh, I. S., Lee, K., Ashton, M. C., & De Vries, R. E. (2011). Are dishonest extraverts more harmful
than dishonest introverts? The interaction effects of honesty–humility and extraversion in
predicting workplace deviance. Applied Psychology, 60(3), 496–516.
Oh, I. S., Le, H., Whitman, D. S., Kim, K., Yoo, T. Y., Hwang, J. O., & Kim, C.‐S. (2014). The
incremental validity of honesty–humility over cognitive ability and the Big Five personality traits.
Human Performance, 27, 206–224.
Oh, I. S., Wang, G., & Mount, M. K. (2011). Validity of observer ratings of the five‐factor model of
personality traits: A meta‐analysis. Journal of Applied Psychology, 96(4), 762.
Ones, D. S., Dilchert, S., Viswesvaran, C., & Judge, T. A. (2007). In support of personality
assessment in organizational settings. Personnel Psychology, 60(4), 995–1027.
Ones, D. S., & Viswesvaran, C. (1996). Bandwidth‐fidelity dilemma in personality measurement for
personnel selection. Journal of Organizational Behavior, 17, 609–626.
Ones, D. S., & Viswesvaran, C. (1998). The effects of social desirability and faking on personality
and integrity assessment for personnel selection. Human Performance, 11(2–3), 245–269.
Pace, V. L., & Brannick, M. T. (2010). How similar are personality scales of the ‘same’ construct?
A meta‐analytic investigation. Personality and Individual Differences, 49, 669–676.
Paulhus, D. L., & Williams, K. M. (2002). The dark triad of personality: Narcissism, Machiavellianism,
and psychopathy. Journal of Research in Personality, 36, 556–563.
Paunonen, S. V., & Ashton, M. C. (2001). Big Five factors and facets and the prediction of behavior.
Journal of Personality and Social Psychology, 81, 524–539.
Paunonen, S. V., Haddock, G., Forsterling, F., & Keinonen, M. (2003). Broad versus narrow
personality measures and the prediction of behaviour across cultures. European Journal of
Personality, 17, 413–433.
Paunonen, S. V., & Jackson, D. N. (2000). What is beyond the Big Five? Plenty! Journal of
Personality, 68, 821–835.
Pervin, L., & John, O. P. (Eds.). (2001). Handbook of Personality: Theory and Research (2nd ed.).
New York: Guilford Press.
Pittenger, D. J. (2005). Cautionary comments regarding the Myers–Briggs Type Indicator. Consulting
Psychology Journal: Practice and Research, 57(3), 210–221.
Ployhart, R. E., Weekley, J. A., Holtz, B. C., & Kemp, C. (2003). Web‐based and paper‐and‐pencil
testing of applicants in a proctored setting: Are personality, biodata, and situational judgment
tests comparable? Personnel Psychology, 56(3), 733–752.
Raymark, P. H., Schmit, M. J., & Guion, R. M. (1997). Identifying potentially useful personality
constructs for employee selection. Personnel Psychology, 50(3), 723–736.
Roberts, B. W., & DelVecchio, W. F. (2000). The rank‐order consistency of personality traits from
childhood to old age: A quantitative review of longitudinal studies. Psychological Bulletin, 126,
3–25.
Roberts, B. W., Kuncel, N. R., Shiner, R. L., Caspi, A., & Goldberg, L. R. (2007). The power of
personality: The comparative validity of personality traits, socioeconomic status, and cognitive
ability for predicting important life outcomes. Perspectives on Psychological Science, 2, 313–345.
Roberts, B. W., & Mroczek, D. (2008). Personality trait change in adulthood. Current Directions in
Psychological Science, 17, 31–35.
Rojon, C., McDowall, A., & Saunders, M. N. (2015). The relationships between traditional selection
assessments and workplace performance criteria specificity: A comparative meta‐analysis. Human
Performance, 28(1), 1–25.
Rosse, J. G., Miller, J. L., & Stecher, M. D. (1994). A field study of job applicants’ reactions to
personality and cognitive ability testing. Journal of Applied Psychology, 79, 987–992.
Rothstein, M. G., & Goffin, R. D. (2006). The use of personality measures in personnel selection:
What does current research support? Human Resource Management Review, 16(2), 155–180.
Rothstein, M., Paunonen, S., Rush, J., & King, G. (1994). Personality and cognitive ability predictors
of performance in graduate business school. Journal of Educational Psychology, 86, 516–530.
Rynes, S., Giluk, T., & Brown, K. (2007). The very separate worlds of academic and practitioner
periodicals in human resource management: Implications for evidence‐based management.
Academy of Management Journal, 50, 987–1008.
Sackett, P. R., & Lievens, F. (2008). Personnel selection. Annual Review of Psychology, 59,
419–450.
Salgado, J. F. (1997). The five factor model of personality and job performance in the European
Community. Journal of Applied Psychology, 82(1), 30–43.
Salgado, J. F. (2003). Predicting job performance using FFM and non‐FFM personality measures.
Journal of Occupational and Organizational Psychology, 76, 323–346.
Salgado, J. F., & Moscoso, S. (2003). Internet‐based personality testing: Equivalence of measures
and assessors’ perceptions and reactions. International Journal of Selection and Assessment, 11,
194–205.
Salgado, J. F., Anderson, N., & Tauriz, G. (2014). The validity of ipsative and quasi‐ipsative forced‐choice
personality inventories for different occupational groups: A comprehensive meta‐analysis.
Journal of Occupational and Organizational Psychology, 88(4), 797–834.
Salgado, J. F., & Tauriz, G. (2014). The five‐factor model, forced‐choice personality inventories and
performance: A comprehensive meta‐analysis of academic and occupational validity studies.
European Journal of Work and Organizational Psychology, 23, 3–30.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel
psychology: Practical and theoretical implications of 85 years of research findings. Psychological
Bulletin, 124(2), 262–274.
Schmidt, F. L., Shaffer, J. A., & Oh, I. S. (2008). Increased accuracy for range restriction corrections:
Implications for the role of personality and general mental ability in job and training
performance. Personnel Psychology, 61, 827–868.
Schmitt, N., & Kunce, C. (2002). The effects of required elaboration of answers to biodata questions.
Personnel Psychology, 55(3), 569–587.
Silzer, R. F., & Church, A. H. (2010). Identifying and assessing high potential talent: Current
organizational practices. In R. F. Silzer & B. E. Dowell (Eds.), Strategy‐Driven Talent
Management: A Leadership Imperative (pp. 213–280). Chichester: Wiley.
Smith, J. M., & Smith, P. (2005). Testing People at Work. London: Blackwell.
Steiner, D. D., & Gilliland, S. W. (1996). Fairness reactions to personnel selection techniques in
France and the United States. Journal of Applied Psychology, 81(2), 134–141.
Tett, R. P., & Burnett, D. D. (2003). A personality trait‐based interactionist model of job
performance. Journal of Applied Psychology, 88(3), 500–517.
Tett, R. P., & Christiansen, N. D. (2007). Personality tests at the crossroads: A response to Morgeson,
Campion, Dipboye, Hollenbeck, Murphy, and Schmitt (2007). Personnel Psychology, 60(4),
967–993.
Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job
performance: A meta‐analytic review. Personnel Psychology, 44(4), 703–742.
Tett, R. P., Jackson, D. N., Rothstein, M., & Reddon, J. R. (1999). Meta‐analysis of bidirectional
relations in personality–job performance research. Human Performance, 12(1), 1–29.
Tett, R. P., Steele, J. R., & Beauregard, R. S. (2003). Broad and narrow measures on both sides of
the personality–job performance relationship. Journal of Organizational Behavior, 24(3),
335–356.
Thorndike, E. L. (1904). An Introduction to the Theory of Mental and Social Measurements. Oxford:
Science Press.
Timmerman, M. E. (2006). Multilevel component analysis. British Journal of Mathematical and
Statistical Psychology, 59(2), 301–320.
Tupes, E. C., & Christal, R. E. (1961). Recurrent personality factors based on trait ratings. Technical
Report No. ASD‐TR‐61‐97. Lackland Air Force Base, TX: U.S. Air Force.
Vassend, O., & Skrondal, A. (2011). The NEO personality inventory revised (NEO‐PI‐R): Exploring
the measurement structure and variants of the five‐factor model. Personality and Individual
Differences, 50, 1300–1304.
Wanberg, C. R., Kammeyer‐Mueller, J., & Marchese, M. (2006). Mentor and protégé predictors and
outcomes of mentoring in a formal mentoring program. Journal of Vocational Behavior, 69(3),
410–423.