THE A-Z OF TESTING

 

The ETPG A-Z complements many specialist dictionaries and glossaries which provide scientific definitions of technical terms used in psychology and testing.

Our definitions use simple language except where further technical terms are unavoidable. In places we’ve included non-technical terms to examine their implications for testing.

 

We’ve introduced certain statistical concepts and tried to explain their implications and why they’re important, but left out any detailed statistical explanation. This is not to downplay the significance of statistics in this area, but we believe there are better ways of explaining and learning these techniques.

 

If you want more information, do ring us. Also ring us if you can’t find a term in our A-Z: we’ve tried to be selective and we may have missed some words out.

 

A

 

ABILITY

 

Can a person carry out a task and if so, how well? If they can, they are said to have that ‘ability’.

 

Work abilities usually imply some sort of “reasoning”. Examples could be working out why a hot water system is leaking, or “teasing out” the overall trends implied by a set of sales figures in EXCEL format.

 

People’s ability to reason and carry out a task varies across different sorts of situations and different sorts of evidence, so there are different tests to assess areas like “verbal ability”, “numerical reasoning ability”, “abstract reasoning ability” and “spatial reasoning ability” to distinguish these different contexts. These are among the most widely used tests in modern business and educational practice.

 

Jobs often require different sorts of ability: a copywriter will need high verbal skills, an accountant numerical skills, a mechanic spatial skills and so on.

 

ADAPTIVE TESTING

 

Different people taking the same test traditionally answered the same questions in the same order. Adaptive tests ask different questions depending on an individual’s answers.

 

So, if a person gets an item wrong or answers in a certain way, the test will present a different item than if the person got the question right or answered in a different way. If a person gets a certain number of items wrong in a row, the test may stop, since the person has reached his or her “ceiling” of ability.

 

Adaptivity saves time and prevents candidates getting bored by being asked very easy, very difficult or irrelevant questions. Adaptivity is made possible by computer technology, but some printed tests are partly adaptive: they ask pre-test questions then send the person to the area of the test that best fits their initial answers.
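
To make the logic concrete, here is a rough sketch in Python of how an adaptive test might choose its next item. It isn’t any particular publisher’s algorithm: the item pool, the one-level step up or down after each answer and the “three wrong in a row” ceiling rule are all assumptions made purely for illustration.

    # Minimal sketch of an adaptive test: illustrative only, not any real product's algorithm.
    # Assumptions: items are grouped by difficulty level, difficulty moves up one level after a
    # correct answer and down one after a wrong one, and the test stops ("ceiling") after three
    # consecutive wrong answers.

    def run_adaptive_test(items_by_level, ask, start_level=2, ceiling_streak=3):
        """items_by_level: dict mapping difficulty level -> list of (question, answer) pairs.
        ask: a function that presents a question and returns the candidate's answer."""
        level = start_level
        wrong_in_a_row = 0
        results = []

        # Stops when the pool at the current level runs out - acceptable for a sketch.
        while items_by_level.get(level):
            question, correct_answer = items_by_level[level].pop(0)
            correct = ask(question) == correct_answer
            results.append((level, correct))

            if correct:
                wrong_in_a_row = 0
                level = min(level + 1, max(items_by_level))   # harder item next
            else:
                wrong_in_a_row += 1
                level = max(level - 1, min(items_by_level))   # easier item next
                if wrong_in_a_row >= ceiling_streak:          # candidate has hit their ceiling
                    break

        return results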

 

ADMINISTRATORS

 

Sometimes called ‘proctors’. These are the people who run testing sessions to make sure the test is delivered to give the most accurate results, to ensure there’s no cheating and to ensure the test takers are in the right frame of mind to answer honestly or do their best. Professional administration leaves a good impression with candidates in recruitment situations and can help an organisation’s brand.

 

Good administration is a skill.

 

An administrator does not need to be an expert in the scoring and interpretation of tests; indeed using specialist administrators to run sessions and test experts to interpret results can save time and money.

 

ADVERSE IMPACT

 

If a selection process results in relatively more people from one group being selected than from another, adverse impact is operating. This is a crucial issue given legislation on equal opportunities and discrimination.

 

If your selection process results in relatively more white than black people, men than women, or 17 year olds than 60 year olds being chosen – unless this is a justifiable result of the requirements of the job – then you have adverse impact and may be open to legal action.

 

It is crucially important therefore that you gather background information on your candidates while making clear that this is not being used in the actual selection decision. Once you have this information it’s fairly simple to work out if your procedures are discriminating unfairly and to put this right.
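
As an illustration of that working out, one widely quoted rule of thumb (the US “four-fifths” rule) compares each group’s selection rate with the highest group’s rate and flags a possible problem when it falls below 80% of it. The Python sketch below, with made-up figures, shows that kind of check; it’s an illustration of the arithmetic, not legal advice.

    # Illustrative check for adverse impact: compare each group's selection rate with the
    # highest group's rate. The 80% ("four-fifths") threshold is a widely quoted US rule of
    # thumb, not a legal test in itself; the numbers below are made up.

    def selection_rates(applicants, selected):
        """applicants/selected: dicts mapping group name -> counts."""
        return {group: selected[group] / applicants[group] for group in applicants}

    def adverse_impact_flags(applicants, selected, threshold=0.8):
        rates = selection_rates(applicants, selected)
        best = max(rates.values())
        # Flag any group whose rate falls below the threshold relative to the best-selected group
        return {group: (rate / best) < threshold for group, rate in rates.items()}

    if __name__ == "__main__":
        applicants = {"men": 100, "women": 80}
        selected = {"men": 40, "women": 20}
        print(selection_rates(applicants, selected))      # {'men': 0.4, 'women': 0.25}
        print(adverse_impact_flags(applicants, selected)) # {'men': False, 'women': True}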

 

APTITUDE

 

If truth be told, the words “ability” and “aptitude” are often used indiscriminately. Technically they mean slightly different things:

 

  • Ability underlies aptitude. Ability measures a person’s general intellectual functioning: how good he or she is at reasoning from numbers, for instance.
  • Aptitude measures a person’s likely success in an occupation given the abilities he or she has.

But people will use the same test to measure “ability” and “aptitude”, and in work situations we’re rarely testing people to describe how good they are in a particular area: we’re trying to find out if they’re going to be good managers, call centre operatives or sales people.

 

ASSESSMENT

 

Another word which can cause confusion.

 

  • At its loosest, assessment happens all the time – in your conversations, at meetings, in interviews, at speed dating events – when you appraise or estimate something about a person. You’ll be making judgements about the person who wrote this A-Z from the words it uses.
  • More precisely, assessment has been used to distinguish less formal and technical techniques from psychometric tests. There’s more about psychometric tests later on in our A-Z, but ‘assessment’ is often used to describe informal questionnaires which start discussions rather than help make decisions...
  • ...but over the last few years “test” and “assessment” have been blurred and you’ll find them used interchangeably.

  

ASSESSMENT CENTRE

 

An event in which a number of techniques are used to evaluate a group of applicants for a job. Typically an assessment centre might involve a structured interview, test(s), role plays, in-tray exercises, observed discussions and maybe even an observed social event. It may last 1-3 days depending on the seniority of the job in question. Assessment centres are extremely accurate, but also expensive, and tend to be used for senior, high-profile jobs.

 

APA

 

The American Psychological Association. While this has little direct relevance to European test users, the APA is a very influential organisation which publishes much useful material on test use.

 

ATP

 

The US Association of Test Publishers, some of whose members’ tests are available within Europe in translated and adapted versions.

 

ATTAINMENT

 

What you’ve learnt, whether that’s a skill (driving a car, using WORD, dragging and dropping computer files) or knowledge (marketing theory, HR law, geography). Attainment is what you’ve learnt: ability is more about the capacity to learn.

 

B

 

BATTERY

 

A group of tests used together to measure a range of a person’s attributes.

 

BIAS

 

If an assessment causes people’s responses to vary in a way which has nothing to do with what the test is supposed to measure, then it’s biased.

 

For instance, if a test which claimed to measure abstract reasoning caused very short people to score much lower than very tall people, you might suspect it was biased in some way.

 

BIG FIVE

 

One strand of personality theory suggests that there are a number of reasonably stable factors which make up a person’s unique personality. These factors might describe areas like: how sociable you are; how far you follow rules; the extent to which you can control the expression of your emotions.

 

Different tests suggest there are different numbers of these factors but, since the 1980s, the most widely used model has been the FIVE FACTOR MODEL or the BIG FIVE. You can trace this back to research done in the 1930s and much research has been done since then.

 

There are a number of Big Five tests available, and tests that report other numbers of factors often map onto the Big Five. In a sense, the number of factors a test uses – given that it’s technically good – reflects how much detail the user wants to go into, and that reflects the reason the test is being used, the importance of the job, the time and money available and the expertise of the user.

 

BIODATA TESTS

 

Created to select insurance salespeople in the 1950s, biodata tests are not widely used, partly because of their expense but also because users feel uncomfortable not knowing why they work!

 

In essence:

 

  • much testing at work is done to predict performance. You recruit, trying to ensure that successful candidates are more likely to perform well than unsuccessful candidates. Equally, you offer someone training in order that they’ll do their existing job better or take on a bigger job and succeed.
  • classic assessment tries to predict success through specific human characteristics: personality, abilities, qualifications etc.

 

Biodata tests take the view that anything may predict success in a job. It doesn’t matter whether you can explain why a characteristic predicts success: as long as the statistics suggest that a person with a particular attribute or experience is more likely to succeed than someone who hasn’t got it, then, practically, you have a test which helps in selection and business success.

 

So, biodata tests gather huge amounts of information about a person – their family history, physical make-up, life experiences and so on – then see which items, if any, predict success.
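
As a toy illustration of “seeing which ones predict success”: score each past employee on a handful of yes/no biodata items, flag whether they turned out to be successful, and look at which items line up with that flag. The items, the data and the simple correlation in the Python sketch below are invented for illustration; real biodata development uses large samples and proper validation, not a quick calculation like this.

    # Toy sketch of the biodata idea: score each past employee on a set of yes/no items, then
    # see which items correlate with later success. The items and data are invented; real
    # biodata studies use large samples and proper validation.

    def pearson(xs, ys):
        """Simple Pearson correlation between two equal-length lists of numbers."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0

    # Each row: one past employee; 1/0 answers to biodata items plus a 1/0 "successful" flag.
    candidates = [
        {"eldest_child": 1, "has_driving_licence": 1, "ran_a_club_at_school": 1, "successful": 1},
        {"eldest_child": 0, "has_driving_licence": 1, "ran_a_club_at_school": 0, "successful": 0},
        {"eldest_child": 1, "has_driving_licence": 0, "ran_a_club_at_school": 1, "successful": 1},
        {"eldest_child": 0, "has_driving_licence": 1, "ran_a_club_at_school": 0, "successful": 0},
        {"eldest_child": 1, "has_driving_licence": 1, "ran_a_club_at_school": 0, "successful": 1},
    ]

    items = ["eldest_child", "has_driving_licence", "ran_a_club_at_school"]
    success = [c["successful"] for c in candidates]

    for item in items:
        answers = [c[item] for c in candidates]
        print(item, round(pearson(answers, success), 2))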

 

Good biodata tests seem to work well, but they’re extremely expensive to develop, may raise privacy issues and are difficult to justify if, for instance, your best salespeople without fail have blue eyes!