The Assessment Gap

Ross Woods
©2004 (22 Nov. 2004), rev. 2010, rev. 2018

What is the assessment gap?

The assessment gap is the extent to which assessment is a judgment call.

Unit statements (very broad, apply to many situations)   |   The assessment gap (guesswork?)   |   Real students in concrete, specific situations; actual evidence to be assessed

In the left column, the unit statements (elements and performance criteria) are meant to be broad enough to be flexible and appropriate to many different situations, and are consequently assumed to be too broad and non-specific to apply clearly to particular situations.

On the right is something quite specific: real students in concrete, specific situations producing actual, specific evidence for assessment.

Assessment almost always involves a judgment call, which is supposedly quite risky because it might involve too much guesswork.

 

Why the angst?

Training authorities are concerned about assessment decisions being judgment calls made by assessors. For example, if a student appeals, can you prove your assessment was correct? How can you show that the assessment was valid, fair, reliable, and flexible?

Training authorities still get complaints about certified graduates who are not competent at what they've been certified as competent to do. It's unlikely that the majority of assessments have problems, but there are enough of them to lower the credibility of the whole competency-based system. In any case, there is little doubt that too many assessors are making impressionistic judgments that students are competent when they actually aren't. Of course, some of them are just playing tick-a-box (also known as "tick and flick").

 

Other solutions

A great deal has been done to take the guesswork out of assessment, usually by making the assessment system more explicit or more concrete.

It comes up as a constant theme throughout assessor training requirements, policy changes and re-interpretations, and recommended best practice. The training packages themselves also include many features that attempt to define requirements specifically.

Here's a list, and you could add more:

 

Another approach

One way to improve the system is to use concrete, specific tools that apply directly to real students in real situations, but that also reflect all requirements of the unit of competency. (Tools are also known as instruments, although a few people try to differentiate the two.)

Training standards don't say much about tools, but the only way a college can comply with some requirements is to have auditable tools.

The assumption is that the competency standards got it right in the first place, which is sometimes questionable and sometimes known to be incorrect. The inference drawn, however, is that colleges are not strictly complying with the package requirements.

The solution is to write tools that refer to required evidence much more specifically and concretely than the competency standards do. Assessment almost always involves a judgment call, but good tools should minimize or even eliminate guesswork. The answer to the assessment gap, then, is to provide concrete, specific tools that fill the gap; that is, tools that enable assessors to make concrete judgments without guesswork.

Unit statement   |   reflected in   |   Assessment tools   |   apply directly to   |   Real students in concrete situations

 

Valid, reliable, flexible and fair

Okay, so now you have a set of assessment tools. But this creates a problem. If your tools are now very specific and concrete, are they valid? Do they actually assess what you want to assess? That is, do they faithfully reflect the package requirements?

Unit statement   |   Are they valid?   |   Assessment tools   |   Real students in concrete situations

The tools also create other problems:

Unit statement   |   Assessment tools   |   Are they reliable, flexible, and fair?   |   Real students in concrete situations

 

So what is required?

The quality standard is still being interpreted as requiring colleges to prove that each separate requirement is addressed. The issue is not whether or not colleges comply with the quality standard, but whether they can demonstrate in detail that they comply with all assessment requirements. Just claiming to meet the requirements was, and still is, inadequate.

This might be done by:

(At one stage, it was also allowable to write a narrative of how the transition was done, but, inexplicably, it is no longer considered adequate.)

In particular, this means:

  1. Show that you've taught each element.
  2. Ensure every aspect of each element is addressed in delivery and assessment.
  3. Show that you are drawing conclusions according to the assessment criteria.
  4. Ensure all required knowledge is assessed.
  5. Match assessment to the qualification level in the Qualifications Framework.

 

Does the assessment gap vary?

I suspect so. In the first variation, it is difficult to establish the relationship between the tools and the unit statements:

Unit statement   |   Possible gap   |   Assessment tools so concrete that assessment could be a clerical or computer procedure   |   Real students in concrete situations

In the second variation, diagrammed below, the tools would be seen to be unreliable and to require some guesswork. This would allegedly be the case when using templates based directly on package requirements.

Unit statement   |   Assessment tools that are very close to the unit statement   |   Assessment judgments   |   Real students in concrete situations

Competency standards also vary. Generating tools for some of them takes a lot of work because the assessment gap is very wide. Other standards are already concrete and specific, and there is no identifiable gap. They are nearly "ready to use" because their inbuilt safeguards are very effective (e.g. performance criteria, contextualization options, range, critical aspects of assessment). Some units are no more than observation checklists written up as units.

In these cases, the questions of validity and flexibility are eliminated, but the presumption of unreliability remains and can be difficult to disprove, even if the instruments have proven reliable in practice. That is, the presumption of guilt is not quite fair. It is even more difficult when instruments are assembled for individual students: the procedure may be field-tested, but the assessment for the individual student is not.

Whether justified or not, accreditation auditors now seem to want to see tools that are not based directly on the packages, but that can be fully justified by the package requirements.

 

Will this help?

Good question. Better tools will make for better assessments by honest assessors, who will have to do better at processing the requirements and be more accountable for them. However, better tools won't necessarily improve the quality of assessments by those who are accustomed to playing tick-a-box, although tick-a-box will probably become more clearly dishonest, because the vagaries of a "judgment call" will no longer be a defense.

 

What's new?

First, what was not new? Following a written program development procedure and writing notes is not new. Having a system of functioning tools is a given.

So what is required, or at least seems to be required?

  1. A very tool-centric approach. That's not a bad thing because good tools add value to a program.
  2. Tools must be unit-specific but may not be unit-driven.
  3. Generic tools, templates, and unit-driven tools are presumed unreliable.
  4. You must "demonstrate" in detail and in writing how you complied. It is not sufficient just to comply with the quality standard; colleges also need to link specific cases in writing to specific items in the competency standard. That is, it is a de facto requirement for an extra layer of paperwork, and auditors can say colleges are non-compliant if they do not do it.

These were new interpretations of the standards when the first edition of this e-book was written.

 

How burdensome?

It's easy to explain to a validator how you put the program together. It's not difficult to map assessment tools against the requirements of the competency standard, as long as you know that you need to do so.
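As a rough illustration of what such a mapping amounts to (the unit, element, and tool names below are purely hypothetical, not drawn from any actual training package), the same coverage check could be kept in a spreadsheet or sketched in a few lines of Python:

  # A minimal sketch of mapping assessment tools to unit requirements.
  # All unit, element, and tool names are hypothetical examples only.

  # Performance criteria required by each element of a hypothetical unit.
  unit_requirements = {
      "Element 1": ["PC 1.1", "PC 1.2", "PC 1.3"],
      "Element 2": ["PC 2.1", "PC 2.2"],
  }

  # Performance criteria that each assessment tool claims to cover.
  tool_coverage = {
      "Observation checklist A": ["PC 1.1", "PC 1.2"],
      "Written questions B": ["PC 1.3", "PC 2.1"],
  }

  # Flag any performance criterion that no tool addresses.
  covered = {pc for criteria in tool_coverage.values() for pc in criteria}
  for element, criteria in unit_requirements.items():
      for pc in criteria:
          if pc not in covered:
              print(f"Not covered by any tool: {element} / {pc}")
  # In this example, "Element 2 / PC 2.2" would be flagged.

The point is only that the mapping itself is mechanical; the judgment lies in deciding whether each tool genuinely covers the criteria claimed for it.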

But some things are more burdensome. The extra paperwork seems to exist mainly for compliance. If you already meet all requirements anyway, it doesn't add much value to the program for staff or students. It may be useful in litigation when you must defend your assessment system, but probably nobody except auditors will want to read it. In fact, the challenge may be finding a way to make it add value to a program. Given the presumption that unit-driven tools, generic tools, and templates are unreliable (regardless of whether or not they are), many of these must be re-written, which is a huge task.