Ross Woods, rev. 2018
This guide to assessment was originally written specifically for vocational education, but most of it is just as useful for professional education. In vocational training and professional education, assessment is used to determine whether or not a student is competent in a particular set of skills.
In other respects, the assessment procedures in this workbook apply to all kinds of learning.
The more general sections are suitable for K-12 schools and higher education but do not contain discussion of specific K-12 issues. Many higher education programs have different goals, teaching through lectures and tutorials, giving essay tasks as assignments, and using written examinations. If you teach and assess this way, you will need to see how these principles apply to you.
Assessment can mean appraisal or judgment of worth.
In contrast, the term evaluation is used to refer to determining the quality of a whole program, so it now usually means something different from assessment.
One way to classify assessment is as criterion-referenced or norm-referenced; another is as formative or summative.
Many other kinds of assessment are also quite relevant to schoolteaching. Some of them reflect methods of assessment rather than purposes, and many are not mutually exclusive:
• Diagnostic assessment (https://www.exeter.ac.uk/staff/development/academic/resources/assessment/principles/types/, accessed June 25, 2018)
People take assessments for many purposes, for example:
The most interesting point is that you could use the same process for more than one purpose:
A competency is a course requirement that is expressed as something the student must be able to do; it defines the actual requirements.
In school systems, they are usually written in the curriculum framework or a syllabus document. In professional or vocational education, competencies are written in a competency standards framework. In other higher education, they are set by the particular institution. They are often written in full in the unit description for each unit.
It is helpful to think of competencies more broadly. They aren't limited to performance skills; they may be conceptual, such as analysis, formulation, and strategy. Their written length does not necessarily reflect their difficulty. Some challenging tasks can be described quite briefly.
In professional and vocational education, the competencies of a degree together usually reflect a specific work role. The competencies of a purely academic degree define the level of mastery of an academic discipline. Options may allow for one degree to accommodate multiple roles.
No matter how well-written competency statements are, interpreting them always brings up questions. Competencies are umbrella statements and can't specify every micro-skill. Then again, they don't need to do so. Similarly, they can't be comprehensive, because writers can't predict every kind of serious mistake that students will make.
Interpreting well-written competency statements doesn't need to be difficult. First, competencies are to be interpreted in the context of the generic requirements of the qualification and its purpose. Second, competency statements normally also include smaller objectives or criteria to help ensure consistent interpretation and to specify more clearly how well the competency must be done. For example, a competency may be written as a purpose statement and a list of skills, each with a set of performance criteria. A performance criterion can be any kind of indicator that shows whether or not the student has performed satisfactorily.
Some of the implications are:
In some jurisdictions, higher education programs must be defined in semester hours. In the US, a semester hour usually represents 45 hours of study. Consequently, programs can be defined in both competencies and semester hours. One semester hour represents the learning competencies achieved through 45 hours of study, based on the estimated average time for native English speakers in the program’s target group to achieve those competencies.
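As a minimal sketch of the arithmetic (the 45-hour figure is the US convention stated above; the function name and the example figures are my own illustrations):

```python
# Converting semester hours to nominal study hours, assuming the US convention
# stated above: one semester hour represents 45 hours of study.

HOURS_PER_SEMESTER_HOUR = 45

def nominal_study_hours(semester_hours: float) -> float:
    """Estimated study hours represented by a number of semester hours."""
    return semester_hours * HOURS_PER_SEMESTER_HOUR

print(nominal_study_hours(3))    # a typical 3-semester-hour unit: 135 hours of study
print(nominal_study_hours(120))  # e.g. a 120-semester-hour program: 5400 hours
```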
What about people who have never studied? Can they just apply for the assessment?
Yes. Many vocational colleges and institutions of higher education allow for the recognition of current competencies regardless of how they are acquired. Terms vary greatly. "Challenge test" is quite common in the US, while Australian vocational colleges call it "Recognition of Prior Learning" (RPL).
People can learn skills through:
If someone has learnt the skill through any of these ways, or any other informal way, and meets the requirements to get into the program, then they must be allowed to be assessed. They can also be assessed for RPL to gain admission into the program.
Training organizations may not refuse assessment because an applicant didn't attend the classes or training sessions. If the student passes the assessment, he/she is not differentiated in any way from those who sat through all classes.
RPL affects assessors because assessors now have greater responsibility. The assessment is the only way to ascertain whether the student has those skills. In the past, the classroom teacher could monitor students' development through assignments, class activities, and formative assessment, and so had a guide to whether the assessment went well.
Without classroom monitoring, the assessor is responsible to ensure the quality of assessment, and assessment becomes mainly summative.
RPL has another effect: RPL students and regular classroom students must sit equivalent assessments. It is unethical to make one more difficult than the other; for example, an assessor can't make an RPL student's assessment much more difficult on the grounds that the student's skills weren't monitored in class.
In the past, some institutions used unethical tactics to make sure that students could not pass an RPL assessment. They did this by:
RPL is not the same as transfer credit. The confusion can arise because one way to do an assessment is by a portfolio of documentary evidence (discussed later). Moreover, RPL makes it possible to recognize unaccredited study, as long as there is sufficient evidence of learning and an assessment is made. RPL is quite different from transfer credit:
Portfolios of documents are still quite useful in some circumstances, but the trend is generally away from them. Current RPL procedures are more hands-on and on-site, including focused interviews, observations and walk-throughs.
Here's an interesting thought on assessment. A Recognition of Prior Learning assessment can be much like applying for a job:
Applying for a job | Assessment |
---|---|
Your CV outlines your experience and achievements with enough detail to convince the prospective employer that you are competent for the position. You list your qualifications and any other training, and you enclose references and a portfolio of work samples. | Your CV outlines your experience and achievements with enough detail to convince the assessor that you have the competencies of the qualification. You list your qualifications and any other training, and you enclose references and a portfolio of work samples. |
You address each selection criterion, showing how you meet it. | You address each element, showing how you meet it. |
The prospective employer verifies what he/she reads by checking the references with past employers. | The assessor verifies what he/she reads by checking the references with past employers. |
In the interview, he/she sights the originals or certified true copies of qualifications. He/she also asks questions to test whether you really have the skills for the position. | In the oral assessment, he/she sights the originals or certified true copies of qualifications. He/she also asks questions to test whether you really have the skills for the qualification. |
Here are two different step-by-step procedures for RPL, one by portfolio and one by practicum. I've put them side by side so they are easy to compare:
Portfolio | Practicum |
---|---|
Prospective student approaches the RTO. | Prospective student approaches the RTO. |
Education policies perceive a gap between curriculum statements of outcomes and assessment criteria on the one hand, and making specific assessment decisions with real students on the other.
Policymakers are concerned about assessment decisions being judgment calls made by assessors. For example, if a student appeals, can you prove your assessment was correct? Or was it just your best guess?
A great deal has been done to close the gap and take any guesswork out of the assessment decision. Nevertheless, solutions to the guesswork problem come up as a constant theme throughout assessment practices:
Some authorities tend to prefer holistic assessment.
Could you have one assessment activity that gets evidence for many related elements? Holistic assessment means that one assessment of one complex activity might cover more than one element or even more than one unit. Over-assessing is costly for you and frustrating for the student.
You might be able to save yourself and your students a lot of work by clustering assessments. The point is that students must be able to put all the different skills together to be able to do the job:
Example 1: A receptionist might be doing one job at the desk and be assessed for separate skills in workplace safety, communication, information management, and computing.
Example 2: A factory worker could use a piece of equipment correctly while complying with workplace safety rules.
Holistic evidence is good. A good holistic assessment is based on actual practice in a real workplace or a realistic simulation. Being part of one work role, the elements have common circumstances and probably overlap; students often perform skills from different units simultaneously.
In fact, a single assessment activity can combine knowledge, understanding, problem-solving skills, technical skills, workplace safety, language, literacy, numeracy, people skills, attitudes and ethics.
But you can't make a "holistic judgment." It wouldn't be responsible to make a gut-feeling assessment judgment ("Um, yeah, he/she can do the job") based on a whole-of-life report consisting of qualifications, training, and experience.
The problem is that the specific written requirements and the person’s performance don't clearly correlate. In other words, the assessment gap is too wide and the assessment is basically guesswork. To solve this problem, you should assess according to specific requirements of the units.
The result for each vocational unit will be that the student is either "Competent" or "Not yet competent." Many vocational units are well suited to these assessment outcomes, and most assessors appreciate the simplicity.
Grades always need defining. In a letter grading system, A usually means outstanding, B means excellent, C is a pass, and D and E mean that the student has not passed.
Grading is normally built into norm-based assessment, and works well when student numbers are large and the assessments are done very consistently. Note the conditions: it doesn't work for very small numbers of students or if the assessment system cannot ensure very high levels of consistency.
Many institutions give grades as A, B, C, D, or E. Students, schools, employers, and universities all want grades. Grades are necessary for students transferring to further studies, sometimes internationally. Consider this scenario:
Based on "Percentages in assessment" by Ross Woods
"What percentage constitutes a passing grade?" For example, some people might say 60%, others might say 70% is higher, and others suggest that 80% represents "higher quality" education.
The fact is that the actual percentage figure is completely arbitrary. An assessor could easily dumb down all questions in order to require 100% correct. Similarly, an assessor could make them all so difficult that 20% correct is a good result.
In norm-based assessment, what really matters is the percentile needed to be deemed a particular grade.
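To make the contrast concrete, here is a minimal sketch in which the grade is derived from the student's standing in the cohort, not from the raw percentage. The cohort scores and quartile bands are illustrative assumptions, not a standard:

```python
# Norm-referenced grading: the grade depends on where a score falls among
# all students in the cohort, not on the raw percentage itself.

from statistics import quantiles

scores = [52, 61, 68, 70, 74, 75, 79, 83, 88, 94]  # hypothetical raw scores

q = quantiles(scores, n=4)  # quartile cut points for this cohort

def norm_referenced_grade(score: float) -> str:
    if score >= q[2]:
        return "A"   # top quartile
    if score >= q[1]:
        return "B"
    if score >= q[0]:
        return "C"
    return "D"       # bottom quartile

print(norm_referenced_grade(83))  # the grade depends on the cohort, not on "83%"
```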
In a competency-based system, however, it is the other way around. First, write competencies to define a passing level of performance. Then set the testing system so that a pass fits the competency. In competency-based assessment that has lists of "required knowledge," all knowledge is required, so we can legitimately ask: "If it doesn't matter whether students get a particular question wrong, then nobody needs to get it right to pass. So why are you asking it?" In other words, it is quite legitimate to require 100% in these cases.
Some systems have it both ways; they use a rubric for assessors to classify answers as "unsatisfactory," "satisfactory," and one or more levels of exceptional. Others have several stages (a sketch follows the list):
• Early questions only settle students into the testing environment.
• The next batch of questions tests the basics and represents the competency standard. Students must get them all correct to pass.
• The following batch of questions is more difficult; its purpose is to enable students to earn a grade above the minimum pass.
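Here is a minimal sketch of that staged logic. The batch sizes, thresholds, and grade labels are illustrative assumptions, not part of any standard; the early "settling" questions are deliberately not scored:

```python
def staged_result(basics_correct: int, basics_total: int,
                  harder_correct: int, harder_total: int) -> str:
    """Basics represent the competency standard: all must be correct to pass.
    Harder questions only lift the grade above the minimum pass."""
    if basics_correct < basics_total:
        return "Not yet competent"      # any wrong basic answer means no pass yet
    if harder_correct >= 0.8 * harder_total:
        return "Competent (distinction)"
    if harder_correct >= 0.5 * harder_total:
        return "Competent (credit)"
    return "Competent"                  # minimum pass: all basics correct

print(staged_result(10, 10, 2, 6))  # Competent: passed the basics, few harder ones
print(staged_result(9, 10, 6, 6))   # Not yet competent: one basic question wrong
```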
A student who passes such a knowledge test may still be "Not yet competent" on the job. Consequently, for the student to pass the unit, your system must ensure that students also demonstrate competence on the job.
In higher education, the system of graded assessment is often left to each particular institution. Oddly enough, ungraded assessment is most common in research work, where theses and dissertations are usually assessed on some kind of pass / not pass system. "Not pass" often means that corrections are required, a lower qualification is awarded (such as a Master of Philosophy instead of a Ph.D.), or the student leaves with no qualification.
In other cases, the student must demonstrate satisfactory competence in all requirements, and a grade can then be given based on the extent to which their performance exceeds the standard.
The way to derive a letter grade is to define requirements for each letter grade. This can be done at a micro level for each assessment, perhaps as a rubric. Requirements can also be set at institutional level, such as the following (a sketch of applying them follows the table):
Grade | Meaning |
---|---|
A | Outstanding achievement, competent at the next highest qualification |
B | Meets the minimum requirement for recommendation to proceed to the next highest qualification |
C | Satisfactory work to pass in the qualification for which the student is enrolled, but not recommended to proceed to the next highest qualification |
C- | The minimum requirement to pass |
D | Does not meet the minimum requirement to pass |
E | Very poor |
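As a minimal sketch of applying such institution-level definitions (the 0-3 "exceeds the standard" rating is my own illustrative device, following the idea above that a pass comes first and the grade reflects how far performance exceeds the standard):

```python
def letter_grade(meets_pass: bool, exceeds_standard: int = 0) -> str:
    """A pass is required first; a 0-3 rating of how far performance
    exceeds the standard then lifts the grade from C- toward A."""
    if not meets_pass:
        return "D"  # does not meet the minimum requirement to pass (E: very poor)
    return {0: "C-", 1: "C", 2: "B", 3: "A"}[exceeds_standard]

print(letter_grade(True))     # C-: the minimum requirement to pass
print(letter_grade(True, 3))  # A: outstanding achievement
print(letter_grade(False))    # D: not a pass
```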
Assessment also implies that there is some evidence that can be assessed. Evidence is information, materials, or products that show whether or not a student has the skills. For example, a teacher could watch students directly and record an assessment result. In a vocational training or professional education context, an assessor could get reporting forms of what the student has done, or get references from work or practicum supervisors.
The evidence is usually a direct correlate of the assessment mode. There are three kinds of evidence, although assessment authorities seldom agree on definitions:
Kind | Key feature | What it means |
---|---|---|
Direct evidence | Direct observation in a real situation. | As assessor, you directly observe the student performing the skill in a real or simulated situation. (To be called a simulation, an assessment must be realistic enough to qualify as direct evidence.) |
Indirect evidence | Assessor infers competence. | As assessor, you infer competence from what the student has done (such as samples or written work). This may also include classroom-based assessments, and performance of conceptual skills in an interview or a test. |
Supplementary evidence | You rely on someone else to inform your decision. | This includes third party reports, references, and professional licenses. However, the assessor is still responsible for the final assessment decision. |
Supplementary evidence is excellent and in no way inferior. Original signed documents with detailed statements of competence from responsible, competent persons are very appropriate. Those documents need to be either issued independently or authenticated, and need to be free of conflict of interest. They usually take the form of references or professional licenses from credible bodies with established standards. It is good practice to follow up references with a phone call, because people will often tell you things in person that they wouldn't put in writing.
Of course, some third party evidence lacks detail or credibility, and is best used simply to corroborate other evidence.
Some authorities list the student's claims to competence (CV, self-assessments, etc.) as supplementary evidence. They are not real evidence and should carry little weight in assessment results. They are most useful for establishing an appropriate assessment: in a taught program, they monitor students' readiness for assessment; in Recognition of Prior Learning, they indicate areas and levels of ability and possible sources of third-party evidence.
Most competency standards leave it up to the assessor to determine which kinds of evidence you will get. Some, however, require you to gather a combination of direct, indirect, and supplementary evidence, or specify that direct evidence must be collected.
Below are the main points of a code of practice for assessors, most of which reflect ethics in some way.
• Careful, confidential use of assessment results is imperative; an unwisely released "Not yet competent" result can have severe career ramifications for the student. There is no special exemption for the student's parents or employers, even if they have paid for the course.
• Do you owe a favor?
Your surgeon might be licensed to perform brain surgery on you, but would you want him/her to do it knowing that he/she couldn't?
Don’t assess outside your area of ability even if you have appropriate qualifications. If you know that a topic is outside your current abilities, then it is normally unethical for you to assess it alone. You might get advice or have a co-assessor.
Find out your own limits. For example, you may have limitations related to your ability in assessment procedures, quality processes or your own competency level. There can also be related legal responsibilities that you cannot meet.
Your college might also put some constraints on you:
There is generally no assessment policy written on the role of attitudes, and some authorities take the narrow view of assessing behavior only and consider attitudes irrelevant. However, many competency requirements are de facto attitude statements.
The rule of thumb is that attitudes must be appropriate to the kind of skill. The role of attitudes varies with the kind of job:
Besides, some attitudes (such as prejudice based on gender or race) can be unconscious but observable in behavior. Even when attitudes are unconscious, they still need to be appropriate to the student's work. Training should have raised those issues to a conscious level.
Confidence usually comes up as an issue.
Ideally, you'd like students to do a good job and be confident that they can do well. Consider these five variations:
Of course, if you're assessing someone who lacks confidence, it's an excellent idea to encourage them as much as you can.
General standards are standards that apply to all assessments. Assessments must address several principles of assessment: validity, reliability, flexibility, and fairness. Evidence must be sufficient, authentic, and current. Some are less complex and we can deal with them briefly before progressing to those that are more complex.
The basic meaning of reliable is that the assessment works the same way every time. Reliability depends on assessors sharing a common interpretation of the units and of evidence being assessed.
It means consistent results:
Will the assessment give the same results at different times, or for people who have learnt the skills in quite different ways? Will the evidence be interpreted in the same way by other assessors? Do different kinds of assessment of the same skill produce the same results?
Will the procedure assess people in widely different situations?
Flexible means that the assessment works equally well for all students and situations for which it was designed.
Fair means that the assessment works equally well for all students for whom it is designed, including disabled students where relevant. Perhaps no single assessment strategy is equally fair for all students, although an assessment strategy can be fair for all members of a particular group.
Does the process favor one kind of student over another? How would you assess a disabled person? Do students know they can appeal? Do students know beforehand the way they will be assessed and criteria used in assessment?
Is the work being examined the student's own work? Do you need to verify it? For example:
Are you assessing skills that the student has now? Or what they once knew and have since perhaps forgotten? For example: A 10-year-old report confirming a student's computer literacy is not evidence of their current skills.
Some people also now use "current" to mean that you are assessing against the current version of the standards. While it is not too difficult to keep track of competency standards, some national bodies frequently make minor revisions that are difficult to track.
Valid means that the assessment assesses what it is supposed to assess. This has two meanings:
Educators tend to define it as assessing the right thing. Two of the most common examples are as follows:
Another meaning of valid, often used in vocational education, means that the assessment complies with the competency standard and addresses all its requirements. In this meaning, assessments are invalid if they don't address all competency requirements. Vocational educators vary in opinions as to whether adding extra requirements makes the assessment invalid.
Does the assessment adequately cover the range of skills? Does it integrate theory and practice? Are there multiple ways to assess the learning?
It's the assessor's responsibility to have a procedure that asks students for enough evidence. It is then the student's responsibility to provide enough evidence.
There is no universal definition of 'sufficient evidence.' Some competency standards make your job easier; they actually define it (e.g. "on three separate occasions"), so you only have to comply with that definition. (There is still a potential trap. Unfortunately, some competency standards routinely specify in most units, "If a number is not specified then one is adequate," although a single shot might not really be enough.)
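A minimal sketch of that kind of sufficiency rule (the function name and the default of one occasion are assumptions drawn from the wording above):

```python
from typing import Optional

def is_sufficient(occasions_observed: int, required_occasions: Optional[int]) -> bool:
    """Apply the standard's rule: if a number is not specified, then one is adequate."""
    required = required_occasions if required_occasions is not None else 1
    return occasions_observed >= required

print(is_sufficient(2, 3))     # False: the standard demands three separate occasions
print(is_sufficient(1, None))  # True by default, though a single shot may not really be enough
```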
The best practice for most cases is as follows:
Sufficiency can vary according to the actual competency, so you will need to find out how much evidence is sufficient on a case-by-case basis. Consider the following ...
Assessment of knowledge is a little more complex:
In vocational and professional education, several other de facto requirements are not written in policy but derive from the idea of competence. In essence, they mean that the student has all the skills they need to do their job relative to the competency requirements. Your students need to learn more than just the skill that is the main point of the lesson, and you are usually required to reinforce these extra skills in your teaching and to assess them.
Vocational and professional students should demonstrate competence in various contexts. The most common way to provide various contexts is to assess over an extended period of time. Other ways are to move students through different roles or locations, or to assess their skills using different equipment, etc.
Can they use what they've learnt in other contexts? Can they apply it outside the narrow confines of what they are doing at the time? How broad are the skills? On one hand, students need to be able to transfer skills to other situations. For example, a hairdresser who can only cut blond hair is inadequately trained; he/she should be able to use those skills on non-blondes. On the other hand, students don't necessarily need to perform the skill in every possible situation. The hairdresser need not perform every possible haircut to demonstrate competence.
When doing an assessment, ask, "What would you do in a situation where …?" "How would this apply to …?" "Some organizations have a rule that …. What would you do if you had to perform this skill there?"
Here are a few simple solutions:
You should require consistency. The trend is that all evidence should consistently indicate competence, so it is clear that the student can perform the skill consistently on more than one occasion. A patchy performance should not be considered competent.
The rationale is that a competent person has consistent performance. In fact, some standards now explicitly require consistency. Depending on how much evidence is sufficient, you may have to check that the student has performed the skill in a range of contexts over a period of time.
If you face inconsistent evidence, check to see what is going on. Here are two examples:
For vocational and professional education, students must demonstrate competence at a workplace standard, even though standards vary from workplace to workplace. This assumes that your graduates must be employable, and employability can be used as a standard.
This has two implications:
Students should know the relevant laws with which they should comply. And if they do something illegal during an assessment, they must be assessed as not yet competent.
Professional and vocational colleges must maintain compliance with legislation and any regulatory requirements. That is, compliance is required by the college whether or not the competency standard says so. This compliance requirement extends to generic things like WHS, anti-discrimination, and consumer law. If the standard doesn't explicitly say so, compliance with legislation and any regulatory requirements should be read into it.
Workplace health and safety is a regulatory requirement, so you are also entitled to require that students comply with WHS rules and you can ask about relevant WHS aspects of the task in an assessment. In fact, you are legally required to maintain a safe environment. As part of an assessment, you may be able to get third party reports from the student's supervisor that the student complies with WHS procedures, or include WHS in your observations.
These are simply the ability to handle individual tasks. Can the student actually do the job? You can use this requirement to justify the little extra items that are part of doing the job.
It can be necessary to include efficiency here. That is, a student isn't competent if he/she can't perform the skill within a reasonable amount of time. For example, a new trainee nurse might do just as good a job washing a patient as a competent nurse, but take twice the time.
Do they know what to do if something goes wrong or doesn't go to plan? How do they handle interruptions?
Ask lots of "What if …?" questions. "You depend on person X to get their job done correctly. What if they made a mistake?" "What if this machine broke down? What would you do?"
"What potential problems normally occur in this situation? How would you respond to each one? What are the signs of it becoming a real problem, not just a potential problem? When would you try other ways of doing this?
Do they know how to relate to other people? How do they relate to their organization? If they have to do paperwork, do they keep it up to date?
Gather evidence on whether students can manage themselves and their tasks. For example, can they plan their own work, predict consequences, and identify improvements? Ask about getting people or equipment organized. Ask how their schedule works, and whether they keep paperwork under control.