Ross Woods, rev. 2018
Your goal in assessment is to gather evidence and make an assessment decision: in short, doing what you planned to do. Previous ebooks told you how to plan the assessment. That included choosing the competency standards and other relevant benchmarks, and interpreting them so they would work in your situation. If you wrote your own assessment tools, you then chose the activities, designed the assessment (taking into account students' needs), and planned any reasonable adjustments.
It’s now time to put all that planning into practice. Gathering evidence is basically carrying out the plan. Use assessment tools for gathering evidence or give them to students so that they can gather evidence themselves (e.g. for a portfolio). Ensure that evidence is organized in a suitable format so that you can assess it. Keep an eye on students during the assessment; they may need help if they get stuck.
Nobody enjoys being assessed. Students are often nervous, fearful, and in need of encouragement. Your attitude affects student performance. If students feel that you have created a friendly, supportive environment, they will perform better than if the environment appears hostile.
Your first step is to establish a working relationship with the student. Of course, you’ll need to maintain it throughout the assessment.
Your attitude is important. You will need to guide and support students and give them encouragement. You will need to be sensitive to any issues arising, including your own role in the relationship. Be humble enough to accept and utilize feedback.
If you observe students a little, you will figure out the best kind of interpersonal approach you need. You will need to develop a professional relationship, but you will still need to be sensitive to students’ individual differences. This is especially the case for members of potentially marginalized groups.
Tip 1: If you do on-job training, you can usually delay the assessment until students are up to speed and ready. Then almost all your students will pass well.

Tip 2: In vocational education, the best way to start an assessment is to get students to show you around their workplace and tell you what they do. You can ask them about anything they don't explain. They're quite comfortable on their home turf and will often tell you most of the things you want to know. Then when you come to the formal assessment, you'll be able to skip anything they've already explained in full.

Tip 3: Students are normally nervous before any assessment, so it's essential that you put them at ease beforehand.
You will also need to discuss the assessment with students and give them full opportunity to ask questions, even if they are shy or reticent.
Your discussion with the student should be two-way, with you being an active listener who asks questions to clarify what was said and confirm that you both understand. Watch out for non-verbal messages as well as verbal messages; make sure you interpret them accurately.
You need an appropriate communication style for the specific context. For example, it should mirror the language used in the context, so that students understand it and feel comfortable with you.
Establish ways to encourage communication and feedback between you and the student; make sure you can both communicate anything necessary. Be careful to avoid monologue explanations. Let the student ask questions and clarify anything that he/she is unsure about. An agreement between you and the student to be honest with each other can benefit you both.
Ask the student about his/her preferences, needs and expectations, and address any inclusivity issues and other potential problems. You may negotiate to achieve an assessment approach that works for both you and the student.
You need to make sure the assessment goes smoothly, saves time and money, and gives the student a fair chance of success. Some assessment authorities are of the opinion that fairness involves fully informing students of the assessment criteria before they are assessed; in that view, it is not enough to tell them verbally or to put a copy on a notice board. Others would want a general picture of expectations, but would not require students to understand the details of the criteria. Either way, it is unfair to assess students without first telling them what is expected.
Make sure that students are informed of expectations before the assessment. In postsecondary education it is normal practice to provide students with written information on the assessment at the beginning of the term or semester, usually in the unit description. Be aware that students may have lost written information given to them at the beginning of the unit.
During the whole process, you will need to:
Take time to explain to students any factors affecting the assessment. Your admissions officer might have handled some of these, but you should explain anything still unclear. The point is that the students should go into the assessment knowing what to expect so that there are no nasty, unfair surprises.
In some cases, the student is not permitted to know some things about the assessment. For example:
The list of assessment factors is quite long, and many might not apply in particular situations. Factors affecting the assessment may be:
You must make an assessment decision that is in line with the standards and procedures that you are using and the evidence that you have gathered. Remember your limitations; you can ask an experienced assessor for help if necessary. It is a judgement call, but the closer the fit between evidence and assessment criteria, the easier it will be.
The dominant factors in making assessment decisions are the assessors' own assumptions and biases.
Making an assessment judgement can be defined as a two-step process:
If you are using a graded assessment system, grading is then a third step after those two.
Assessors are people and have unconscious biases. It's not just everyone else either. It includes you and me.
The best antidotes are:
If you are assessing students whom you taught, formal assessment usually doesn’t bring many surprises. You should have a very good idea of students’ learning effectiveness before you do any formal assessment.
On the other hand, beware of assumptions based on past experience.
Professionalism in assessment requires that you realize when you are biased and adjust your assessment approach. Here are some of the main biases that unconsciously weaken assessment:
- Presuming that students are average. This results in two similar tendencies: using the middle of the group as the pass mark (this is actually norm referencing: the middle 50% pass, the top 25% get distinctions, and the bottom 25% fail), and drifting in leniency. For example, if you assess a large group, you will unconsciously be tough at first and become more lenient as you realise what is average or normal.
- Presuming that a particular student is ordinary, exceptional, or struggling.
- Using yourself as the standard ("I'm good, so I'm the benchmark for others").
- Reluctance to give a not yet competent result to a student who has financially sacrificed to take the course, or whose job (or future job) depends on a competent outcome.
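To see why norm referencing is a bias rather than a standard, here is a hypothetical sketch (the percentile cut-offs are the ones from the example above; the function name and cohorts are invented for illustration). It shows that under norm referencing the same performance can pass in one group and fail in another, which is exactly what criterion-referenced competency assessment is meant to avoid:

```python
# Hypothetical sketch of norm referencing: the result depends on a student's
# rank within the cohort, not on a fixed benchmark. Cut-offs follow the
# example above: bottom 25% fail, middle 50% pass, top 25% get distinctions.

def norm_referenced_result(score, cohort_scores):
    ranked = sorted(cohort_scores)
    rank = ranked.index(score)           # position of this score in the cohort
    percentile = rank / len(ranked)
    if percentile < 0.25:
        return "not yet competent"
    elif percentile < 0.75:
        return "competent"
    else:
        return "distinction"

strong_cohort = [60, 65, 70, 75, 80, 85, 90, 95]
weak_cohort   = [30, 35, 40, 45, 50, 55, 60, 65]

# The same score of 60 gets opposite results in the two cohorts:
print(norm_referenced_result(60, strong_cohort))  # not yet competent
print(norm_referenced_result(60, weak_cohort))    # distinction
```

A criterion-referenced decision, by contrast, would compare the same evidence against the same fixed standard regardless of who else happens to be in the group.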
Many of these can also work in the negative, biasing you to presume that the student is not yet competent. For example, you might presume they’re not yet competent at one thing because you’ve seen that they’re not yet competent at something else.
A great deal of the assessment problem is elimination of doubt, the fuzzy area where the decision could go either way. In the doubt gap, no decision is really defensible:
Clearly competent
The doubt gap: the result could be either competent or not yet competent.
Clearly not yet competent
You are caught between two options. First, if you judge the student competent, could you justify your decision? Second, if you judge the student not yet competent and the student appealed, could you justify your decision? Either way, would someone else independently reach the same conclusion? How would you answer a supervisor or auditor?
You can seldom be absolutely sure. It's always possible to ask for more evidence and then face the problem of over-assessing. Besides, assessment criteria at the higher levels are usually abstractions, which can always be reinterpreted. We only need to be sufficiently sure based on sufficient evidence. So how much is sufficient? Enough for someone else to independently draw the same conclusion from the same evidence.
In a good assessment decision, you can show which side of the gap the student is on, because there is enough evidence to minimize or even eliminate doubt. As we saw, good assessment tools go a long way to put you in this situation.
There is more than one way to view the assessment decision process:
("I know it's a bit thin, but they showed up and tried hard.")
In other words, if you conclude that you have insufficient evidence to make a decision, you must give a not yet competent decision. However, you would normally give the student a chance to supply more evidence.
You can make your assessment decisions more reliable by weighing the risk of making a wrong decision. Most assessments carry some level of risk, especially RPL, where evidence comes from a variety of sources. Besides, students with dodgy credentials really need RPL, because the assessment process will hopefully result in a nationally recognised qualification.
If you assess a person as competent when in fact they are not, there may be legal consequences. If you assess them as not yet competent when in fact they are, it may adversely affect your RTO’s reputation and result in a messy appeal.
Very low risk. The risks are very low if you taught the students yourself and monitored their learning progress.
Low risk. The prima facie risks are low when, for example:
Higher risks. The opposite is also true. Risks are higher when:
To make matters more complex, the risk status might change during the assessment. For example, you might doubt a reference from an obscure organization. Yet the organization might be excellent and highly credible, just that it was previously unknown to you.
Change in risk status can also go the other way. A reference appears very useful, but then you find that the person giving the reference has a possible conflict of interest. You might still count the reference as evidence, but only to corroborate lower-risk evidence items.
As another example, a student might offer a license from an international organization as evidence of competence. So far, it appears to be low risk. But then you find out that the international organization is one person acting alone with no formal recognition other than a registered business name, and that the procedure for getting the license is very weak. That is, the license is worthless as evidence of competence. The assessment is also higher risk because the student has offered such flimsy evidence.
In case of a higher risk assessment, you may simply need to get better evidence to minimize risk. In fact, people who know that their current credentials are weak or poorly recognized are those who most need a successful RPL assessment to gain a recognized qualification.
Verify evidence. It is normal good practice to verify evidence, and necessary if there is any reason to doubt it. Besides an email, phone call or visit, the organization’s website may be helpful.
Get more or better evidence. If the evidence is too risky to clearly demonstrate that the student is competent, you might need to decrease the risk by collecting more evidence. Otherwise you are forced to state clearly why the evidence is high risk (e.g. a reference cannot be verified) and give a not yet competent decision.
A few assessments have a sudden death factor. If students get something essential wrong, they must be assessed as not yet competent, no matter how well they do everything else in the assessment. This is different from a minor error made by a competent student who still has room for improvement.
Here are several examples:
A major contradiction or fallacy is different from a minor error that does not affect the conclusion.
In many kinds of skills, it's only fair to give students a second chance if they didn't demonstrate it the first time. It's basically unfair to write off students who fail on their first attempt, especially if they're very nervous.
This is not a problem in straightforward demonstrations of skill, where classroom scheduling might not be a factor. Supplementary examinations might be possible in other cases.
However, you are not obliged to give students more than one chance in all situations. For example:
Question: What if a student is assessed as not yet competent on the second assessment attempt? Do I then give him/her another chance? How long can this process go on for?
Answer: The student is entitled to a second chance. Even if he/she doesn't get the whole qualification, the college must issue a transcript for any units in which you assess him/her as competent. If you wish, you could give the student a third chance, but you don't have to and he/she is not really entitled to it. You are quite within your rights to say, "This was your second assessment of these units, and you weren't successful. You are welcome to try again, but you'll need to apply and pay again for them."
Students are normally very keen to know how the assessment went, and apprehensive about the result. Their first question is: "Did I pass?"
If they gain a competent result, they might be so relieved that they don't care about your feedback at the time. If you write it down, they can come back to it later.
Feedback to students can include lots of things:
Give feedback as soon as possible after the assessment. It will have more effect and students will remember the correction rather than any mistakes they might have made. The longer you delay, the less notice the student takes.
Otherwise, the timing varies greatly depending on the assessment mode. Assessors can give students immediate feedback on an individual oral examination, but a large pile of written assessments takes days or even weeks to assess.
Feedback should be clear, constructive, and helpful. Be polite and positive, and comment on things well done. Give more positive than negative comments so people feel encouraged rather than crushed. It's good advice to start with positive comments, then give negative comments and conclude with positive comments.
Where possible, this is the most appropriate stage to give the student guidance on further opportunities for training. If there are gaps in the student's abilities, this is a good time to talk to the student about them.
Your task is more difficult if the student has done an assessment task unsatisfactorily. They might be despondent, disappointed, defensive, or angry about a not yet competent result. A negative result may have serious employment ramifications for them (e.g. they lose a promotion or lose their job). If you have bad news, it is your job to give it, and you do have to tell the truth.
Verbal feedback is usually necessary, but it's good practice to write it down too. Giving feedback is often an accreditation requirement, and you may need to prove in an audit that you have done so.
As the assessor, you should get feedback from the student on how they thought the assessment went.
Use the forms that you developed in the planning process. Assessment records are obviously extremely important and must be made at the time of assessment; they cannot be done by memory later on.
Many colleges have their own procedures for records and deadlines for the submissions. Where the law affects assessment (e.g. some kinds of licensing) compliance with legislation should be built into your college's procedures.
You need to:
Here are three good reasons to plan your assessments when you plan the unit:
Planning is better than making it up as you go along. If you are new to teaching, or teaching a unit that you have never taught and assessed before, you might need to adjust the assessments as an on-course redirection. Unfortunately, even the best-planned units sometimes hit the brick wall of reality. However, it is more efficient to change a plan than to have no plan at all.
They sometimes hide important information, especially in the performance criteria and other details. Don't let the (usually) clear, simple layout and boring explanation of the obvious distract you. It is important to get them right. You might want to discuss anything unclear with your supervisor before you go on.
In vocational and professional education, the point is to get the whole job done well. Start with an overall picture of what the students must be able to do. Then you can concern yourself with the minutiae of performance criteria and the like. Some expert assessors even suggest that you don't need to bother students with the performance criteria in the assessment. In theory, they are for assessors to determine how well students perform the elements of competency. (In practice, however, some performance criteria are incorrectly written as stand-alone objectives. This gives rise to the wrong way to build assessment tools, described below.)
The wrong way. Some assessors head straight for the lists of performance criteria. They then try to group them into some new categories to plan assessments. That is, they build upwards from the minutiae. It's like trying to build an elephant out of individual atoms. It ignores the elements, becomes too complex and usually falls in a heap.
Consider what language, literacy and numeracy skills are required for the assessment.
You cannot require written work or language skills (e.g. written assignments) beyond those specified in the competency guidelines. That would make the assessment invalid, because it would require more than the standards do. One vocational college even used pictograms to assess occupational health and safety for people with low levels of language and literacy.
You will occasionally find students with a reading disability or, more rarely, a writing disability. You will generally need to treat these in terms of allowable adjustments, which we'll get to later.
Examine the cost-effectiveness of your assessment:
This is not so much a factor in many human services programs, but becomes huge when materials and equipment are expensive or when plants must be temporarily closed down.
Fairness involves making allowable adjustments so that the student is not disadvantaged. Allowable means that they may not compromise program requirements. Most adjustments incur little or no financial outlay, but do take time, effort and thoughtfulness on your part.
Whatever the case, you need to make sure that your assessment will work for your students. It is your responsibility to make any adjustments and confirm them with co-workers and supervisors.
You may have to adjust the assessment for:
As a reasonable adjustment, you might need to do one of the following:
Some assessment strategies are not particularly flexible. If the skill is to write a report, then the appropriate assessment strategy is a written report. However, the allowable adjustment might be that the report relates to a topic in which they have some expertise. Some competency standards specify what is not allowable as an adjustment.
The most common kind of allowable adjustment comes from context. For example, an urban youth worker in a church might have different assessment needs to a government youth worker in a country town, though both could be assessed using the same youth worker standards and both might be equally competent.
Other examples are:
You can make allowable adjustments for culture and language:
Assessors need to find out whether they need to adapt the assessment to the student's disability. For example:
Some kinds of adjustments suit disturbed or intellectually disabled students:
Under the terms of the Act, a reasonable adjustment for a disability may not cause you "unjustifiable hardship". This is defined in terms of the benefit or detriment likely for the student, the effect of the disability, and the financial cost to you. This means:
You may still assess the student as not yet competent. For example, if a blind person shows up for visual arts, you don't have to pass them.
When qualifications are driven by knowledge and concepts, skills usually require considerable reflection and take longer to develop.
Students' practical assessments for higher qualifications are more difficult than lower qualifications for these reasons:
"It depends on …" Sometimes they must plan for situations that can only be envisaged or forecast.
With thanks to Russell Docking
In vocational and professional education, using local supervisors to gather evidence can cover hundreds or thousands of hours of work, more than an off-site college staff member could gather. They are not qualified assessors, so they are not actually making the assessment judgment, just gathering or providing evidence.
To start, give supervisors an orientation to tell them what to expect. (They have to understand it; doing the job themselves is not so important.)
You will also need to give supervisors ongoing advice and support. Many suffer in silence if they get stuck, waste lots of time, and either over-record or under-record.
Plan to visit or telephone periodically. Ask how they're going and whether they have any questions. They might also be unsure and lack confidence, and simply need to be told that they are doing it right. If you visit, see what they do.
As part of your moderation, you might want to have a co-assessment session during a visit on the job.
You might also need to discuss student performance with the supervisor, which you would record as an interview for extra evidence if the supervisor is averse to writing. The workplace will probably generate other kinds of naturally-occurring evidence such as job descriptions, log books, reports, products, and work evaluations. Obviously, you'll want to put it into a portfolio.
You need to explain clearly to students a considerable amount of information on how assessment will be done. It must be provided before you start assessing, and often before you start teaching. It can be given in different ways:
The kinds of information are as follows:
When you start a new unit, you'll probably need to orally explain the outcomes to students. They often don't understand a written statement of outcomes because they haven’t studied them yet.
You might be well advised to mention several other matters:
With the possible exception of a campus group, you might need to include more information in the assessment plan. This would involve cases such as:
Your college should explain the following to students when they are admitted, but you might find that they fall within your role as instructor or assessor:
Some students get extremely stressed about assessment. Imagine what happens from the student’s viewpoint. The college gives you an official-looking piece of paper describing the difficult things you have to do, the deadlines, the exam with the secret questions, and how you could ask for help after you fail. In fairness, the college meant well; it tried to inform students what's happening: how they'll be assessed, the times of the assessment, and how to appeal. But this kind of communication can simply build up students' dread of what might happen.
Students are considered "educationally wounded" when their bad memories of school negatively affect their ability to learn and to be assessed. They are averse to formal education and anything that looks like school, and generally believe that it sets them up to fail. Consequently, they see assessment as very daunting.
But many of these students are quite able to learn effectively. Some just don't like exams. Many fear failure and feel that the assessment system sets them up to fail. Some have unpleasant memories of school.
As an assessor, students' assessment stress affects what you do in various stages:
Here are several tips to minimize student assessment stress.
When you inform students in writing how they will be assessed, make your written statement simple and easy to understand. Give it out early so it doesn't build fear. It might even help if they lose it. And talk them through it to allay their fears.
When you teach, help students keep focussed on the goals. Verbally let them know what they have to be able to do and how well they have to do it. Most students are trying to get it right.
Integrate learning and assessment seamlessly. You create lots of stress when you mark the assessment time as "The Big Exam".
Use the same kinds of activities for assessment as for teaching.
Just being "competent" isn't always enough. Students might not be ready for assessment even if they can actually perform the competencies. Give them time to build up some confidence. Wait until they are ready for assessment and you are fairly sure that they will pass comfortably.
Try stealth. In many cases, students will pick up a variety of skills through one learning activity. But they'd be terrified if we started by telling them that they'd have to learn all those things. They'd give up before we started. So we give them a learning activity that they think is quite achievable. Then after they've done it and we've assessed it, we say, "Lo and behold! You've learned all these other things as well. You'll just have to get the extra credit for them."
(With thanks to Alison Wright of ASRITC.)
You have more options in vocational and professional education:
The ways in which you assess required knowledge will vary according to the actual item to be assessed:
Make the assessment part of what they will do in their workplaces anyway:
At the end, you get the upside. First, everybody enjoys the assessment much more. Second, when you tell easily stressed students that they did well and have passed, they say, "Is that all? It was easy."
And you can honestly tell them they earned it. You win too. You get an excellent graduate who couldn't have succeeded at your competitor colleges.
The main factor in reviewing what you have done is your personal fear. Nobody likes being reviewed or evaluated, and many people will do anything they can to avoid discussing their weaknesses. If you're new to teaching or training, or if it's the first time you've taught or assessed a particular unit, you'll be painfully aware of your mistakes. But you need to overcome those fears if you are to improve, no matter how good you are.
In practice, especially in a larger institution, a lot of what is done is to collate student feedback forms and your own forms, and to review the results with a responsible person or in a committee. Ideally, your assessment forms should have a space for comments that you can put into the review later on.
At this stage, the main point is to say honestly how you think it went. Whether you taught a class or supervised work-based learning, you'll be painfully aware of what worked and what didn't, and you'll want to change some things next time. Don't forget to analyze your own skills. Don't just describe what you did but evaluate it. You need to answer What do you think needs changing, and how would you change it?
Hint: Make the review easier by making notes of good ideas and changes as you teach and assess. This will make the review more useful, because you'll include lots of things that you would otherwise have forgotten.
You can base your evaluation comments on:
You need to be able to pick up on what went well and what didn't. The specific questions are not standardized but you might ask questions like these:
Identify areas you need to improve on and write them down. Then put the improvements into practice in your teaching and assessment. There's not much point doing a good review if you don't implement the findings.
Your organization probably has an established review process. People in positions of responsibility and your peers might also need to take part in reviewing your work, and you should actively welcome input from students. The review processes their feedback on how well you did, and identifies areas for improvement.
Review processes vary greatly between sectors. Almost all organizations have staff performance reviews, but those reviews might not cover teaching and assessment skills. Many higher and vocational education institutions evaluate teaching and assessment only through student feedback and complaints. In K-12 schools, trainee teachers are always closely supervised, but qualified teachers are supervised only if the Principal, Head of Department, or Superintendent suspects problems. The trend in best practice, however, is to have a system where all teachers are encouraged to improve their skills and the quality of teaching and assessment in the school.
The simplest way to conduct a review is to have a standardized form with clear instructions, and to fill it in fully. Even so, you can never ask every good question.
Check that procedures and systems are working, and suggest changes if they aren't. Even if it's a good system, you might find ways to improve it. This means that you might need to report in writing to those responsible:
It is not acceptable to only report that you’ve thoroughly reviewed the unit or to say that the unit went very well and you wouldn't change anything. (However, good teachers can almost always think of changes they'd make next time.)
When reviewing the effectiveness of work-based learning, make a written record of work performance and learning achievements according to the requirements of your organization.
Feedback goes in two directions: from you to the students (discussed in another e-book) and from the students to you. For reviewing a program, we refer to the students' comments to you on how the teaching and assessment went.
You also need to encourage students to provide critical feedback on their learning experiences so you can evaluate the effectiveness of the learning program and the way it was done. Listen carefully to the student’s comments on how it went. It will make sure that you and the students understand each other and will allay any negative impressions that the students might have had. It will help in your review of the assessment.
Multiple choice student feedback forms are sometimes sarcastically called "happy forms" because they often do little more than ask: "Are you happy?" They are widespread because they are quick and easy to use for both staff and students, and keep a paper trail of feedback. Although they meet any accreditor's requirements as a feedback system, they are often unreliable because people tend to give minimum answers with little thought. Reports vary; some responses are overly optimistic, and some are overly pessimistic.
Otherwise, you are quite free to look at a wide range of methods to evaluate what has been done. You don’t need to use all the methods, just those that are most helpful in your situation. For example, you might choose to interview management, instructors, assessors, and students.
Observing assessments could be most helpful. You might be able to see that a student is badly stuck on a task they find difficult, even though they might be unwilling to say so or mention it in feedback.
Some other aspects of review methods are:
Here’s a list of the kind of things you might need to address in your notes with answers and supporting evidence: