Combination: Distance Education and Recognition of Prior Learning

The purpose of this approach is to provide accreditation for overseas students anywhere, even in other languages, at minimal per-student cost. With the right software, it can handle very large numbers of students. It does this by combining Distance Education with Recognition of Prior Learning (RPL).

In brief, local evidence gatherers assemble evidence for an RPL assessment. The approach's chief characteristics are:

This approach avoids all the normal difficulties:

It is permissible under the SNR but is not yet common. Auditing is quite simple because it is distance education with assessment done at the RTO office. The evidence should be well documented and able to withstand audit. The training itself is not audited because it is not an activity of the RTO. The main critical success factors are very good evidence-gathering tools and trustworthy on-site personnel.


Forms for collecting evidence

Write special forms to collect evidence; these are mostly instructions and checklists of observations or of the completion of specific tasks. Some forms simply record identifying information and reports that the student did the task: how many times, when and where they did it, and whether all performance criteria were met. Others might include job descriptions and references. Checklist forms can be bilingual, with the local language for on-site people to use and the English version for administrators and auditors in Australia. However, assessors and auditors want to see at least some sections of free text where they can read, in English, a description of the student and observation notes on what he or she has done. You will also need a form for collecting student feedback.
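To see how such a form might look in an electronic system, here is a minimal sketch of a data model in Python; the class and field names (ChecklistItem, EvidenceForm, all_criteria_met, and so on) are illustrative inventions, not part of any prescribed system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChecklistItem:
    """One observable task, labelled bilingually: the local language
    for on-site evidence gatherers, English for administrators and
    auditors in Australia."""
    label_en: str
    label_local: str
    times_observed: int = 0
    all_criteria_met: bool = False

@dataclass
class EvidenceForm:
    """Identifying information plus the observations for one student."""
    student_name: str
    student_id: str
    gatherer_name: str
    location: str
    date_observed: date
    tasks: list[ChecklistItem] = field(default_factory=list)
    # Free-text sections kept in English so assessors and auditors
    # can read a description of the student and what he or she did.
    student_description_en: str = ""
    observation_notes_en: str = ""
```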

Although some endorsed units are fairly easy to use, it is easier to write forms for an accredited course that you own, because you can write the units to be easily assessable as observed tasks.


Consultation and stakeholder feedback

Different groups may need to be consulted in different ways. Email is probably adequate for Westerners, but in some non-Western cultures, a personal discussion may elicit the most honest answers.

During consultation, ask whether any necessary descriptors of performance have been left out so far; these may need to be added to the tools. Clear examples may be necessary. It would be best to keep the forms the same for all students, but local factors might necessitate tailor-made tools. Check for situations where assessment might not be fair, honest, or equitable.

Then polish the tools and see whether they work with a wider group of core stakeholders. It is essential to get the assessment forms right before you use them with large numbers of students.

These questions are based on Indonesian experiences, but may be appropriate for many non-western situations:

  1. What organizational structures affect what students are permitted to do? Is initiative valued or not?
  2. What interpersonal factors will be relevant to interpreting references? (e.g. personal relationships with key leaders)
  3. What geographical factors will be relevant? Will some people be too remote to gain suitable references?
  4. What moral and personal factors affect the assessment (e.g. moral failure, dissension, etc.)?
  5. How will we measure effectiveness? (It is not just that they do the tasks, but that they achieve suitable levels of effectiveness.)
  6. What conflicts of interest will we face? (e.g. organizations made up of extended family members)
  7. What risk of corruption will there be? (e.g. payment for references)
  8. What motives will drive students to take the qualification?
  9. What reluctance is there to recognize members of some groups equitably? (e.g. prejudice against some ethnic or social groups or against the disabled, or prejudice in favor of some groups, such as the highly educated or other higher socio-economic groups)
  10. Will people be embarrassed to get only some units rather than the whole qualification? Will referees be embarrassed to give a reference for only some units? Is there freedom to say that the students did not do well in all units?
  11. Can the existing courses be taken at different levels? If so, how do you define "different levels"?


Distributing assessment tools

The central office normally provides up-to-date master copies of tools on its staff website. However, local representatives might need to keep them in hard copy or on CD, especially in countries that do not have reliable Internet access, or where tools are adapted (e.g. by the addition of local languages). In small programs, simple Word forms can be emailed, but large programs generally require a website interface backed by an electronic database.
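As a rough illustration of the electronic database behind such an interface, here is a minimal sketch using Python's built-in sqlite3 module; the table layout, and the version column that lets representatives check whether their hard copies or CDs are still current, are assumptions rather than a prescribed design.

```python
import sqlite3

# Minimal store for master copies of tools and submitted forms.
# A real program would put a web interface with authentication
# for local representatives in front of this.
conn = sqlite3.connect("rpl_tools.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS tool (
    tool_id   TEXT,
    version   INTEGER,   -- lets reps check hard copies/CDs are current
    language  TEXT,      -- 'en' or a local-language adaptation
    content   TEXT,
    PRIMARY KEY (tool_id, version, language)
);
CREATE TABLE IF NOT EXISTS submission (
    student_id  TEXT,
    tool_id     TEXT,
    submitted   TEXT,    -- ISO date
    form_json   TEXT     -- the completed evidence form
);
""")

def latest_version(tool_id: str, language: str = "en"):
    """Report the current master version of a tool, so local copies
    can be checked against it."""
    row = conn.execute(
        "SELECT MAX(version) FROM tool WHERE tool_id = ? AND language = ?",
        (tool_id, language),
    ).fetchone()
    return row[0]
```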


The onsite person: Applications and evidence-gathering

You need a trustworthy onsite person, who will need some training in gathering evidence. Regular communication is needed to keep the system on track; it can easily fall down if on-site personnel misunderstand or bend the rules. (Student bribes can be a factor in some cases.)

If students use another language and do not speak English, the onsite person needs to speak that language. Onsite people don't need to live locally; they can be international travelers, as long as they can do the job.

At the application stage, they oversee applications:

They also collect evidence for assessment:


Doing the assessments

Do the assessments here in Australia, preferably at the RTO office. The assessor at the RTO checks that the portfolio is correct, which is straightforward; since the assessment mostly involves reviewing forms, it is quick. If Australian VET assessment procedures allowed it, much of the assessment could be done automatically by software, but the standards require that assessors be free to exercise judgement in assessment.
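As an illustration of that dividing line, here is a minimal sketch, reusing the hypothetical EvidenceForm model from earlier, of what the software could legitimately do: completeness checks that flag gaps for the assessor's attention, while the competency decision itself stays with the assessor.

```python
def precheck(form: "EvidenceForm") -> list[str]:
    """Flag gaps in a portfolio for the assessor's attention.
    This deliberately stops short of making the assessment
    decision: the assessor must remain free to exercise judgement."""
    issues = []
    if not form.observation_notes_en.strip():
        issues.append("No English observation notes")
    for task in form.tasks:
        if task.times_observed == 0:
            issues.append(f"Task never observed: {task.label_en}")
        elif not task.all_criteria_met:
            issues.append(f"Criteria not all met: {task.label_en}")
    return issues  # an empty list means 'ready for the assessor to judge'
```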

Cultural and contextual factors may need discussion with the on-site person, but the decision should be easy. Validation is done at the RTO office, and its main value will most likely be to improve the evidence-gathering materials.

The final recording form that the assessor uses to collate evidence and record an assessment decision can be a hard-copy form or an electronic interface.
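If the electronic route is taken, the recording form might be modelled along these lines; again a sketch only, with illustrative field names.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AssessmentRecord:
    """Collates references to the evidence and records the decision."""
    student_id: str
    unit_code: str
    evidence_refs: list[str]   # e.g. identifiers of forms in the database
    decision: str              # e.g. 'Competent' / 'Not yet competent'
    assessor_name: str
    date_assessed: date
    judgement_notes: str = ""  # the assessor's reasoning, kept for audit
```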


The hard bits

  1. Required knowledge is difficult to assess. Some of it can be assessed as part of the tasks; otherwise, try multiple-choice questions. They are difficult to write well and need validation, but they are good for mass numbers (see the sketch after this list).
  2. Evidence gatherers easily feel that they are making the assessment decision, and sometimes they de facto make it.
  3. Evidence gatherers ask, "What's the standard?" This can mean that the forms aren't clear enough about what to expect; try changing the task descriptions or adding performance criteria. Alternatively, perhaps the forms are clear enough and the evidence gatherers need to discuss their interpretations: the barriers to understanding may lie in their own assumptions.
  4. How can you realistically give students a right of appeal against assessment outcomes, especially if they are accustomed to powerlessness?
  5. Getting useful feedback from the assessment process can be difficult. It can be done:
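Returning to point 1: multiple-choice questions scale well precisely because scoring can be automated, and simple item statistics help with validation. A minimal sketch, assuming answers are held as plain question-to-letter mappings (the data layout is an assumption):

```python
def score(answer_key: dict[str, str], responses: dict[str, str]) -> float:
    """Proportion of questions one student answered correctly."""
    correct = sum(1 for q, a in answer_key.items() if responses.get(q) == a)
    return correct / len(answer_key)

def item_difficulty(answer_key: dict[str, str],
                    all_responses: list[dict[str, str]]) -> dict[str, float]:
    """Proportion of students answering each item correctly. Items
    near 0.0 or 1.0 discriminate poorly and are candidates for
    rewriting during validation."""
    return {
        q: sum(1 for r in all_responses if r.get(q) == a) / len(all_responses)
        for q, a in answer_key.items()
    }

# Illustrative usage with hypothetical data:
key = {"Q1": "B", "Q2": "D"}
students = [{"Q1": "B", "Q2": "D"}, {"Q1": "B", "Q2": "A"}]
print(score(key, students[0]))         # 1.0
print(item_difficulty(key, students))  # {'Q1': 1.0, 'Q2': 0.5}
```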