The second layer of compliance

Ross Woods, 2020

This paper discusses the evidence that is used in a compliance audit for accreditation purposes. The first layer of compliance refers to requirements that are clearly specified in the accreditation standards. In contrast, the Second Layer of Compliance (SLC) is the collective term for items that are not specified in the standards but which an auditor might require in order to deem that an institution complies with the standards.

This example refers to audits against the Standards for Registered Training Organizations 2015, which are overseen by the Australian Skills Quality Authority (ASQA). (In US terminology, ASQA is an accreditation agency.) The standards are intended to suit equally well different kinds and sizes of institutions, and qualifications from the lowest certificate to graduate qualifications. A few standards are quite explicit and prescriptive, while others are general statements where the institution must choose the way in which it demonstrates compliance.

Evidence has two meanings. It refers to anything that an institution provides to auditors to demonstrate that it has complied with the standards. It can also refer to anything that an auditor might require to deem an institution compliant with the standards. In most cases, these refer to the same things, but opinions can vary. Evidence clearly includes documents. In some audit standards, however, it also includes interviews. The ASQA audit standard* is silent on the matter.

For example, the standard might require items A, B, and C to be documented, but not D, E, and F, which must be consistently done but for which documentation is not required. However, an auditor might require that all of them be documented. As a second example, institutions normally keep a mapping document that compares the specific requirements of assessment tools with the mandatory competency standards. The standards do not specifically require a mapping statement, but it is the only realistic way of comparing assessment tools to competency standards, and it may take many forms. A third example is that institutions must have certain equipment, but are not required to have written equipment lists. A fourth example is the Training and Assessment Strategy, which describes the way in which the institution will conduct training and assessment.
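To make the idea of a mapping concrete, here is a minimal sketch in Python of one form a mapping might take: each performance criterion of a unit of competency is linked to the assessment tasks that claim to evidence it, and unmapped criteria are reported as gaps. The criteria and task names are invented for illustration and are not drawn from any actual training package.

# Mapping: performance criterion -> assessment tasks claimed to evidence it.
# All criteria and task names below are illustrative, not from a real unit.
mapping = {
    "PC1.1 Identify workplace hazards": ["Task 1 written questions", "Task 3 observation"],
    "PC1.2 Report hazards to the supervisor": ["Task 3 observation"],
    "PC2.1 Follow safe work procedures": [],  # gap: no task covers this criterion
}

def find_gaps(mapping):
    """Return the criteria that no assessment task claims to cover."""
    return [criterion for criterion, tasks in mapping.items() if not tasks]

for criterion in find_gaps(mapping):
    print("Gap: no assessment task mapped to " + criterion)

In practice the same information is often kept as a simple table; the point is that the mapping, whatever its form, makes gaps visible.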

ASQA auditors usually test SLC using questions such as: “How do you know?” “What checking did you do to confirm it?” “How do you meet this requirement?” The latter might mean “What are your procedures or practices for this?” and “What documentation do you have?”

A continuum

SLC can probably be placed on a continuum from least contentious to most contentious. At the least contentious end of the continuum are items that all institutions routinely do without questioning that they are required. The best examples are Training and Assessment Strategies and assessment tool mapping.

At the most contentious end of the continuum are the kinds of evidence that auditors can only require by brute force; an unfavorable report can result in the loss of ASQA registration. The best examples are specific requirements that were once mandatory but were deliberately omitted from later versions of the standards, such as the nature of internal audits and the verification of staff qualifications. If an institution is deemed non-compliant on one of these more contentious items, it has a stronger case if it chooses to appeal the auditor's findings.

The standards do not actually require some kinds of evidence, but auditors always do. For example, the standards do not mention assessment tool mapping at all, only that assessments comply with the competency standards. However, auditors require mapping because they assume a right to require auditees to show how they comply with the standards. In other words, some items are not requirements, but simply make the auditor's job easier. The ASQA audit standard* places the onus on auditors: “The role of auditors and accreditation assessors is to determine whether the RTO, applicant or course owner has complied with the requirements of the relevant standards, based on the evidence provided.”

The more prescriptive the standards are, the easier it is for both auditors and institutions to differentiate between compliant and non-compliant. The opposite is also true: the less prescriptive the standards are, the more difficult it is for either the auditor or the institution to make the differentiation.

In the middle is a range of other kinds of evidence:

  1. How clear, complete, and unambiguous do written notes have to be?
  2. The competency standards must be applied to the particular context of each student cohort, called contextualization. How far can they be contextualized before they are invalid, that is, they no longer accurately reflect the intent and purpose of the unit?
  3. Must every activity relating to compliance requirements be documented, not just those things that the standards explicitly require to be documented?
  4. If someone from central office visits a site, are they required to keep full records of the visit?
  5. Can auditors include in the audit any activities of the past, not just those current at the time of the audit?
  6. Are interviews acceptable as evidence in an audit?
  7. Must the system of compliance be described in a separate written document?
  8. If evidence is documented, does it still need to be confirmed? Or did people create documents specifically to pass the audit?
  9. When the institution provides pre-admission information to prospective students, is the institution required to get each student to sign a document stating that he/she has received it?
  10. When the institution provides samples of evidence in an audit, how many are enough?

The shades of gray

At what point along the continuum is it reasonable for the auditor to require evidence of a particular kind?

First, auditors can almost always ask for more evidence of compliance. They seldom, if ever, have a standard for what counts as enough. Put another way, auditors can too easily say “not enough evidence” no matter how much evidence they have and how good it is. In this sense, a finding of insufficiency rests on a missing premise: the auditor has no criteria for differentiating between insufficient and sufficient evidence.

Second, auditors do not always clearly differentiate between non-compliance, error, risk, and opportunity for improvement. The value of these distinctions is that errors, risks, and opportunities for improvement can all look like non-compliances, so it is easy for auditors to incorrectly allege a non-compliance.

Non-compliant: negligence, intentional dishonesty, or system failure.
Error: a typographical error or a minor oversight.
Risk: something that could foreseeably go wrong, but hasn’t. Risk levels range from moderate to severe.
Opportunity for improvement: meets the requirement but could improve in some identified way.

Third, ASQA’s audit standards state that auditors may not prescribe the specific form that evidence may take.* This allows the institution to choose the way in which it will demonstrate compliance, but it also gives auditors more freedom to decide whether it is compliant or non-compliant.

To at least some extent, the line shifts according to the nature of the organization. The more complex an institution is, the more it must document its activities. A very large organization or an organization with many franchisee sites needs more documentation than a small, all-on-one-site institution with few personnel. In theory, a high-risk activity might require more documentation than a low-risk activity.

What can institutions do to be safe?

Being safe has various aspects. The first is to be deemed standard-compliant at an audit. An institution’s first solution is to build a fortress of paperwork, documenting all aspects of compliance-related activity. It should also have a system of checking; a second set of eyes usually sees things differently and can identify gaps, errors, and possible misunderstandings. This also involves developing effective systems to ensure compliance. For example, assessment tools are difficult to check and to improve if their formats are very different. It would be better to require that all assessment tools follow an agreed-upon template for layout and mapping.

Being safe also means developing cost-efficient systems; a cost blow-out would endanger the institution. Compliance costs have clearly risen, and institutions need to itemize and budget them, and set fees to cover them. However, it is also possible to computerize many processes, and the limits of computerization have yet to be defined exactly. So far, a process cannot be computerized if it cannot be expressed as an algorithm and depends on human judgement. The change, however, is that many tasks once thought to require human judgement can increasingly be expressed as algorithms and computerized.
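As an illustration of where that line might sit, here is a minimal sketch in Python of a compliance check split into a rule-based part that can be computerized and a part that is merely flagged for human review. The record fields and the rules themselves are assumptions made for illustration; they are not ASQA requirements.

# A rule that reduces to yes/no can be computerized; a judgement call is only flagged.
# All field names and rules below are illustrative assumptions, not ASQA requirements.
student_records = [
    {"name": "Student A", "pre_admission_info_acknowledged": True,
     "assessment_judgement_reviewed": False},
    {"name": "Student B", "pre_admission_info_acknowledged": False,
     "assessment_judgement_reviewed": True},
]

def automated_checks(record):
    """Checks that reduce to simple yes/no rules, and so can be computerized."""
    issues = []
    if not record["pre_admission_info_acknowledged"]:
        issues.append("no acknowledgement of pre-admission information on file")
    return issues

def needs_human_review(record):
    """Items that still depend on professional judgement are flagged, not decided."""
    return not record["assessment_judgement_reviewed"]

for record in student_records:
    for issue in automated_checks(record):
        print(record["name"] + ": " + issue)
    if needs_human_review(record):
        print(record["name"] + ": assessment judgement not yet reviewed; refer to an assessor")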

Being safe also means maintaining effective teaching and learning. Giving some students large amounts of information in unintelligible TrainingSpeak probably confuses them more than helps them, especially if they have to sign a formal document stating that they received and understood it.

 

Reference
* Code of Practice: ASQA Auditors and Course Accreditation Assessors, n.d.