Fact Sheet: Assessment and Assessment Judgement

This Fact Sheet focuses on Outcome Standards 1.3 and 1.4, which require RTOs to maintain fit-for-purpose assessment systems and conduct assessments that enable accurate judgements of competence.

Assessment is the mechanism through which Registered Training Organisations (RTOs) determine whether students have the skills and knowledge required to perform effectively and safely in the workplace as specified in the relevant training product (an AQF qualification, skill set, unit of competency, accredited short course or module).

High-quality, compliant assessment systems not only support student success but also uphold industry confidence and safeguard the integrity of qualifications and statements of attainment issued by the RTO. Assessments that are fair, valid, reliable, and flexible ensure students are job-ready and meet the needs of employers and the broader community.

This Fact Sheet considers:

  • Outcome Standard 1.3, which requires RTOs to demonstrate that their assessment system is fit for purpose and consistent with the training product, that assessment tools are reviewed prior to use, and that the outcomes of those reviews inform any changes to assessment tools.
  • Outcome Standard 1.4, which requires RTOs to demonstrate that assessment is conducted in a way that is fair and appropriate, enabling accurate judgements of student competence.

What is Assessment?

Assessment is defined in the Outcome Standards – Registration Standards 2025 (Outcome Standards) to mean the process by which a WA registered provider, or a third party delivering services on its behalf, collects evidence for the purposes of determining whether a student is competent to perform to the standard specified in the training product.

Principles of Assessment

The Principles of Assessment ensure that assessments accurately and appropriately measure a student's skills and knowledge against the relevant training product, providing a comprehensive and consistent evaluation. The Principles of Assessment are:

Fairness - assessment accommodates the needs of the student, including reasonable adjustments and opportunities for reassessment.

In practice: Assessments should respond to individual student needs by providing clear instructions, outlining student rights such as access to appeals and reassessment, and offering reasonable adjustments where appropriate. These adjustments must maintain the integrity of the assessment process and the training product.

Flexibility - assessment is appropriate to the context, training product and student, and recognises skills and knowledge regardless of how or where they were acquired.

In practice: Assessments should reflect the student’s context and recognise prior learning, whether gained through formal or informal training or work experience. Flexibility may include using varied assessment methods, such as oral, written or practical tasks, and conducting assessments in the workplace when suitable.

Validity - assessment includes practical application components that enable the student to demonstrate the relevant skills and knowledge in a practical setting.

In practice: Assessments must measure what they are intended to measure and include practical application. Students should demonstrate skills and knowledge in realistic or workplace-like settings. Mapping tools help ensure all training product requirements are addressed.

Reliability - assessment outcomes are consistent across assessors and delivery contexts, with evidence interpreted in a comparable way.

In practice: Assessment outcomes should be consistent across assessors and delivery contexts, supported by clear benchmarks, detailed assessor instructions, well-developed tools and regular validation activities.

Rules of Evidence

The Rules of Evidence are the principles used to ensure the evidence collected for assessments justifies the ‘competent’ or ‘not yet competent’ decision. The Rules of Evidence include:

Validity - assessment evidence is adequate, such that the assessor can be reasonably assured that the student possesses the skills and knowledge described in the training product.

In practice: Evidence must align with the training product and clearly demonstrate the student’s competence. It must only reflect what the student is expected to know and do.

Sufficiency - the quality, quantity and relevance of the assessment evidence enables the assessor to make an informed judgement of the student’s competency in the skills and knowledge described in the training product.

In practice: A single piece of evidence is rarely enough. Assessors collect a range of evidence that covers all required skills and knowledge, without relying on minimal or isolated examples.

Authenticity - the assessor is assured that a student’s assessment evidence is the original and genuine work of that student.

In practice: Assessors must verify authorship, especially in unsupervised, online or remote settings. This includes checking for plagiarism, undeclared assistance, or use of generative AI tools.

Currency - the assessment evidence presented to the assessor documents and demonstrates the student’s current skills and knowledge.

In practice: Evidence should show what the student can do now, not what they could do some years ago. Timeframes for currency may vary depending on industry and regulatory expectations.

Assessment Systems and Tools

Assessment system is defined in the Outcome Standards to mean a coordinated set of documented policies, procedures and assessment tools designed to ensure that assessment, including recognition of prior learning, produces consistent and valid judgements of student competency and meets the requirements of that instrument.

Assessment tools are also defined in the Outcome Standards to mean the instruments, instructions and methods used to gather and interpret assessment evidence for the purposes of determining VET student competency, including:

  • the context and conditions of assessment;
  • the tasks to be administered to the VET student;
  • an outline of the assessment evidence to be gathered from the VET student;
  • the criteria used to judge VET student competency; and
  • the administration, recording and reporting requirements for assessments and assessment evidence.

Assessment methods may incorporate items such as:

  • direct observation in real or simulated workplace settings;
  • practical demonstrations, projects, and tasks;
  • written or oral questioning to confirm knowledge;
  • supervisor or observer reports used when direct observation by the assessor is not feasible; and
  • work samples, portfolios, or logs, which show authentic outputs over time.

All assessment tools must be systematically reviewed prior to their use to ensure they:

  • cover all training product requirements;
  • provide clear, unambiguous instructions;
  • reflect current industry practices; and
  • support fair, valid, reliable, and flexible assessment.

Assessment tools must align fully with the elements, performance criteria, performance evidence, knowledge evidence, and assessment conditions specified within the training product. A process for mapping the training product requirements to the assessment tools is one way to demonstrate all requirements are met, support validation and assist with compliance and audit processes.

To ensure validity and authenticity, assessment tools should reflect real workplace tasks and conditions, be tailored to the student cohort without compromising required outcomes, and be validated by industry representatives.

Assessment tools should include plain language instructions, clearly defined expectations and conditions, and accessibility features. Reasonable adjustments must be available where needed to support fairness and inclusion for students with disability.

Using a combination of assessment methods, which may include direct observation, questioning, work samples, and workplace or third party reports, strengthens the assessment decision and allows students to demonstrate competence in diverse ways.

Finally, assessment tools must specify whether tasks are to be completed in a real or simulated environment. Where simulation is used, it must closely replicate real workplace conditions and be justified in line with training product requirements.

If purchasing assessment resources, an RTO must not rely solely on publisher or third party claims of compliance. All assessment tools, whether developed externally or internally, must be reviewed and contextualised to identify and address any gaps, clarify instructions and ensure relevance, and the outcome of the review should be documented, for example in a continuous improvement register, quality system or similar.

Training Products - Requirements and Assessment Conditions

Regardless of the assessment tool used, evidence must demonstrate that the student meets all skills and knowledge requirements outlined in the training product, and evidence must show application in a real or accurately simulated work context.

When designing assessments, RTOs must review the training product to determine:

  • the essential outcomes and the required performance to demonstrate achievement (elements and performance criteria);
  • what must be demonstrated (performance evidence and knowledge evidence);
  • how and where it must be demonstrated (assessment conditions);
  • any mandatory use of workplace settings or client interactions; and
  • required equipment, resources, or assessor qualifications specified in the assessment conditions.

Foundation skills, such as reading, writing and numeracy, must be integrated into assessment design where specified. For example:

  • oral communication may be assessed through client interactions or team discussions; and
  • reading may be assessed through interpreting standard operating procedures (SOPs).

Assessment design should also consider the needs of diverse student cohorts, including culturally appropriate practice, language backgrounds, and accessibility requirements such as large-print materials, screen-reader compatibility and accessible venues.

Contextualisation and Workplace Relevance

Assessment tools must be contextualised to reflect realistic work situations. This includes:

  • aligning tasks and scenarios to current industry standards and practices;
  • tailoring tasks within an assessment tool to reflect specific student cohorts or work contexts without reducing the required standard; and
  • confirming with industry representatives that simulated or practical tasks are relevant and current.

Assessment Conditions – Simulated Environments vs Assessing in the Workplace

The assessment tool must specify whether the assessment tasks are to be completed in a real workplace or a simulated setting. RTOs must review the assessment conditions to confirm simulation is permitted and, if so, validate the realism and appropriateness of simulated tasks with industry representatives and document the rationale, including how it aligns with the training product’s requirements and reflects current workplace realities.

Simulated Environments

Simulation must closely replicate real workplace conditions, including equipment, time constraints, workflow, and interpersonal interactions.

When using simulation, assessors should apply the five Dimensions of Competency to ensure holistic assessment:

  1. Task skills – performing specific tasks or duties required in a particular role or industry.
  2. Task management skills – prioritising and managing multiple tasks.
  3. Contingency management skills – responding to problems, unexpected situations and changes in the workplace.
  4. Workplace environment skills – working within the workplace’s culture and interacting within a team.
  5. Transfer skills – applying skills in other workplaces, with other tasks and using other equipment.

These dimensions help ensure that assessment reflects real-world performance. When assessment occurs in a workplace, these dimensions are demonstrated organically through real work tasks.

Simulation should not be used simply for convenience or to reduce delivery costs. Instead, it should be carefully designed to reflect the complexity and variability of real work situations and to ensure assessment validity and authenticity, enabling students to demonstrate all required skills and knowledge in context.

Assessing in the Workplace

Workplace assessment refers to the evaluation of a student’s skills and knowledge in a real work environment, rather than in a simulated workplace or classroom setting. It involves observing the student performing actual tasks using authentic equipment, processes, and workplace standards.

Further guidance is available in the Fact Sheet: Assessing in the Workplace.

Evidence Collection and Assessment Context

Evidence Collection Methods

Evidence collection must be guided by the requirements of the training product which outline what must be demonstrated, how it must be demonstrated, and under what conditions.

Performance criteria, performance evidence, and foundation skills are generally best assessed through direct observation of tasks, either in a real or simulated environment. In some cases, performance criteria may result in a product or outcome (e.g. a report, spreadsheet, or design) rather than an observable action. When this occurs, the quality and relevance of the product must be evaluated against task requirements to confirm competence.

Knowledge evidence must be assessed explicitly through oral or written questioning. It cannot be inferred from performance alone, as it confirms the student’s understanding of principles, underpinning theory, and required contextual knowledge.

Supplementary evidence, such as logbooks, workplace records or reports, can be used to verify volume or frequency requirements, especially when direct observation is not feasible. These forms of evidence also support authenticity and sufficiency.

Integrated methods like projects, case studies, or practical demonstrations are particularly effective for assessing complex tasks that require decision-making or problem-solving. However, all methods must be selected purposefully and mapped directly to the training product’s elements, performance criteria, evidence requirements, and assessment conditions.

This approach ensures that evidence gathered is valid, sufficient, authentic, and current, and that each requirement of the training product is clearly and comprehensively addressed.

Industry Experts as Evidence Gatherers

Industry experts or workplace supervisors may assist in gathering evidence but must operate under the supervision of a qualified assessor. They cannot make final competency decisions.

More information is available in the Fact Sheet: Trainer and Assessor Requirements.

Making Assessment Decisions

Assessment decisions must be based on professional judgement and supported by evidence that meets the Rules of Evidence and the Principles of Assessment. A student can only be deemed competent when all the requirements of the training product have been met.

When the evidence presented does not fully meet the requirements of the training product, the student must be assessed as Not Yet Competent. Competence should never be assumed or inferred from incomplete or inadequate evidence. In such cases, RTOs should offer opportunities for reassessment or supplementary tasks, in line with their policies and the principle of fairness. Clear, constructive feedback must be provided to the student, outlining what additional demonstration or evidence is required to achieve competence. Any additional evidence collected must be documented, along with an explanation of how it informs the updated assessment decision.

To ensure reliability and fairness across students, assessors, and delivery modes, RTOs should use marking guides, detailed checklists, and clear benchmarks to reduce subjectivity. Assessor participation in validation activities aids consistency, maintains transparency and supports student trust in the assessment process.

Students must have access to a clear appeals process should they disagree with an assessment outcome.

Further information is available in the Fact Sheet: Feedback, Complaints and Appeals.

A Practical Example…

Unit of Competency:  SITXCCS014 Provide service to customers (from the SIT Hospitality Training Package)

A student completes a practical service assessment in a simulated cafe environment. The unit requires the student to:

  • greet and interact with customers in a professional and friendly manner;
  • accurately take orders and use appropriate communication to confirm details; and
  • resolve customer complaints or special requests in line with organisational procedures.

During the assessment, the student successfully greeted customers and took orders but did not appropriately manage a simulated customer complaint. Instead of following the organisation’s escalation procedure, the student offered a personal solution that was outside the cafe’s policy.

The assessor determines that the student has not fully demonstrated competence in handling complaints (one assessment task contributing to the whole unit), and the attempt must therefore be marked Not Satisfactory. As a result:

  • the student is provided with feedback outlining specific gaps in performance;
  • a reassessment opportunity is offered, where the student can repeat the task focusing on correct complaint resolution procedures; and
  • the reassessment is conducted in a controlled, simulated environment, and all outcomes and assessor notes are documented.

Lessons Learnt
Assessors must make clear, evidence-based assessment decisions, either Satisfactory or Not Satisfactory. Where a student is found Not Satisfactory, the decision must be documented, feedback provided, and appropriate opportunities for reassessment offered in line with RTO policy and fairness principles.

Authenticity and Integrity of Evidence

Authenticity is a component of the Rules of Evidence and underpins the credibility of every assessment decision. Assessors must be confident that the evidence submitted is genuinely the student’s own work and reflects their capability.

Maintaining authenticity is increasingly complex in online and remote learning environments. Common risks to authenticity include:

  • submissions completed by another person;
  • undeclared external assistance or contract cheating services;
  • use of generative AI tools to produce content without disclosure; and
  • misrepresentation or false statements in supervisor or workplace reports.

Assessors can adopt multiple, complementary strategies to verify authenticity, such as:

  • Direct Observation – observe students completing tasks in person or via secure live video;
  • Supervised Knowledge Assessments – supervise written or verbal assessments to confirm authorship;
  • Targeted Questioning – ask follow-up questions to confirm understanding and authorship;
  • Supervisor or Observer Reports – use structured templates and verify details directly where needed;
  • Signed Declarations – require students to formally declare their work is their own. Signed declarations should complement, not replace, other verification methods;
  • Digital Evidence Checks – review file metadata, version histories, and logs;
  • Photo or Video Evidence – use dated media showing students performing tasks (noting this form of evidence may not be appropriate for all training products);
  • Online Authentication Tools – consider facial recognition, keystroke dynamics, or IP tracking; and
  • Live Check-ins – schedule real-time interactions during assessment windows.

Evidence from workplace supervisors, mentors, or observers should support, but not replace, the assessor’s professional judgement. RTOs must:

  • brief and support observers before evidence collection;
  • ensure contributions are documented, authenticated, and validated; and
  • confirm that all evidence meets the Rules of Evidence and aligns with training product requirements.

A Practical Example…

Unit of Competency:  CHCECE032 Nurture babies and toddlers

A student submits a reflective journal describing how they settled infants, engaged in play, and communicated with families.

The unit’s assessment conditions specify that key tasks must be observed at least twice, including one occasion directly by the assessor in the workplace (e.g. nappy changing, feeding routines).

Authentication approach - the assessor conducts a direct observation visit to observe core tasks personally. The student’s reflective journal and supervisor reports are used to provide supplementary evidence for other shifts.

The assessor conducts targeted questioning to confirm that the student wrote the journal entries and understands the techniques described.

In this example, authentication strategies (direct observation, questioning, supervisor confirmation) ensure the integrity of the evidence. This approach both confirms compliance with assessment conditions and upholds the rule of authenticity.

Supporting Students during Assessment

Supporting students throughout the assessment process is essential to uphold fairness and flexibility, ensuring every student has a genuine opportunity to demonstrate competence. As outlined in Outcome Standard 1.1, RTOs are required to provide sufficient time for instruction, practice, feedback, and assessment. Students must have opportunities to develop and refine their skills before formal assessment to ensure readiness and fairness.

Before assessment begins, students should receive clear guidance on the tasks, criteria, conditions, and expected outcomes. This includes timelines, required resources, and their rights, such as access to reasonable adjustments, appeals, and reassessment. For unsupervised assessments, students must be informed that verification processes will be used to confirm authenticity.

Reasonable adjustments may be made for students with disability or chronic health conditions. These could include assistive technologies, oral responses instead of written ones, or changes to the physical environment. All adjustments must be documented and justified to ensure they are fair, do not compromise the integrity of the assessment, and remain relevant and applicable to the workplace.

More information is available in the Fact Sheet: Reasonable Adjustment and Inclusive Practice.

To reduce anxiety and build confidence, RTOs should encourage practical activities, formative assessments, and mock assessments. Creating a supportive environment, using plain language and avoiding jargon, helps students feel comfortable asking questions and seeking clarification.

Feedback plays a vital role in supporting student development. It should confirm the outcome, explain the reasoning, highlight strengths, and identify areas for improvement, even when competence is achieved. Regular, constructive feedback promotes transparency and reduces the likelihood of complaints or appeals.

A Practical Example…

Unit of Competency:  SITXFSA005 Use hygienic practices for food safety

A student with a documented vision impairment is undertaking a written knowledge assessment for SITXFSA005 Use hygienic practices for food safety.

The RTO arranges for the student to complete the assessment orally with an assessor, using oral questioning instead of written responses. The assessor records the student’s response to each question.

Extra time is allocated to allow the student to process questions comfortably.

This adjustment is documented in the student’s file, and assessment outcomes are recorded in line with standard processes.

By actively supporting students, RTOs enhance assessment fairness and accessibility, strengthen student confidence, and maintain compliance with Outcome Standards.

Note: Although this strategy addresses issues relating to knowledge assessment, care must be taken to ensure that no other aspects of the unit are affected. In this example, ‘reading’ is a Foundation Skill, necessary for reading regulatory documents and food standards as specified in the assessment conditions. Text-to-speech tools may also be needed for the student to function effectively in the workplace, and these would need to be accessible at all times, such as on a smartphone or tablet.

Assessment Records

RTOs should retain assessment plans and tools, mapping documents, all assessments submitted by a student, assessor feedback, records of reasonable adjustments, appeals, and validation activities.

Compliance Standard 10 of the Compliance Standards and Fit and Proper Person Requirements specifies the records of AQF certification documentation and assessments that must be retained by an RTO, including records of all assessments submitted by a student for a period of two years after the student has completed the training product.

Recognition of Prior Learning

Recognition of Prior Learning (RPL) enables students to have their existing skills, knowledge, and experience formally assessed and recognised. RPL assessment must address Outcome Standards 1.3, 1.4, 1.5 and 1.6.

Information relating to RPL is available in Outcome Standard 1.6 of the TAC Registration Standards 2025 Hub.

Validation and Continuous Improvement

Outcome Standard 1.5 specifies RTO requirements in relation to validation of assessment practices and judgement. 

Refer to the Fact Sheet: Assessment Validation.

Continuous improvement is essential to maintaining the quality and relevance of assessment practices (4.4). RTOs should regularly review assessment strategies using feedback from students, industry, complaints, appeals, and audit outcomes. Identifying gaps in assessor knowledge or practice should inform targeted professional development (3.2). Assessment tools and processes must be updated to reflect evolving workplace expectations and training product requirements, ensuring ongoing compliance and improved student outcomes.

Further information is available in the Fact Sheet: Continuous Improvement.

Risks to Quality

Assessment Planning and System Design

Issues in assessment planning can lead to non-compliance, unfair outcomes, or invalid competency decisions. Addressing these risks early helps strengthen assessment integrity, student outcomes and compliance practices.

Prerequisites not identified or confirmed – students may be enrolled in training products without having completed required prerequisite units or holding the necessary foundation skills. This can result in invalid assessment and safety risks. RTOs must verify that prerequisite units have been awarded before making any competency decisions, and this should be clearly documented in the judgement tool and assessor instructions.

Incomplete consolidation of evidence – assessment decisions must be based on a full set of records, including theory assessments, multiple practical assessments, logbooks confirming workplace hours have been attained, supervisor reports and prerequisite units. If these are not brought together before making a final judgement, decisions may be premature or incorrect. RTOs should use a marking guide or a checklist to support consistent and valid decision-making.

Clustering units – clustering for delivery efficiency is acceptable, but it must still allow students to be assessed and awarded each unit individually. Clustering allows students to integrate tasks that reflect how skills are actually applied in the workplace.

Using formative assessments as summative – formative tasks are designed for practice and feedback, not for final competency decisions. Treating them as summative can result in students being incorrectly deemed competent without sufficient or current evidence.

Over-assessment – adding tasks, skills or knowledge not required by the training product places an unnecessary burden on students, reduces fairness, and compromises validity. Assessment should be scoped strictly to training product requirements.

Purchased resources - must be thoroughly reviewed and contextualised. Off-the-shelf products often contain gaps, irrelevant tasks, or misalignments with local industry needs. RTOs cannot rely on developer claims of compliance - all resources must be checked to ensure they meet training product requirements and suit the student cohort.

Mapping and Alignment

Over-mapping or under-mapping – over-mapping occurs where the same training product requirement, such as performance evidence or knowledge evidence, is mapped across multiple tasks unnecessarily. This does not strengthen reliability; instead, it can confuse assessors, dilute evidence quality, and obscure how competence is actually demonstrated. Conversely, mapping that omits items, including foundation skills or assessment conditions, can leave critical gaps and lead to non-compliance.

Assessing performance through questioning alone – the performance evidence specifies what a student must be able to do in a practical demonstration of skills in context. Mapping these to written or oral questions fails to confirm actual performance and undermines validity. Similarly, including questions that are not directly required to address knowledge evidence introduces unnecessary content and compromises the integrity of the assessment.

Splitting elements across tasks – performance criteria within an element are designed to be assessed together in a holistic, integrated task that reflects realistic work application. Fragmenting these across separate tasks makes it difficult to confirm that the student can perform the requirements together in full and may result in invalid outcomes.

Omitting parts of a unit deemed ‘not relevant’ – omitting components such as specific performance and/or knowledge evidence, or assessment conditions, is a serious risk. RTOs and assessors cannot disregard unit components based on perceived relevance to a student cohort or workplace. If certain tasks cannot be performed in a given context, alternative evidence collection strategies (e.g. simulation or targeted questioning) must be used to ensure full coverage. Selective omission compromises validity, student outcomes, and regulatory compliance.

Evidence Collection Methods and Tools

Ignoring assessment conditions – if requirements for workplace observation, simulation limits, or specific assessor credentials are not met, the assessment outcome may be invalid. RTOs must carefully review and apply all conditions specified in the training product.

Not meeting volume or frequency requirements – many training products specify repetition or range, such as working with different client types or completing tasks multiple times. Failing to assess across this required range undermines the validity of the evidence and the competency decision.

Separating performance evidence from criteria – performance criteria shape and inform assessment tasks and set the required benchmark. Performance evidence is the proof that the standard has been met by the student and outlines the actual evidence assessors must collect to confirm the student is competent.

Weak observation checklists – simply copying and pasting performance criteria into a checklist does not provide sufficient detail for assessors to make informed judgments. Strong checklists should be contextualised to reflect real workplace tasks, include specific observable behaviours, and provide space for detailed assessor comments. Without this, assessments risk inconsistency.

Incomplete or vague records – such as ticking ‘yes’ or ‘competent’ without comments, fail to justify assessment decisions. Detailed, annotated records are essential to demonstrate what was observed, how it was performed, and whether it met the required standard. This supports transparency and strengthens the integrity of the process.

Poor question design - can also undermine assessment quality. Common issues include missing components of the knowledge evidence, ignoring broad contextual headers (e.g. legislation or codes of practice), using overly simplistic questions, and failing to specify the required depth or structure of responses. Questions must be aligned to the notional AQF level and give students clear guidance on how to respond.

Over-reliance on true/false or multiple choice - can result in superficial evidence. These formats limit students’ ability to demonstrate understanding in their own words and may encourage guessing. While they can be used sparingly, they should be complemented by open-ended formats such as short answer or scenario-based questions. Reassessment processes for these types of questions must also be clearly defined; unlimited reattempts without structured feedback can compromise reliability and fairness.

Open-book knowledge assessments – while open-book formats can be appropriate, they must reflect actual workplace access to resources. If not, they reduce assessment rigour and may not accurately assess the student’s independent knowledge or decision-making ability.

Leading instructions or guided responses – instructions that suggest or hint at correct responses compromise validity by coaching students rather than assessing their competence. Assessment tasks must be worded in a way that allows students to demonstrate their own understanding.

Poorly designed simulated environments – when simulations are used, they must closely replicate real workplace conditions, including equipment, time pressures, environmental factors, and typical interactions. Simplified or unrealistic simulations fail to provide evidence of workplace performance. Additionally, simulation must be permitted within the training product’s assessment conditions. If the product requires workplace observation, simulation alone is not sufficient.

Assessment design and practice should not be static – regular review, professional development, industry consultation, and validation activities are essential to ensure assessment systems remain robust, current, and responsive to evolving workplace expectations. RTOs and assessors should foster a culture of continuous improvement to support high-quality student outcomes and maintain regulatory compliance.

Feedback, Judgement, and Records

Lack of clear benchmarks or exemplar answers – leads to unreliable and subjective assessment decisions. For knowledge assessments, benchmarks define what constitutes a satisfactory response, highlighting key points, required detail, and the expected depth or breadth. For practical tasks, benchmarks describe observable behaviours, quality standards, and expected outcomes, such as correct techniques or safety protocols. While exemplars are useful as reference models, assessors must recognise what is satisfactory, not just what is perfect, so that students are not unfairly penalised for minor variations that do not affect competence.

Inconsistent assessor judgements – can occur due to unclear marking standards or varied approaches across assessors, and undermine fairness and confidence in outcomes. Clear benchmarks and regular moderation activities are essential to support reliability.

Supervisor or workplace observer reports – can be problematic if they lack detail. High-quality reports include specific examples of tasks performed, contextual details (e.g. client types or environments), frequency and consistency of performance, and comments on safety and adherence to procedures. Importantly, observers must not make competency decisions; their role is to provide factual accounts that support the assessor’s judgement. Assessors remain responsible for verifying the credibility and sufficiency of this evidence.

Insufficient feedback to students - affects fairness in assessment. Without meaningful feedback, students cannot understand their results, correct errors, or improve performance. Feedback should confirm the outcome, explain the reasoning, highlight strengths, and identify areas for development, even when competence is achieved.

Incorrect marking keys or answer guides – guides that reference superseded legislation or contain incorrect marking keys can result in students being wrongly assessed, affecting fairness and the credibility of qualifications. Regular review and updating of marking guides and reference materials is essential to ensure assessments remain valid and current.

Authenticity and Integrity

Group assessments – while they can be valuable for developing teamwork and simulating workplace collaboration, each student’s competence must still be individually assessed and evidenced. Relying solely on collective outputs fails to confirm whether each student has achieved the required skills and knowledge. Assessment design must ensure that individual contributions are clearly captured, through separate questioning, individual observation notes, or documented roles within group tasks.

Failure to identify issues in pre-use review – if a thorough review is not conducted, issues such as missing training product requirements, unclear instructions, or poorly designed tasks can go undetected, leading to invalid assessments and compromised student outcomes. RTOs must not rely on publisher or developer claims of compliance. Every tool must be reviewed to confirm alignment with training product requirements, support valid and reliable evidence collection, and provide clear, accurate guidance for both students and assessors.
