Evaluate

A guide for public sector agencies to develop and implement formal code of conduct training.

Evaluate as part of continuous improvement to identify:

  • whether training is achieving learning objectives
  • if training meets learner needs and expectations
  • knowledge and skills learners gain through training
  • how training is being applied.

Use evaluation to improve training in terms of design, content and delivery.

Feedback questionnaires

Questionnaires are a typical way of evaluating training. Some questionnaires measure learners’ initial reaction to the training, for example “Did you enjoy the training?”, “Did the training meet your expectations?” and “Were you comfortable in your surroundings?” Some may also attempt to test learning, for example “How much did you know about the topic before versus after the training?”

When developing a questionnaire, be mindful of how much time learners have to respond. Questionnaires provided at the end of the training may be completed quickly without much consideration, especially if overly long. Consider having the questionnaire completed during the training or at set intervals, allowing for more thoughtful responses. Alternatively, use a 2-step feedback process: an initial point-in-time questionnaire followed by a more detailed questionnaire in 3 months’ time to test how learners have transferred their knowledge to the workplace.

Develop questions

Questions can elicit quantitative data (“how many”, “how much” and “how often”) and qualitative data (“what type”). Questionnaires usually elicit a combination of the two, and both types of data are valuable.

Quantitative questions are generally closed, so responses can be counted and reported, for example “70% of learners said the resources were well presented.” 

Qualitative questions are usually open and therefore more difficult to analyse and draw conclusions from, as responses need categorising or ‘coding’. For example, “What did you like about the resources?” elicits a different response from each learner. These responses need to be analysed so improvements can be made. 
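Both kinds of analysis above can be sketched in a few lines of Python. The response data, the closed question and the coding categories below are all hypothetical, invented purely for illustration; in practice, the coded categories would come from a manual review of the free-text answers.

```python
from collections import Counter

# Hypothetical data: "yes"/"no" answers to a closed question, and free-text
# answers already coded into themes by a reviewer (all invented for illustration).
closed_responses = ["yes", "yes", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
coded_responses = ["examples", "layout", "examples", "handouts", "layout", "examples"]

def percent_yes(responses):
    """Share of 'yes' answers, reported as a whole-number percentage."""
    return round(100 * responses.count("yes") / len(responses))

# Closed question: count and report, e.g. "70% of learners said yes".
yes_rate = percent_yes(closed_responses)

# Open question: tally the coded themes to surface what was mentioned most.
theme_counts = Counter(coded_responses)
```

The closed question reduces to a single reportable figure, while the open question yields a ranked list of themes that still needs interpretation before improvements can be made.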

Sample questions 

“Yes” or “No” questions (nominal scale; frequency of responses can be counted)

  • Content was what I expected/relevant to my work.
  • Slides were organised logically.
  • Photos, tables and graphics were related to the topic.
  • Resources were well presented and supported the learning.
  • The facilitator:
    • encouraged questions
    • answered questions fully
    • encouraged learners to share experiences.

Likert scale questions (interval scale; frequency of responses can be counted, provides more nuanced responses than yes/no)

  • How engaged were you with the training?
  1. It didn’t keep my attention at all.
  2. It kept my attention some of the time.
  3. It kept my attention most of the time.
  4. It kept my attention the entire time.
  5. Don’t know.
  • The purpose of this training was to [add purpose/learning outcome here]. How well do you think it achieved its purpose? 
    Rate from 1 (not at all) to 5 (completely).
  • Rate how satisfied you were with the training, from 1 (very dissatisfied) through 3 (neither satisfied nor dissatisfied) to 5 (very satisfied).
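Counting and averaging Likert responses takes only a few lines. The ratings below are invented for illustration; “Don’t know” answers (recorded here as None) are excluded from the summary because they are not part of the 1–5 interval scale.

```python
from collections import Counter
from statistics import mean

# Hypothetical Likert responses on a 1-5 scale; None stands in for
# "Don't know", which is excluded from the interval-scale summary.
responses = [4, 5, 3, 4, None, 5, 4, 2, None, 4, 4]

scored = [r for r in responses if r is not None]  # drop "Don't know"
frequencies = Counter(scored)                     # count each rating
average_rating = round(mean(scored), 1)           # headline figure
```

Report the frequency of each rating alongside the average: an average near 4 where most answers cluster at 4 reads very differently from the same average produced by a split between 2s and 5s.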

Recall questions (open, qualitative; tests immediate learning transfer)

  • List 3 of the risk areas covered in our code of conduct.
  • Name 2 ways to report a suspected breach of our code of conduct. 

Process questions (open, qualitative; tests higher-order thinking, asks for an opinion)

Complete this sentence:

  • I learned…
  • The thing that was most/least helpful was…
  • The one thing I would improve is…
  • I will go back to my team and share…
  • Three things I will implement/change are…

Recall and process questions can be used during the training (as a quiz or reflection activity), immediately after, or emailed to learners in 3 months’ time to ask whether they have done what they committed to after the training.

Other evaluation criteria

Feedback tools provide some indication of the effectiveness of training. Effectiveness can also be measured by gauging learners’ involvement in the training and their participation in group discussions, activities and case studies. These are qualitative assessments and rely on the facilitator’s perceptions and judgements rather than direct feedback from learners.

Other indicators of successful training include:

  • learners implementing action plans or other ‘homework’ activities set during the training (evidenced by completion rates, feedback on implementation)
  • evidence of increased integrity awareness in staff culture/perception surveys
  • trend analysis showing an increase after training in discipline and misconduct cases, declarations of conflicts of interest, and secondary employment applications.

Be careful about attributing any of these indicators directly to the training. Rather, they provide a picture of how useful the training has been and how it contributes to wider efforts to promote understanding of the code.

Reporting on evaluation results

If time has been taken to collect and analyse feedback, report results to relevant parties (e.g. senior leadership team) in a timely way.
