How to develop a detection system

A guide to help public authorities develop and implement detection systems to prevent misconduct and corruption.

Stage 1: Decide

At this stage:

  • identify the need to implement or strengthen a detection system based on the authority’s remit, size, people, processes and risk (for example, risk appetite and emerging risks)
  • consult with functional areas to determine the system or approach required.

Steps

  • Consult with relevant functional areas, particularly those identified as being more vulnerable to integrity risks. Consult externally for additional advice and assistance if required.
  • Perform a risk analysis to identify high-risk functions and activities that might expose the authority to misconduct or corruption (a simple scoring approach is sketched after this list).
  • Develop a business case to establish a detection system. Focus on how detection activities target and mitigate identified integrity risks. Encourage buy-in from the senior leadership team.
  • Identify existing internal capability, capacity and resources to develop, implement and support the detection system.
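
As a minimal sketch only, the following Python shows one way a risk analysis might rank functions by likelihood and impact. The functions, ratings and scoring scale are illustrative assumptions, not drawn from the guide or the case studies.

```python
# Minimal sketch: rank functions by integrity risk (likelihood x impact).
# The functions and ratings below are illustrative assumptions only.
RATINGS = {"low": 1, "medium": 2, "high": 3}

functions = [
    {"name": "procurement",        "likelihood": "high",   "impact": "high"},
    {"name": "payroll",            "likelihood": "medium", "impact": "high"},
    {"name": "recruitment",        "likelihood": "medium", "impact": "medium"},
    {"name": "records management", "likelihood": "low",    "impact": "medium"},
]

for f in functions:
    f["score"] = RATINGS[f["likelihood"]] * RATINGS[f["impact"]]

# The highest-scoring functions are candidates for targeted detection activities.
for f in sorted(functions, key=lambda f: f["score"], reverse=True):
    print(f'{f["name"]:<20} likelihood={f["likelihood"]:<8} impact={f["impact"]:<8} score={f["score"]}')
```

The ranked output can feed directly into the business case, showing which functions the detection activities would target first.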

Relevant case studies

All authorities reported they consulted internally and externally where required to develop and implement their detection systems.

All authorities reported functional areas were engaged to identify the data sources required to inform their detection systems.

As part of its decision-making process, Authority B performed due diligence to understand potential flaws in legacy systems that could affect the detection system.

Authority D outlined early in the process what changes were to be made when implementing its detection system and the impact of those changes on the business.

Stage 2: Design

At this stage, design the detection system taking into consideration:

  • purpose
  • data and information available
  • existing or new capability and capacity needed
  • tools and resources required to support the system.

Steps

  • Understand what is to be detected and determine what data and information can inform that query.
  • Confirm governance arrangements and internal processes for how the detection system is to be managed and overseen, and by whom. Determine how the detection system will interact with existing governance arrangements, such as the role of audit.
  • Collect and capture data from appropriate internal and external sources, and perform quality assurance over these inputs.
  • Design the analysis based on data availability, quality and capture (a simple example is sketched after this list).
  • Confirm tools and outputs required such as dashboards and reports. Factor these into the design of the system.
  • Start internal communications with staff about the detection system, such as what it is, its benefits to the business and their role in it. Be mindful of providing too much detail about how the system works, as this could allow staff to deliberately avoid detection.
  • Understand the specific skills and capabilities needed to run the detection system. Identify any skills gaps that may need to be addressed before the system is implemented.
  • Test the system and address issues before deployment.
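
As a minimal sketch only, the following Python shows how quality assurance over inputs and a simple exception test might fit together. The payment fields, records and duplicate test are illustrative assumptions, not drawn from the guide or the case studies.

```python
# Minimal sketch: quality-assure payment records, then run a simple exception test.
# Fields, records and the duplicate test are illustrative assumptions only.
from collections import Counter

payments = [
    {"invoice": "INV-1001", "supplier": "Acme Pty Ltd", "amount": 4950.00},
    {"invoice": "INV-1001", "supplier": "Acme Pty Ltd", "amount": 4950.00},  # possible duplicate
    {"invoice": "INV-1002", "supplier": "Beta Supplies", "amount": None},    # fails quality check
]

# Quality assurance: set aside records with missing or non-positive amounts.
valid, rejected = [], []
for p in payments:
    if isinstance(p["amount"], (int, float)) and p["amount"] > 0:
        valid.append(p)
    else:
        rejected.append(p)

# Exception test: the same supplier, invoice and amount appearing more than once.
counts = Counter((p["supplier"], p["invoice"], p["amount"]) for p in valid)
exceptions = [p for p in valid if counts[(p["supplier"], p["invoice"], p["amount"])] > 1]

print(f"{len(rejected)} record(s) failed quality checks; "
      f"{len(exceptions)} record(s) flagged for validation")
```

Flagged records are candidates for validation, not proof of misconduct; the design stage determines which tests are worth running given the data actually available.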

Relevant case studies

Authority A commenced the design stage by establishing a governance team for the project. It consulted with in-house experts to conduct a gap analysis of current practices against its own governance and integrity best practice standards.

Authority C used internal resources to design its system, basing the design on internal and external research and intelligence gathering. During the design phase, it gathered feedback from internal functional areas that would be ‘owners’ of the analytics. It also gathered information from external data analytics experts before progressing to deployment.

Authority D used internal resources to design its system and relied on in-house technical support from its information technology area.

Stage 3: Deploy

At this stage:

  • implement the detection system
  • deliver communications to support the system and integrity generally
  • confirm the system is operating effectively
  • validate errors and irregularities to identify integrity breaches and internal control improvements.

Steps

  • Use internal communications channels to inform staff of the detection system before and during deployment.
  • Remind staff of their responsibilities under relevant policies and procedures. Monitor behaviour to ensure staff are accountable. Include refresher training on staff roles and responsibilities.
  • Follow up exceptions through established governance processes. Monitor whether these processes are working effectively.
  • Monitor capacity and capability in real time. Allocate resources and check that timely and adequate validation is performed.
  • Confirm escalation protocols are working (a simple triage approach is sketched after this list).
  • Perform remedial and/or corrective actions. Change internal controls to improve internal processes and practices.
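
As a minimal sketch only, the following Python shows how an escalation protocol might route validated exceptions. The categories, the $10,000 threshold and the exception records are illustrative assumptions, not drawn from the guide or the case studies.

```python
# Minimal sketch: route validated exceptions to an escalation level.
# Categories, threshold and records below are illustrative assumptions only.

def escalation_level(exception):
    """Return who should handle a validated exception."""
    if exception["suspected_misconduct"]:
        return "integrity unit"     # possible misconduct or corruption
    if exception["amount"] >= 10_000:
        return "senior management"  # high-value control failure
    return "line manager"           # routine control improvement

exceptions = [
    {"id": "EX-01", "amount": 25_000, "suspected_misconduct": False},
    {"id": "EX-02", "amount": 1_200,  "suspected_misconduct": True},
    {"id": "EX-03", "amount": 300,    "suspected_misconduct": False},
]

for ex in exceptions:
    print(ex["id"], "->", escalation_level(ex))
```

Writing the protocol down, even this informally, makes it easier to confirm every exception reaches someone accountable for acting on it.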

Relevant case studies

Authority A committed to regular communications about its detection system to its staff, reinforcing the message of zero tolerance to fraud.

Authority B investigated exceptions identified from fraud data analytics and used findings from the testing to inform future policy reviews.

Authority C focused on change management and partnering with key functional areas such as human resources, procurement, finance and information technology to implement ownership processes for the review of red flags and enhance collaboration in responding to incidents.

Stage 4: Monitor and maintain

At this stage:

  • confirm the system is delivering intended results
  • confirm long term monitoring and management responsibility
  • make changes and improvements where required.

Steps

  • Seek assurance that the detection system is working as intended. Consider whether detection activities are confirming suspicions or highlighting risks not previously considered.
  • Monitor effectiveness (one approach is sketched after this list). Communicate the detection system’s outputs and findings to staff in a position to effect change, using tools designed in stage 2 such as reports and dashboards.
  • Determine how the ongoing program of work and monitoring of exceptions will occur, for example through internal audit.
  • Confirm ongoing ownership of, and oversight over, the detection system and its outputs.
  • Continuously improve the detection system to keep pace with new and changing integrity risks, and to enhance detection outcomes.
  • Consider the time and resources required to perform ongoing maintenance on the system to ensure it remains functional.
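
As a minimal sketch only, the following Python shows one way to monitor effectiveness: tracking what proportion of flagged exceptions are confirmed as genuine issues on validation. The monthly figures are illustrative assumptions, not drawn from the case studies.

```python
# Minimal sketch: track the confirmation rate of flagged exceptions over time.
# Monthly figures are illustrative assumptions only.
months = {
    "Jan": {"flagged": 40, "confirmed": 6},
    "Feb": {"flagged": 35, "confirmed": 9},
    "Mar": {"flagged": 28, "confirmed": 10},
}

for month, m in months.items():
    rate = m["confirmed"] / m["flagged"]
    print(f"{month}: {m['flagged']} flagged, {m['confirmed']} confirmed ({rate:.0%})")

# A persistently low confirmation rate may mean tests need tuning (back to stage 2);
# a rising rate on the same test may signal a growing integrity risk.
```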

Relevant case studies

Authority A identified and investigated exceptions reported and, where needed, made recommendations for process improvements. It also monitored the status of recommendations until implementation.

Authority B advised it might repeat the same fraud data analytics tests over time and extend the analytics to include additional tests across supplier and purchase order management.

Authority C uses governance processes at different levels to track and report on red flags, review progress, investigate outcomes and provide overall reporting to senior committees and external agencies. Internal and external assurance and improvement are managed through a continuous improvement approach, which is documented in an integrity detection plan and a separate improvement plan. These capture both technology-based and manual detection controls.

Authority D reported it would continuously improve its processes, including assessment of the criminal history of prospective staff and increased use of technology.

Authority E suggested authorities engage senior leadership and encourage their full understanding and support to establish accountability and set the tone from the top.

Constraints and challenges to manage

All case study authorities reported constraints and challenges in developing and implementing their detection systems including:

  • available internal capability and capacity
  • legacy systems/technology
  • budget and resourcing
  • data migration.

Relevant case studies

All case study authorities acknowledged that implementing their detection systems helped reduce capacity constraints. They reported that, as their detection systems matured, previously manual processes and tasks were replaced by automated activities.

Authority A leveraged the expertise of its information technology function to overcome initial funding and resourcing constraints.

Legacy systems and data quality issues initially challenged Authority B. It worked with internal stakeholders and external service providers to cleanse and format the data that eventually informed the testing.

In Authority C’s case, its data analytics and information technology requirements are significant, with many disciplines required to operate 24/7 live data lake and detection analytics systems. Success requires internal champions and must be driven by the authority. External providers can play an important role in developing, delivering and operating detection systems, but may not have a nuanced knowledge and understanding of an authority’s business. Authorities should perform a risk assessment and thoroughly consider the costs and benefits before proceeding with a fully outsourced approach. Even with an outsourced model, an authority should commit resources at the design and deployment stages to make sure the system is configured to best suit its needs.

Authority D’s main challenges were the large number of employment screening checks conducted, the technical updates required to implement systems, and the testing of those systems. An interface between systems helped it capture data in an accurate and timely way.

Technology limitations around data collection challenged Authority E. Manual processes (including identifying, recording and investigating breaches) limited the introduction of automated controls, exception reporting and data analytics. It overcame some of these issues by using staff with a deep understanding of policies and procedures to enter data into the system.
