Program evaluation in the public sector

Editor's Note to Readers

Human Capital Matters (HCM) is the digest for leaders and practitioners with an interest in human capital and organisational capability. It seeks to provide Australian Public Service leaders and practitioners with easy access to issues of contemporary importance in public and private sector human capital and organisational capability. It has been designed to provide interested readers with a guide to the national and international ideas that are shaping human capital thinking and practice. The inclusion of articles is aimed at stimulating creative and innovative thinking and does not in any way imply that the Australian Public Service Commission endorses service providers or policies. The articles are intended to be accessible to the general reader and not to require subscriptions to specific sites. Where possible and appropriate, editions of HCM have been reviewed by topic specialists to provide range and reasonable currency on topical issues.

A new feature, based on feedback from 2015, is the inclusion, where possible, of additional hyperlinks and references for those with a keen interest, librarian support, or access to specific user-pays sites.

Thank you to those who took the time to provide feedback on 2015 editions of Human Capital Matters. Comments, suggestions or questions regarding this publication are always welcome and should be addressed to: humancapitalmatters [at] apsc.gov.au. Readers can also subscribe to the mailing list through this email address.

This edition addresses program evaluation in the public sector. The articles have been chosen to provide ideas and observations about the reasons for, and the mechanics of, program evaluation. First is the unambiguous statement of principle from the Australian National Audit Office (ANAO) that evaluation, and the consequent accumulation of evidence, are good practice; this answers the 'why' of program evaluation. The remaining articles have been chosen to address the 'how', 'what' and 'when' of program evaluation.


The articles for this edition are:

The first article is a brief extract from the Better Practice Guide on Public Sector Governance produced by the ANAO in June 2014. The guide is in two parts: Part 1 covers public sector governance fundamentals; Part 2 provides guidance on achieving good governance in practice, in which program evaluation features significantly as good practice. It is from this profile provided by the ANAO, underscoring the importance of planned and robust program evaluation, that the other articles flow.

The second article is a useful introduction to key concepts and activities in an evaluation. It is presented as a series of 'flash-cards' (like chapter headings in appearance) developed by an authoritative author and practitioner in evaluation circles, Michael Patton.

The third article, from 2015, provides a more detailed description of key concepts and issues in program evaluation and has been developed for an Australian government context. It gives definitions ('policy', 'program', 'evaluation', 'SMART'); introduces evaluation-specific language; and provides guidance on how to conduct an evaluation and use evaluation findings for better decision-making.

Further introductory material and guidance can be found in the following links. They have not been summarised for inclusion in this edition:

  • Centers for Disease Control and Prevention, Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide
  • ACT Government Evaluation Policy and Guidelines

The following are links to articles for those interested in the debate about what makes for 'good' evidence in evaluating a program.

The fourth article is a thought-provoking piece by a Canberra-based academic and politician, Andrew Leigh MP, a proponent of evaluating public policy through randomised trials. While the ANAO guide puts the case for evaluation in general, Leigh's paper argues for a specific type of evaluation of public policy, and counters the usual arguments against randomised trials.

The fifth article provides a less prescriptive perspective on what constitutes evidence for informing public policy. It is included to underscore that randomised controlled trials, while the gold standard in the health and mental health sciences, are more elusive in the 'real world', at least at this time, and that the belief that only these types of evaluation provide valid, reliable and relevant data can be questioned. The article does not explicitly address evaluation, but it reinforces the ANAO contention that rigorous, well-planned data and data collection are necessary to inform public policy.

Return to list of articles


Australian National Audit Office (June 2014) Public Sector Governance: Strengthening performance through good governance, Better Practice Guide, Commonwealth of Australia

This report from the ANAO states that the practice of good governance derives from, among other things, evaluation and review. Such analyses:

 enable an entity to identify strengths, learn lessons, and maintain and improve its capacity to serve government and the community over time. A key focus of evaluation … is to assess the impact of government policies and activities, which assists government decision-making … and contributes to improved accountability for results …

Evaluation and review are seen to be necessary to gauge the quality and impact of government activity on complex issues. The lessons learned from strategic and rigorous reviews are, in turn, recommended as a means of informing policy advice.

Leadership which is positively disposed to learning from such reviews is seen as a necessary condition for effective learning. Key governance actions in regard to evaluation are:

  • Plan to conduct an evaluation before beginning any program. This includes identifying time-frames, resources, baseline data and performance information
  • Align internal review activities with external requirements so as to reduce reworking
  • Treat and prepare for external scrutiny as part of usual business, rather than as something that 'happens' to the entity
  • Assign responsibility for implementing the recommendations of the evaluation. Identify the senior executive responsible for implementation and establish timeframes for actions
  • Ensure appropriate monitoring of implementation and the impact on performance and outcomes.

Embedded in Section 3 of the report is the advice:

Information and analysis support the decision-making process, making it possible for officials to make well-informed, sound and defensible decisions. Information and analysis also inform entities about how best to effectively design programs and strategies, allocate scarce resources to mitigate program and service delivery risks, and provide assurance that key requirements are being met.

The rationale for evaluation in the public sector is thus clearly and unambiguously stated.

Return to list of articles


Patton, M.Q. (2014) Evaluation Flash Cards: Embedding evaluative thinking in organization culture. Otto Bremer Foundation, Minnesota

As part of a series of learning seminars on evaluation, Michael Patton developed 25 flash-cards of common terms, each accompanied by a single-page explanation and, where appropriate, examples and key evaluation questions. Each flash-card contains a 'bottom-line' activity with a succinct rationale. The link provides an accessible guide to conducting an evaluation in 28 pages.

A salient example is flash-card #19, 'the It Question'. What do evaluators mean when they say 'it' works or doesn't work? Patton indicates that the bottom line is to be clear about the 'it' in any proposal. Helpful questions to clarify 'it' might be: what exactly is the model being proposed? What outcomes is the model expected to produce? And what evidence will be generated about how the model works?

Return to list of articles


Government of Western Australia (January 2015) Program Evaluation: Evaluation Guide

The evaluation guide from WA outlines the role of evaluation as a key component in the policy cycle, the key principles of good evaluation practice, a strategic approach to evaluation, different types of evaluation and when they might be used, how to conduct an evaluation and the use of findings from an evaluation for better decision-making. A starting principle for the guide is the creation of an evaluation culture to ensure the best possible economic and social returns.

Evaluation is defined as 'the systematic collection and analysis of information to enable judgements about a program's effectiveness, appropriateness and efficiency'. 'SMART' results are used to determine describable and measurable change:

  • Specific (criteria must be well-defined)
  • Measurable (criteria are concrete and measurable so that progress can be demonstrated)
  • Attainable ('is there a realistic path to achievement?')
  • Relevant (are results within the constraints of resources, knowledge and time?)
  • Time-bound (include reporting time-lines to provide a sense of urgency).

The advantages of conducting evaluations are seen to be opportunities to evaluate performance, revise program structures and, consequently, justify funding. The guide also outlines potential benefits to stakeholders, from efficient resource allocation through to transparent and accountable government. Evaluation is seen as part of a continuous cycle as changes to policy are implemented. As the guide states:

The policy cycle is not intended to encourage a process driven approach to policy development and implementation but rather to underscore the need for a planned strategic approach to evaluation.

Systematic and regular evaluation is necessary to assist decision makers and should be built into the program design. The guide offers guidance on 'mega-level', whole-of-Government programs (e.g., the Closing the Gap Indigenous Disadvantage Plan) as well as on macro- and micro-level programs.

The article reviews types of evaluation (formative/developmental, process, summative/impact) and devotes a significant number of pages to detailed descriptions and rationales for the five stages of an evaluation and their component activities. It provides an example program logic map and addresses the need to develop key evaluation questions at the outset of a program design as part of the logic map.

Return to list of articles


Leigh, A. (2003) Randomised Policy Trials, Agenda, 10(4), 341-354

This paper from 2003 reminds us that in the eighteenth century medical practitioners were averse to randomised trials, believing that their expertise should be taken on faith. Similarly, Leigh argued, in the Australia of 2003 political rhetoric had become a substitute for hard evidence.

The article addresses six common objections to the use of randomised trials in public policy evaluations. In doing so Leigh is able to argue the strengths of conducting randomised trials in developing meaningful, sustainable policy. Each of the following objections is discussed:

  • The goals of most policies are not well-defined, making the use of randomised trials too difficult
  • Randomised trials involve denying treatment to worthy individuals
  • There are good alternatives to randomised trials
  • Qualitative research provides better information about the impact of policies than quantitative measures
  • Political self-interest overrides hard results
  • Randomised trials are only used in America

Leigh discusses lessons from some randomised trials of the early 2000s (job training for unemployed persons, class sizes in education, neighbourhood effects and locational disadvantage, and the NSW Drug Court) and their implications for policy. He concluded that evaluation in Australia was probably hampered by three-year election terms, few evidence-based think tanks and a generally constrained culture of policy contestation; in other words, evaluation in Australia 'had a long way to go'. Finally, Leigh argued that it is not enough to rely on the results of trials conducted overseas.

The Hon Dr Andrew Leigh MP is an Australian politician and a former professor of economics at the Australian National University.

Return to list of articles


Isett, K.R., Head, B.W., & VanLandingham, G. (2016). Caveat Emptor: What Do We Know about Public Administration Evidence and How Do We Know It? Public Administration Review, 76(1), 20-23

This article provides an overview of evidence in public administration and summarises recent thinking and challenges. While not specifically addressing evaluation, the authors are concerned with 'shin(ing) a light on the evidence needed to make effective decisions and examinations of the evidence that currently exists for contemporary public sector efforts'. Evidence, they claim, is often undermined by poor-quality or outdated data, flawed program logic (a feature of evaluation) and limited access to high-quality research. Essentially, the article emphasises the importance of quality, well-planned data gathering.

While acknowledging the 'gold standard' of the randomised controlled trial, the authors posit that such standards are not always feasible or even desirable, and that practitioners often have to rely on 'best available' knowledge to do their work credibly. Indeed, Brian Head in 2013 (as also reported in this journal, p. 25) referred to 'evidence-informed' rather than evidence-based policy.

The authors argue that the way forward in producing public policy requires systematic attempts 'to synthesise and accumulate reliable knowledge about practice, taking into account the constraints on the use of scientific methods'. The key questions from their perspective are:

  • Does the available evidence support the conclusion?
  • What are the parameters about how a practice works (implementation attributes, specific circumstances)?
  • For whom and under what circumstances was it found to be effective?
  • What were the specific conditions under which the program or practice was tested and validated?

Such key questions are also part of evaluation logic.

Dr Kimberley Isett is from the Georgia Institute of Technology. Dr Brian Head is Professor of Policy Analysis at the University of Queensland, Australia. Dr Gary VanLandingham has published on evidence-based policy making and directs the Pew-MacArthur Results First Initiative.

Return to list of articles