Challenges of evidence-based policy-making
Gary Banks AO
This publication is one in a series designed to stimulate debate about contemporary government challenges. It deals with the challenge of putting 'evidence-based policy-making' into practice.
Evidence-based policy-making, while not a new concept, has recently become more prominent in public debate in Australia. The Prime Minister has called it a key element of the Government's agenda for the public service. He wants policy design to be driven by analysis of all the available options, and not by ideology. This explicit endorsement by the Prime Minister provides us with a valuable opportunity to advance the cause of evidence-based policy-making in the APS.
Making policy that is based on evidence seems obvious. Putting the principle into practice, however, is another matter.
In the real world, policy is developed in a fluid environment, is subject to competing vested and political interests, and can be driven by pressure to act quickly to solve headline-grabbing problems. Ideally, we need systems that are informed by evidence at each stage of policy development, from when an issue is first identified, to the development of the most appropriate response, and subsequent evaluation of its effectiveness. This is even more important when dealing with complex problems, such as climate change, where the evidence on which responses must be based is shifting rapidly and involves many interacting elements.
Even when dealing with less complex issues, if we are to successfully integrate evidence into the policy-making process, we must have good evidence to begin with. This means not only collecting data and investing in research, but ensuring that policy makers have the right skills to discriminate between evidence which is reliable and useful, and that which is not.
Evidence should also be open to rigorous public and professional debate. As well as validating evidence, transparency can help governments to gauge community reaction to ideas before they are fully formed and so better anticipate the politics of pursuing different courses of action. However, this does add to the challenge for policy makers, as transparency takes time and effort, and governments often have a need for speed.
It is particularly timely, therefore, that the Chairman of the Productivity Commission, Gary Banks AO, has developed the paper on which this publication is based. In it, he sets out his thoughts on why we need an evidence-based approach to public policy and discusses what he considers the essential ingredients for building an effective evidence base. An earlier version of Mr Banks' paper was presented to an APS Commission 'Leader to Leader' seminar, before being revised and presented as an ANZSOG Public Lecture earlier this year.
It is a very valuable contribution to the debate on this issue.
Australian Public Service Commissioner.
In an address to senior public servants in April 2008, the Prime Minister observed that 'evidence-based policy-making is at the heart of being a reformist government'. I want to explore why that is profoundly true, what it means in practice, and some implications for those of us in public administration. In doing so, I will draw on the experience of the Productivity Commission—which, with its predecessors, has been at the heart of evidence-based policy-making in Australia for over three decades—to distil some insights into what is needed across government generally if we are to be successful.
It is as important that we have a rigorous, evidence-based approach to public policy in Australia today as at any time in our history. This country faces major long-term challenges; challenges that have only been exacerbated by the economic turbulence that we are struggling to deal with right now. When the present crisis is over, we will still have the ongoing challenges of greenhouse, the ageing of our population and continuing international competitive pressures. We should not underestimate the significance of those challenges, which place a premium on enhancing the efficiency and productivity of our economy.
The good news is that there is plenty of scope for improvement. The Council of Australian Governments (COAG)'s National Reform Agenda embraces much of what is needed—not just the completion of the old competition agenda, but getting further into good regulatory design and the reduction of red tape, efficient infrastructure provision, and the human capital issues which will be so important to this country's innovation and productivity performance over time.
The Productivity Commission's modelling of the National Reform Agenda indicates that the gains from this 'third wave' of reform could potentially be greater than from the first and second waves. The problem is that there are few 'easy' reforms left. The earlier period had a lot of low-hanging fruit that has now been largely harvested. Even in the competition area, rather than further deregulation, we are confronting the need for regulatory refinements that are quite subtle and complex to assess. In the new agenda to do with enhancing human capital, complexities abound. We don't have all the answers about the policy drivers of better health and educational outcomes, for example, let alone about the pressing goal of reducing Indigenous disadvantage.
These are all long-term issues. They also have an inter-jurisdictional dimension, bringing with it the challenge of finding national solutions to problems that have been dealt with by individual states and territories in the past. This has 'upped the ante' on having good analysis and good processes to help avoid making mistakes on a national scale which previously would have been confined to particular jurisdictions.
Why we need an evidence-based approach
I don't think I have to convince anyone within government of the value of an evidence-based approach to public policy. After all, it is not a novel concept. I read somewhere that it is traceable to the fourteenth century, motivated by a desire to discipline the whimsical rule of despots. Its absence in practice, however, has been long lamented. Over a century ago, for example, Florence Nightingale admonished the English Parliament in the following terms:
You change your laws so fast and without inquiring after results past or present that it is all experiment, seesaw, doctrinaire; a shuttlecock between battledores.
The term 'evidence-based policy-making' has been most recently popularised by the Blair Government, which was elected on a platform of 'what matters is what works'. Blair spoke of ending ideologically-based decision-making and 'questioning inherited ways of doing things'.
Of course, 'inherited ways of doing things' meant to the Blair Government the ways of the previous Thatcher (and Major) administrations! The advent of a new government is clearly a good time to initiate an evidence-based approach to public policy, especially after a decade or more of a previous one's rule. I think that resonates too with the take-up in Australia of these 'New Labour' ideas from the UK, commencing with the Bracks Government in Victoria.1
But, again, evidence-based policy-making is by no means new to this country. Probably the oldest example, or longest-standing one, would be tariff-making, which for many years was required under legislation to be informed by a public report produced by the Tariff Board and its successor organisations (notably the Industries Assistance Commission). The nature of those evidence-based reports changed dramatically over time, however, from merely reporting the impacts on industries under review to also reporting the effects on other industries and the wider economy.
Other key economic policy reforms that have drawn heavily on evidence-based reviews and evaluations include the exchange rate and financial market liberalisation of the 1980s, the National Competition Policy reforms of the 1990s and the shift to inflation targeting in monetary policy in 1993. Examples from the social policy arena include the Higher Education Contribution Scheme (HECS) in its initial configuration, and the introduction of 'Lifetime Community Rating' provisions in private health insurance regulation.
The tariff story illustrates the crucial point that the contribution of an evidence-based approach depends on its context, and the objectives to which it is directed. Evidence that is directed at supporting narrow objectives—a particular group or sector, or fostering use of a particular product or technology—will generally look quite different to that which has as its objective the best interests of the general community. Of course, this depends on having the analytical tools to enable such a broad assessment to be undertaken. Developments in this area were also an important part of the story on tariffs, as well as in other policy areas.
While the systematic evaluation and review of policy have not been pervasive—and arguably have been less evident in the social and environmental domains than the economic—Australia's experience illustrates its potential contribution. It also reveals the sterility of academic debates about whether evidence can or should play a 'deterministic' role in policy outcomes. Policy decisions will typically be influenced by much more than objective evidence, or rational analysis. Values, interests, personalities, timing, circumstance and happenstance—in short, democracy—determine what actually happens.
But evidence and analysis can nevertheless play a useful, even decisive, role in informing policy-makers' judgements. Importantly, they can also condition the political environment in which those judgements need to be made.
Most policies are experiments
Without evidence, policy-makers must fall back on intuition, ideology, or conventional wisdom—or, at best, theory alone. And many policy decisions have indeed been made in those ways. But the resulting policies can go seriously astray, given the complexities and interdependencies in our society and economy, and the unpredictability of people's reactions to change.
From the many examples that I could give, a few from recent Productivity Commission reviews come readily to mind:
- in our research for COAG on the economic implications of Australia's ageing population, we demonstrated that common policy prescriptions to increase immigration, or raise the birth rate, would have little impact on the demographic profile or its fiscal consequences (indeed, higher fertility would initially exacerbate fiscal pressures)
- our report into road and rail infrastructure pricing showed that the presumption that road use was systematically subsidised relative to rail was not borne out by the facts (facts that were quite difficult to discern)
- in our inquiry into waste management policy, we found that the objective of zero solid waste was not only economically costly, but environmentally unsound
- our inquiry into state assistance to industry showed that the bidding wars for investment and major events in which state governments engaged generally constituted not only a negative sum game nationally, but in many cases a zero sum game for the winning state
- our recent study of Australia's innovation system reaffirmed that, contrary to conventional opinion, the general tax concession for R&D mainly acted as a 'reward' for research that firms would have performed anyway, rather than prompting much additional R&D, and
- our recent draft report on parental leave indicated that polarised views about whether childcare is a good or a bad thing were both wrong, depending on the age group concerned, and that many subtle influences are involved.
To take a separate example from the education field—which is rightly at centre stage in COAG's National Reform Agenda—the long-term policy goal of reducing class sizes has received very little empirical support. In contrast, the importance of individual teacher performance, and the link to differentiated pecuniary incentives, are backed by strong evidence, but have been much neglected. That illustrates not only a lack of evidence-based policy in education, where social scientists appear to have had little involvement, but also the influence over the years of teachers' unions and other interests.
Among other things, policies that haven't been informed by good evidence and analysis fall prey more easily to the 'Law of Unintended Consequences'—in popular parlance, Murphy's Law—which can lead to costly mistakes. For example, the Commission found, in a series of reviews, that the well-intentioned regulatory frameworks devised to protect native flora and fauna, and to conserve historic buildings, were actually undermining conservation goals by creating perverse incentives for those responsible.
Our report for COAG, Overcoming Indigenous Disadvantage, is littered with examples. One of the first field trips that I did, as part of establishing that process, was to Alice Springs, where I learnt of one instance of an unintended consequence which would be amusing if the issues weren't so serious. It involved children taking up petrol sniffing so that they could qualify for the various benefits and give-aways in a programme designed to eradicate it. That this might happen no doubt would not have occurred to any of us in Canberra, but it may well have occurred to some of the elders in the community if they had been asked.
But, as Noel Pearson and other Indigenous leaders have affirmed, perhaps the most calamitous and tragic example of all was the extension of 'equal wages' to Aboriginal stockmen in the late 1960s. Despite warnings by some at the time, this apparently well-motivated action led to the majority losing their jobs, driving them and their extended families into the townships—ultimately subjecting them to the ravages of passive welfare; with liberalised access to alcohol as the final blow. Good intentions, bad consequences; very, very difficult to remedy.
Now I am not saying that policy should never proceed without rigorous evidence. Often you can't get sufficiently good evidence, particularly when decisions must be made quickly. And you can never have certainty in public policy. All policy effectively is experimentation. But that does not mean flying blind—we still need a good rationale or a good theory. Rationales and theories themselves can be subjected to scrutiny and debate, and in a sense that constitutes a form of evidence that can give some assurance about the likely outcomes. Importantly though, all policy experiments need to be monitored and evaluated and, over time, corrected or terminated if they turn out to be failures. These are things that Governments typically find hard to do—particularly the termination part.
Arguably the biggest-ever case of policy-making under uncertainty is the contemporary challenge posed by global warming. With huge residual uncertainties in the science, economics and (international) politics, there can be little confidence that anyone could identify a uniquely 'correct' policy prescription for Australia at this point. The only sensible way forward, therefore, is to start gradually, to monitor, to learn by doing as we develop institutions and see the effects of carbon pricing on our economy and community, and as we wait for others to come to the party—in other words, an adaptive response. That appears to be broadly the strategy which the Government has ultimately adopted in its recent White Paper. That said, the success of such a strategy still depends on judgements about the most appropriate timing and extent of action by Australia, and indeed the particular form of the policy action itself—notably the mechanism for pricing carbon, its coverage and compensation provisions. These remain subject to ongoing debate.
Conditioning the political environment
Complexity and uncertainty would make policy choices hard enough even if they could be made purely on technical grounds. But policies are not made in a vacuum. Rather, they typically emerge from a maelstrom of political energy, vested interests and lobbying. Commonly, those with special interests will try to align their demands with the public interest. The average person (voter) rationally doesn't do the hard work necessary to find out whether that is correct or not, but often feels intuitively supportive.
In that realpolitik, evidence and analysis that is robust and publicly available can serve as an important counterweight to the influence of sectional interests, enabling the wider community to be better informed about what is at stake in interest groups' proposals, and enfranchising those who would bear the costs of implementing them.
Tariff reform again provides a classic instance of evidence being used to galvanise potential beneficiaries from reform in the policy debate. In Australia, the losers under the tariff regime were the primary exporting industries—the farmers and the miners—who started to appreciate, with help from the Industries Assistance Commission, the extent of the implicit taxes and costs they were bearing; and they soon became a potent force for tariff reform. National Competition Policy has seen a similar political role being discharged through evidentiary processes.
To take a quite different example, the gambling industry got a lot of political support for deregulation essentially based on a myth: namely that it would generate many jobs but have only minor adverse social impacts. The Commission's report showed the reverse to be true. Gambling did not (and cannot) generate significant additional jobs in the long term, and has very substantial social impacts. Establishing that gave community groups a stronger platform to push for reforms to gambling regulation and the development and funding of harm minimisation measures.
My point is that good evidence can ameliorate or 'neutralise' political obstacles, thereby making reforms more feasible. That is part of the reason why, as the Prime Minister has said, a reformist Government needs to have an evidence-based approach at centre stage.
The steps in forming evidence-based policy

WHAT constitutes real evidence?
- Methodology: the analytical approach allows for proper consideration of the problem
- Capacity: research skills are sufficient to undertake the analysis

WHEN is adequate evidence available to inform decisions?
- Time: to harvest existing data, gather new data and test the analysis
- Good data: high-quality databases support timely analysis

HOW can credible evidence be ensured?
- Transparency: open debate and discussion to test the evidence and educate the public
- Independence: incentives to deliver advice in the public interest

A receptive policy environment
- Willingness to test policy options, and the structures and resources to do so
The essential ingredients
For evidence to discharge these various functions, however, it needs to be the right evidence; it needs to be available at the right time and to be seen by the right people. That may sound obvious, but it is actually very demanding. I want to talk briefly now about some essential ingredients in achieving it.
First: methodology. It's important that, whatever analytical approach is chosen, it allows for a proper consideration of the nature of the issue or problem, and of different options for policy action.
Half the battle is understanding the problem. Failure to do this properly is one of the most common causes of policy failure and poor regulation. Sometimes this is an understandable consequence of complex forces, but sometimes it seems to have more to do with a wish for government to take action regardless.
A contemporary example that has received a bit of airplay as a consequence of the Commission's report on waste management is the move to ban the ubiquitous plastic shopping bags from our supermarkets. This initiative drew much support from the alleged problems that these bags pose for the litter stream and for marine health. But closer investigation by the Commission soon exposed gross inaccuracies and overstatements in those claims. Indeed some of what passed for 'evidence' was contrary to common sense, and some outright hilarious. A Regulation Impact Statement soberly cited media reports from India that a dead cow on the streets of New Delhi had 35,000 plastic bags in its digestive system!
In situations where government action seems warranted, a single option, no matter how carefully analysed, rarely provides sufficient evidence for a well-informed policy decision. The reality, however, is that much public policy and regulation are made in just that way, with evidence confined to supporting one, already preferred way forward. Hence the subversive expression, 'policy-based evidence.'
Even when the broad policy approach is clear, the particular instruments adopted can make a significant difference. Thus, for example, economists overwhelmingly accept the superiority of a market-based approach to reducing carbon emissions, but they differ as to whether a cap-and-trade mechanism or an explicit tax (or some combination of the two) would yield the best outcomes. Australia's apparent haste to embrace the trading option remains contentious among some prominent economists, illustrated by recent public advocacy by Geoff Carmody2 (in support of a consumption-based tax) and Warwick McKibbin3 (in support of a 'hybrid' scheme, with trading and taxation components).
How one measures the impacts of different policies depends on the topic and the task—and whether it's an ex-ante or ex-post assessment. There is a range of methodologies available. There is also active debate about their relative merits. Nevertheless, all good methodologies have a number of features in common (a simple illustrative sketch follows this list):
- they test a theory or proposition as to why policy action will be effective—ultimately promoting community wellbeing—with the theory also revealing what impacts of the policy should be observed if it is to succeed
- they have a serious treatment of the 'counterfactual'; namely, what would happen in the absence of any action?
- they involve, wherever possible, quantification of impacts (including estimates of how effects vary for different policy 'doses' and for different groups)
- they look at both direct and indirect effects (often it's the indirect effects that can be most important)
- they set out the uncertainties and control for other influences that may impact on observed outcomes
- they are designed to avoid errors that could occur through self-selection or other sources of bias
- they provide for sensitivity tests, and
- importantly, they have the ability to be tested and, ideally, replicated by third parties.
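To make several of these features concrete, the sketch below is a minimal, purely illustrative example, with invented numbers and a hypothetical programme rather than any actual Commission methodology. It shows how an explicit counterfactual (an untreated comparison group), quantified impacts and a simple sensitivity test fit together in a basic 'difference-in-differences' style evaluation.

```python
# Minimal, hypothetical sketch of an evidence-based programme evaluation.
# All figures are invented for illustration only.
from statistics import mean

# Outcome measure (say, an employment rate in per cent) before and after a
# hypothetical programme, for participants and an untreated comparison group.
treated_before, treated_after = [52.0, 55.0, 50.0], [60.0, 63.0, 58.0]
control_before, control_after = [51.0, 54.0, 49.0], [54.0, 57.0, 52.0]

def difference_in_differences(t_before, t_after, c_before, c_after):
    """Estimated impact: the change for participants minus the change for the
    comparison group, which stands in for the counterfactual."""
    return (mean(t_after) - mean(t_before)) - (mean(c_after) - mean(c_before))

impact = difference_in_differences(treated_before, treated_after,
                                   control_before, control_after)
print(f"Estimated programme impact: {impact:.1f} percentage points")

# Crude sensitivity test: how does the estimate move if the comparison group's
# underlying trend had been one point stronger or weaker than observed?
for adjustment in (-1.0, 0.0, 1.0):
    adjusted = [x + adjustment for x in control_after]
    alternative = difference_in_differences(treated_before, treated_after,
                                            control_before, adjusted)
    print(f"  comparison trend {adjustment:+.1f} pts -> impact {alternative:.1f} pts")
```

In practice, of course, such estimates would also need to control for other influences and for selection bias, as the list above emphasises.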
Australia has been at the forefront internationally in the development and use of some methodologies. For example, we have led the world in 'general equilibrium' modelling of the 'direct and indirect effects' of policy changes throughout the economy. Indeed, the Industries Assistance Commission, with its 'Impact Project' under Professors Powell and Dixon, essentially got that going.
But Australia has done relatively little in some other important areas, such as 'randomised trials', which can be particularly instructive in developing good social policy. We seem to see a lot more, proportionately, of this research being done in the USA, for example.
Most evidence-based methodologies fit broadly within a cost-benefit (or at least cost effectiveness) framework, designed to determine an estimated (net) payoff to society. It is a robust framework that provides for explicit recognition of costs and benefits, and requires the policy-maker to consider the full range of potential impacts. But it hasn't been all that commonly or well used, even in relatively straightforward tasks such as infrastructure project evaluation.
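In its simplest textbook form (a general formulation, not one drawn from any particular Commission report), the test is whether the discounted stream of benefits to the community exceeds the discounted stream of costs:

\[
\text{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1 + r)^t}
\]

where B_t and C_t are the (monetised) benefits and costs accruing in year t, r is the discount rate and T is the evaluation horizon. A proposal passes the test when the net present value is positive, with sensitivity testing over r and the more uncertain benefit and cost estimates.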
The head of Infrastructure Australia's secretariat recently commented in the following terms about many of the infrastructure proposals submitted to that body: 'the linkage to goals and problems is weak, the evidence is weak, the quantification of costs and benefits is generally weak.'
It is very welcome, therefore, that Infrastructure Australia has stressed that any project which it recommends for public funding must satisfy rigorous cost-benefit tests. It is particularly important, as Minister Albanese himself has affirmed, that this includes quantification of the more 'subjective' social or environmental impacts; or, where this proves impossible, that there is an explicit treatment of the nature of those impacts and the values imputed to them. In the past, this has proven the 'Achilles heel' of cost-benefit analyses for major public investments: financial costs are typically underestimated, non-financial benefits overstated.
Rubbery computations of this kind seem to be endemic to railway investment proposals, particularly 'greenfield' ones, which rarely pass muster on the economics alone. It is disquieting to observe, therefore, that rail projects feature heavily among the initial listing by Infrastructure Australia of projects warranting further assessment, totalling well over $100 billion. Among these we find such old chestnuts as a light rail system for the ACT, and a Very Fast Train linking Canberra with Sydney and Melbourne. The rail proposals are not alone in evoking past follies, however. I note that expansion of the Ord River Scheme is also on the list.
It is undoubtedly challenging to monetise some of the likely costs and benefits associated with certain areas of public policy. But often we don't try hard enough. There are nevertheless some examples of creative attempts. These include work by the Productivity Commission in areas such as gambling, consumer protection policy and even animal welfare.
The key is to be able to better assess whether benefits are likely to exceed costs, within a coherent analytical framework, even if everything cannot be reduced to a single number, or some elements cannot be quantified. Thus in our gambling and consumer policy reports, for example, we could only provide estimates of net benefits within plausible ranges. In the analysis required under the National Competition Policy of the ACT Government's proposal to ban trade in eggs from battery hens, we quantified the likely economic costs and identified the potential impacts on the birds. However, we stopped short of valuing these, as such valuations depend on ethical considerations and community norms, and are judgements best made by accountable political representatives.
Good data is a pre-requisite
A second essential ingredient, of course, is data. Australia has been very well served by the Australian Bureau of Statistics and the integrity of the national databases that it has generated. But in some areas we are struggling. Apart from the challenges of valuing impacts, and disentangling the effects of simultaneous influences, we often face more basic data deficiencies. These are typically in social and environmental rather than economic domains, where we must rely on administrative collections—or indeed there may be no collections at all.
Data problems bedevil the National Reform Agenda in the human capital area. Preventative health strategies and pathways of causal factors are one example. Indigenous policy provides another striking one, involving a myriad of problems to do with identification, the incidence of different health or other conditions, and their distribution across different parts of the country—all of which are very important for public policy formation. In the crucial education area, obtaining performance data has been an epic struggle, on which I will comment further. In the COAG priority area of early childhood development, a recent survey article from the Australian Institute of Family Studies concludes:
The dearth of evaluation data on interventions generally … makes it impossible to comment on the usefulness of early childhood interventions as a general strategy to sustain improvements for children in the long-term.
Data deficiencies inhibit evidence-based analysis for obvious reasons. They can also lead to reliance on 'quick and dirty' surveys, or the use of focus groups, as lampooned in The Hollow Men. A colleague has observed that a particular state government in which he'd worked was a frequent user of focus groups. They have a purpose, but I think it is a more superficial one, better directed at informing marketing than analysing potential policy impacts.
The other risk is that overseas studies will be resorted to inappropriately as a substitute for domestic studies. Sometimes this is akin to the old joke about the fellow who loses his keys in a dark street, but is found searching for them metres away under a lamp post, because there is more light. Translating foreign studies to Australia can sometimes be perilous, given different circumstances and the scope for misinterpretation.
One topical example is the celebrated work by James Heckman in the USA demonstrating the benefits of preschool education based on the Perry Programme. That work has become a policy touchstone for advocates of universal intensive preschool education in Australia. While that policy may well prove to be sound, Heckman's work does not provide the necessary evidence. As he himself has clearly acknowledged, the Perry Project was confined to disadvantaged children. And the main gain from the intensive preschool treatment that those kids got came from reduced crime. So if there is relevance for the Perry work in Australia, it may be mainly confined to areas where there is concentrated disadvantage.
A major failing of governments in Australia, and probably worldwide, has been in not generating the data needed to evaluate their own programmes. In particular, there has been a lack of effort to develop the baseline data essential for before-and-after comparisons. As an aside, I should note that quite often even the objectives of a policy or programme are not clear to the hapless reviewer. Indeed, one of the good things about having reviews is that they can force some clarification as to what the objectives of the policy should have been in the first place. Examples of policies with unclear objectives from the Commission's current work programme include the Baby Bonus, drought assistance and the restrictions on the parallel importing of books.
In the Commission's first gambling inquiry, we had to undertake a national survey to get a picture of the social impacts, as there were no good national data around. We recommended that, in future, consistent surveys should be undertaken periodically, but this has not happened; the field has become a bit of a shemozzle, and we seem to be confronting the same problems again in revisiting this topic 10 years on. Moreover, while in this time there have been a multitude of harm minimisation measures introduced by different jurisdictions around the country, very few of those were preceded by trials or pilots to assess their cost-effectiveness, or designed with the need for evaluation data in mind.
In the Indigenous field, even the much-anticipated COAG Trials lacked baseline data. The only exception, as I recall, was the Wadeye Trial, but those data were derived from a separate research exercise, which took place before the trials commenced. More generally, we don't even know how much money has been spent on Indigenous programmes, let alone how effective those programmes may have been. There is currently an initiative underway to remedy that, through a new reporting framework involving all jurisdictions, with secretariat support from the Productivity Commission.
Overall, we are seeing funding for data collections actually being cut. This is partly a consequence of the so-called 'efficiency dividend' in the public sector and the blunt way it is imposed. A consequence is that in agencies that have responsibility for collecting data, vital survey information and other data collections are being jeopardised. This seems particularly perverse at a time when governments are seeking to promote evidence-based policy-making.
In contrast, Australia has made great strides in assembling comparable performance data across jurisdictions through the Government Services Review. This is currently being reviewed by a COAG Senior Officials group. Foreign government officials visiting Australia have often expressed astonishment at what we have achieved, and international agencies such as the OECD and the UN have praised the Blue Book and Overcoming Indigenous Disadvantage reports.
But Australia could and should have done a lot more to take advantage of its federal system as a natural proving ground for policy learning across jurisdictions. Indeed, in some cases, rather than encouraging data provision to enable comparisons across jurisdictions, the basis for such comparisons has actually been suppressed.
I mentioned earlier a lack of data on school learning outcomes. Such data are better now than in the past, but it has been a real struggle. And the data we have managed to collect and publish are highly aggregated: they don't get down to the level of individual schools, and they come from very weak tests that don't reveal much about comparative learning outcomes across the country. The OECD's Programme for International Student Assessment (PISA) data have generally been more revealing as well as more timely—despite being collected internationally.
Andrew Leigh from the ANU has published an interesting paper with a colleague, analysing the impact of individual school performance on literacy and numeracy. But his research had to be confined to Western Australia, which was the only jurisdiction that released school data. Even then, the data were only revealed implicitly in charts. Leigh was obliged to digitise the charts to get the numbers to allow him to do his novel analysis.
So I think there is an opportunity, under the New Federalism banner, to fund the evidence base that we need to compare policy performances across our Federation, and thereby to devise better national policies where national approaches are called for. An important recent initiative in this direction is the allocation of additional funding, as part of a $3.5 billion education package, for a new performance reporting framework for schools. The responsible Minister, the Hon. Julia Gillard, in endorsing the new framework, stated 'It is my strong view, that lack of transparency both hides failure and helps us ignore it … And lack of transparency prevents us from identifying where greater effort and investment are needed.'
Real evidence is open to scrutiny
This leads directly to the third area that I wanted to talk about: transparency.
Much policy analysis, as anyone in the public service will know, actually occurs behind closed doors. A political need for speed, or a defence against opportunistic adversaries, is often behind that. But no evidence is immutable: if analysis hasn't been tested, or contested, we can't really call it 'evidence'. Keeping it in-house also misses the opportunity to educate the community about what is at stake in a policy issue, and thereby to make it more accepting of the policy initiative itself.
Transparency ideally means 'opening the books' in terms of data, assumptions and methodologies, such that the analysis could be replicated. The wider the impacts of a policy proposal, the wider the consultation should be. Not just with experts, but also with the people who are likely to be affected by the policy, whose reactions and feedback provide insights into the likely impacts and help avoid unintended consequences. Such feedback in itself constitutes a useful form of evidence.
The Commission's processes are essentially based on maximising feedback. I won't dwell on this much here, other than to say that, in a range of areas, we've learned a great deal through our extensive public consultation processes, particularly in response to draft reports. If you compare the drafts with our final reports you will often see changes for the better: sometimes in our recommendations; sometimes in the arguments and evidence that we finally employ.
Transparency in policy-making helps government too, because it can see how the community reacts to ideas before they are fully formed, enabling it to better anticipate the politics of pursuing different courses of action. So the signs of a greater reliance again on Green Papers by the Australian Government, as advocated by the Regulation Taskforce, are very welcome. For example, the policy development process for addressing global warming clearly benefitted from an elevated public debate after the Green Paper was released.
Evidence-building takes time
Transparency can have its downsides. In particular, it 'complicates' and slows down the decision-making process—transparency involves time and effort. That is what appears to have militated against draft reports in a number of the recent policy review exercises. This has been a shame, especially for the major industry policy reviews last year, which contained recommendations with important ramifications for the community and economy.
There is an obvious clash between any government's acceptance of the need for good evidence and the political 'need for speed'. But the facts are that detailed research, involving data gathering and the testing of evidence, can't be done overnight. As already noted, in some cases the necessary data will not be available 'off the shelf' and may require a special survey. In other cases, data needed for programme evaluation might only be revealed through pilot studies or trials with the programme itself.
On a number of occasions in the past decade I have been approached about the possibility of the Commission undertaking an attractive policy task, but in an amount of time that I felt was unreasonable for it to be done well, particularly in view of the time people need to make submissions and give us feedback. When the Commission does something, people rightly expect to be able to have a say. As a consequence, those tasks have more often than not ended up going to consultants. And in most cases the results have vindicated my position.
Good evidence requires good people
The fifth area of importance is capability and expertise. You can't have good evidence, you can't have good research, without good people. People skilled in quantitative methods and other analysis are especially valuable. It is therefore ironic that we appear to have experienced a decline in the numbers with such skills within the public service at the very time when it has been called upon to provide an evidence-based approach that relies on them. Again, that's been largely a consequence of budgetary measures over a long period of time. Research tends to be seen as a more dispensable function when governments and bureaucracies are cut back.
Several manifestations of the consequential reduction in capability have struck me. One is the lower calibre of some of the departmental project teams that I have observed trying to do review and evaluation work. Secondly, there appears to be increased poaching of research staff within the public sector, or at least pleas for secondments.
We are also seeing major new initiatives to train staff. One significant example is the Treasury's sponsorship of a new programme, to be run by Monash University, to teach economics to non-economists. We have seen a shrinkage of the recruitment pool of economics graduates in recent years and I wonder whether the study of economics may be turning into a niche discipline in our universities.
We've also seen a major increase in the contracting of policy-related research outside the public service. A lot of those jobs have gone to business consultants rather than to academics. This contrasts with the experience in the USA, where the academic community seems to be utilised much more by government.
Contracting out is by no means a bad thing. It has been happening progressively for decades. But it does seem to be changing in character more recently. The focus seems to be broadening from provision of inputs to policy-making, to preparation of outputs—the whole package. This gained public prominence last year through media reports of the Boston Consulting Group working up an 'early childhood policy' and developing a business plan for the 'global institute' for carbon sequestration. Also, KPMG seems to have become active in the infrastructure policy area.
There are clear benefits to government from using professional consultants: new ideas, talented people, on-time delivery, attractive presentation and, possibly, cost—although some of the payments have been surprisingly large. But there are also some significant risks. Consultants often cut corners. Their reports can be superficial, and more fundamentally, they are typically less accountable than public service advisers for the policy outcomes.
Whether academics could be drawn on more is a key issue. In an earlier era, the involvement of academics was instrumental in developing the evidentiary and analytical momentum for the first waves of microeconomic reform. Examples from the trade and competition policy arena alone include Max Corden, Richard Snape, Fred Gruen, Peter Lloyd, Bob Gregory, Ross Garnaut, Fred Hilmer, among others. Where are the new academic generation's equivalents in support of the 'Third Wave'? Only a few names come to mind of academics making a notable public contribution to policies bearing on human capital development.
Such involvement is of course a two-way street—with both demand and supply sides. The supply side seems to have been diminished over time, partly as promising academic researchers have sought more attractive remuneration elsewhere and partly as their time has been increasingly consumed by their 'day jobs'. On the demand side, one sometimes hears senior public servants complain that academics can be very hard 'to do business with' or that they are too slow, or lack an appreciation of the 'real world'.
There may be some validity in these perceptions, though I suspect that they may also reflect an unrealistic view of how much time is needed to do good research; and perhaps a lack of planning. Perhaps also a desire for greater 'predictability' in the results than many academics would be willing to countenance. As Brian Head from Queensland University has observed: 'Relatively few research and consulting projects are commissioned without some expectation that the reports may assist in upholding a certain viewpoint.' As I recall it, Sir Humphrey Appleby's maxim—akin to Rumpole's first law of cross-examination—is that 'one should never commission a study without knowing what the answer will be.'
Independence can be crucial
Evidence is never absolute; never 'revealed truth'. The choice of methodologies, data, assumptions, etc. can all influence the outcome, and they do. Anyone who did first year stats at university probably read Darrell Huff's book How to Lie with Statistics, which was an early indication of the potential and the problems.
Given the unavoidable need for judgement in evaluation, evidence is more likely to be robust, and to be seen as such, if it is not subjected to influence or barrow-pushing by those involved. Good research is not just about skilled people; it is also about whether they face incentives to deliver a robust product in the public interest.
Some years ago, following a talk that I gave at a gambling conference in Melbourne, an American academic came up to me and said that the Commission's report was being used extensively in public debate in the States. I expressed surprise, given the extent of homegrown research there. She said 'yes, but we don't know what to believe'. That appears to be because research is polarised in that country between that sponsored by community and church groups and that sponsored by the industry. And there is suspicion that 'he who pays the piper, calls the tune'.
Independence is even more important when dealing with technical research than with opinions. People are better able to judge opinions for themselves, but the average person is naturally mystified by technical research. They look for proxies to help them know whether the results of such research are believable. The status of the researcher (or who is paying for the research) is one such proxy.
Economic modelling is replete with these sorts of issues. Any model comprises many assumptions and judgements which can significantly influence the results. For example, the Productivity Commission and industry consultants used similar models recently to estimate the economic impacts of reducing tariffs on cars. The Commission found that there would be significant economy-wide gains from maintaining scheduled tariff reductions. The other modellers, using different and some less conventional assumptions, projected net losses—with the current tariff rate coincidentally turning out to be 'optimal'.
In modelling the potential gains to Australia from a mooted Free Trade Agreement with the USA, the Centre for International Economics, in work commissioned by DFAT, obtained a significant positive result, whereas separate work by ACIL Tasman projected negligible gains at best. More recently, modelling of the Mandatory Renewable Energy Target (MRET) in conjunction with an emissions trading scheme, either found it to impose substantial additional costs on the economy or to yield substantial benefits, depending on the modeller and the sponsor. COAG's final decision to implement a 20% target nationally essentially favoured the latter estimates. However, Commission researchers found the sources of gains in that modelling difficult to justify.
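The underlying point can be shown with a deliberately stylised toy calculation (the numbers and functional form below are invented purely for illustration and bear no relation to any actual model): the sign of a modelled 'net gain' can hinge on a single assumed parameter, which is why assumptions need to be transparent and contestable.

```python
# Toy illustration only: invented numbers, not any actual model.
# A modelled "net gain" from a policy change can flip sign depending on one
# assumed parameter, here a demand elasticity.

def net_gain_from_tariff_cut(tariff_cut_pct, demand_elasticity, adjustment_cost):
    """Stylised net gain (arbitrary $m units): an allocative-efficiency gain
    that grows with the assumed responsiveness of demand, less an assumed
    one-off adjustment cost."""
    allocative_gain = 0.5 * demand_elasticity * tariff_cut_pct ** 2
    return allocative_gain - adjustment_cost

for elasticity in (0.5, 1.0, 2.0):
    gain = net_gain_from_tariff_cut(tariff_cut_pct=5.0,
                                    demand_elasticity=elasticity,
                                    adjustment_cost=15.0)
    print(f"assumed elasticity {elasticity:.1f} -> modelled net gain {gain:+.1f}")
```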
A 'receptive' policy-making environment is fundamental
We come to the final and most important ingredient on my list. Even the best evidence is of little value if it's ignored or not available when it is needed. An evidence-based approach requires a policy-making process that is receptive to evidence; a process that begins with a question rather than an answer, and that has institutions to support such inquiry.
As has been found through the work of the Office of Regulation Review, and now the Office of Best Practice Regulation, often we see the reverse, especially for more significant proposals. The joke about 'policy-based evidence' has not been made in abstract—we have long observed such an approach in operation through the lens of regulation-making in Australia.
Ideally we need systems that are open to evidence at each stage of the policy development 'cycle': from the outset when an issue or problem is identified for policy attention to the development of the most appropriate response, and subsequent evaluation of its effectiveness.
The ongoing struggle to achieve effective use of regulation assessment processes within governments tells us how challenging that can be to implement. These arrangements require that significant regulatory proposals undergo a sequence of analytical steps designed firstly to clarify the nature of the policy problem and why government action is called for, and then to assess the relative merits of different options to demonstrate that the proposed regulation is likely to yield the highest (net) benefits to the community. These steps simply amount to what is widely accepted as 'good process.' That their documentation in a Regulation Impact Statement has proven so difficult to achieve, at least to a satisfactory standard, is best explained by a reluctance or inability to follow good process in the first place.
I admit that an evidence-based approach undoubtedly makes life harder for policy-makers and for politicians. Lord Keynes, who seems to be well and truly back in vogue, said in the 1930s:
There is nothing a Government hates more than to be well-informed; for it makes the process of arriving at decisions much more complicated and difficult.
I think we can see what he meant. But, against this, are the undoubted political benefits that come from avoiding policy failures or unintended 'collateral damage' that can rebound on a Government, and from enhancing the credibility of reformist initiatives.
Some implications for the public service
I will conclude now with some observations about how those of us in the public service can help advance the cause of evidence-based policy-making.
We begin with the considerable advantage of explicit endorsement by the Prime Minister and senior ministers for an evidence-based approach to public policy. In his speech to the agency heads and the SES last year, Kevin Rudd declared, 'we cannot afford a public service culture where all you do is tell the Government what it wants to hear'. We've also heard from the head of the public service, Terry Moran, that 'for civil servants, a capacity to analyse problems rationally and empirically and to advance options for action by Governments is a basic ethical duty'.
What both are talking about, in old parlance, is 'frank and fearless' advice: robust advice that does not second-guess the politics or the politicians. So the first suggestion I have for advancing evidence-based policy-making is for us to be frank and fearless. That doesn't mean being a loose cannon, or acting contrary to a Government's broad objectives, but using the opportunity of such political support to strengthen the public service's capacity to provide evidence-based advice—and delivering that advice, even if it is against the current, or not confined to a Minister's or a Government's favoured positions, which often aren't set in concrete anyway.
Making better use of existing processes
There exist currently vehicles and frameworks within government that can be used more effectively to this end. Indeed, the recently upgraded regulation assessment requirements are ready-made for that purpose. These are based on a best practice 'policy cycle', with explicit provision for evidence to be utilised at each step. With the recent introduction of higher analytical hurdles, including greater quantification of benefits and costs, in conjunction with stronger sanctions for inadequate compliance, the line of least resistance for the bureaucracy should be moving in favour of an evidence-based approach. The extent to which this is happening remains unclear. Relatively high overall compliance rates under the new system, as recorded by the Office of Best Practice Regulation in its annual report, appear promising; though, as in the past, the compliance record is worse for the more significant regulatory proposals.
In relation to spending programmes, there is also likely to be scope to enhance some of the requirements, particularly to strengthen ex ante evaluation, and to make explicit provision for ex post review. This may be assisted by the financial crisis. For example, the Finance and Deregulation Minister, the Hon. Lindsay Tanner observed recently:
Every government dollar wasted on a poor programme is a dollar that a working person doesn't have to spend on groceries, health care and education. It is also a dollar that the Government does not have available to spend on its policy priorities.
A heightened sense of the tradeoffs has become apparent in some of the advocacy for publicly-funded paid parental leave, with questions being raised about the relative payoff from expenditure on industry support programmes.
Integral to advancing an evidence-based approach are the processes and institutions within government that enable different perspectives and information to be brought to bear at the 'pointy end' of a policy decision. These are well-described by Meredith Edwards in her book, Social Policy, Public Policy (2001). Crucial elements are the proper functioning of interdepartmental committees, the cabinet submission process (with 'coordination comments' on policy proposals by all relevant agencies) and, ultimately, well-informed discussions within Cabinet itself.
Effective COAG arrangements
At the COAG level, we have a new working group structure, which is well-placed to advance an evidence-based approach to public policy, given sufficient space and lead-time. That said, these arrangements in themselves represent an experiment. Their novel design, in which state government department CEOs essentially report to Commonwealth Ministers, faces obvious challenges.
More problematic, in my view, are the time constraints imposed on COAG processes under the punishing dictates of the quarterly cycle of meetings. The seeming imperative for public servants around the country to be constantly preparing for these meetings appears to be displacing some of the work that should be done to inform decisions. After all, those meetings are at the highest level and decisions need to be made to justify them. While the frequency of meetings may have initially been good for pressure and momentum, if maintained, it could prove self-defeating for evidence-based policy-making. It is to be hoped that over time we might see a return to a more measured approach, which retains, or even strengthens, the new framework of working groups behind the scenes, but involves more time between meetings of COAG itself and thus for the gathering of the evidence that decisions require.
Building greater institutional capacity
Building capacity—or rebuilding it—is also very important. But it can't happen overnight. For one thing, we need to be recruiting into the public service more graduates in the social and economic sciences. The UK saw a doubling in the number of researchers in the civil service in one decade under the Blair Government. There is some irony in the fact that many of those at the top of the bureaucracy began their careers in the public service as research officers and probably remain more highly skilled analysts today than many of their current subordinates.
Any agency that is serious about encouraging an evidence-based approach needs to develop a 'research culture'. Establishing dedicated evaluation units, achieving a critical mass of researchers, strengthening links with academic and other research bodies, are all integral to this.
There is also the broader question of institution-building to underpin better evaluation generally across government. Some initiatives have developed out of the foreign aid programmes and literature that may be instructive. These include evaluation clubs or forums that promote cross-fertilisation, peer support and learning about what works—both in relation to methodologies and policy approaches themselves. We could think of developing comparable institutions as centres of excellence to foster greater inter-jurisdictional learning in Australia—a kind of Cochrane Collaboration4 in the policy arena. Government and COAG sponsorship for such institution-building is worth considering. Indeed, it could be contemplated as a useful extension to the role of ANZSOG, given its 'ownership' by all governments in Australia and New Zealand.
Better use of external contracting
When it comes to the (inevitable) use of external contractors, I think we need to give far more attention to defining the task, and to identifying how contractors can best help us to make good public policy. Choosing the contractor—getting the right consultant for the task—is obviously fundamental. I would suggest that in many cases, it is better to go directly to the experts rather than to the big jack-of-all-trades management consulting firms that may be willing to tackle anything, but have an indifferent performance record in policy-related work (to say the least). Such firms often rely on sub-contractors anyway, so why not go direct to those with a reputation in the field?
Part of the challenge, if consultants are to become contributors to a truly evidence-based approach, is to limit their tendency to 'second-guessing', which can compound public servants' own tendencies in this direction. This may be less of an issue for academics, who typically do not rely on such sources of income, than for business consultants, who do. An evidence-based approach ideally requires contractual arrangements that create neutral incentives for the researcher to make robust findings—for example, by making it clear that his or her work will be peer reviewed.
More generally, monitoring and reviewing the quality of such external work is crucial and, again, academic specialists would seem particularly well-placed to assist with that, as well as helping agencies choose which consultant to use in the first place.
Peer review can also be very worthwhile for the research that is done within Government, but this is not common practice. It is especially valuable where political sensitivities require secrecy during the policy development phase, but where there may be significant downside risks for the community from getting it wrong.
Resourcing evaluations properly
We need to ensure that all government programmes are designed and funded with future evaluation and review in mind. That includes identifying data needs, especially baseline data, and making explicit budgetary provision for them. We should be pushing harder for more and better data generally, particularly in the social and environmental areas. Instead of being seen as an extra or a luxury, data for policy evaluation needs to be recognised as a necessity—and a funding priority right now if we are serious about developing an evidence-based approach.
As already emphasised, we also need to build in more time, where it is needed, to come up with robust evidence that is adequately tested. In a crisis situation such as the present, time is of the essence, of course, and some decisions need to be made quickly. That is inevitable. But it is important that we lay the groundwork now to evaluate the consequences of those measures later, so that the inevitable problems can be detected and timely adjustments made.
In the current context, this is particularly important for spending initiatives motivated by short-term demand management objectives, which could have an ongoing impact, or create a sense of entitlement and political pressure for their retention. For example, increased assistance to an industry—by strengthening its ability to withstand foreign competitors in a recessionary market—may initially help to shore up that industry's workforce. But this selective support will tend to weaken job retention in other local industries and, if sustained, inhibit overall job creation and productivity growth in the longer term.
In conclusion, the goal of evidence-based policy-making is unquestionably important, and it is encouraging that it has received vocal support at the highest political levels. However, measured against the various ingredients for an effective approach, it seems clear that current practice continues to fall short. Addressing this is now largely up to the public service. Not only is there a need to improve the capacity of the public service to deliver evidence-based policy advice, there is a need for it to improve political understanding of what that entails. If we fail, it won't just vindicate the public's cynicism about The Hollow Men syndrome; it will compromise government's capacity to implement the beneficial national reforms that this country needs for the long term.
Albanese, A. (Minister for Infrastructure, Transport, Regional Development and Local Government) 2008, Address to Australian Davos Connection Infrastructure Summit 21, Brisbane, 7 October.
Banks, G. 2002, 'Inter-State Bidding Wars: Calling a Truce', Speech to the Committee for Economic Development of Australia, Brisbane, 6 November.
- 2005, 'Regulation-Making in Australia: Is It Broke? How Do We Fix It?', Public Lecture Series, the Australian Centre of Regulatory Economics (ACORE) and the Faculty of Economics and Commerce, ANU, Canberra, 7 July.
- 2006, 'Tackling the Underlying Causes of Overregulation: An Update', presentation to the Conference, Australian Regulatory Reform Evolution, Canberra, 24–25 October.
- 2007a, 'Overcoming Indigenous Disadvantage in Australia', Presentation to the second OECD World Forum on 'Statistics, Knowledge and Policy: Measuring and Fostering the Progress of Societies', Istanbul, Turkey, 29 June.
- 2007b, 'Public Inquiries in Policy Formulation: Australia's Productivity Commission', Address to an International Workshop, China-Australia Governance Programme, Beijing, 3 September.
- 2008a, 'Riding the Third Wave: Some Challenges in National Reform', Productivity Commission, Melbourne, March.
- 2008b, 'Industry Policy for a Productive Australia', Colin Clark Memorial Lecture, Brisbane, 6 August.
- and Carmichael, W. B. 2007, 'Domestic Transparency in Australia's Economic and Trade Reforms: The Role of the Commission', Paper for the Lowy Institute and Tasman Transparency Group Conference, Enhancing Transparency in the Multilateral Trading System, Sydney, 4 July.
Blair, T. and Cunningham, J. 1999, Modernising Government, Prime Minister and Minister for the Cabinet Office, London.
Boruch, R., De Moya, D. and Snyder, B. 2002, 'The Importance of Randomised Field Trials in Education and Related Areas', in Mosteller, F. and Boruch, R. (eds.), Evidence Matters: Randomised Trials in Education Research, Brookings Institution Press, Washington, D.C.
Bridgman, P. and Davis, G. 2000, Australian Policy Handbook, Allen and Unwin, Sydney.
BTE (Bureau of Transport Economics) 1999, Facts and Furphies in Benefit-Cost Analysis: Transport, Report 100, Canberra.
Bullock, H., Mountford, J. and Stanley, R. 2001, Better Policy-Making, Centre for Management and Policy Studies, UK Cabinet Office, London.
Carmody, G. 2008, 'User Pays Key to Climate', The Australian, 29 August.
Cochrane Collaboration, <http://www.cochrane.org>
CRA International 2007, Implications of a 20 Per Cent Renewable Energy Target for Electricity Generation, Report for Australian Petroleum Production and Exploration Association Limited, Canberra.
Deegan, M. 2008, Presentation given at the Industry Leaders' Luncheon, Sydney, 9 October, <http://www.infrastructureaustralia.gov.au>
Department of Climate Change 2008, Carbon Pollution Reduction Scheme: Australia's Low Pollution Future, White Paper, Canberra, December.
Donohue, J. 2001, The Search for Truth: In Appreciation of James J. Heckman, Working Paper No. 220, Stanford Law School, July.
Edwards, M. 2001, Social Policy, Public Policy: From Problems to Practice, Allen and Unwin, Sydney.
Farrelly, R. 2008, 'Policy on Trial', Policy, Vol. 24, No. 3, pp. 7–12, Centre for Independent Studies.
Garnaut, R. 2008, The Garnaut Climate Change Review: Final Report, Cambridge University Press, Melbourne.
Gillard, J. (Deputy Prime Minister) 2008, Leading Transformational Change in Schools, Address to the Leading Transformational Change in Schools forum, Melbourne, 24 November.
Head, B.W. 2008, 'Three Lenses of Evidence-Based Policy', Australian Journal of Public Administration, Vol. 67, No. 1, pp. 1–11.
Heckman, J. and Masterov, D. 2007, 'The Productivity Argument for Investing in Young Children', Paper presented at the Allied Social Sciences Association Annual Meeting, Chicago, 5–7 January.
IC (Industry Commission) 1997, Private Health Insurance, Report No. 57, AGPS, Canberra.
Infrastructure Australia 2008, A Report to the Council of Australian Governments, Sydney, December.
Leigh, A. and Ryan, C. 2008, How Has School Productivity Changed in Australia?, Australian National University, Canberra.
Leigh, A. and Thompson, H. 2008, How Much of the Variation in Literacy and Numeracy can be Explained by School Performance?, Treasury Economic Roundup, Issue No. 3, pp. 63–78.
McKibbin, W. 2009, 'Five Problems to Fix in a Flawed Policy', Australian Financial Review, 15 January.
MMA (McLennan Magasanik Associates) 2007, Increasing Australia's Low Emission Electricity Generation—An Analysis of Emissions Trading and a Complementary Measure, Report to Renewable Energy Generators of Australia, Melbourne.
Moggridge, D. and Johnson, E. (eds.) 1982, The Collected Writings of John Maynard Keynes. Vol. 21. Activities 1929–1939; World Crisis and Policies in Britain and America, Macmillan, London.
Moran, T. 2007, 2007 Leadership Lecture, Leadership Victoria, Melbourne, 7 June.
Mulgan, G. 2003, 'Government, Knowledge and the Business of Policy-Making', Canberra Bulletin of Public Administration, No. 108, pp. 1–5.
Nutley, S. 2003, 'Bridging the Policy/Research Divide: Reflections and Lessons from the UK', Canberra Bulletin of Public Administration, No. 108, pp. 19–28.
Nutley, S., Walter, I. and Davies, H. 2008, 'Past, Present, and Possible Futures of Evidence-Based Policy', in Argyrous, G. (ed), Evidence for Policy and Decision-Making: A Practical Guide, UNSW Press, Sydney, pp. 1–44.
Office of Best Practice Regulation 2008, Best Practice Regulation Report 2007–08, Department of Finance and Deregulation, Canberra.
PC (Productivity Commission) 1998, Battery Eggs Sale and Production in the ACT, Research Report, Canberra.
- 1999, Australia's Gambling Industries, Report No. 10, Canberra.
- 2003, From Industry Assistance to Productivity: 30 years of 'the Commission', Melbourne.
- 2005a, Economic Implications of an Ageing Australia, Research Report, Canberra.
- 2005b, Review of National Competition Policy Reforms, Report No. 33, Canberra.
- 2006a, Potential Benefits of the National Reform Agenda, Report to the Council of Australian Governments, Canberra.
- 2006b, Road and Rail Freight Infrastructure Pricing, Report No. 41, Canberra.
- 2006c, Waste Management, Report No. 38, Canberra.
- 2007a, Productivity Commission Submission to the Prime Ministerial Task Group on Emissions Trading, March.
- 2007b, Public Support for Science and Innovation, Research Report, Canberra.
- 2008a, Annual Report 2007-08, Annual Report Series, Canberra.
- 2008b, Paid Parental Leave: Support for Parents with Newborn Children, Draft Inquiry Report, Canberra.
- 2008c, Trade & Assistance Review 2006-07, Annual Report Series, Canberra.
- 2008d, What Role for Policies to Supplement an Emissions Trading Scheme?, Productivity Commission Submission to the Garnaut Climate Change Review, May.
Regulation Taskforce 2006, Rethinking Regulation: Report of the Taskforce on Reducing Regulatory Burdens on Business, Report to the Prime Minister and the Treasurer, Canberra, January.
Rudd, K. (Prime Minister) 2008, Address to Heads of Agencies and Members of the Senior Executive Service, Great Hall, Parliament House, Canberra, 30 April.
SCRGSP (Steering Committee for the Review of Government Service Provision) 2007, Overcoming Indigenous Disadvantage: Key Indicators 2007, Productivity Commission, Canberra.
- 2009, Report on Government Services 2009, Productivity Commission, Canberra.
Tanner, L. (Minister for Finance and Deregulation) 2008, quoted in Franklin, M. 'Out, Out Damned Waste', The Australian, 20 September.
Wells, P. 2007, 'New Labour and Evidence-Based Policy-Making: 1997–2007', People, Place and Policy Online, Vol. 1, pp. 22–29.
Wise, S., Silva, L., Webster, E. and Sanson, A. 2005, The Efficacy of Early Childhood Interventions, Australian Institute of Family Studies, Research Report No. 14, Canberra, July.
Appendix: Evidence-based policy-making in the United Kingdom
As Gary Banks remarks in his paper, the renewed emphasis on evidence-based policy-making in Australia has been influenced to some extent by the ideas of the 'New Labour' administration in the United Kingdom. This brief overview of recent developments in the UK Civil Service is included to provide a broader context for the debate in Australia, and a starting point for those interested in further exploring the lessons from the UK experience.
While there is a long tradition of evidence-based and evidence-informed policy within the UK, in recent times the concept has been most closely associated with the initiatives of the Blair and Brown Governments.
This began with the Modernising Government agenda, set out in the 1999 White Paper, which recognised the need for policy-making to be more responsive to citizens' demands, forward looking, evidence-based, properly evaluated and based on 'best practice'. It also called for 'higher quality evidence' to be used in policy-making and for policy to be more 'joined-up' across government departments and agencies. The need for policy-making to be more 'information aged' was also recognised.
At about the same time, and building on the thinking in the Modernising Government White Paper, the Cabinet Office published Professional Policy-Making for the Twenty-First Century, which included a 'Descriptive Model of Professional Policy-Making'.5 This was intended to guide the policy-making process, not to evaluate policies that were the outcome of the process or to be prescriptive about the type of management structures used in policy-making.
The model lists nine features of good policy-making and describes the core competencies related to each (summarised below):
- Forward looking: takes a long-term view, based on statistical trends and informed predictions, of the likely impact of policy
- Outward looking: takes account of factors in the national and international situation and communicates policy effectively
- Innovative and creative: questions established ways of dealing with things and encourages new ideas; open to comments and suggestions of others
- Using evidence: uses best available evidence from a wide range of sources and involves key stakeholders at an early stage
- Inclusive: takes account of the impact on the needs of all those directly or indirectly affected by a policy
- Joined-up: looks beyond institutional boundaries to the Government's strategic objectives; establishes the ethical and legal base for policy
- Evaluates: builds systematic evaluation of early outcomes into the policy process
- Reviews: keeps established policy under review to ensure it continues to deal with the problems it was designed to tackle, taking account of associated effects elsewhere
- Learns lessons: learns from experience of what works and what doesn't
Throughout Professional Policy-Making for the Twenty-First Century, there is a strong emphasis that policy-making should be based on evidence of what works and that the civil service must improve departments' capacity to make best use of evidence. To enable this to happen, the report called on departments to 'improve the accessibility of the evidence available to policy makers'.
This paper was in turn followed up by a review of policy analysis and modelling in government entitled Adding It Up, which defined analysis and modelling as 'the examination and interpretation of data and other information, both qualitative and quantitative, to provide insights to improve the formulation of policy and the delivery of services'. The report identified a number of weaknesses in the capacity of the civil service to provide sound and robust analysis to support policy-making, as well as a lack of joined-up and cross-cutting policy-making and analysis. It called for 'a fundamental change in the culture of policy-making' that would involve 'good analysis', 'better planning to match policy needs and analytical provision' and the 'spreading of best practice across departments and professions'. There was also a call for better leadership from Ministers and senior officials, training for policy makers in analysis, and more openness from both analysts and policy makers.
This was followed by the Professional Skills for Government initiative, developed to ensure the civil service has the right mix of skills and expertise to enable departments or agencies to deliver effective services. Within this framework, under the core skill of 'analysis and use of evidence', policy makers are expected to:
- anticipate and secure appropriate evidence
- test for deliverability of policy/practice, and evaluate
- use evidence to challenge decision-making
- identify ways to improve policy/practice
- champion a variety of tools to collect and/or use evidence
- ensure use of evidence is consistent with wider government requirements, and
- work in partnership with a wide range of experts and/or analysts.
In 2007, the Government Social Research Unit conducted a study to investigate what policy-makers in the UK understand by evidence-based policy-making; how they go about using research and analysis; and how evidence-based policy is working in practice. The study also attempted to gauge the extent to which the use of robust research evidence is embedded within day-to-day policy-making, and to understand what the remaining challenges are to the effective use of research in government decision-making.
In its report, Analysis for Policy: Evidence-Based Policy in Practice,6 the study found that while Government initiatives had made a difference to the thinking of policy-makers, there remained mixed views on the practicality and utility of evidence-based policy, and further work was necessary on why and how evidence can improve policy-making.
1 See Appendix for a brief overview of developments around evidence-based policy-making in the UK.
2 Co-founder of Access Economics.
3 Professor of International Economics at the Australian National University and a member of the Board of the Reserve Bank of Australia.
4 An international not-for-profit and independent organisation, dedicated to making up-to-date, accurate information about the effects of healthcare readily available worldwide.
5 UK Cabinet Office 1999, Professional Policy-Making for the Twenty-First Century, <http://www.civilservant.org.uk/profpolicymaking.pdf>
6 HM Treasury, Government Social Research Unit, 2007, Analysis for Policy: Evidence-Based Policy in Practice, <http://www.gsr.gov.uk>