Appendix 1d: Outcomes

Planning guide for evaluating impact

This tool is designed to provide evaluators with an overview for evaluating outcomes—the impact of the learning and development on the organisation, branch or individual. It does not seek to be comprehensive, but focuses instead on strategic action for cost-effective and useful evaluation.

Evaluation element

Outcomes

Objective

Assess outcomes achieved at individual, group and/or organisational levels. They can be positive, negative or ambiguous.

Data collection method

Refer to evidence gathering techniques and measures of learning highlighted in the Techniques table.

Possible data sources

  • Organisational metrics
  • Organisational surveys
  • Performance review data
  • 360/180 feedback data
  • Learning agreements and plans
  • Insight from stakeholders
  • Participants
  • Supervisors
  • Team members and direct reports
  • Clients and customers

Analytical approaches

  • Success case method
  • Impact study
  • Return on investment calculations
  • Lessons learned method
  • Value for money assessment

Reporting tool

  • Reports

Timing

  • Conducted after sufficient time and opportunities for the effects of learning to be observed and felt.

Stakeholders and their roles

Evaluator

  • Oversee and conduct the evaluation
  • Communicate findings to the broader organisation

Key programme sponsors

  • Provide support to the evaluation

Key line managers in areas where learning is to be applied

  • Participate in the evaluation and analysis and use findings to inform decisions on learning

Relevant management and other reference committees

  • Provide input, guidance and approvals

Learning designer

  • Incorporate findings into the design

Learners

  • Participate in the evaluation and analysis and use findings to inform their future decisions on learning.

The success case method

The success case method for evaluating impact was developed by Robert Brinkerhoff (2001). It provides information on the factors that help or hinder the application of learning in the workplace, and therefore sheds light on the nature and degree of impact that learning has on the organisation. The general process is outlined under 'A success case approach' below.

This method achieves evaluation efficiencies by adopting a purposive rather than a random sampling approach, focusing the bulk of evaluation inquiry on relatively few learners. The underlying principle is that valuable information can be obtained from learners who have been either exceptionally successful or exceptionally unsuccessful in applying their learning in their work. It is an economical way to account for the factors that shape the impact of learning.

Ways to use the success case method

To account for a variety of benefits and to demonstrate how learning interacts with other organisational variables

Effective learning and development can often result in a variety of tangible and intangible benefits, and the success case approach can be used to account for these. While many of these benefits may not be easily quantifiable, accounting for and describing them in qualitative but meaningful terms will help demonstrate the impact to which learning contributes.

It is important to note that because learning and development is one of many enabling functions within an agency, it may be more productive and useful to evaluate how learning contributes to impact in conjunction with other variables, as part of a larger system, rather than merely attempting to isolate the effects of learning. Understanding how other organisational variables interact with learning, and the impact that results, provides valuable information that will help the organisation better integrate and make better use of its interconnected systems and processes for enhanced overall impact.

To support rapid prototyping of significant learning and development

Rapid prototyping is a term used to describe an iterative approach to the development of products, services and programmes. The information that the success case method provides can be used to form and shape learning interactions relatively quickly. It can be usefully deployed in pilot programmes, given its sharp focus and relatively efficient data gathering methods.

A success case approach

  1. Understand the objectives and intended impact of learning
  2. Survey representative sample of learners
  3. Identify learners with highly successful or unsuccessful experiences in the application of learning
  4. Conduct in-depth interviews with selected learners, and/or direct reports and/or supervisors and/or clients
  5. Identify and where possible quantify the nature and degree of impact
  6. Show the impact that learning has contributed to
  7. Provide and disseminate useful descriptions of impact and relate these to the objectives of the learning interaction
  8. Provide and disseminate useful descriptions of workplace factors which help or hinder application to engender greater levels of workplace support

Adapted from Brinkerhoff, R 2001, High impact learning, Perseus, Cambridge, MA.
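
Where survey responses are captured electronically, the screening in steps 2 and 3 above can be automated. The following is a minimal sketch in Python; the record format, rating scale and shortlist size are illustrative assumptions, not part of Brinkerhoff's method.

def shortlist_success_cases(responses, n_extremes=3):
    # responses: list of (learner_id, application_score) pairs, where the
    # score (0-10 here) rates self-reported success in applying learning.
    ranked = sorted(responses, key=lambda r: r[1])
    least_successful = ranked[:n_extremes]    # interview for hindering factors
    most_successful = ranked[-n_extremes:]    # interview for success factors
    return most_successful, least_successful

# Example: screening a representative sample of ten survey returns
sample = [("L01", 9), ("L02", 2), ("L03", 6), ("L04", 8), ("L05", 1),
          ("L06", 5), ("L07", 7), ("L08", 3), ("L09", 10), ("L10", 4)]
high, low = shortlist_success_cases(sample)
print("Success case interviews:", high)
print("Non-success case interviews:", low)

The in-depth inquiry in steps 4 onwards remains a qualitative exercise; the sketch only narrows the field to the extreme cases from which the method draws its insight.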

Lessons learned method

Identifying lessons that individuals, teams and the organisation can draw about how learning may or may not have contributed to change, improvement or enhanced effectiveness can be a succinct, problem-focused way to both conduct and report on impact evaluation.

This methodology is primarily based on gathering data from multiple related sources with a view to understanding the problems associated with the learning and development. Sources could include:

  • the learners
  • supervisors
  • direct reports
  • clients and stakeholders
  • reviews of evidence that demonstrates problems, e.g. work samples, organisational metrics and reports

The focus of a lessons learned statement is usually formative, and it does not generally make overall judgements about a programme’s merit or worth. It does, however, seek to provide practical ideas and tips in a timely manner so that the organisation can ensure that shortfalls and mistakes are not repeated and that successful practices are replicable.

When to use the lessons learned approach

When the learning and development looks like it is going to fail

Conducting impact evaluations on problematic learning and development may be counterproductive. However, the opportunity to learn from it should not be wasted. Understanding what went wrong and drawing lessons from this will ensure that future learning and development is well positioned for success.

As an interim evaluation measure

Some learning and development may not result in observable impact in the short term. Culture change programmes are examples of interactions with a relatively long impact life cycle. Applying a lessons learned approach after various transition points in the life cycle can contribute to continuous improvement and provide momentum to the overall change effort.

Template lessons learned

(please adapt and adjust for your own use)

Title: e.g. Culture change programme

Sponsor: e.g. Corporate services

Date:

Introduction

describe the objectives of the learning and development and its intended impact on individuals, teams and the organisation

Lesson learned statement

  • a summary of key insights and transferable ideas and practices
  • what should be done more of
  • what should be done less of

Findings and discussion

  • describe the difference that the learning and development has made to learners, their work and organisational culture
    • what went wrong
    • what went right

Analysis

  • what are the flow-on positive, negative and neutral effects of the learning on the individual and their work

Recommended actions

  • where to from here

Originator

  • the name of the report writer

Validated by

  • the name of the project sponsor

Contact

  • for more information

Keywords

 

References

 

Returns on investment (ROI)

ROI is often viewed by the HR industry as the most sophisticated level of evaluation. ROI is essentially the calculation of the monetary value or result of training as a function of training costs. It is expressed as:

  • ROI (%) = net programme benefits ÷ programme costs × 100

Examples of programme benefits include:

  • money or time saved as a result of productivity gains
  • reduction in error rates and the cost of errors
  • reduction in time spent correcting errors

As an example:

  1. time saved from productivity gains in applying new project management skills per person = 5 days
  2. 1 day saved = $227 net savings (based on the daily direct salary of someone on $80k a year, not including on-costs)
  3. 5 days saved per person = 5 × $227 = $1,135
  4. savings from the 40 participants who have attended the project management programme: 40 × $1,135 = $45,400
  5. programme benefit for 40 people as a result of productivity gains in applying new project management skills = $45,400
  6. net programme benefits are calculated by subtracting the programme costs, in this example $10,000, from the savings of $45,400, giving $35,400.

The ROI therefore is:

  • $35,400 (net programme benefits) ÷ $10,000 (programme costs) × 100 = 354%

This means that for every $1 invested in the programme, there is a net return of $3.54 to the organisation after costs.
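
The calculation generalises readily. The short Python sketch below simply restates the worked example; roi_percent is a hypothetical helper written for illustration, and the figures are those used above.

def roi_percent(gross_benefits, programme_costs):
    # ROI (%) = net programme benefits / programme costs x 100
    net_benefits = gross_benefits - programme_costs
    return net_benefits / programme_costs * 100

gross = 40 * 5 * 227   # 40 participants x 5 days saved x $227 per day = $45,400
costs = 10_000         # programme costs in this example
print(f"ROI = {roi_percent(gross, costs):.0f}%")   # -> ROI = 354%

Substituting an agency's own benefit and cost figures gives the same headline percentage used in reporting.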

Caveats to ROI

A basic ROI calculation does, however, require that:

  • direct and indirect costs of training can be tracked
  • training results in a tangible, demonstrable outcome, e.g. savings or a reduction in error rates, which can be financially costed.

Outcomes that are less tangible, such as increased morale or an enhanced organisational climate, while very important, are difficult to quantify in monetary terms unless they can be directly linked to indicators such as absentee or voluntary separation rates. In addition, outcomes such as enhanced organisational climate or morale are often the result of a range of inter-related factors which lie outside the influence of any one training programme.

Understandably, there is considerable difficulty and cost involved in isolating these factors or variables to the degree that specific linkages between the training event and outcomes can be determined and measured financially.

Attempts at ROI should therefore focus on training programmes which impart skills that result in observable and measurable workplace improvements. These could include:

  • faster processing or response times
  • reduced errors
  • reduced wastage
  • better utilisation of infrastructure or software
  • reduced accident rates
  • less time spent correcting errors
  • less time spent on counselling poor performance
  • fewer claims for workers compensation
  • reduced grievance rates.

These improvements fall into output, time, quality and cost-related factors, which can generally be measured in financial terms.

While it is becoming increasingly fashionable for leading Australian organisations to attempt to measure the return on training investment, ROI constitutes only one part of a good evaluation effort. In some situations, as discussed above, ROI may not be an appropriate measure to use.

When to use ROI

The decision to employ ROI as a measure should be influenced by several key factors:

  • the degree to which the need can be financially costed, e.g. what, in financial terms, is the current lack of skill in this particular area costing the agency?
  • the nature of the learning objectives within the programme in meeting those needs, e.g. are learning objectives constructed around specific, measurable skills?
  • the degree to which the skill need can be addressed by specific training, e.g. a lack of leadership ability may not always be solved through leadership training
  • the opportunity available for the learner to fully apply his/her learning back at the workplace
  • the degree to which the results from the application of learning can be articulated as output, time, quality and/or cost factors.

While ROI can and does play an important role in helping make judgements regarding a programme’s worth or value, it clearly is not the most appropriate mechanism for all occasions.

Impact statements

Impact statements are qualitative and quantitative descriptions of how training or learning has made a difference to the learner’s ability on the job, the workplace and the organisation as a whole. Data for impact statements can be gathered by the evaluator through structured observations, interviews, work sample reviews, surveys and, where appropriate, performance management reports. Descriptions of specific improvements and their consequences are written up as a summary of how training has impacted the individual, group and organisation.

A structure for an impact statement could include:

  • the issue
    • summary of the problem or need and its consequences
  • the intervention
    • brief description of the intervention strategy and associated costs
  • the impact
    • summary of the skills that were acquired
    • list of examples of how the new skills are now being applied
    • summary of what has changed as a result
    • assessment of the perceived benefit, negative outcomes or ambiguous outcomes this change has brought
  • potential impact
    • indication of longer term implications and possibilities
  • recommendations
    • for investment related decisions
    • for improvement related decisions.

While impact statements are generally used at the highest level of evaluation and usually constitute one component of a comprehensive evaluation exercise, a detailed impact statement can itself serve as an evaluation model. The amount of information that can be captured and reported in a comprehensive impact statement is useful in presenting a picture of the overall benefit of a programme, especially for programmes with a more developmental or longer term focus, where other methods of evaluation would be inappropriate.

Value for money assessments

Value for money is an assessment of worth that is commonly used within Commonwealth agencies to guide procurement decisions. Value for money principles however, can also be used in making judgements about the worth or significance of a programme. There are three key factors in making value for money judgements:

1. Cost related factors including direct, indirect and lifecycle costs.

2. Assessment of the contribution which the programme makes to the advancement of organisational output and outcomes structures or to business unit goals. These may include:

  • Intangible contributions
    • Enhanced self awareness
    • Greater employability and deployability
    • Networking opportunities
    • Increased participant morale and enthusiasm
    • Increased awareness of programme initiatives by other employees
    • Enhanced team effectiveness
    • Integration of the programme with other organisational processes, e.g. performance management
    • Enhanced organisational reputation and image
    • Enhanced organisational capacity to respond to contingencies
  • Tangible contributions
    • New or enhanced work related skills and knowledge
    • Alignment of the programme with business intentions and skill needs
    • Customer or client satisfaction
    • Reduced error rates
    • Faster response times
    • Enhanced productivity
    • Better quality
    • Employee retention (where this is directly related to training/learning)

Selected contributions are then weighted to reflect organisational emphasis or priorities and scored to provide a ‘contributive score’, if required, to summarise the ‘contributive value’ of the programme.

3. Judgement on value for money as a result of programme costs versus weighted programme contributions.

Value for money analysis is typically expressed as statements rather than ratios or other mathematical figures. In summary:

  • Value for money = Total costs compared to Total contribution.
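
As a minimal sketch of the weighting and scoring step described above: the contribution names, weights and raw scores below are illustrative assumptions, which an agency would replace with its own priorities.

# Weighted 'contributive score' for a value for money assessment.
# Contribution names, weights and scores are illustrative only.
contributions = {
    # contribution: (priority weight, score out of 5)
    "New or enhanced work related skills": (0.40, 4),
    "Customer or client satisfaction":     (0.25, 3),
    "Enhanced team effectiveness":         (0.20, 4),
    "Networking opportunities":            (0.15, 2),
}

contributive_score = sum(w * s for w, s in contributions.values())
print(f"Contributive score: {contributive_score:.2f} out of 5")   # -> 3.45

# The value for money judgement then compares total costs with this
# weighted total contribution, expressed as a statement rather than a ratio.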

Tips when evaluating impact

1. Appreciate that “impact” can be complex and unexpected and is shaped by many variables.

  • Learning and development is one of many variables that shape and contribute to overall impact.
  • It may be less important to isolate the effect of learning and development as the variable in question than to account for all significant variables and to describe their inter-relationships, effects and contributions to the overall impact.
  • When measuring impact, be open to unexpected variables, their outputs and their relative significance.

2. Not all learning and development interactions can be evaluated for impact in the same way.

  • Understand the nature of the impact that will need to be evaluated and design and use evaluation techniques that fit.
  • Understand what the organisation considers and values as ‘worthwhile impact’ and measure this first, without neglecting other forms of impact which may be as important but under-appreciated.
  • Understand the evaluative perspective and relative strengths and limitations of each impact evaluation technique and apply the selected technique appropriately.
  • Appreciate that evaluation techniques such as return on investment (ROI) are oriented to framing and measuring impact from a financial perspective and will necessarily have limits to their application.
  • Focus on collecting ‘evidence’ rather than ‘proof’.

References — evaluating outcomes

Athanasou, J 2000, ‘Evaluating adult education and training’, in Foley, G (ed.), Understanding adult education and training, 2nd edn, Allen & Unwin, Sydney.

Succinct text introducing the various perspectives which frame and shape evaluation approaches. Also includes a holistic evaluation approach called “ECCOES” for Ethics, Coverage, Costs, Objectives, Effects, and Stakeholders, developed by the author.

Athanasou, J 1995, ‘Issues in the evaluation of adult education’, in Foley, G (ed.), Understanding adult education and training, Allen & Unwin, Sydney.

A handy overview of some issues and debates in adult education evaluation. The article also provides a very useful summary of Illuminative Evaluation. This is an approach that accounts for impact holistically and naturalistically.

Becker, BE, Huselid, MA & Ulrich, D 2001, The HR scorecard, HBS Press, Boston, MA.

Comprehensive resource on embedding HR processes and initiatives including learning and development into the business and strategy of the organisation. Useful chapters on cost benefit analyses and measurement.

Ellinger, AD, Ellinger, AE, Yang, B & Howton, SW 2002, ‘The relationship between the learning organisation concept and firms’ financial performance: an empirical assessment’, HRD Quarterly, vol. 13, no. 1, Spring 2002.

An academic study which demonstrates a positive association between the learning organisation concept and organisational financial performance.

Fitz-enz, J 2000, The ROI of human capital, AMACOM, New York.

Collection of useful perspectives, tools and metrics for measuring and reporting on key aspects of HR including learning and development.

Queensland Government, 2003, Value for money—better purchasing guide, www.qgm.qld.gov.au/bpguides/value/

Useful ideas on assessing services and products which can be adapted to the learning and development field.

Wick, C & Pollock, R 2004, ‘Making results visible’, T&D Magazine, ASTD, vol. 58, no. 6, June 2004.

Useful ideas on several different ways to evaluate impact and to report this in ways that are meaningful to managers and organisational culture.

Mayne, J 2004, ‘Reporting on outcomes: setting performance expectations and telling performance stories’, The Canadian Journal of Program Evaluation, vol. 19.

This article discusses how the use of results chains can assist in setting outcome expectations and reporting on outcomes achieved through the telling of performance stories.

Phillips, JJ 1997, Return on investment, Gulf, Houston, Texas.

The definitive text on return on investment and related quantification techniques.