Measuring Return On Investment In Project Management

Many organisations invest significant time, cost, and management effort in project management training. Project management is about a team delivering clearly defined objectives to achieve a business benefit. How can we apply the same principles to project management training programmes? How can we measure the benefits of project management training? How do we assess the acquisition of new knowledge and skills, and the application of these in the workplace? How do we measure the business benefits that flow from these new approaches to project management? Can we produce a realistic but prudent assessment of return on investment for a project management training programme? This paper looks at current thinking in training evaluation and how it can be applied to project management development programmes.

Overview of Training Evaluation

‘The first important contributions and still the most influential’ framework for training evaluation (Bramley, 1996) was proposed in 1959 by Donald Kirkpatrick in “Techniques for Evaluating Training Programs” (Kirkpatrick, 1996). He identified four levels against which objectives should be set and evaluated. These are:
Level 1: The reaction of the trainees to the programme
Level 2: The amount of learning of principles, facts, skills and attitudes
Level 3: Changes in behaviour in the job
Level 4: Results, or changes in organisational effectiveness.

More recently, Jack Phillips (2003) proposed return on investment (ROI) as the fifth level in the Kirkpatrick framework. Measurement of ROI is important for project management training programmes because they must demonstrate a return on a significant investment in training. Other models exist, such as CIRO (Warr, Bird and Rackham, 1974) and Peter Bramley’s evaluation against effectiveness, behaviour, knowledge, skills and attitudes, but neither has as clear a link to the measurement of business impact: CIRO focuses on the training event itself, while Bramley’s model lacks the structure and the link to return on investment. Hence, for this paper we will use Kirkpatrick’s four levels, as extended by Jack Phillips to include ROI, and apply them to project management training programmes.

Measuring ROI for Training Programmes

To measure the ROI from the programme we need to follow the steps shown in figure 1. In summary, the first step is to develop evaluation plans and baseline data, including plans for data collection at each of the lower four levels in the model (reaction, learning, application and business impact). Next, we must isolate the effects of the programme from other changes taking place in the business. Once these have been isolated, we can convert the business improvements into a financial return and, together with a record of the full cost of training (both direct and indirect), calculate an ROI. Benefits that do not deliver a measurable financial return are reported as intangible benefits. In the next section, we identify specifically how to measure ROI for a programme.
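The calculation at the final step is conventionally expressed as net programme benefits divided by fully loaded programme costs. A minimal sketch in Python, with invented figures for illustration:

```python
def roi_percent(total_benefits: float, total_costs: float) -> float:
    """ROI as a percentage: net benefits over fully loaded costs."""
    net_benefits = total_benefits - total_costs
    return (net_benefits / total_costs) * 100

# Hypothetical figures: 180,000 of isolated, monetised benefit
# against 100,000 of fully loaded training costs.
print(roi_percent(180_000, 100_000))  # 80.0, i.e. an 80% ROI
```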

Application of ROI Model to BNFL ES Programme

Stage 1 Develop Evaluation Plans and Baseline Data

A number of data collection methods are available at the different levels (see Table 1). For example, tests are limited to the evaluation of learning (level 2), while focus groups capture data on application and business impact (levels 3 and 4). The first step is to decide which methods of data collection are appropriate to the organisation and to tabulate these decisions in a data collection plan, as shown in Table 2. This defines the objectives, measures, data collection method, timing and responsibility for the data collection.

Data Collection Method          | L1 Reaction | L2 Knowledge | L3 Application | L4 Business Impact
Questionnaires / Surveys        | ✓           | ✓            | ✓              | ✓
Tests                           |             | ✓            |                |
Interviews                      |             |              | ✓              |
Focus groups                    |             |              | ✓              | ✓
Using current business measures |             |              | ✓              | ✓
New measures                    |             |              | ✓              | ✓
Action planning follow-up       |             |              | ✓              | ✓
Follow-up projects              |             |              | ✓              | ✓
Performance contracts           |             |              |                | ✓

Table 1 Data Collection Methods used at each Level

Table 2 Sample Data Collection Plan for Project Management Training Programmes

Level 1: Reaction / Satisfaction
Objective: Positive participant response for all sessions.
Measures/Data: Average feedback rating of at least 4 (out of 6) for accommodation, course administration, briefing, course material, delivery and outcomes.
Data Collection Method: Analysis of feedback forms.
Timing: After each event.

Level 2: Learning
Objective: Measurable increase in knowledge within the project management community.
Measures/Data: At least 80% pass the professional exams; workplace assignments applying the principles of project management; ongoing competence or knowledge assessment (e.g. Knasto) shows a significant improvement in knowledge.
Data Collection Methods: Data supplied by professional bodies; collation of marks and feedback from workplace assignments; re-run of the knowledge/competence assessment.
Timing: 5-6 weeks after tests; 3-6 months after training.

Level 3: Application / Implementation
Objectives: Projects apply the PM process (i.e. best-practice project management); projects use effective project management processes; knowledge and experience are shared across sites.
Measures/Data: Review and follow-up of action plans to capture specific application of new behaviours; focus groups at each location to capture and record evidence of implementation of trained practice; project review/audit demonstrates widespread use of the PM process.
Data Collection Methods: Follow-up on action plans by telephone; focus groups at each location; project reviews and audits.
Timing: 3-4 weeks after each event; 3-4 months after initial training; ongoing.

Level 4: Business Results
Measures/Data: Business-specific measures such as completion of project milestones to plan, completing the full scope of work, and accelerating work through change control.
Data Collection Methods: Project control and reporting systems; internal reporting systems.
Timing: 6-9 months after the start of the programme.

Level 5: ROI
Target: e.g. 50% to 100% within x years.

Level 1 Reaction

Evaluation of the programme at level 1 uses a questionnaire completed immediately at the end of each course. Typically the questionnaire covers briefing, course material (including content), delivery, outcomes, administration and further training requirements. To ensure 100% of the forms are returned, they are collected at the end of every session. A target is usually set of achieving an average rating of at least Good (4 out of 6) across all of these areas.
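As an illustration, here is a short sketch of how the level 1 target might be checked, assuming the feedback forms are captured as per-area scores on the 6-point scale (the area names follow the questionnaire described above; all scores are invented):

```python
# Hypothetical feedback scores (1-6 scale) from forms for one course.
feedback = {
    "briefing": [5, 4, 5, 6],
    "course material": [4, 4, 5, 5],
    "delivery": [6, 5, 5, 6],
    "outcomes": [4, 5, 4, 4],
    "administration": [5, 5, 6, 5],
}

TARGET = 4.0  # "Good" on the 6-point scale

for area, scores in feedback.items():
    average = sum(scores) / len(scores)
    status = "meets target" if average >= TARGET else "BELOW TARGET"
    print(f"{area}: {average:.2f} ({status})")
```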

Level 2 Learning

Three mechanisms are proposed to measure the effectiveness of learning. First, the programme includes a number of formal external exams and assessments; second, the programme design was based on a knowledge-based training needs assessment / competence assessment by line managers; finally, workplace assignments can assess the extent to which the knowledge can be applied in the workplace.
The first approach uses the progressively graded examinations and assessments administered by the Association for Project Management (APM), as shown in figure 1. These summative, criterion-referenced tests evaluate whether participants have acquired knowledge at the levels defined by the APM.
The second approach is to repeat, after say 12 months, the initial knowledge-based Training Needs Analysis using the same assessment tools. These norm-referenced tests evaluate the extent to which the knowledge of the participants has improved.

The final approach is to use workplace assignments. These can be designed so that participants have to demonstrate that they are able to apply the knowledge gained in a workplace setting. They also support the application of that knowledge, which is evaluated at the next level.
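A brief sketch of the two quantitative checks at this level: the 80% pass-rate target for the professional exams, and the improvement between the initial and repeated knowledge assessments (all scores are invented for illustration):

```python
# Hypothetical exam results: True = pass, False = fail.
exam_results = [True, True, False, True, True, True, True, True, True, False]
pass_rate = sum(exam_results) / len(exam_results) * 100
print(f"Pass rate: {pass_rate:.0f}% (target: 80%)")

# Hypothetical pre/post knowledge-assessment scores for five participants.
pre_scores = [42, 55, 38, 61, 47]   # initial Training Needs Analysis
post_scores = [58, 63, 52, 70, 60]  # repeat assessment ~12 months later
mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"Mean knowledge gain: {mean_gain:.1f} points")
```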

Level 3 Application

The level of application cannot be evaluated until some time (3 to 6 months) after the training has been completed. However, three sets of measures are proposed. First, a review of action plans using telephone interviews 3 to 4 weeks after the training; during the interviews, participants should be asked for specific examples of how they have used the new knowledge from the training. The weakness of this approach is the difficulty of validating the benefits of the individual actions. Second, focus groups could be used at each site to discuss the application of knowledge gained during the training programme; these should be completed 3-4 months after the training. While the focus groups will provide significant feedback on the application of the learning, they may lack external credibility, and it may be difficult to achieve full coverage of a large population. The third measure is external audit and/or internal review to determine the extent to which the skills and processes have been embedded in the day-to-day management of projects. This approach, while expensive, has external credibility.

Level 4 Business Impact

The ultimate value of the programme is its impact on business performance. The measures at this level need to reflect the objectives of the organisation. However they would typically include:

  1. Completion of project milestones to plan
  2. Completing the full scope of work
  3. Accelerating work through change control
  4. Meeting environmental, health and safety requirements

The targets for these measures need to be agreed based on current performance and anticipated improvement. However, not all of the changes in these measures will be due to the training programme, so we must find ways to isolate its impact from the other changes taking place in the organisation.

Isolating the Effects of Training

It is very unusual for training to be the only initiative or change taking place in a business; indeed, the demand for learning is often triggered by an external change in the business environment or an internally driven change within the business. It is therefore unjustified to claim that all of the improvement (or deterioration) in business performance is due to a training programme. Phillips (2003) identifies several ways to isolate the effect of training (see Box 2). If baseline data exists, then trend analysis is the most convincing and reliable approach. It is worthwhile supplementing it with focus groups and estimates from senior managers to validate the results of the trend analysis and improve buy-in.
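As a sketch of the trend-analysis approach, assuming monthly baseline data exists for a business measure such as the percentage of milestones completed to plan: fit a trend line to the pre-programme data, project it forward, and credit to the programme only the gap between actual and projected performance (all figures are invented):

```python
# Pre-training baseline: (month, % of milestones completed to plan).
baseline = [(1, 62.0), (2, 63.5), (3, 63.0), (4, 64.5), (5, 65.0), (6, 66.0)]

# Ordinary least-squares fit of a straight line to the baseline.
n = len(baseline)
sx = sum(m for m, _ in baseline)
sy = sum(v for _, v in baseline)
sxx = sum(m * m for m, _ in baseline)
sxy = sum(m * v for m, v in baseline)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Project the pre-existing trend forward and compare with the observed value.
month, actual = 12, 74.0  # observed six months after the training
projected = intercept + slope * month
print(f"Projected: {projected:.1f}%, actual: {actual:.1f}%")
print(f"Improvement attributable to training: {actual - projected:.1f} points")
```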

Converting to Monetary Data

To calculate ROI we need to measure both the return and the full cost of the training programme. The return can often be derived from the improvement in business measures; however, Phillips has identified a number of guiding principles to ensure that the results are prudent. These are:

  1. When a higher-level evaluation is conducted, it must be supported by data collected at the lower levels (i.e. poor-quality training (level 1) is unlikely to lead to significant business improvement).
  2. When an evaluation is planned at a higher level, the previous level of evaluation does not need to be comprehensive (i.e. if current business measures are being used to monitor implementation, the need for project audits is reduced).
  3. When collecting and using data, use only credible sources; it is better to have a lower but justifiable ROI than a measure based on poor-quality data.
  4. When analysing data, always choose the most conservative among the alternatives.
  5. At least one method must be used to isolate the effects of training.
  6. If no improvement data is available for a population, assume that no improvement has occurred.
  7. Estimates of improvement should be adjusted to worst-case values to account for potential errors.
  8. Extreme data items or unsupported claims should not be used in ROI calculations.
  9. Only the first year of benefits should be used for short-term programmes.
  10. All costs should be loaded into the ROI calculation.
  11. Intangible benefits are those that are deliberately not converted into monetary values.
  12. The results of the ROI measurement should be communicated to all stakeholders.

Following Phillips’s guiding principles ensures that the reported ROI is conservative and credible with the sponsors of the programme.
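To show how several of these principles combine in practice, the hypothetical calculation below scales an improvement estimate by an isolation factor (principle 5) and a worst-case confidence adjustment (principles 4 and 7), counts only first-year benefits (principle 9), and loads all direct and indirect costs (principle 10). All figures are invented:

```python
# Hypothetical, deliberately conservative ROI calculation.
annual_benefit_estimate = 600_000   # monetised business improvement (first year only)
isolation_factor = 0.40             # share attributed to training, e.g. from trend analysis
confidence_adjustment = 0.80        # worst-case adjustment to the estimate

direct_costs = 90_000               # course fees, materials, venues
indirect_costs = 45_000             # participants' time, administration, evaluation

benefits = annual_benefit_estimate * isolation_factor * confidence_adjustment  # 192,000
costs = direct_costs + indirect_costs                                          # 135,000
roi = (benefits - costs) / costs * 100
print(f"Conservative ROI: {roi:.0f}%")  # about 42% on these figures
```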

Discussion and Conclusions

The initial, very positive level 1 reaction feedback is a critical first step towards the overall success of the programme. Equally important, if not more so, is supporting and stimulating the application of the new knowledge within projects. Monitoring and evaluation at these higher levels will be a vital part of the feedback loop that ensures the overall success of the programme. Plans for monitoring these higher levels using trend-line analysis of existing business measures, telephone interviews, focus groups and estimates from senior managers have been proposed.

Bibliography

Bramley, P., ‘Evaluating Training’, Chartered Institute of Personnel and Development, 1996, ISBN 0-85292-636-7
Phillips, J., ‘Return on Investment in Training and Performance Improvement Programs’, third edition, Butterworth-Heinemann, 2003, ISBN 0-7506-7601-9
Kirkpatrick, D. L., ‘Techniques for Evaluating Training Programs’, Training and Development, American Society for Training Directors, January 1996.
