- 1 Develop program logic and review needs
- 2 Develop the evaluation brief
- 3 Commission the evaluation project
- 4 Manage development of the evaluation design
- 5 Manage development of the evaluation workplan
- 6 Manage implementation of the workplan, including production of report(s)
- 7 Disseminate report and support use of the evaluation
The actual conduct of the program evaluation involves implementing the workplan to collect and analyse the data, and to prepare reports. The development of the workplan was discussed at Step 5.
The tasks and issues in managing this process are similar whether the evaluation is conducted internally or externally.
Using good project management techniques
You use the same processes and skills to manage a program evaluation as you would to manage other types of projects. Good project management is about communicating well and communicating early, within your own team and with any external teams.
In all evaluation projects, the internal group is responsible for managing the implementation, keeping the evaluation on track and ensuring emerging issues are dealt with in a timely way, without jeopardising the rigour or integrity of the project. Tasks for the internal team include reviewing drafts of plans and deliverables and ensuring governance groups are engaged with the evaluation. When an external provider is involved, the internal team manager's role is focused on keeping lines of communication open and ensuring milestones are met.
You may need to adapt the communication schedule developed in the project plan in response to emerging issues.
Collecting the data
Ensuring that data are collected using the agreed methods and within the specified timeframe is fundamental to executing a rigorous evaluation.
As the manager of an internal or an external team, you will need to be aware of how data collection is progressing. It is your responsibility to know about and respond to emerging issues, such as delays to fieldwork, low survey response rates or incomplete administrative datasets. If the scope or integrity of data collection is compromised, so too is the integrity of the evaluation.
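Monitoring progress against agreed targets makes emerging issues such as low response rates visible early. The sketch below is a minimal, hypothetical illustration of tracking a survey response rate against a target; the 60% target and the figures are invented assumptions, not part of this guidance.

```python
# Hypothetical sketch: monitoring a survey response rate against a target
# so that a shortfall is flagged early. The target and figures below are
# illustrative assumptions only.

def response_rate(completed: int, invited: int) -> float:
    """Proportion of invited participants who completed the survey."""
    if invited == 0:
        raise ValueError("No participants invited")
    return completed / invited

def flag_shortfall(completed: int, invited: int, target: float = 0.60) -> str:
    """Return a short status message comparing progress to the target rate."""
    rate = response_rate(completed, invited)
    if rate >= target:
        return f"on track: {rate:.0%} response rate (target {target:.0%})"
    return f"at risk: {rate:.0%} response rate, below target {target:.0%}"

print(flag_shortfall(180, 400))  # 45% of invitees have responded so far
```

A simple status check like this, run at each fieldwork milestone, gives the manager an early prompt to discuss remedial options with the team.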
Communicate early and often with your team, and ensure they brief you on problems, possible solutions, and the implications for the overall evaluation timeframe and, importantly, its rigour. In all evaluations, but particularly when working with an external team, be aware that the fieldwork team is likely to have substantial experience in data collection and will be well placed to advise you on how to proceed. You will need to be prepared to revisit the scope of data collection, if required.
It is also the program manager's responsibility to know about data quality issues. To be effective, data quality checking should occur as data is being collected. For example, check administrative datasets for completeness when they are provided to you and ask questions about spurious variables; spot-check survey responses as they come in and do random checks of any online systems to ensure they are functioning properly. Finding out about data quality issues early maximises your opportunities to respond and overcome them with minimal impact on the evaluation.
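The checks described above can be sketched in code. This is an illustrative example only: the field names, valid score range, and sample records are hypothetical assumptions, not drawn from this guidance.

```python
# Illustrative early data-quality checks on an administrative dataset,
# run as records arrive. Field names and the valid range are assumptions
# made for the example.
import random

REQUIRED_FIELDS = ["participant_id", "service_date", "outcome_score"]

def completeness_check(records):
    """Return records missing any required field (or holding a blank value)."""
    return [r for r in records
            if any(r.get(f) in (None, "") for f in REQUIRED_FIELDS)]

def range_check(records, field="outcome_score", low=0, high=100):
    """Flag spurious values falling outside the expected range."""
    return [r for r in records
            if isinstance(r.get(field), (int, float))
            and not low <= r[field] <= high]

def spot_check(records, k=2, seed=1):
    """Draw a small random sample of records for manual review."""
    rng = random.Random(seed)
    return rng.sample(records, min(k, len(records)))

records = [
    {"participant_id": "P1", "service_date": "2024-03-01", "outcome_score": 72},
    {"participant_id": "P2", "service_date": "", "outcome_score": 55},
    {"participant_id": "P3", "service_date": "2024-03-02", "outcome_score": 340},
]
print(len(completeness_check(records)))  # 1 record with a missing field
print(len(range_check(records)))         # 1 spurious outcome score
```

Running checks like these on each delivery, rather than at the end of fieldwork, is what gives the manager time to resolve problems with the data supplier.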
Analysing the data
Analysing data to summarise it and look for patterns is an important part of every evaluation.
Data cleaning and analysis will likely be done by members of your internal or external team, but it is your responsibility to ensure you understand how the data is being managed. A well-performed analysis uses methods that are right for the purpose: technically appropriate, and presented in the way that is most meaningful for the audience and the overall purpose of the project.
It is your responsibility to prompt internal analysts to conduct rigorous, meaningful analyses that will allow the evaluation questions to be answered. Similarly, if you are managing an external team, it is your responsibility to ensure the analysis is leading towards the final evaluation products.
Data visualisation is the process of representing data graphically to reveal trends and patterns that would otherwise be unclear or difficult to discern. It serves two purposes: to bring clarity during analysis, and to communicate findings to the evaluation's audiences.
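Even a very simple visualisation can surface a pattern that a raw table obscures. The sketch below renders a text bar chart of satisfaction scores by site; the site names and scores are invented for the example, and in practice a plotting library would normally be used.

```python
# Minimal sketch: a text bar chart that makes differences between sites
# immediately visible. Site names and scores are hypothetical.

def text_bar_chart(values: dict, width: int = 20) -> str:
    """Render a horizontal bar chart, scaling bars to the largest value."""
    peak = max(values.values())
    lines = []
    for label, value in values.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{label:>8} | {bar} {value}")
    return "\n".join(lines)

satisfaction_by_site = {"North": 62, "South": 48, "East": 81, "West": 45}
print(text_bar_chart(satisfaction_by_site))
```

Seen this way, the gap between the strongest and weakest sites stands out at a glance, prompting the analytic team to investigate why.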
The final stage of analysis is synthesis, where findings from all data sources are brought together and integrated to address the evaluation questions. Combining qualitative and quantitative data can improve a program evaluation by ensuring that the limitations of one type of data are balanced by the strengths of another.
It can be useful to arrange a preliminary findings presentation, where the internal or external analytic team discuss the key trends and findings. This process is particularly helpful if the team is external: it helps you know what to expect from the final report, but it also highlights errors or inconsistencies, and can help the analytic team interpret their findings from a broader policy perspective.
Preparing the report
The final deliverable of a program evaluation project is typically a written report, although sometimes it is a PowerPoint presentation, positioning paper or communiqué.
The final evaluation report, either in full or summary form, needs to be meaningful to the intended audiences. The report should be readable, straight to the point, and use a writing style that promotes understanding.
A good evaluation report contains these basic components:
- An executive summary containing a condensed version of the most important aspects of the evaluation
- A description of the program and its context relevant to the evaluation
- A summary of the evaluation's scope and focus, including the purpose and key evaluation questions
- An outline of the evaluation design and methods
- The findings of the evaluation, with responses to the key evaluation questions, supported by relevant quantitative and qualitative evidence
- Conclusions and recommendations
- Appendices with additional information as needed, such as terminology, details of who was involved in the evaluation, detailed methodology, and extended data summaries
Regardless of the format, the final deliverable should explicitly respond to the key evaluation questions and present sufficient evidence to justify its position or recommendations. It should synthesise the data collected into a balanced summary of all the evidence, noting where divergent viewpoints or edge cases exist and what they mean.
A final deliverable that is written clearly and concisely is more likely to be read, understood and used.
If you are writing the report, use plain English and data visualisation to highlight key trends and findings. Whether the final deliverable includes a summary of key findings or a list of recommendations will depend on the project context. If you have contracted the work externally, it is important to make clear your expectations about the format, style and content of deliverables.
Finally, you will need to review the deliverable for accuracy and to ensure that the terms of reference or evaluation questions have been answered. If the work is done internally, you will work with your team to prepare the final deliverable; if it is done externally, you will review the deliverable, often with input from a governance group. Many organisations and evaluation systems require evaluation reports to be formally reviewed as part of quality control before they are publicly released.
Product – Evaluation report(s)