Are you Evaluation-Ready?

26 February 2020

Evaluation specialists Penny Fitzpatrick and Donella Bellett explain MartinJenkins’ approach to assessing ‘evaluability’, and share insights to help you assess how ready you and your programme are for evaluation.

As evaluation consultants, we get to look under the hood of many different programmes, organisations and sectors. But we don’t always get to step back and share what we have learned.

A couple of years ago we were hired by a transnational organisation to develop a bespoke tool that would help them to decide what to evaluate, when to evaluate and how to evaluate within their programme portfolio. It was a dream assignment because it was one of those rare opportunities to put down on paper a process that comes naturally to us as seasoned specialists in this area.

Even better, the client agreed that we could also develop a generic version of our model that we could share more widely, to assist policy professionals who want to make the most of their evaluation spend — and so here we are.

What is ‘Evaluability’ and why does it matter?

‘Evaluability’ is a shorthand way to talk about how ready a programme is to be evaluated.

Evaluability matters because not everything that can be evaluated should be — and because not everything that should be evaluated can be.

Knowing both what can and what should be evaluated is key to using your scarce evaluation resources wisely:

  • It reduces the risk of commissioning an unsuccessful evaluation — one that has to be abandoned part-way through, or that can’t answer the questions that have been posed

  • It allows a commissioned evaluation to be done more efficiently — by enabling you to identify early on the most appropriate evaluation type and design so that you can then develop a really good RFP

  • It makes it more likely the evaluation findings will be accepted and will lead to action — by ensuring that the evaluation questions are ones that are of interest to stakeholders and that key people are engaged in evaluation governance.

MartinJenkins’ model for assessing evaluability

The model we’ve developed here at our firm (below) draws on good practice and models that have come before — for example, the work of Rick Davies for the UK Department for International Development (DfID) — and on MartinJenkins’ 100-plus years of combined experience evaluating government policies and programmes (some of which have been very evaluation-ready and some not).

How evaluation-ready is your programme? / Image source: MartinJenkins

Our model applies three lenses: evaluability in theory, evaluability in context and evaluability in practice. We describe them sequentially in this blog, but in reality they are applied in parallel, with the insights gained through each lens informing the others.

1. Assessing Evaluability in Theory: what is plausible to evaluate?

This is the bit where we work out what a programme could realistically be expected to have achieved.

We do things like review the intervention logic model that the client has developed to present the programme theory (or, too often, we have to develop the model ourselves!). We assess the clarity and coherence of causal assumptions, and the plausibility of the programme theory overall.

We pose and answer questions like:

  • Do key stakeholders have the same ideas about how this programme works?

  • Has enough time passed for the desired long-term impacts to be achieved?

  • Can an evaluation realistically identify the programme’s contribution to these long-term impacts compared to the contributions of other policies and programmes?

We look at whether there are competing cultural paradigms and whether stakeholders are seeking the same outcomes. The output of this lens is usually a detailed and robust programme theory, presented as an intervention logic model, which guides an evaluation going forward.

2. Assessing Evaluability in Context: what is appropriate to evaluate?

This is the bit where we work out who the key stakeholders are and what questions they want answered. It involves engaging with stakeholders, through interviews and focus groups, and analysing the wider context for the programme — in other words, the politics for and against change.

We pose and answer questions like:

  • Who needs to be involved in the governance of the evaluation? What power dynamics might enable or prevent their participation?

  • How ready are the stakeholders to respond to the findings that will come out of the evaluation?

  • Are there any ‘no go’ questions or methods?

We look at what types of data stakeholders value and will find persuasive. This goes beyond preferences for quantitative versus qualitative data, and explores cultural paradigms as well, in particular data that is valued in te ao Māori.

It’s in exploring context that we consider ethical issues around the ownership of the knowledge that will be created, an issue that is too often left until the end of an evaluation process.

The key outputs of this second part of the assessment are a prioritised set of evaluation questions, proposed arrangements for governing the evaluation and an analysis of practical and ethical considerations that could enable or inhibit a research or evaluation project.

3. Assessing Evaluability in Practice: what is feasible to evaluate?

This is the bit where we really get stuck into the data.

Assessing evaluability in practice involves exploring the quality and availability of data as it relates to the evaluation questions that are of interest to stakeholders.

It includes both qualitative and quantitative data, and both data that has already been collected and data that is potentially collectable.

We pose and answer questions like:

  • What data already exists and what are its technical limits?

  • Who owns existing data and can we access it?

  • What are the costs and practical challenges involved in filling data gaps?

On the basis of our assessment of the quality and availability of data in the areas of stakeholder interest, we arrive at a judgment about the feasibility of particular types of evaluation.

Where to from here?

So those are the three lenses we use when assessing a programme’s readiness for evaluation. The next question is: how to use them?

In our experience, evaluability assessment is most often undertaken during the scoping phase of an evaluation. At that point it adds value by directing the evaluation’s focus and design, but it is limited by the predetermined context: an evaluation has already been commissioned.

The model is useful for much more than this.

Applied outside the scoping phase, an evaluability assessment can help you decide to delay an evaluation (until better data is available, for example), to not evaluate a project at all (perhaps because no one is interested in responding to the findings), or to work out what you need to do now to make sure your programme can be evaluated in future.

We have all heard the advice to get evaluation involved early in your programme design and implementation — but it is often challenging to know how to do that and exactly what you are asking for. A short piece of work that looks at the evaluability of your programme could be the answer. You might use the framework to improve your evaluation readiness by getting evaluation experts to:

  • Review (or develop) your theory of change

  • Assess your data quality and data gaps

  • Engage your stakeholders in developing future evaluation questions and designing governance arrangements

  • Think about the cultural values of the programme and your stakeholders, and how these might affect the evaluation’s focus.

None of this commits you to an evaluation now. It helps you know whether you are set up for the kind of evaluation you want to commission in future, and what other options you might deploy to get evidence that will make a difference.

About the authors

Penny Fitzpatrick is a research, evaluation and learning specialist with a flair for facilitation — she designs engaging processes that strengthen clients’ capacity for continuing improvement.

She also has considerable experience in advisory, strategic and research work in social and environmental fields. This combination of theoretical depth and practical experience means she can design projects that meet your budget and timeframe without compromising quality.


Donella Bellett is a respected evaluator and researcher with a collaborative style who designs fit-for-purpose evaluation and research projects that support continuous improvement. She produces clear evidence and valuable advice for decision-makers and programme users.

Since joining MartinJenkins in 2009 Donella has led and contributed to evaluation, research and review projects for a wide range of clients, including the Ministry of Business, Innovation and Employment, the Health Quality and Safety Commission, Creative New Zealand, Te Puni Kōkiri, and the Ministry of Social Development. She has also provided leadership for a number of projects in the education sector, including an evaluation of Integrated Attendance Services and research into students with moderate special education needs.

