Increasingly, organisations – including government departments – are looking to evaluate the impact of their actions and build an evidence-base around what works.
It’s something that’s easy to say but harder to do: this kind of work requires people to think clearly about what they’re trying to achieve. Often, the next step is to define indicators for these goals – reliable signs that show whether or not we’re achieving what we’ve set out to do. Many monitoring and evaluation practitioners suggest formulating a question about the goals first, then finding indicators to answer it.
Either way, the idea is that a change in an indicator can tell you if you’re achieving the important things.
But there are a few places this process can get off track.
Indicators are signs that we can use to understand what is happening: the number of people who attended the training course, the average time that a patient has to wait. Simple, quantitative indicators can be easy to measure and easy to communicate, giving a great snapshot of what is happening. But indicators are not the goal.
Indicators also make for great headlines, which can be a problem when statistics are spun to suit the media release. And when people lose sight of the bigger picture and start focusing on the indicator, that’s when things can really go wrong.
Goal displacement occurs when people mistake the indicator for the overall goal. Using the previous examples: we might get so excited about the increasing number of people on our training course that we forget to ask about the quality of the course and whether anyone has learnt anything. We might get so focused on reducing patient waiting times that we start rushing people through, taking greater risks with their health. Goal displacement can encourage a focus on quantity at the expense of quality, cutting corners and taking risks. And focusing on the indicator instead of the goal can lead us in a direction that doesn’t reflect what we really wanted to achieve.
An example is a recent AusAID media release about their remuneration framework (see http://tiny.cc/83lrdw).
The stated goals of AusAID’s Adviser Remuneration Framework are laudable. According to the media release: ‘It [the remuneration framework] was introduced to ensure better value for money and better results from the use of advisers in the aid program’.
The headline of the media release proclaims:
‘Report confirms adviser reforms are working’
What does ‘working’ mean? Presumably it means that the goals of AusAID’s remuneration framework have been achieved.
And on what basis are they making the claim that it is working? Mainly that:
‘…the average daily fee for short-term international advisers has fallen 41.1 per cent and the average monthly remuneration package for a long-term adviser has fallen 34.1 per cent.’
And here we can see where a focus on indicators leads people astray. The indicator tells us nothing about value for money, and nothing about ‘better results’. It just tells us that advisers are being paid less than they were before. It’s not hard to think of ways this could have unintended negative consequences: the most experienced advisers may choose to work elsewhere; short-term advisers may accept a lower daily fee but push for longer contracts; advisers may be more reluctant to do extra work or go the extra mile. The goals of the Adviser Remuneration Framework are worthy ones indeed, but claims about its effectiveness need to be corroborated with better evidence than this.
The media release goes on to talk about better performance standards, so there is a broader program at work – one we hope is being fully evaluated. But it does show the temptation of setting aside a complex issue in favour of simple indicators – and an eye-catching headline.
AusAID’s increased budget has led to an increased focus on evaluating whether the Australian aid program is efficient and effective. Let’s hope that they resist the temptation of easy indicators, and really think about measuring what they want to achieve.