MCC’s First Five

October 23, 2012 By John Glenn

In today’s constrained budget environment, there is a constant drumbeat for demonstrating results for taxpayer dollars. And there should be, but just as often it’s hard to know what the results actually are, let alone whether the programs worked. For years, the Millennium Challenge Corporation (MCC) has led the way in pioneering rigorous evaluation in global development, both in how it selects countries and in how it evaluates its compacts, and the release of its first five independent impact evaluations is worth paying attention to.

The evaluations found that the MCC programs met and even exceeded the program targets they set for themselves. But where most government agencies would stop there, the MCC took the next step of asking whether those targets actually served its broader mission: to reduce poverty by raising household incomes. There it found a mixed picture that provides valuable feedback about what worked in the field and what didn’t. It’s the kind of evaluation that rarely gets done in the risk-averse environment of Washington, and it’s one to be appreciated, given the MCC’s commitment to learning from and incorporating the insights of its evaluations.

While impact evaluations have been carried out in other policy areas like labor and education, they’re much rarer in foreign policy. In part, that’s because they rely on adapting the experimental method, in which you measure results by comparing one group that receives a treatment with another that doesn’t. In the dynamic world of foreign policy, you rarely have the opportunity to make such decisions because, more often than not, you’re putting out fires. In global development, the approach has raised ethical questions, because it can mean choosing who receives, say, life-saving health treatments or education and who doesn’t, so that there is a control group to compare against. Inspired in part by research by MIT economists Abhijit Banerjee and Esther Duflo (and their book, Poor Economics), new methods have been developed that take these challenges into account and offer a real chance to see what works and what doesn’t.
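
To make the logic concrete, here is a minimal, purely illustrative sketch in Python; the numbers are invented for the example, not drawn from any MCC evaluation. The estimated impact of a program is simply the difference in average outcomes between a randomly assigned treatment group and a control group.

```python
# Illustrative only: a stylized impact evaluation with made-up data.
import random
import statistics

random.seed(42)

# Hypothetical annual farm incomes (USD) for two randomly assigned groups:
# a control group (no training) and a treatment group (trained farmers),
# where we assume for the example that training is worth about $100 on average.
control = [random.gauss(1000, 200) for _ in range(500)]
treatment = [random.gauss(1100, 200) for _ in range(500)]

# The estimated impact is the difference in mean outcomes between the groups.
impact = statistics.mean(treatment) - statistics.mean(control)

# A standard error for that difference indicates whether the estimate is
# distinguishable from chance (roughly, when |impact| exceeds 2 * se).
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"Estimated impact: ${impact:,.0f} (standard error ${se:,.0f})")
```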

The MCC’s five impact evaluations (all conducted by independent external evaluators) cover a small slice of its overall portfolio: farmer training activities in five countries (Armenia, El Salvador, Ghana, Honduras, and Nicaragua), in programs designed in 2004 and 2005 and implemented between 2005 and 2012. In designing these compacts with partner countries, the MCC measured inputs and outputs (such as farmers trained), as well as interim outcomes (such as farmers using new techniques learned through training) and the growth of farm incomes and household incomes more broadly.

On the measure of inputs and outputs, the MCC’s farmer training programs performed very well, meeting or exceeding their performance targets. The evaluations also found increases in farm income in some countries (in El Salvador, dairy farmers doubled their farm incomes) and in some regions of some countries (in northern Ghana, farmers’ annual crop income increased, while southern and central Ghana showed no impact on farm income from the compacts’ farmer training activities). In others (such as Honduras), impacts couldn’t be measured because of problems in compact implementation.

But did the programs advance the MCC’s long-term goal of improving household income and reducing poverty? The independent evaluations scrupulously admit that they don’t know. In some countries household income rose during the compact period, but it’s difficult to attribute that growth to the program. Why? The evaluations highlight that some of the traditional methods of farmer training may not work as well as assumed. Training is often accompanied by starter kits, packages of seeds, fertilizer, and equipment meant to give farmers the materials to apply what they’ve learned. It turns out that sometimes that works, but some of the starter kits contained materials that weren’t useful.

Changing the content of starter kits is a relatively easy fix, but, to its credit, the MCC asked the harder question of why it wasn’t seeing the expected results in some cases even though program targets were met. When the MCC compared results across settings, it found that it may not always be effective to provide a little training to a lot of people, one of the more common impulses in situations of need. Rather, training fewer farmers more intensively may produce better results because it ensures that the training actually takes hold and adapts to local conditions. The agency may also need to adopt a longer time horizon, beyond the short-term crop cycle, to see whether and how training translates into higher incomes. For the MCC, this will mean going back to reassess its training programs in Burkina Faso and Moldova, as well as looking for broader implications for its models of impact. It will be challenging, given the pressure to complete a compact in five years, and it may mean re-evaluating initial assumptions in future compacts.

The five impact evaluations cover just 2% of the overall MCC budget, so they are only a start, but they offer an initial glimpse into both the effectiveness of these programs and a real effort by the agency to take measuring results seriously. They take the tough step of assessing assumptions: in this case, that farmer training really leads to higher farm incomes and to higher household incomes overall. The MCC has been a leader in this push for rigorous evaluation, one that has supporters in other agencies (such as USAID, which has also committed to releasing evaluations of its programs as part of its evaluation policy). It’s a good moment at both the policy and the political level, and the kind of commitment to transparency that should win friends on the Hill and in the development community.