The Politics of Aid Effectiveness: Why Better Tools Can Make for Worse Outcomes

by Anders Olofsgård, SITE Working Paper

The recent focus on impact evaluation within development economics has led to increased pressure on aid agencies to provide "hard evidence", i.e. results from randomized controlled trials (RCTs), to justify how they spend their money. In this paper I argue that even though RCTs can help us better understand whether particular interventions work, they can also reinforce an existing bias towards what generates quick, immediately verifiable and media-friendly results, at the expense of more long-term and complex processes of learning and institutional development. This bias stems from a combination of public ignorance, simplistic media coverage and the temptation of politicians to play to that simplicity to gain political points and mitigate the risk of bad publicity. I formalize this idea in a simple principal-agent model with a government and an aid agency. The agency has two instruments to improve immediately verifiable outcomes: spending more of its resources on operations rather than learning, or selecting better projects/programs. I first show that if the government cares about long-term development, incentives will be moderated so as not to push the agency to neglect learning. If the government is impatient, however, the optimal contract features stronger incentives, which improve the quality of projects/programs but distort the allocation of resources between operations and learning. Finally, I show that in the presence of an impatient government, the introduction of a better instrument for impact evaluation, such as RCTs, may actually decrease aid effectiveness by motivating the government to choose even stronger incentives.
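
To fix ideas, here is a minimal sketch of the trade-off the abstract describes; the notation is illustrative rather than the paper's own. Suppose the agency splits its budget between operations, a ∈ [0, 1], and learning, 1 − a, and exerts project-selection effort q at convex cost c(q). Only the short-run outcome

    y = a + q + ε

is verifiable, while long-run development also rewards learning:

    D = y + δ(1 − a),

where δ is the weight a patient government places on the long run. Under a linear contract w = s + b·y, the agency maximizes s + b(a + q) − c(q), so a stronger incentive b raises project quality q but also pushes a towards 1, crowding out learning. A patient government therefore moderates b, while an impatient one (low δ) discounts the lost learning and sets b higher. In the standard extension with a risk-averse agency, a more precise evaluation tool such as an RCT shrinks the noise ε and lowers the risk cost of high-powered pay, so an impatient government responds by choosing an even stronger b, which is the logic behind the final result.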
