The participation of my ministry in the research and writing of this book has been a great opportunity to mobilise the key stakeholders in the agriculture sector around the importance of evaluation and the incomparable benefits of developing public policy based on evidence.
—Bonaventure Kouakanou, Deputy Minister of Agriculture, Benin

I highly recommend this book. It is a cogent analysis of why all policy-makers ought to use evidence and citizen engagement, routinely, to improve decision-making.
—Judi Wakhungu, EGH, Kenya Ambassador to France, Portugal, Serbia & Holy See; previously Cabinet Secretary, Ministry of Environment and Natural Resources

Profiling the work of policy-makers to use evidence in their decision-making is so important, and the world has much to learn from our experiences in Africa. This book makes a valuable contribution for all of us working to support better decisions based on better evidence.
—Ruth Stewart, Africa Centre for Evidence, University of Johannesburg, South Africa; Chair, Africa Evidence Network

In an era of Africa awakening, there is no time to waste on traditions, fads, success stories, and intuitions. The time has come for African policy-makers and development professionals to move to evidence-based decision-making and action. This book showcases the rich use of evidence from selected African countries that can serve as a guide for those interested to learn what others have done in an African context.
—David Sarfo Ameyaw, CEO/President, The International Centre for Evaluation and Development, Kenya

This book asks how governments in Africa can use evidence to improve their policies and programmes and, ultimately, to achieve positive change for their citizens. Looking at different evidence sources across a range of contexts, the book brings policy makers and researchers together to uncover what does and doesn't work and why.
Case studies are drawn from five countries and the ECOWAS (West African) region, and from a range of sectors, from education, wildlife and sanitation through to government procurement processes. The book is supported by a range of policy briefs and videos intended to be both practical and critically rigorous. It uses evidence sources such as evaluations, research synthesis and citizen engagement to show how these cases succeeded in informing policy and practice. The voices of policy makers are key to the book, ensuring that the examples deployed are useful to practitioners and researchers alike. This innovative book will be perfect for policy makers, practitioners in government and civil society, and researchers and academics with an interest in how evidence can be used to support policy making in Africa.

Ian Goldman, Advisor on Evaluation and Evidence Systems, CLEAR Anglophone Africa; Adjunct Professor, University of Cape Town, South Africa.
Mine Pabari, Visiting Research Fellow, CLEAR Anglophone Africa, Kenya.

Rethinking Development

Rethinking Development offers accessible and thought-provoking overviews of contemporary topics in international development and aid. Providing original empirical and analytical insights, the books in this series push thinking in new directions by challenging current conceptualisations and developing new ones. This is a dynamic and inspiring series for all those engaged with today's debates surrounding development issues, whether they be students, scholars, policy makers or practitioners internationally. These interdisciplinary books provide an invaluable resource for discussion in advanced undergraduate and postgraduate courses in development studies as well as in anthropology, economics, politics, geography, media studies and sociology.

Energy and Development
Frauke Urban

Power, Empowerment and Social Change
Edited by Rosemary McGee and Jethro Pettit

Innovation for Development in Africa
Jussi S. Jauhiainen and Lauri Hooli

Women, Literature and Development in Africa
Anthonia C. Kalu

Rural Development in Practice
Evolving Challenges and Opportunities
Willem van Eekelen

Using Evidence in Policy and Practice
Lessons from Africa
Edited by Ian Goldman and Mine Pabari

www.routledge.com/Rethinking-Development/book-series/RD

Using Evidence in Policy and Practice
Lessons from Africa
Edited by Ian Goldman and Mine Pabari

First published 2021 by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN, and by Routledge, 52 Vanderbilt Avenue, New York, NY 10017. Routledge is an imprint of the Taylor & Francis Group, an informa business.

© 2021 selection and editorial matter, The University of the Witwatersrand, Johannesburg; individual chapters, the contributors.

The right of Ian Goldman and Mine Pabari to be identified as the authors of the editorial matter, and of the authors for their individual chapters, has been asserted by them. The copyright for the selection and arrangement of the Work as a whole will remain the property of The University of the Witwatersrand, in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

The Open Access version of this book, available at www.taylorfrancis.com, has been made available under a Creative Commons Attribution 4.0 license.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
Names: Goldman, Ian, editor. | Pabari, Mine, editor.
Title: Using evidence in policy and practice : lessons from Africa / edited by Ian Goldman and Mine Pabari.
Description: Abingdon, Oxon ; New York, NY : Routledge, 2020. | Includes bibliographical references and index.
Identifiers: LCCN 2020004937 (print) | LCCN 2020004938 (ebook) | ISBN 9780367440121 (hardback) | ISBN 9780367440077 (paperback) | ISBN 9781003007043 (ebook)
Subjects: LCSH: Education and state—Africa. | Sustainable development—Government policy—Africa. | Conservation of natural resources—Government policy—Africa. | Africa—Social policy. | Africa—Economic policy.
Classification: LCC JQ1875.A55 P78 2020 (print) | LCC JQ1875.A55 (ebook) | DDC 320.6096—dc23
LC record available at https://lccn.loc.gov/2020004937
LC ebook record available at https://lccn.loc.gov/2020004938

ISBN: 978-0-367-44012-1 (hbk)
ISBN: 978-0-367-44007-7 (pbk)
ISBN: 978-1-003-00704-3 (ebk)

Typeset in Bembo by Apex CoVantage, LLC

Contents

List of figures ix
List of tables x
List of boxes xi
List of contributors xii
Foreword by Prime Minister of the Republic of Uganda xvi
Foreword by Professor Paul Cairney, Stirling University xvii
Acknowledgements xix
List of abbreviations xxi

1 Introduction to the book 1
IAN GOLDMAN AND MINE PABARI

2 An introduction to evidence-informed policy and practice in Africa 13
IAN GOLDMAN AND MINE PABARI

3 Using evidence in Africa: a framework to assess what works, how and why 34
LAURENZ LANGER AND VANESA WEYRAUCH

4 Mere compliance or learning – M&E culture in the public service of Benin, Uganda and South Africa 54
IAN GOLDMAN, WOLE OLALEYE, STANLEY SIXOLILE NTAKUMBA, MOKGOROPO MAKGABA AND CARA WALLER

5 Using evaluations to inform policy and practice in a government department: the case of the Department of Basic Education in South Africa 75
NEDSON POPHIWA, CAROL NUGA DELIWE, JABULANI MATHE AND STEPHEN TAYLOR

6 Use of evidence in a complex social programme: case of an evaluation of the state's response to violence against women and children in South Africa 92
MATODZI M. AMISI, THABANI BUTHELEZI AND SIZA MAGANGOE

7 The influence of local ownership and politics on the use of evaluations in policy making: the case of the public procurement evaluation in Uganda 115
ISMAEL KAWOOYA, TIMOTHY LUBANGA, ABDUL MUWANIKA, EDWIN MUHUMUZA AND RHONA MIJUMBI-DEVE

8 Rapidly responding to policy queries with evidence: learning from Rapid Response Services in Uganda 133
ISMAEL KAWOOYA, ISAAC DDUMBA, EDWARD KAYONGO AND RHONA MIJUMBI-DEVE

9 The potential and the challenges of evaluations to positively influence reforms: working with producers in the Benin agricultural sector 152
BONAVENTURE KOUAKANOU, DOSSA AGUEMON, MARIUS S. AINA, ABDOULAYE GOUNOU AND EMMANUEL M. DAVID-GNAHOUI

10 Parliament and public participation in Kenya: the case of the Wildlife Conservation and Management Act 2013 169
MINE PABARI, YEMESERACH TESSEMA, AMINA ABDALLA, JUDI WAKHUNGU, AHMED HASSAN ODHOWA AND ALI KAKA

11 The contribution of civil society generated evidence to the improvement of sanitation services in Ghana 188
LAILA SMITH, DEDE BEDU-ADDO, MOHAMMED AWAL AND ANTHONY MENSAH

12 Using evidence for tobacco control in West Africa 206
PAPA YONA BOUBACAR MANE, ABDOULAYE DIAGNE AND SALIFOU TIEMTORE

13 Lessons for using evidence in policy and practice 224
IAN GOLDMAN AND MINE PABARI

Index 242

Figures

1.1 Simplified version of the analytical framework 9
2.1 Policy/programme cycle used in training in Africa 17
2.2 Supply, demand and knowledge brokering for evidence 25
2.3 Different evidence: policy dynamics 26
2.4 Levels of knowledge brokering 28
3.1 Science of Using Science conceptual framework 41
3.2 Combined analytical framework 45
6.1 Timeline showing the evolution of VAWC-related national policy 95
7.1 Procurement timeline reform 121
9.1 The journey of the agricultural sector policy 154
10.1 The WCMA 2013 journey 176
12.1 Milestones in the ECOWAS new tobacco fiscal directive process 209
13.1 The analytical framework showing the Context, Mechanism, Outcome relationships 225

Tables

3.1 Evidence use mechanisms 40
3.2 Dimensions of context according to the Context Matters framework 43
3.3 Immediate outcomes 47
4.1 The situation with regard to evaluation/M&E units in each country 57
4.2 Perceived responses when the ministry/department's performance is below expectation 60
4.3 Values and culture barriers to effective use of evaluation in decision making, learning and accountability in your department 61
4.4 How evaluation recommendations are used 65
4.5 Stage at which countries use evaluation evidence 65
4.6 Summary of strengths and weaknesses 67
4.7 Summary of features of the context in Uganda, Benin and South Africa 70
5.1 List of DBE's research and evaluations to date 80
5.2 Use interventions and how these influenced use 85
6.1 Interventions to promote use and their effect 103
6.2 Summary of enabling factors and barriers from the context 104
7.1 Use interventions and their effect 124
9.1 Recommendations from the 2009 evaluation and what has been implemented 158
9.2 Interventions that influenced use 162
Annex 9.1 (Chapter 9): Summary of the main landmarks in Benin's agriculture sector 1990–2009 166
10.1 Use interventions and change mechanisms 179
11.1 Evidence use interventions around the IAA/DLT and the changes these influenced 196
12.1 Use interventions and their influence 217
13.1 Contextual influencers of evidence use emerging in the case studies 226
13.2 The range of evidence use interventions, as part of, or external to, national evaluation systems 229
13.3 The change mechanisms 231

Boxes

2.1 Distinguishing between monitoring and evaluation 20
12.1 Tax categories 212

Contributors

Editors

Ian Goldman, Advisor on Evaluation and Evidence Systems, CLEAR Anglophone Africa; Adjunct Professor, University of Cape Town, ian.goldman@wits.ac.za

Ian was the Head of Evaluation and Research in South Africa's Department of Planning, Monitoring and Evaluation (DPME), where he led the establishment of South Africa's national evaluation system and
spearheaded work on evidence-based policy. He is a commissioner of 3ie and a founder of Twende Mbele, a partnership of African governments promoting monitoring and evaluation. He joined CLEAR Anglophone Africa at the University of Witwatersrand in July 2018. In addition, he is an adjunct professor at the Nelson Mandela School of Public Governance at the University of Cape Town and a visiting professor at the University of Reading in the UK.

Mine Pabari, Visiting Research Fellow, CLEAR Anglophone Africa, mine.pabari@athariadvisory.co.ke

Mine has over 20 years' experience in natural resource management and sustainable development as an evaluator as well as a programme manager and implementer. From 2001 to 2004, she was responsible for monitoring and evaluation processes across a large-scale regional initiative covering 12 countries in eastern Africa. Thereafter, from 2005 to 2009, she provided technical advisory services to environmental and agricultural programmes in eastern and southern Africa. This included carrying out evaluations, facilitating self-appraisals and internal evaluations, as well as training and supporting the development and implementation of monitoring systems. From 2009 to 2017, she was a senior manager heading up the International Union for the Conservation of Nature (IUCN) regional programme in eastern and southern Africa. She is currently a visiting research fellow with CLEAR Anglophone Africa and the managing partner of Athari Advisory.

Foreword

Hon Dr Ruhakana Rugunda, Prime Minister, Republic of Uganda
Paul Cairney, Professor of Politics and Public Policy, University of Stirling, UK, p.a.cairney@stir.ac.uk

Contributing authors

Hon Amina Abdalla, CBS, Policy Advisor, Governance and Natural Resources Consultant.
Previously, nominated MP and Chair, Committee on Environment and Natural Resources, Kenya National Assembly, Honaminaabdalla@gmail.com

Dossa Aguemon, Director of Planning and Prospective, Ministry of Agriculture, Livestock and Fisheries (MAEP), Benin, Aguemondossa@yahoo.fr

Marius S. Aina, Deputy Director of Planning and Prospective, Ministry of Agriculture, Livestock and Fisheries (MAEP), Benin, Asmarius@yahoo.fr

Matodzi M. Amisi, Researcher, CLEAR-AA, University of Witwatersrand, South Africa, michellematodzi@gmail.com

Mohammed Awal, Team Leader, Social Accountability & SDGs, Ghana Centre for Democratic Development (CDD-Ghana), m.awal@cddgh.org

Dede Bedu-Addo, Coordinator, Ghana Monitoring & Evaluation Forum (GMEF), abedums@gmail.com

Thabani Buthelezi, Chief Director M&E, Department of Social Development, South Africa, ThabaniB@dsd.gov.za

Emmanuel M. David-Gnahoui, University of Abomey-Calavi, Benin, edavid1@gmail.com

Isaac Ddumba, Assistant District Health Officer, Mukono District Local Government, Uganda, iddumba08@outlook.com

Abdoulaye Diagne, Executive Director, Consortium pour la Recherche Economique et Sociale, Université Cheikh Anta Diop de Dakar, Sénégal, adiagne@cres-sn.org

Abdoulaye Gounou, Chief of the Bureau of Evaluation of Public Policies and Government Actions, Presidency, Benin, agounou0@gmail.com

Ali Kaka, Wildlife Sector Policy Advisor to the Cabinet Secretary, Ministry of Tourism and Wildlife, Kenya, ali.kaka@adeptconservation.net

Ismael Kawooya, Research Scientist, The Centre for Rapid Evidence Synthesis (ACRES), Regional East African Health Policy Initiative (REACH PI), Makerere University College of Health Sciences, Uganda, ikawooya@acres.or.ug

Edward Kayongo, Research Scientist, The Centre for Rapid Evidence Synthesis (ACRES), Regional East African Health Policy Initiative (REACH PI), Makerere University College of Health Sciences, Uganda, ekayongo@acres.or.ug

Hon Bonaventure Kouakanou, Deputy Minister of Agriculture, Livestock and Fisheries (MAEP), Benin, bonaventure_kouakanou@yahoo.fr

Laurenz Langer, Senior Researcher, African Centre for Evidence, University of Johannesburg, South Africa, llanger@uj.ac.za

Timothy Lubanga, Commissioner Monitoring & Evaluation, Office of the Prime Minister, Uganda, tlubanga@gmail.com

Siza Magangoe, Chief Director, Families and Social Crime Prevention, Department of Social Development, South Africa, SizaM@dsd.gov.za

Mokgoropo Makgaba, Data Analysis Specialist, Department of Planning, Monitoring and Evaluation (DPME), South Africa, mokgoropo@dpme.gov.za

Papa Yona Boubacar Mane, Scientific Coordinator, Consortium pour la Recherche Economique et Sociale, Université Gaston Berger–Saint-Louis, Senegal, yonamane@gmail.com

Jabulani Mathe, formerly Senior Evaluation Specialist, Department of Planning, Monitoring and Evaluation; now Senior Advisor, Monitoring and Evaluation, National Planning Commission, Office of the President, Namibia, jmathe@integration.org

Anthony Mensah, Director, EHSD, Ministry of Sanitation and Water Resources, Ghana, mensahanthony@hotmail.com

Rhona Mijumbi-Deve, Senior Research Scientist, The Centre for Rapid Evidence Synthesis (ACRES), Regional East African Health Policy Initiative (REACH PI), Makerere University College of Health Sciences, rmijumbi@acres.or.ug

Edwin Muhumuza, Director Corporate Affairs, Public Procurement and Disposal of Assets (PPDA) Authority, Uganda

Abdul Muwanika, Principal Economist, Office of the Prime Minister, Uganda, abdulmuwanika@hotmail.com

Stanley Sixolile Ntakumba, Acting Director-General, Department of Planning, Monitoring and Evaluation (DPME), South Africa, stanley@dpme.gov.za

Carol Nuga Deliwe, Chief Director, Strategic Planning, Research & Coordination, Department of Basic Education, South Africa; Research Associate, University of Pretoria, carolnuga@gmail.com

Ahmed Hassan Odhowa, Principal Research Officer, Parliament of Kenya, odhowa.ah@gmail.com

Wole Olaleye, PhD Candidate, University of Witwatersrand, South Africa; Visiting Research Associate, CLEAR-AA, wole4467@gmail.com

Nedson Pophiwa, Senior Researcher, National Consumer Commission; Research Fellow, CLEAR AA, pophiwan@gmail.com

Laila Smith, Senior Consultant, Universalia Management Group, lsmith@universalia.com

Stephen Taylor, Director, Research Coordination, Monitoring and Evaluation, Department of Basic Education, South Africa; Research Associate, University of Stellenbosch, taylor.s@dbe.gov.za

Yemeserach Tessema, Researcher, Athari Advisory, Kenya, emi.tessema@athariadvisory.co.ke

Salifou Tiemtore, Director of Customs Union & Taxation, Department of Trade, Customs & Free Movement, ECOWAS, saliftiemtore@yahoo.fr

H.E. Prof. Judi Wakhungu, EGH, Kenya Ambassador to France, Portugal, Serbia & Holy See. Previously Cabinet Secretary, Ministry of Environment and Natural Resources, Kenya, judiwakhungu@gmail.com

Cara Waller, Programme Manager, Twende Mbele programme, based at CLEAR-AA, University of Witwatersrand, cara.waller@wits.ac.za

Vanesa Weyrauch, Co-founder, Politics & Ideas, Argentina, v_weyrauch@yahoo.com

Foreword by Prime Minister of the Republic of Uganda

Africa needs to develop, and to do so we need the best evidence to inform our choices for policies and programmes and how we implement them.
In Uganda we have been implementing our evaluation system since 2011, and we now have one of Africa's most widely recognised evaluation systems. In fact, we have discovered that over 500 policy and programme evaluations have already been undertaken in Uganda! We also have a well-established research system and a growing Science and Innovation Fund, and Makerere University is one of the top universities in Africa, with a promising Knowledge Translation (KT) innovation, the Rapid Response Service (RRS). In addition, we have well-established processes for citizen engagement, including our community information fora, Barazas. We have to use these resources to help inform our policy choices. But how do we do so most effectively? How do we maximise the likelihood that evidence does not just sit on a shelf but is used? This is a timely book to help us in that journey, and it is so refreshing to see these interesting African examples of using evidence for us to learn from, including two examples from Uganda. It also reflects the value that we have obtained from our partnership with Benin, South Africa and, more recently, Ghana and Kenya through the Twende Mbele programme, and the value of transcending colonial boundaries to learn from our peers across Africa. We look to our public managers and our scholars to read this book, learn from the experience, and see how it can be applied in our context, so that we do not only generate evidence but consciously plan how to maximise the likelihood of its use. I also see that one of the conclusions is that we need to take more seriously the role of our monitoring and evaluation units in brokering the demand from policy makers for evidence and in ensuring that evidence is generated systematically to inform ministers and senior managers. For countries with less well-established evaluation and research systems, this book provides an idea of what to aim for.
I commend the authors, I welcome the learnings, and I look forward to seeing more high-quality evidence at Cabinet, and in pan-African fora, contributing to improving development outcomes on the African continent.

Dr Ruhakana Rugunda
Prime Minister of the Republic of Uganda

Foreword by Professor Paul Cairney, Stirling University

A lot of the literature on 'evidence-based policymaking' (EBPM) is written from a narrow perspective, such as by researchers commenting on the pathologies of politics and failings of politicians, and in a small number of 'Western' or 'Global North' countries. A very limited perspective often masquerades as general knowledge of the world. This book represents an important antidote to that problem. First, it focuses on the topic of EBPM partly by examining the experiences of people who use and demand evidence. Second, it seeks to give 'voice to African experience' in the context of a growing movement to decolonise the ways in which people create and use knowledge. In pursuing both aims, it reminds us that knowledge production and use is a highly social and political process that varies according to context, rather than a technical process that can be reduced to a small number of 'universal' rules for high-quality research. As such, the relationships and interactions between people can be more important to the uptake of certain forms of knowledge than 'the evidence' itself. This book also recognises that we should not expect to find so-called rational policymaking during the completion of a simple, orderly, linear 'policy cycle' in which we know how evidence will be used at each stage. Rather, people combine elements such as cognition, emotion, belief, and tradition to help them understand and cooperate within their world.
Further, the policy process is best understood as a 'complex system' or 'environment' in which many policymakers and influencers interact across many levels or types of governance, and in which each venue for policymaking has its own rules, dominant ideas, networks and ways of responding to socioeconomic conditions and events. What 'works' in one context may not in another. As such, we need more in-depth and rich descriptions of case studies of evidence use. They help us to capture the sense that, although we have ways to make comparisons and learn from each other, no two case studies are the same and there is no 'blueprint' for evidence uptake. In that context, this book has two profoundly important lessons for key audiences. First, it provides lessons that are relevant to the development of capacity and culture in evidence generation and use in Africa. A large part of giving voice to African experience is to use case studies from some African countries to enable many more to reflect on their own – current and future – procedures. As the editors describe, this book is part of a trust- and capacity-building exercise to extend policy learning from the short to the long term. Second, it provides lessons to a much wider, global audience that tends to rely disproportionately on experiences from the Global North. The overall result is a book that is greater than the sum of its highly informative parts.

Paul Cairney
Professor of Politics and Public Policy
University of Stirling, UK
p.a.cairney@stir.ac.uk

Acknowledgements

This book could only have been produced through the involvement of many people. The 39 contributors are already recognised in the individual chapters and the list of contributors.
Many thanks to them for their dedication to the project and for managing to get the research and writing completed quickly, for the additional hours put in, and for the backwards and forwards needed to finalise the chapters. We note also the people who developed the framework which was central to the book, Laurenz Langer and David Gough amongst others. We are very grateful to the Prime Minister of Uganda, Hon Dr Ruhakana Rugunda, for his championing of evaluation and evidence use in Uganda, his support for the Twende Mbele initiative, and his contribution of a foreword for the book. Similarly, we thank Professor Paul Cairney for his insightful writing, which has helped inspire us, and also for contributing a foreword. The advisory group met physically twice and provided very useful guidance and support on the book, and its members also peer reviewed many individual chapters. We would like to thank Laila Smith (CLEAR-AA and Universalia), who chaired the Advisory Group; Norma Altshuler of the William and Flora Hewlett Foundation; David Ameyaw (ICED, Kenya); Abdoulaye Gounou (BEPPAG, Benin); Alan Hirsch (University of Cape Town); Beryl Leach, formerly of 3ie; Tim Lubanga (OPM, Uganda); Constance Mabela and David Makhado (DPME, South Africa); Adeline Sibanda (IOCE, EvalPartners, formerly AFREA); Peter Taylor of IDRC, Canada (and now IDS); and Eliyah Zulu of AFIDEP. Many thanks for your contributions. Peer reviewing of chapters was done first by chapter authors of each other's work. Then, at draft final stage, Advisory Group members and others peer reviewed chapters, providing much value in their constructive criticism.
Apart from the advisory group members, we had 16 peer reviewers, including Robert Cameron (University of Cape Town), Phil Davies (Oxford Evidentia), Hans de Bruijn (University of Delft), Bridget Dillon (formerly of DFID), Saliem Fakir (World Wildlife Fund), Dugan Fraser (CLEAR-AA), Marie Gaarder (3ie), Gonzalo Hernandez Licona (formerly CONEVAL), Manny Jimenez (formerly 3ie), Patricia Kameri-Mbote (University of Nairobi), Brian Levy (University of Cape Town), Mala Mann (University of Cardiff), Ada Ocampo (UNICEF), Cosmas Ochieng (AfDB), Lynn Osomo (Makerere University) and George Wamukoya. Judy Scott-Goldman and Lynn Southey did the detailed editing and added a lot of insight and clarity to the writing. Barbara Herweg managed the finances of the project. Laila Smith, and later Dugan Fraser, of CLEAR-AA enthusiastically took on the project and have supported it throughout. Thanks to the William and Flora Hewlett Foundation for funding the book and the wider evidence use project, and to Norma Altshuler for providing detailed suggestions and inputs, contributing far beyond just funding. We hope that all of you have learnt from the experience and that it contributes to improving development outcomes in Africa.
Ian Goldman and Mine Pabari

Abbreviations

ABePROFA Agence Béninoise de Promotion des Filières Agricoles (Benin Agency for the Promotion of Agricultural Value Chains)
ACE African Centre for Evidence
AEN African Evidence Network
AfDB African Development Bank
AfrEA African Evaluation Association
ANA Annual National Assessments
ANAW Africa Network for Animal Welfare
ANCB Association Nationale des Communes du Bénin (Benin National Local Government Association)
APNODE African Parliamentarians Network on Development Evaluation
ASTA African Tobacco Situation Analysis
BCURE Building Capacity to Use Research Evidence
BEPP Bureau d'Évaluation des Politiques Publiques (Office for Evaluation of Public Policies)
BEPPAAG Bureau d'Evaluation des Politiques Publiques et de l'Analyse de l'Action Gouvernementale (Office for Evaluation of Public Policies and Government Actions)
CAK Conservation Association of Kenya
CANAM Conservancies Association of Namibia
CBO Community-based organisation
CCIB Chambre de Commerce et d'Industrie du Bénin (Chamber of Commerce and Industry of Benin)
CDD-Ghana Ghana Centre for Democratic Development
CITES Convention on International Trade in Endangered Species
CLEAR Centres for Learning and Evaluation for Results
CLEAR-AA Centre for Learning on Evaluation and Results for Anglophone Africa
CNOS Conseil National d'Orientation et de Suivi (National Guidance and Monitoring Council)
COMESA Common Market for Eastern and Southern Africa
CONIWAS Coalition of NGOs in Water and Sanitation
CPAR Country Procurement Assessment Report
CRES Consortium pour la Recherche Economique et Sociale (Consortium for Economic and Social Research)
CREST Centre for Research on Evaluation, Science and Technology
CS Cabinet secretary
CSO Civil society organisation
CWSA Community Water and Sanitation Agency
DA District Assembly
DAC Development Assistance Committee of the OECD
DACF District Assemblies Common Fund
DBE Department of Basic Education
DCENR Departmental Committee on Environment and Natural Resources
DESO District Environmental Sanitation Officer
DESSAP District Environmental Sanitation Strategy and Action Plan
DFID Department for International Development (UK)
DGE Direction Générale de l'Évaluation (Directorate General for Evaluation)
DHET Department of Higher Education and Training
DHO District health officer
DHT District health team
DoH Department of Health
DP Development partner
DPAT District Performance Assessment Tool
DPCU District Planning and Coordinating Unit
DPDR Déclaration de Politique de Développement Rural (Declaration of Rural Development Policy)
DPME Department of Performance, Monitoring and Evaluation, South Africa (in 2014 renamed the Department of Planning, Monitoring and Evaluation)
DRC Democratic Republic of the Congo
DSD Department of Social Development
DUR Department of Urban Roads
EAWLS East African Wildlife Society
EBPM Evidence-based policy making
ECD Early childhood development
ECOWAS Economic Community of West African States
EHSD Environmental Health and Sanitation Directorate
EIDM Evidence-informed decision making
EIPP Evidence-informed policy and practice
EPA Environmental Protection Agency
ESC Evaluation Steering Committee (South Africa)
ESC Evaluation Sub-committee (Uganda)
ETWG Evaluation Technical Working Group
EU European Union
FCTC Framework Convention on Tobacco Control
FLBP Funza Lushaka Bursary Programme
GAIN Global Alliance for Improved Nutrition
GBV Gender-based violence
GDP Gross domestic product
GEF Government Evaluation Facility
GIZ Deutsche Gesellschaft für Internationale Zusammenarbeit (German Society for International Cooperation)
GMEF Ghana Monitoring and Evaluation Forum
GoG Government of Ghana
GPRS Ghana Poverty Reduction Strategy
GPRSII Growth and Poverty Reduction Strategy
GWCL Ghana Water Company Limited
HC Health centre
HSD Health sub-district
ICT Information and Communication Technology
IDLO International Development Law Organisation
IDRC International Development Research Centre
IE Impact Evaluation
IGG Inspector General of Government
IMC Inter-Ministerial Committee
IP Implementation partner
IRC International Research Centre
ISS Institute for Security Studies
ITE Initial Teacher Education
IUCN ESARO International Union for Conservation of Nature, Eastern and Southern Africa Regional Office
JBSF Joint Budget Support Framework
KEPSA Kenya Private Sector Alliance
KUAPO Kenyans United Against Poaching
KWCA Kenya Wildlife Conservancies Association
KWCF Kenya Wildlife Conservation Forum
KWS Kenya Wildlife Service
LDPDR Lettre de Déclaration de Politique de Développement Rural (Letter of Declaration of Rural Development Policy)
LWF Laikipia Wildlife Forum
M&E Monitoring and Evaluation
MAEP Ministère de l'Agriculture, de l'Élevage et de la Pêche (Ministry of Agriculture, Livestock and Fisheries)
MDA Ministries, Departments and Agencies
MDG Millennium Development Goal
MDGLAAT Ministère de la Décentralisation, de la Gouvernance Locale, de l'Administration et de l'Aménagement du Territoire (Ministry of Decentralisation, Local Governance, Administration and Land-use Planning)
MDR Ministère du Développement Rural (Ministry of Rural Development)
MESTI Ministry of Environment, Science, Technology and Innovation
MEWNR Ministry of Wildlife, Environment, Water and Natural Resources
MLGRD Ministry of Local Government and Rural Development
MMDAs Metropolitan, Municipal and District Assemblies
MoEM Ministry of Energy and Mineral Development
MoFPED Ministry of Finance, Planning and Economic Development
MOH Ministry of Health
MoME Ministry of Monitoring and Evaluation
MoWT Ministry of Works and Transport
MPD Ministère du Plan et du Développement (Ministry of Planning and Development)
MPDEPP-CAG Ministère du Plan, du Développement, de l'Évaluation des Politiques Publiques et du Contrôle de l'Action Gouvernementale (Ministry of Planning, Development, Evaluation of Public Policies and Monitoring of Government Implementation)
MSWR Ministry of Sanitation and Water Resources
MWRWH Ministry of Water Resources, Works and Housing
NDP National Development Plan
NDPC National Development Planning Commission
NEP National Evaluation Plan
NEPF National Evaluation Policy Framework
NES National Evaluation System
NESSAP National Environmental Sanitation Strategy and Action Plan
NGO Non-governmental organisation
NIMES National Integrated Monitoring and Evaluation Strategy
NLLAP National Learning Alliance Platform
NPC National Planning Commission
NPM New Public Management
NPO Non-profit organisation
NRT Northern Rangeland Trust
NSFAS National Student Financial Aid Scheme
NSNP National School Nutrition Programme
NSO National Statistics Office
NWG National working group
OECD Organisation for Economic Co-operation and Development
OPM Office of the Prime Minister
PACE Programme for Accessible Health Communication and Education, Uganda
PAG Programme d'Action du Gouvernement (Government Programme of Action)
PASCiB Plateforme des Associations de la Société Civile du Bénin (Network of Civil Society Associations of Benin)
PDE Procuring and disposing entity
PDU Procuring and disposing unit
PFM Public financial management
PNIASAN Plan National d'Investissement pour l'Agriculture, la Sécurité Alimentaire et la Nutrition (National Plan for Investment in Agriculture, Food Security and Nutrition)
PNOPPA Plateforme Nationale des Organisations de Paysans et de Producteurs Agricoles (National Network of Small Farmer and Agricultural Producer Organisations)
PoA Programme of Action
PPDA Public Procurement and Disposal of Assets
PPDA Public Procurement and Disposal of Assets Authority
PPH Postpartum haemorrhage
PPMS Public Procurement Management System
PRS Parliamentary Research Services
PSDSA Plan Stratégique pour le Développement du Secteur Agricole (Strategic Plan for the Development of the Agricultural Sector)
PSI Population Service Initiative
PSO
Plan Stratégique Opérationnel (Strategic Operational Plan) PSRSA Plan Stratégique pour la Relance du Secteur Agricole (Strategic Plan for the Revival of the Agricultural Sector) RCME Research Coordination, Monitoring and Evaluation REACH-PI Regional East African Community Health Policy Initiative RRS Rapid Response Service SAMEA South African Monitoring and Evaluation Association SAPS South African Police Service SDDAR Schéma Directeur du Développement Agricole et Rural (Master Plan for the Agricultural and Rural Development Sector) SEA Strategic Environmental Assessment SESIP Strategic Environmental Sanitation Investment Plan SNCA Stratégie Nationale de Conseil Agricole (National Strategy for Agricultural Advice) SNE Système National d’Evaluation (National Evaluation System) SNV Netherlands Development Organisation SR Systematic review TASU Technical Administration Support Unit TEHIP Tanzania Essential Health Interventions Project TNC The Nature Conservancy ToRs Terms of reference TWG Technical working group UBOS Uganda Bureau of Statistics UCIMB Union des Chambres Interdépartementales de Métiers du Bénin (Union of Interdepartmental Chambers of Trade of Benin) xxvi Abbreviations UEA Uganda Evaluation Association UNDP United Nations Development Programme UNICEF United Nations Children’s Fund URA Uganda Revenue Authority USAID United States Agency for International Development VAC Violence against children VAW Violence against women VAWC Violence against women and children VEP Victim Empowerment Programme VHT Village health team WACIE West African Network on Impact Evaluation WAEMU West African Economic and Monetary Union WB World Bank WCMA Wildlife Conservation and Management Act WHO World Health Organization WSUP Water and Sanitation for the Urban Poor WWF World Wildlife Fund Summary This introductory chapter outlines the rationale for this book on evidence- based policy making and practice in Africa and sets the scene for the chapters that follow. 
The book takes a policy-maker’s, not a researcher’s, perspective and is concerned with how the use of evidence by policy makers and practitioners (project/programme managers) can be supported. The book documents eight African experiences in using evidence, from Benin, Ghana, Uganda, Kenya, South Africa and the ECOWAS region (i.e. West Africa). The chapter gives a brief contextual overview of the five countries from which the case studies in the book are drawn, locating the cases within their context. The research methodology is based on case studies and a realist approach to evaluation research. The case studies cover evidence generated through evaluation, research, research synthesis and the involvement of civil society. However, the book does not focus on the evidence itself but on how interventions that promote use played out and how they influenced individuals, organisations and systems, building capability or motivation to use evidence, and creating opportunities to use evidence. The cases show how these led to policy outcomes. The chapter briefly introduces the analytical framework and outlines the structure of the book.

South Africa has had one of the highest rates of HIV/AIDS in the world. At an early stage of the pandemic, the then South African president refused to accept the evidence of the link between HIV and AIDS and that anti-retroviral (ARV) therapy was a solution. As a result, the rollout of ARVs was delayed, leading to the deaths of millions of people whose lives could have been prolonged with ARVs. With the rollout of ARVs, the percentage of deaths due to AIDS has fallen from 42.9% in 2006 to 23.4% in 2019, and life expectancy for men has risen from a low of 52.3 years in 2006 to 61.5 today (Statistics South Africa, 2019).

Africa – a period of self-discovery

There is a sense of dynamism in Africa, with many countries having higher economic growth rates than developing countries elsewhere in the world (AfDB, 2019).
African leaders today are determined to bring about meaningful change, and the African electorate is demanding more of its leaders and pushing harder for accountability. African governments have a wealth of knowledge and experience to draw on, and there is much that the world can learn from Africa and that African countries can learn from one another. Rwanda and Ethiopia, for example, brought down the percentage of people living with HIV/AIDS from 9.5% and 4.7%, respectively, to less than 0.5% in just under 20 years.1 South Africa has a GDP more than 35 times that of Rwanda and four times that of Ethiopia.2 Yet, in 2018, South Africa’s incidence of HIV was 8.7% while that of Rwanda sat at 0.5% and Ethiopia’s at 0.4%. What can be learned from the achievements of Rwanda and Ethiopia in their respective contexts?

African countries are grappling with increasingly complex policy challenges. Across the continent, governments are struggling to translate economic growth into opportunities for all citizens to prosper. From 2008 to 2018, Africa’s combined GDP increased by nearly 40%. Yet, almost half its citizens ‘live in one of the 25 countries where sustainable economic opportunity has declined in the last ten years’ (The 2018 Ibrahim Index of African Governance, 2018). Concurrently, the repercussions of unsustainable growth on both present and future generations are increasingly apparent (AfDB, 2019; UNECA, 2018). Enabling growth, improving the well-being of all, strengthening resilience to shocks and stabilising the integrity of the environment and natural resource base can only be achieved through a transformational shift in policy and practice, and so in the way we learn and make decisions. This book is based on the premise that using evidence3 contributes to improving development policies, programmes and practice.
It responds to a central challenge: how can we better harness the wealth of evidence that is being generated in Africa to develop wisely and equitably under the diversity of conditions that we face?

Rationale for a book on evidence use

There is no shortage of data and information today, with technology increasing access exponentially. Evidence comes from multiple sources, ranging from scientific research, evaluations, traditional/indigenous knowledge and administrative data to surveys of public opinion. To answer our development challenges, we need to be able to recognise and access evidence that is relevant, credible and robust. We need to know how to access existing evidence and when and how to commission and generate new evidence to fill the gaps. Equally important is our ability to navigate the political and social context to create opportunities to use this evidence in decision making, integrating the evidence with decision makers’ and practitioners’ knowledge, skills, experience, expertise and judgement.

The use of evidence is the focus of this book, and it was written to improve understanding of how using evidence can help inform and strengthen development policy, programmes and practice in Africa. We analyse the processes which support or inhibit evidence use rather than focusing on the sources of evidence or the evidence generation process, of which much has been written.

The book approaches evidence from the perspective of policy makers rather than researchers. It explores how African policy makers and development practitioners can apply interventions to promote the use of evidence to improve development outcomes and impacts. Practitioners may be government or NGO staff. This book should also be of value to knowledge brokers from both government and non-governmental organisations contributing to development outcomes, as well as academics interested in the use of evidence.
There are debates as to whether to refer to ‘evidence-based’ or ‘evidence-informed’, ‘policy making’ or ‘decision making’, ‘evidence-based policy making’ (EBPM) or ‘evidence-informed decision making’ (EIDM) (e.g. Stewart et al., 2019). Banks (2018) quotes an Australian public servant as saying:

some have interpreted the term EBPM so literally as to insist that the word ‘based’ be replaced by ‘influenced’, arguing that policy decisions are rarely based on evidence alone. That of course is true, but few using the term would have thought otherwise.

In this book we use EBPM and EIDM interchangeably, with a preference for evidence-informed policy and practice (EIPP). Our focus is on the relationship between evidence and change in its various forms – policy, development practice or beliefs, and world views.

Learning from African experiences of using evidence

The movement for evidence-based policy has been promoting the use of evidence to inform policy making and practice since the 1970s. The early work in this field was led by the health sector and took place primarily outside Africa. Since 2010, however, work on evidence-based policy making has been expanding in Africa. Most countries have national statistics agencies with the capacity to gather national data, although there are issues around the quality of the data (PARIS21, 2019). Most countries also have some form of national monitoring and evaluation (M&E) system, but usually they focus on monitoring, performance and accountability rather than evaluative thinking and learning (Porter and Goldman, 2013). Some countries, such as Benin, South Africa and Uganda, have national evaluation systems and are systematically evaluating key policies and programmes (Goldman et al., 2018). There are numerous examples of evidence being systematically synthesised from multiple studies rather than relying on single studies, particularly in the health sector. Countries are also investing in promoting evidence use.
For example, Benin, South Africa and Uganda have offered advocacy courses to senior managers to stimulate demand for evidence. Evidence-related organisations and networks are emerging, ranging from the Centres for Learning on Evaluation and Results (CLEAR) in Anglophone and Francophone Africa to the Africa Evidence Network. Today, there are African examples of policy makers using evidence from evaluations and evidence synthesis, of experimentation in approaches to evaluation and evidence synthesis, and evidence use is being discussed in national and international platforms.

This book draws on the wealth of practice around evidence generation and use across Africa, giving voice to African experience. We use case studies to explore the experiences of organisations and individuals using evidence to inform development outcomes. We do so across multiple countries, sectors and sources of evidence, including evaluations, research synthesis and citizen engagement. In doing so, the book recognises the importance of going beyond evidence to include the knowledge of actors, acknowledging local world views, values, practitioner and citizen experience and the tacit knowledge needed to judge whether an idea is relevant and how to adapt it to a local context (Martinuzzi and Sedlačko, 2017).

The cases are written by researchers and policy makers working together to explore evidence journeys, with the aim of identifying the critical factors that enabled or hindered the use of evidence in the particular context. The authors do their best to ‘tell an honest story’, recognising that often the most important insights emerge from challenges and failures.
The book identifies and documents lessons from the participating African countries with the aim of sharing these with policy makers, practitioners and researchers across Africa and beyond, and of contributing to building the networks and processes which help to promote the use of evidence on the African continent. In this way it seeks to support action-learning, giving a voice to policy makers directly involved in the evidence process and to the researchers of the cases. Videos and policy briefs have also been developed to provide diverse ways of conveying the lessons.

The research process

The research undertaken for this book used an analytical framework drawn from existing work on research impact (Langer et al., 2016) and work led from Latin America on the importance of policy context (Weyrauch et al., 2016). Further detail is given in Chapter 3.

The methodology is based on a case study approach, relevant when ‘how’ or ‘why’ questions are being posed and where it is difficult to separate the phenomenon to be studied from the context (Yin, 1994). Case studies were identified from countries linked to the Twende Mbele programme, a partnership of African governments focusing on using M&E to inform change.4 These partners enabled ready access to policy makers and the potential to use the book itself as a change intervention in these countries.5 The case studies include good examples of evidence use, from a variety of evidence sources and a range of sectors.

Research tools included document review, interviews with key stakeholders, participant observation6 and, in some cases, workshops or focus groups, using similar checklists. The authors were asked to follow a similar structure in writing up the cases, and policy makers were involved in writing each of the cases. The editors then turned the cases into chapters, which were reviewed by the authors.
The chapters document and analyse these examples of generating and using evidence and how these have or have not informed policy and practice, and draw out what facilitated or inhibited the use of evidence and the lessons from this experience for Africa.

Introduction to the case studies

Eight case studies are presented, from five countries plus the ECOWAS region (the Economic Community of West African States). Benin, South Africa and Uganda have a national evaluation system in place and are demonstrating the use of evaluation in decision making. Kenya and Ghana have draft M&E policies in place, with Kenya’s awaiting approval from Cabinet. All the countries have universities and think tanks conducting research as well as national statistical organisations. In some cases, such as Uganda, Kenya and South Africa, parliamentarians are starting to request evidence. In the case of Uganda, the use of evidence for large-scale projects is mandatory and built into policy requirements. This section provides a brief overview of the five case study countries and the cases selected, with the sixth covering the ECOWAS countries.

Kenya

Kenya’s system of government is made up of national government and recently devolved local government units, known as counties. It has an active parliament with a research service and a Parliamentary Budget Office. Kenya has developed an M&E policy which has been awaiting approval from Cabinet for some time. The M&E Directorate in the Ministry of Finance and Planning is responsible for the national integrated M&E strategy (NIMES) as well as Kenya’s Vision 2030 and its national economic recovery strategy. Other sources of evidence include the rapid results initiative (RRI), which is focused on monitoring project performance, an electronic national integrated M&E system (E-NIMES) and an electronic county integrated M&E system (E-CIMES), which facilitate real-time information sharing on project implementation.
According to a diagnostic study carried out by CLEAR-AA on the status of national evaluation systems (NES) in Kenya (Khumalo, 2019), the introduction of NIMES significantly improved the role of M&E in policy formulation and implementation. Kenya has an active National Bureau of Statistics and well-established universities and research institutes such as the African Population Health Research Centre, the Kenya Institute for Public Policy Research and Analysis and the African Institute for Development Policy. These institutes play a key role in evidence generation, promote research synthesis, and are directly involved in training and support for EBPM. There is also a range of international research institutions based in Nairobi. However, persistent challenges hinder effective evidence use at multiple levels. These include inadequate resources and capacities and a weak evidence culture in the country, as well as limited engagement of civil society and other non-state actors (Ibid.).

The Kenyan case in Chapter 10 focuses on the wildlife sector, a sector with a long history of polarised ideologies amongst its stakeholders. Specifically, it reflects on the role of a parliamentary committee which led the review of the Wildlife Conservation and Management Act 2013 through citizen engagement.

Uganda

Uganda has two levels of government, national and local, with two levels of local government, district and subcounty. Most services to citizens are run by local governments. The Office of the Prime Minister (OPM) is responsible for coordinating national-level monitoring and evaluation. The national integrated M&E strategy (NIMES) was launched in 2005/06 and the national M&E policy was passed by Cabinet in March 2013. A Government Evaluation Facility (GEF) was established in 2013 which oversees the management and use of evaluations.
There is collaboration between state and non-state actors, with the Uganda Evaluation Association (UEA) working closely with government. The Uganda Parliament is a member of the African Parliamentarians’ Network on Development Evaluation (APNODE) and has a parliamentary research office, a budget office and its own M&E office. According to the NES diagnostic study carried out by CLEAR-AA (David-Gnahoui, 2018), evaluations of large projects are mandatory, and approximately 12% of the evaluations conducted by the time of the diagnostic study in 2018 had been commissioned and/or co-managed by government, with relatively high levels of quality. There are many organisations undertaking evaluation, including consultants, universities, and think tanks like the Economic Policy Research Centre. The subject of Chapter 7 is an evaluation of the national public procurement system, undertaken relatively early in the national evaluation system.

Uganda has a well-established university sector – Makerere University is one of the highest-rated universities in Africa – and a range of think tanks such as the African Centre for Health & Social Transformation and the Institute for Public Policy Research. There are evidence users outside government such as the Civil Society Budget Advocacy Group (Obuku, 2018). Makerere University College of Health Sciences has had a programme promoting research synthesis and knowledge translation for some years, producing knowledge products for government (Nankya, 2016). The second Ugandan case, discussed in Chapter 8, is a rapid research synthesis service in the health sector, situated in Makerere University. This is a pioneering service which has been synthesising existing research and testing out rapid models, where government gets a summary of existing health research within 28 days (Mijumbi-Deve et al., 2017). This is a model of great interest to Africa. Three mini-cases of evidence use are also discussed.
One comes from national level and the other two are from decentralised district health services.

South Africa

South Africa is a semi-federal state, with semi-autonomous spheres of national, provincial and local government. Local government is divided into district and local municipality levels. Services to citizens such as education and health are run by provincial governments, and water and electricity services are run by municipalities. South Africa has well-established institutional arrangements for M&E at the national and provincial levels. Key at national level is the Department of Planning, Monitoring and Evaluation (DPME), which runs various monitoring systems as well as the national evaluation system (NES), and evaluations are happening at national, provincial and departmental levels.

Research is widely used in government, which is also starting to use research synthesis and evidence mapping (Stewart et al., 2019). DPME, with the University of Cape Town, has run training for the top three levels of the public service in EBPM. The South African Monitoring and Evaluation Association (SAMEA) is actively providing a platform for multiple actors to engage around M&E. Another player in evaluation, not just in South Africa, is the Anglophone Centre for Learning on Evaluation and Results (CLEAR-AA), based at the University of the Witwatersrand.

Capacity for evidence supply, such as consultancies and research institutes, is considerably higher than in other countries in Africa. Research councils such as the Human Sciences Research Council conduct research, and there is a network of universities producing high-quality research. Research synthesis work is strong in the health sector, led by the South African Cochrane Centre and the Medical Research Council, and emerging in other sectors, with an important role played by the Africa Centre for Evidence at the University of Johannesburg.
There is a South African network of five research centres committed to synthesising evidence for decision making (Stewart, Dayal, et al., 2019). The two cases in this book from South Africa cover evaluations conducted within the national evaluation system. The first is of the evidence work of the Department of Basic Education (Chapter 5), highlighting a department which has been forward-thinking in developing its own evaluation and research capacity and was an early adopter of evaluations, as well as exploring use of evidence synthesis. The case looks at two evaluations, of a teacher bursary programme and the National School Nutrition Programme, and reflects on the journey of the department in promoting and using evidence, highlighting the role of an internal knowledge broker. Chapter 6 is the second South African case, focusing on an evaluation of the state’s response to violence against women and children, a very complex issue, where significant progress has been made in informing policy supported by a deep dialogue process.

Benin

Benin has two levels of governance, central and local government, the latter with a considerable degree of autonomy. It has a national evaluation policy in place, evaluation guidelines and a national evaluation board (Porter and Goldman, 2013). The Bureau of Public Policies, Evaluation and Government Action Analysis (BEPPAG), hosted by the General Secretariat of the Presidency, establishes and leads the national evaluation system and evaluation capacity development and ensures the use of evaluation in management.

Between 2007 and 2018, Benin carried out 17 national-level evaluations and has had an impressive record of uptake and use of evaluations to influence implementation. However, the NES diagnostic study showed that approximately 90% of the demand for evaluations came from donors. Evaluations are made available to the public through a database established in 20187 (Présidence du Bénin, n.d.).
Ministries are required to send annual performance reports to the Supreme Court, which are then published on approval and utilised in informing legislation.8 Broadly, there is a strong level of competence amongst evaluation consultants in the country (though some specialised skills are lacking) as well as structures offering capacity-building in evaluation (David-Gnahoui, 2018). However, in spite of its institutional framework, evidence generation and use continues to be a significant challenge, particularly with regard to local supply and demand.

The case from Benin in Chapter 9 is an evaluation of its agriculture sector policy conducted in 2009, relatively early in the implementation of the evaluation system, and the subsequent evolution of the policy process to involve producers, building on the evidence in the evaluation.

Ghana

Ghana has two levels of government, national and local (district assemblies), with an intermediate regional coordinating structure (regional coordination councils). Most services are run by national departments, but there has been an ongoing decentralisation programme transferring functions to districts and municipal structures. M&E functions take place at the national level, with the Ministry of M&E responsible for priority flagship programmes and projects and the National Development Planning Commission (NDPC) responsible for planning and monitoring the national development plan and work in the sectors. A draft M&E policy has been developed, but meanwhile the NDPC provides guidance for M&E activities, including manuals and guidelines. There is an active national statistical office, the Ghana Statistical Service.

Evaluations usually take place in response to donor requirements and are carried out by external evaluators in private firms. Interest in evaluation is growing, with the Ghana M&E Forum playing an important role in promoting evaluation and evidence use (Amatoey et al., 2019).
A diagnostic review carried out by CLEAR-AA found limited use of evaluation findings, due to limitations in time as well as competing interests (Ibid.). Sampong (2018) provides an example of one government agency, the Environmental Protection Agency, showing limited engagement of research institutions, fragmented evidence collection and uncoordinated research efforts.

There is a range of universities undertaking research and consultancy, notably institutes like the Ghana Institute for Management and Public Administration and the Institute of Statistical, Social and Economic Research at the University of Ghana. Research is usually commissioned with donor funding. Sampong (2018) suggests there is limited systematic collaboration between researchers and policy makers.

Ghana’s use of evidence for decision making is still fairly nascent, and emphasis remains on performance management and compliance as opposed to the use of evaluations and other forms of evidence for learning and policy making. There is an active civil society which has produced some interesting tools for tracking government performance and the state of services. Chapter 11 focuses on two civil society tools for looking at the performance of services at district level, focusing particularly on management of sanitation services.

ECOWAS

ECOWAS is a regional economic community covering 15 West African states and operates in parallel to the West African Monetary Union (WAMU). It has a primary role in legislation for taxation across the community. The last case (Chapter 12) focuses on a regional initiative through ECOWAS using research evidence and a well-facilitated process of dialogue at technical and political levels to increase tobacco taxation, with the aim of reducing tobacco consumption and increasing government revenue.
The case tracks the journey to get a new directive on tobacco taxation adopted and the important role played by a West African research institution, CRES, in facilitating the process.

Introducing the analytical framework

The analytical framework is discussed in depth in Chapter 3. However, for those who do not wish to read this in any depth, a simplified version is given in Figure 1.1.

[Figure 1.1 Simplified version of the analytical framework. The figure shows a flow from context, demand and evidence generation through use interventions and change mechanisms to immediate outcomes (changes in capability, opportunity or motivation), wider outcomes (evidence use, i.e. changes in policy or practice) and development impact (policy performance and impact, and wider systems change). Source: Author generated.]

The theory of change which underlies the analytical framework is based on the following:

• The external and internal context is key to the design, operationalisation and success of any initiative. Therefore, each case is analysed in relation to the context.
• In different cases we see different levels of demand for evidence, which can be from policy makers, donors, researchers or civil society.
• Evidence is generated through a variety of types of evidence production.
• Interventions to promote ownership, use and consensus around the findings occur prior to, as part of and after the evidence generation process. Other interventions promote the credibility of the evidence or wider access.
• These interventions trigger change mechanisms within individuals and organisations, which are critical to enabling use. Examples are making the evidence accessible, making stakeholders aware, building trust in the evidence, or institutionalising the recommendations in some way.
• That should lead to an immediate outcome of strengthening the capability of stakeholders to use evidence, the opportunity to use the evidence in a particular setting, and the motivation for the evidence to be used.
• If that happens, then we may begin to see changes in behaviour, policies and practice at individual, organisational and wider system levels.
• The process is rarely linear and is usually iterative, and it may involve successive levels of evidence generation and use. Indeed, the triggers for change may come from different stages. For example, in Benin and South Africa, the need for theories of change in evaluations led to the introduction of theories of change for planning.

The cases seek to track the real pathway of change – what actually contributed to use of the evidence – so that we can learn how to do so better and more deliberately. To that end, the book draws out the lessons emerging (Chapter 13) so that readers can apply the learnings to their own context: which use interventions are likely to work, and which change mechanisms need to be triggered to get the outcomes they want.

The flow of the book

The book essentially has three parts: four introductory chapters which introduce the topic, eight chapters presenting case studies, and a concluding chapter that provides an analytical reflection on the initial assumptions and how the understanding of evidence-based decision making prior to this research project was adjusted based on what has emerged from the case studies.

Following this introductory chapter, Chapter 2 discusses the evidence-based policy field. Chapter 3 provides an in-depth explanation of the analytical framework that guided the case study research carried out for this book. Chapter 4 looks at the context of three of the countries in relation to use of evidence, based on an M&E culture survey carried out by Twende Mbele in Benin, Uganda and South Africa in 2017. Chapters 5 to 12 present specific cases, documenting and analysing examples of generating and using evidence and how the evidence did or did not inform policy and why.
In doing so, factors facilitating or inhibiting the use of evidence, and lessons going forward, are identified and discussed. In each of these chapters, the focus is on the processes which support or inhibit evidence use rather than on the type of evidence source, but the cases do include a variety of evidence sources, ranging from evaluations to research synthesis to citizen engagement. The final chapter of the book (Chapter 13) provides an overall picture of what has emerged across the cases; the lessons emerging from the case studies, including a refined version of the analytical framework for evidence use based on the experience from the case studies; and lessons emerging about how to promote evidence. We hope you enjoy the read!

Notes

1 Incidence of HIV (per 1,000 uninfected population ages 15–49; World Bank, n.d.-b).
2 GDP (current USD; World Bank, n.d.-a).
3 According to Cairney, 'evidence is an argument or assertion backed by information' (Cairney, 2016). We discuss definitions around evidence further in Chapter 2.
4 www.twendembele.org
5 The cases are not projects of the Twende Mbele partnership itself, except for Chapter 4, which uses research funded by Twende Mbele into the performance culture in Uganda, Benin and South Africa.
6 A key method was participant observation, where the researcher or co-authors had been involved in the case. This provided access to detailed knowledge of the context, ready access to information, historical recall of events and insight into the background and motives, with the possibility of a 'thick description' of the events that happened as well as a detailed picture of the context. A thick description is usually a lengthy description that captures the sense of actions as they occur. It places events in contexts that are understandable to the actors themselves.
7 Available at www.presidence.bj/évaluation-politiques-publiques.
8 Personal communication, Elias A.K.
Segla, Specialist in governance and public policy evaluation, Présidence de la République du Bénin, Bureau de l'Évaluation des Politiques Publiques et de l'Analyse de l'Action Gouvernementale.

References

The 2018 Ibrahim index of African governance: Key findings. 2018, November 7. Retrieved 17 August 2019, from Mo Ibrahim Foundation website: http://mo.ibrahim.foundation/news/2018/2018-ibrahim-index-african-governance-iiag-key-findings/
AfDB. 2019. African economic outlook. Abidjan, Côte d'Ivoire: African Development Bank.
Amatoey, C., Adaku, E. and Otoo, R.K. 2019, January. Diagnostic report: Current status of the national evaluation system in Ghana. CLEAR Anglophone Africa, Graduate School of Public and Development Management, University of the Witwatersrand.
Banks, G. 2018, November 30. Whatever happened to 'evidence-based policymaking'? Retrieved 26 March 2019, from The Mandarin, Australian online magazine for public sector managers, website: www.themandarin.com.au/102083-whatever-happened-to-evidence-based-policymaking/
Cairney, P. 2016. The politics of evidence-based policy making. London: Palgrave Macmillan.
David-Gnahoui, E. 2018. Etude diagnostique de l'offre et de la demande d'évaluation au Bénin [Diagnostic study of the supply of and demand for evaluation in Benin]. Twende Mbele.
Goldman, I., Byamugisha, A., Gounou, A., Smith, L.R., Ntakumba, S., Lubanga, T., . . . Rot-Munstermann, K. 2018. The emergence of government evaluation systems in Africa: The case of Benin, Uganda and South Africa. African Evaluation Journal, 6(1), 11. https://doi.org/10.4102/aej.v6i1.253
Khumalo, L. 2019, January. Diagnostic report: Current status of the national evaluation system in Kenya.
Centre for Learning on Evaluation and Results Anglophone Africa (CLEAR-AA), Faculty of Law, Commerce and Management, University of the Witwatersrand.
Langer, L., Tripney, J. and Gough, D. 2016, April. The science of using science: Researching the use of research evidence in decision-making. EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London.
Martinuzzi, A. and Sedlačko, M. 2017. Knowledge brokerage for sustainable development: Innovative tools for increasing research impact and evidence-based policy-making (1st ed.). https://doi.org/10.4324/9781351285483
Mijumbi-Deve, R., Rosenbaum, S.E., Oxman, A.D., Lavis, J.N. and Sewankambo, N.K. 2017. Policymaker experiences with rapid response briefs to address health-system and technology questions in Uganda. Health Research Policy and Systems, 15(1), 37. https://doi.org/10.1186/s12961-017-0200-1
Nankya, D.E. 2016. Evidence-informed decision-making landscape at Makerere University, College of Health Sciences, Uganda. Retrieved from www.africaevidencenetwork.org/wp-content/uploads/2016/11/19-Nankya.pdf
Obuku, E. 2018. Rapid landscape review map: A navigation guide to the R2P jungle in Uganda. Retrieved from www.africaevidencenetwork.org/wp-content/uploads/2016/11/9.-Obuku-2018.pdf
PARIS21. 2019. Statistical capacity development outlook. Retrieved from https://paris21.org/sites/default/files/inline-files/Statistical%20Capacity%20Development%20Outlook%202019.pdf
Porter, S. and Goldman, I. 2013. A growing demand for monitoring and evaluation in Africa. African Evaluation Journal, 1(1). https://doi.org/10.4102/aej.v1i1.25
Présidence du Bénin. n.d. Évaluation des politiques publiques [Evaluation of public policies]. Retrieved 17 April 2019, from Présidence du Bénin website: www.presidence.bj/evaluation-politiques-publiques
Sampong. 2018. Evidence ecosystem map: Ghana Environmental Protection Agency.
Retrieved from www.africaevidencenetwork.org/wp-content/uploads/2016/11/13.-Sampong-2018.pdf
Statistics South Africa. 2019, July 29. Mid-year population estimates. Statistics South Africa.
Stewart, R., Dayal, H., Langer, L. and van Rooyen, C. 2019. The evidence ecosystem in South Africa: Growing resilience and institutionalisation of evidence use. Palgrave Communications, 5(1), 90. https://doi.org/10.1057/s41599-019-0303-0
Stewart, R., Langer, L. and Erasmus, Y. 2019. An integrated model for increasing the use of evidence by decision-makers for improved development. Development Southern Africa, 36(5), 616–631. https://doi.org/10.1080/0376835X.2018.1543579
UNECA. 2018. 2018 Africa sustainable development report: Towards a transformed and resilient continent. Addis Ababa: United Nations Economic Commission for Africa. eISBN: 978-92-1-047600-3.
Weyrauch, V., Echt, L. and Suliman, S. 2016, May. Knowledge into policy: Going beyond context matters. Framework. Politics & Ideas and the International Network for the Availability of Scientific Publications.
World Bank. n.d.-a. GDP (current US$) – Rwanda, South Africa, Ethiopia | Data. Retrieved 1 November 2019, from World Bank Open Data website: https://data.worldbank.org/indicator/NY.GDP.MKTP.CD?locations=RW-ZA-ET
World Bank. n.d.-b. Incidence of HIV (per 1,000 uninfected population ages 15–49) – Rwanda, South Africa, Ethiopia | Data. Retrieved 1 November 2019, from World Bank Open Data website: https://data.worldbank.org/indicator/SH.HIV.INCD.ZS?locations=RW-ZA-ET
Yin, R. 1994. Case study research: Design and methods. London: Sage.
2 An introduction to evidence-informed policy and practice in Africa
Ian Goldman and Mine Pabari

Summary

This chapter introduces the theory around evidence and evidence-based policy making, otherwise referred to as evidence-informed policy and practice. The authors acknowledge that, in practice, policy makers use values, experience and political necessity as well as evidence to inform decisions, so they apply a limited or 'bounded rationality'. We discuss different types of evidence use, including instrumental, conceptual, symbolic and process use. An overview is given of the historical development of the use of evidence, in Africa and internationally, from a focus on data, to monitoring and evaluation, to evaluation as a distinct discipline, and the move from single studies to research synthesis. The role of knowledge brokers is discussed, dealing with both the supply of and demand for evidence. The authors emphasise the importance of creating an enabling environment for evidence use. This is introduced in this chapter and is a theme throughout the book.

Evidence matters, or does it?

All governments have to make choices about how to deploy their resources. In Africa, where resources are more limited and social problems are pressing, these choices are critical. Much has been written about how evidence can assist, for example, in demonstrating progress in implementing national plans, negotiating and designing large-scale investments and assisting in decision making (Parkhurst, 2017; Weiss, 1979).
Yet, in spite of the rhetoric around the importance of evidence, use of evidence for policy and practice remains challenging and somewhat elusive. A study of policy makers in South Africa found that while 45% of senior managers hoped to use evidence in decision making, only 9% reported being able to translate this intention into practice (Paine Cronin and Sadan, 2015). In Chapter 4 of this book, it is reported that between 40% and 50% of managers in Benin, Uganda and South Africa rarely or never use evaluation evidence. This situation is not just limited to Africa. Stewart et al. (2019) refer to several examples of evidence not being used: 85% of health research not being used internationally; in the Obama administration, only 1% of government funding was informed by evidence (Bridgeland and Orszag, 2015); and, despite extensive spending on What Works Centres in the UK, only 4 out of 21 government departments were able to account for the status and whereabouts of their commissioned research evidence, let alone demonstrate that they were using it (Sedley, 2016). Evidence can also be used inappropriately, for example, to validate pre-existing viewpoints – sometimes referred to as policy-based evidence (Weatherall et al., 2018). There is little research into African policy makers' use of evidence. A study carried out in South Africa in 2011 found that policy makers' primary source of evidence was informal rather than more rigorous sources (Paine Cronin and Sadan, 2015). A positive sign, however, was that officials across departments were unanimous about the need to improve the use of evidence in policy making. In Chapter 1, it was pointed out that generating or acquiring high-quality evidence does not automatically lead to use.
Attention, therefore, needs to be given to the processes and factors that enable use so that they can be more consciously promoted. That is the central focus of this book. We explore how evidence use was promoted in eight different regional, national and subnational cases and how evidence use influenced the eventual policy outcomes. This chapter lays out the theoretical foundations for evidence-based policy and practice and how it is applied in Africa.

What do we mean by evidence and evidence use?

Policy making is grounded in theory, values, ideology and practice, and so it will always be subject to political contestation (Davies, 2011). Chapter 1 refers to the debates as to whether we should refer to 'evidence-based' or 'evidence-influenced', 'policy making' or 'decision making', 'evidence-based policy making' (EBPM) or 'evidence-influenced decision making' (EIDM). This book is based on the premise that policy decisions are not and cannot emanate solely from evidence – or even rational analysis! Such is the nature of humanity that emotions, politics, power, fear and many other factors play a central role in the directions we take and the choices we make. In terms of using the words 'policy making' or 'decision making', Cairney (2016, p. 2) suggests that in practice, policy is 'the sum total of government action, from signals of intent to final outcome', i.e. actors make and deliver policy continuously. We recognise that policies and practice are influenced by many factors, of which evidence is one, and so prefer 'informed'. We also feel that policy is important because it is the agreed guideline as to what is to be done (policies, legislation, plans, etc.), but that in the end it is what is actually done (i.e. implementation or practice) that matters. So while the different authors use EBPM and EIDM interchangeably, we have a preference for evidence-informed policy and practice (EIPP).
Some definitions of the term 'evidence'

According to Cairney, 'evidence is an argument or assertion backed by information' (Cairney, 2016). Evidence is sometimes associated with rigorous quantitative scientific studies. However, evidence can take many forms and come from many sources, including:1

• Statistical evidence from surveys, official statistics and administrative data, each of which can indicate the size, nature and dynamics of the problem in hand;
• Descriptive and experiential evidence, including experience and intuitive/tacit knowledge from stakeholders, which illuminates the nature, size and dynamics of a problem;
• Individual evaluations and research studies;
• Research synthesis, including systematic reviews of evidence, meta-analyses and rapid evidence assessments;
• Economic and econometric evidence, which refers to the cost-benefit or cost-effectiveness of interventions;
• Implementation evidence, indicating how similar policies have been successfully implemented and how barriers to successful implementation have been overcome;
• Ethical evidence, in terms of questioning or understanding the ethical implications of a policy.

Davies (2013) defines quantitative evidence as data that meet the standards of:

• Internal validity: to what extent do the design and conduct of the study eliminate the possibility of bias?
• Adequacy of reporting: are the statistics adequate, and do the data support the findings?
• External validity: does the study have the potential to be extended to the wider world?

In this definition, the key qualities of evidence are independence, objectivity and verifiability. In contrast, opinions are statements and claims that do not meet the standards of evidence, as they are positional, subjective, partial (selective) and hard to verify.2 Qualitative evidence has an equally strong claim and a need for rigour. Spencer et al.
(2003) suggest that qualitative evidence meets the tests of:

• Contribution: does the study advance wider knowledge or understanding about a policy?
• Defensibility: does the study provide an appropriate research strategy to address the evaluative questions posed?
• Rigour: how systematic and transparent are the collection, analysis and interpretation of qualitative data?
• Credibility: how well-founded and plausible are the arguments about the evidence generated?

The use of evidence involves policy makers and practitioners drawing on these different sources of evidence and linking them to their own experience and their local context when making choices about what and how to implement.

What do we mean by evidence use?

Humans have been refining scientific research methods over the centuries but have only recently turned their attention to methods for using research to inform policy. Weiss (1979) is the author of one of the earliest papers looking at research utilisation, which contrasts a knowledge-driven with a problem-solving model. In the knowledge-driven, researcher-focused model, basic research leads to applied research, which is then developed and finally applied. In the problem-solving model, 'a problem exists, and a decision has to be made, information or understanding is lacking either to generate a solution to the problem or to select among alternative solutions, research provides the missing knowledge. With the gap filled, a decision is reached' (p. 427). Weiss also distinguishes between research undertaken to anticipate needs and the commissioning of evaluation or research to address a knowledge gap. Cairney (2016) points out the importance of understanding policy-making processes and the use of evidence within these complex and political processes in order to better understand which model might be most effective in a particular circumstance. In this book, we apply the concepts of instrumental, conceptual, process and symbolic use.
Johnson et al. (2009) define these terms in the following way. Instrumental use refers to instances where a specific action has been taken arising from an evaluation or research. Conceptual use refers to cases where no direct action has been taken but where people have greater understanding as a result of the evaluation. Symbolic use occurs when evidence is used to legitimise pre-existing views. We also consider the case of positive symbolic use, for example where the presence of an evaluation raised the profile of an issue.3 Patton emphasises the importance of process use, the 'individual changes in thinking and behaviour and program or organizational changes in procedures and culture that occur among those involved in evaluation as a result of the learning that occurs during the evaluation process' (Patton, 1998, p. 225).4 Apart from intended use, there may well also be unintended uses, which are as important to identify and learn from. Evidence use will rarely be the result of a single study or piece of evidence changing the world but will rather be the result of multiple small steps. The larger changes can take many years to accumulate (Stewart et al., 2019), often through interactions between multiple agents (Weiss, 1979). The case studies in this book all highlight the multiple steps that took place, sometimes forward, sometimes backward. When we talk about influencing policy makers – who are we influencing? Here, we draw on the definition used by Cairney: 'Policy-makers include elected and unelected civil servants, individuals and organisations who collectively make decisions' (Cairney, 2016, p. 2).

When can evidence be used?

Figure 2.1 is a policy/programme cycle developed for training in evidence-based policy and implementation in Africa.
The cycle includes the stages of agenda setting, diagnosis, selection of intervention, planning/design, implementation, evaluation and ongoing learning.5

[Figure 2.1 Policy/programme cycle used in training in Africa. The cycle runs from agenda setting (what is known about the problem), through diagnosis (understanding the root causes), selection of intervention (options for addressing the problem; theory of change), design (policy/programme planning and budgeting; operational plan and resourcing), implementation (implementing the plan; monitoring the plan, environment and budget; review, refine and continue), to outcomes and impacts (are planned outcomes being achieved? value for money? what is the change – desired and undesired?), with an ongoing step to document, evaluate, reflect and learn. Source: University of Cape Town; Department of Planning, Monitoring and Evaluation, South Africa.]

Evidence can (and should) be used at different stages of the policy cycle:

• Diagnosis (e.g. to establish the size or extent of a problem);
• Design of an intervention (e.g. in developing the theory of change, appropriate outcomes or appropriate indicators of scale or quality);
• During implementation to assess progress (e.g. as part of monitoring);
• To assess outcomes (e.g. the effectiveness of a solution).

How likely are policy makers to use evidence?

The reality of many development situations is the pervasiveness of 'wicked' emergent problems ranging from climate change to violence against women to migration, where simple answers do not work, solutions are unclear and the values of groups in society differ. We need to be realistic about what to expect from the use of evidence in this complex world, where policy makers and practitioners are subject to multiple pressures, there are many different stakeholders and decisions often have to be made rapidly.
In an idealistic rational view, which Cairney refers to as comprehensive rationality, policy makers have clear priorities, gather and understand the relevant information and make informed choices. In a bounded rationality model, policy makers have unclear aims and limited information, and the rationale behind the choices they make is often not clear (Cairney, 2016). If one looks at key features of comprehensive or bounded rationality within Africa, what does the evidence tell us? The research presented in Chapter 4 on the performance culture in Uganda, Benin and South Africa suggests that 50%–60% of managers are using evidence. However, around 40% of managers say departments do not champion monitoring and evaluation (M&E) and are not honest about performance; approximately 25% reject the accuracy of results that reflect negatively on their performance; and in around 30% of cases, learning is not documented to improve future results (refer to Table 4.3 in Chapter 4). This is clearly towards the bounded rationality end of the spectrum.

Historical development of different forms of evidence

National statistics

Pre-colonial states which had a centralised administration and hierarchical organisation, such as the Songhai Empire in western Africa, the Luba kingdom in central Africa and the kingdoms of Buganda and Ankole in eastern Africa, must have had mechanisms for collecting, storing and using data. However, current national statistics offices (NSOs) originate from colonial governments. For example, Kenya's National Bureau of Statistics has its origins in 1925, when the colonial government appointed its first official statistician. The first population census in Kenya was undertaken in 1948, with the results published in 1952.6 Today, all African countries have national statistics offices and are gradually strengthening their capacities.
A study carried out in 2014 by the Center for Global Development and the African Population and Health Research Center states:

Nowhere in the world is the need for better data more urgent than in Africa, where data quality is low and improvements are sluggish, despite investments from country, regional and international institutions to improve statistical systems and build capacity.
(Center for Global Development and African Population and Health Research Center, 2014)

The Statistical Capacity Development Outlook 2019 indicates that countries in sub-Saharan Africa have performed relatively well in improving their capacities since 2009 (PARIS21, 2019). The 2014 study identifies four main obstacles to data collection and use in Africa (Center for Global Development and African Population and Health Research Center, 2014):

• Limited autonomy and unstable budgets: the majority of NSOs across Africa lack autonomy and do not manage their own budgets. Capacity and resource limitations are the most commonly cited reasons for the lack of progress in statistical capacities. This often results in a reliance on development partners and increases the vulnerability of data production and management to political and interest group pressures.
• Lack of adequate incentives to produce accurate data: accuracy of data is a significant problem across the region. While technical capacities are one obstacle, politics and the uses of data can work against producing accurate data or create incentives to produce inaccurate data.
• Dominance of donor priorities: the larger proportion of funding for data gathering across many African countries comes from donor-driven initiatives. Therefore, NSOs and their individual staff often spend more time involved in donor-funded projects than in improving national statistics.
• Access and usability of data: there is often a reluctance or lack of capacity to generate useful data, manage data and ensure that it is easily and widely accessible for use.

Despite these challenges, NSOs are a primary source of data which is much used by policy makers. However, the data is often not analysed to the depth that is possible and necessary for evidence use, and NSO data is not necessarily good for explaining why things are happening or whether interventions are working.

The development of monitoring and evaluation

Following the shock of the Second World War, in the 1940s the US and Europe implemented a variety of social programmes to address the challenges facing society. Many of these programmes were innovative and redistributive, with the creation of welfare states and wide-scale social assistance for children. Governments looked for ways to assess whether the money was being well spent, and in 1949 in the US, performance budgeting emerged as a response. This was followed by 'management by objectives' and 'monitoring for results' in the 1960s (Parkhurst, 2017). Meanwhile, the adoption of a logical framework approach in 1969 by the United States Agency for International Development (USAID) was a significant milestone in the monitoring, evaluation and evidence journey (see Box 2.1 for a definition of M&E). The logical framework approach included the development of a programme logic, with indicators at different levels of performance. This was widely adopted in the aid industry in the 1970s and is still widely used – one of the drivers for monitoring as well as evaluation in the developing world.
During the 1980s, the advent of New Public Management (NPM) led to a focus on separating the commissioner of a service from the deliverer of a service, the creation of agencies and the resultant need for performance control, including through public service agreements, with monitoring of key performance indicators (Ranson and Stewart, 1994; Mouton et al., 2014). In the 1990s, the Government Performance and Results Act of 1993 in the US provided for the establishment of strategic planning and performance measurement and for a widespread assessment of government performance. In Africa, a network of evaluation practitioners was established in east Africa as early as 1977, supported by UNICEF. This network comprised the national evaluation associations of six countries.7 The imposition of structural adjustment programmes in the 1980s and 1990s, and the adoption of NPM frameworks, were important influences on the development of evidence use in Africa. Management frameworks and economic models necessitated the establishment of mechanisms to strengthen results orientation, transparency and accountability. This, in turn, stimulated a demand for M&E (Basheka and Byamugisha, 2015; University of the Witwatersrand, 2012). Evidence use in Africa during this period was primarily driven by external influences from former colonial countries (Mouton et al., 2014).

Box 2.1 Distinguishing between monitoring and evaluation

Monitoring helps managers and policymakers to understand what the money invested is producing and whether plans are being followed. Evaluation helps to establish what difference is being made, why the level of performance is being achieved, what is being learned from activities and whether and how to strengthen implementation of a programme or policy.
(Porter and Goldman, 2013)

In the 1990s, the Expert Group on Evaluation, newly established by the Development Assistance Committee of the OECD, convened two pivotal pan-African forums to raise awareness of and identify African evaluation needs and capabilities, with the support of multiple international donors and financing institutions.8 These forums led to the creation of the African Evaluation Association (AfrEA), formed in 1999 as an umbrella organisation for African evaluators across the continent. Howard White refers to this period, with its focus on a results agenda and on measurement and monitoring, as wave one of the evidence revolution (White, 2019b; see also Figure 2.4). Across multiple countries over the last 10 to 15 years, national M&E systems and processes have been established to respond to increasing pressures to demonstrate performance, accountability and transparency – key elements underpinning good governance. The development of these systems has taken place in very different ways across the different countries, but the process generally accelerated in the 2000s. For example, Benin started an evaluation system in 2007, while Uganda and South Africa started theirs in 2011. In addition to evaluation networks and associations, a number of regional initiatives exist. These were established to support countries in strengthening M&E and evidence-based decision making. Examples include AfrEA, Twende Mbele,9 the Africa Evidence Network (AEN), the African Parliamentarians Network on Development Evaluation (APNODE) and the West African Capacity-building on Impact Evaluation (WACIE).
The emergence of evaluation as distinct from monitoring

The use of evaluation, with its focus on independent, objective and credible assessments of performance and reasons for under-performance, was already well established in the US, Australia and Canada by the 1980s. As mentioned earlier, the first moves to focus on evaluation in Africa came as early as 1977. Since then the discipline of evaluation has expanded dramatically in Africa. The Centre for Learning on Evaluation and Results Anglophone Africa (CLEAR-AA) and the Centre for Research on Evaluation, Science and Technology (CREST) at the University of Stellenbosch in South Africa undertook a search for African evaluations between 2005 and 2015 in 12 countries and came up with 2,635 evaluations (Blaser Mapitsa and Khumalo, 2018).10 Taking just one country as an example, the Campbell Collaboration11 worked with Uganda's Office of the Prime Minister to search for evaluations and found a total of over 500 evaluations since the year 2000 (White, 2019a). A limitation of conventional M&E work is how to identify causal links and the attribution of effects to interventions. To address this, the use of impact evaluations (IEs) using randomised controlled trials grew from the mid-1990s (Banerjee, 2016), in what White (2019b) refers to as the second wave of the evidence revolution. We would rather describe the development of evaluation more generally as the second wave, with impact evaluation as the third wave. Key organisations were created in the 2000s to support impact evaluations, such as the International Initiative for Impact Evaluation (3ie) in 200812 and the Abdul Latif Jameel Poverty Action Lab (J-PAL).13 The expansion of evaluation can also be seen in the increasing numbers of impact evaluations.
The IE repository of 3ie shows the number of completed impact evaluations per year, rising from less than 10 in 1995 to around 50 in 2003, 100 in 2008, and over 500 in 2012, having levelled off since then. IEs focused on health, nutrition and population account for around half of these, followed by education, social protection, and agriculture a