Journal articles
Browsing Journal articles by Author "Blaser Mapitsa, Caitlin"
Item
Can massive open online courses fill African evaluation capacity gaps? (African Evaluation Journal, 2019-06-26)
Blaser Mapitsa, Caitlin; Khumalo, Linda; Engel, Hermine; Wooldridge, Dominique
Theory of Change for Development is a free online course developed at an African institution to strengthen evaluation capacity in the region. Massive Open Online Courses (MOOCs) provide a platform for building these skills at scale. Scholars of evaluation have long pointed to a gap between supply and demand that frustrates both evaluation practitioners and commissioners. This article explores the possibilities and limitations of MOOCs for bridging this gap.

Item
Designing diagnostics in complexity: Measuring technical and contextual aspects in monitoring and evaluation systems (African Evaluation Journal, 2017-04-28)
Blaser Mapitsa, Caitlin; Korth, Marcel T.
This article emphasises the importance of reflecting on the methods employed when designing diagnostic tools for monitoring and evaluation (M&E) systems. It sheds light on a broader debate about how we understand and assess M&E systems within their political and organisational contexts.

Item
Diagnosing monitoring and evaluation capacity in Africa (African Evaluation Journal, 2018)
Blaser Mapitsa, Caitlin; Khumalo, Linda
Since 2015, the Centre for Learning on Evaluation and Results-Anglophone Africa (CLEAR-AA) has implemented more than seven diagnostic tools to better understand monitoring and evaluation (M&E) systems in the region. Through the process of adapting global tools to make them more appropriate to an African context, CLEAR-AA has learned several lessons about contextually relevant definitions and boundaries of M&E systems.

Item
Gender responsiveness diagnostic of national monitoring and evaluation systems – methodological reflections (African Evaluation Journal, 2017-04-26)
Blaser Mapitsa, Caitlin; Jansen van Rensburg, Madri S.
This article reflects on the implementation of a diagnostic study carried out to understand the gender responsiveness of the national monitoring and evaluation (M&E) systems of Benin, South Africa and Uganda. The study found that the potential for integrating the cross-cutting systems of gender and M&E is strong. At the same time, it highlighted a range of challenges that intersect these two areas of work. This article explores these issues, which range from the logistical to the conceptual.

Item
Institutionalising the evaluation function: A South African study of impartiality, use and cost (Elsevier, 2019-05-03)
Blaser Mapitsa, Caitlin; Chirau, Takunda
Purpose: This article explores the implications of outsourcing the evaluation function in South Africa, a context where there is a mismatch between evaluation supply and demand. It unpacks the trade-offs between internal and external evaluation and challenges some commonly held assumptions about both.
Approach: Drawing on experience as an internal evaluator, external evaluator, evaluation manager and evaluation capacity builder, the authors explore how each role changes when evaluation is a scarce skill, and examine the implications that outsourcing has for both the organisation and the evaluation.
Findings: The purpose of the evaluation must drive the decision to outsource. However, with changing models of collaboration, there may be hybrid options that allow organisations to build evaluation capacity.
Practical implications: Organisations face a trade-off between commissioning an evaluation and building internal evaluation capacity. To better understand each approach, it is important to consider the purpose and context of the evaluation. Doing so shifts some commonly held assumptions about internal and external evaluations, and re-examining these assumptions will help organisations make a more informed decision about an evaluation approach.
Originality/value: The field of evaluation is particularly concerned with evaluation use. Most of the literature on this topic has focused on the approach of individual evaluators, while insufficient attention has been paid to the institutional architecture of the evaluation. This article considers how some of the organisational structures around an evaluation contribute to evidence use, and the South African case study also shifts the focus to the central but overlooked role of context in the debate.