The contribution of Randomised Control Trials (RCTs) to improving education evaluations for policy: evidence from developing countries and South African case studies

Abstract
As access to formal schooling has expanded all over the world, there is growing acknowledgement that the quality of learning in many schooling systems, including South Africa’s, is extremely weak. Nationally representative samples of South African children participated in the PIRLS 2006 and pre-PIRLS 2011 studies, alongside 48 other countries, as a benchmarking exercise measuring the literacy levels of primary school learners against international standards. The PIRLS 2006 study indicated that more than 80% of South African children had not yet learned to read with meaning by Grade 5. The pre-PIRLS 2011 results provided a new baseline of reading literacy levels for Grade 4 learners in South Africa: 29% of the Grade 4 learners who participated did not have the rudimentary reading skills expected at a Grade 2 level. Learners tested in African languages, particularly Sepedi and Tshivenda, achieved the lowest performance overall and were considered to be educationally at risk (University of Pretoria, 2012). The context in which schooling takes place is key to understanding learner performance in South Africa. After decades of differential provision of education on the basis of race, the education system has been overhauled since the early 1990s, and the South African government has introduced several initiatives and policies to address these systemic imbalances. Despite these efforts, South Africa’s learner performance has remained poor, even relative to several poorer countries in the region. There is a wealth of research describing weaknesses in the education system. However, going a step further and identifying the resources and practices that actually improve learner performance is central to improving education planning, policy and, ultimately, classroom practice. Rigorous evidence on classroom-based practices and resources that have a measurable effect on learner performance in a developing country like South Africa is limited.
The most significant shortfall of non-experimental evaluation methods (including qualitative and many quantitative approaches) is the absence of a valid estimate of the counterfactual – the outcomes that would have been obtained amongst programme beneficiaries had they not received the programme. This absence often leads to the reporting of inflated positive effects for the programmes being evaluated. By using a lottery to allocate participants to an intervention group and a control group, the Randomised Control Trial (RCT) methodology constructs a credible counterfactual scenario. This study provides a systematic, literature-based argument for why RCTs should be among the methodological options that education researchers and policy makers consider in developing countries such as South Africa. Both the strengths and limitations of RCTs are discussed in light of the debate on RCTs and evaluation methods in education, as well as the technical critique of the methodology. The main critique – limited external validity – is also elaborated on, together with steps that may be taken to mitigate this limitation. In addition, the study illustrates the value of RCTs through a secondary analysis of data from two South African RCTs on early grade reading interventions. The first case study, in Chapter 4, is the Reading Catch-Up Programme (RCUP) conducted in Pinetown, KwaZulu-Natal. The main finding of the RCUP evaluation was that although learners in intervention schools improved their test scores between the baseline and endline assessments, learners in comparison schools improved by a similar margin. These results should contribute to a sobering realisation: the effects of the various interventions introduced by education stakeholders, including NGOs and government, are not obviously positive or, more importantly, different from normal schooling.
This points to the need to evaluate programmes before they are rolled out provincially or nationally, using RCTs and other rigorous methods. The new analysis of data in this study explores the so-called “Matthew Effect” – the notion that initially better-performing children typically gain more from additional interventions, and from schooling itself. The RCUP data indicate that children with higher baseline test scores benefited from the intervention, whereas children with very low English proficiency at the outset did not. Although females significantly outperformed males in the reading tests used, there was no clear evidence of a differential effect of the intervention by gender. The Matthew Effect therefore appears to be driven by prior knowledge rather than by gender or any other characteristic measured in the data. The second case study, in Chapter 5, is the Early Grade Reading Study (EGRS) conducted in the North West province. The EGRS may be seen as a more extensive follow-up to the RCUP, designed to answer some of the questions it left open. For example, will an early grade reading intervention implemented over a longer duration (two years) have an impact? Is the very start of schooling a strategic point at which to intervene? Can a Home Language literacy intervention have lasting educational benefits? In conclusion, although the policy formulation and evaluation process should draw on research using a variety of methods, the policy process will certainly be impoverished if there is a lack of research meeting two core criteria: interventions and findings that are relevant to the larger schooling population, and precise measurement of the causal impact of interventions and/or policies. This study makes a clear, literature-based argument for the contribution of internally valid methods, specifically RCTs, in fulfilling these criteria, and illustrates this with two RCT case studies.
The study also demonstrates the insights made possible through secondary analysis of the rich data that RCTs generate.
Description
A research report submitted to the Wits School of Education, University of the Witwatersrand, in partial fulfilment of the requirements for the degree of Master of Education. Submitted 17 October 2016.
Citation
Mohohlwane, Nompumelelo Lungile (2016) The contribution of Randomised Control Trials (RCTs) to improving education evaluations for policy: evidence from developing countries and South African case studies, University of the Witwatersrand, Johannesburg, <http://wiredspace.wits.ac.za/handle/10539/22682>