Human capacity to coordinate the City of Johannesburg's Monitoring and Evaluation framework

Phello Mohlamonyane
0510051E
Supervisor: Marcel T. Korth
June 2021

A research report submitted to the Faculty of Commerce, Law and Management, University of the Witwatersrand, in partial fulfilment of the requirements for the degree of Master of Management in the field of Governance (Public and Development Sector Monitoring and Evaluation).

Declaration

I, Phello Elvis Mohlamonyane (student no: 0510051E), hereby declare that this research report, submitted in partial fulfilment of the requirements for the degree of Master of Management in the field of Governance (Public and Development Sector Monitoring and Evaluation), is submitted by myself and has not previously been submitted at this or any other university. It is the product of my own work, and all reference material used has been acknowledged.

-------------------------------------------- 30 June 2021
Signature Date

Acknowledgement and dedications

This work would not have been possible without the support, guidance, assistance and invaluable inputs of my supervisor, Marcel T. Korth. Thank you sincerely for your patience, your understanding, and for being part of this rather challenging journey from the beginning through to the end. I would also like to thank the colleagues at the City of Johannesburg's Group Strategy, Policy Coordination and Relations Monitoring and Evaluation Unit who took valuable time out of their busy schedules to assist in the data collection process. To my family, friends and loved ones, thank you for the support and encouragement. Special appreciation to my wife, Ntladile Mohlamonyane, and kids, Oabile and Thoriso Mohlamonyane, from whom I spent time away in the process of completing this research, which I dedicate to you. I would also like to dedicate this research report to my mother, Meshidi Mohlamonyane, for her continuous and unconditional love.

Abstract

The City of Johannesburg adopted a monitoring and evaluation system, the City-wide M&E framework, in 2012. The framework was adopted primarily to help the City of Johannesburg track the progress made towards the achievement of the outcomes of its long-term strategy, the Joburg 2040 GDS. The literature points to the fact that making effective use of an M&E system requires human capacity as one of its key components. This study aimed to assess the existing human capacity levels for the coordination of the City-wide M&E framework in the Group Strategy, Policy Coordination and Relations - M&E (GSPCR-M&E) unit. To answer the research question empirically, a qualitative case study approach was used, through which semi-structured interviews were utilised to collect narrative data. Using these interviews, primary data was collected from M&E specialists currently and previously employed in the GSPCR-M&E unit. The participants were selected using a purposive non-probability sampling method. Thematic analysis of the participants' responses points to the fact that the City-wide M&E framework is not adequately utilised. The analysis further indicates that this inadequate use relates to the fact that the framework is not practical on the one hand, and that the M&E unit does not have adequate human capacity on the other. The results of the study demonstrate that the M&E unit does not have adequate capacity to coordinate the City-wide M&E framework.
On the basis of this conclusion, it is recommended that the City increase its M&E human capacity to enhance the overall functioning of M&E in the City of Johannesburg.

List of Figures and Tables

Figure 1: GSPCR organizational structure
Figure 2: The 12 components of an effective M&E system
Figure 3: A Simple Logic Model
Table 1: Breakdown of cluster composition
Table 2: Breakdown of respondents

List of Abbreviations and Acronyms

AG: Auditor-General
ANC: African National Congress
CEO: Chief Executive Officer
CLEAR-AA: Centre for Learning on Evaluation and Results - Anglophone Africa
COJ: City of Johannesburg
COVID-19: Coronavirus Disease of 2019
DA: Democratic Alliance
DPME: Department of Planning, Monitoring and Evaluation
ECB: Evaluation Capacity Building
GDS: Growth and Development Strategy
GGT: Growing Gauteng Together
GSPCR: Group Strategy, Policy Coordination and Relations
GW-M&E: Government-Wide Monitoring and Evaluation
HCD: Human Capacity Development
IDP: Integrated Development Plan
SDBIP: Service Delivery and Budget Implementation Plan
MMC: Member of the Mayoral Committee
M&E: Monitoring and Evaluation
NCCC: National Coronavirus Command Council
NGOs: Non-Governmental Organisations
NPC: National Planning Commission
NT: National Treasury
OECD: Organisation for Economic Co-operation and Development
PSC: Public Service Commission
RSA: Republic of South Africa
SAMEA: South African Monitoring and Evaluation Association
TOC: Theory of Change

Table of Contents

Declaration
Acknowledgement and dedications
Abstract
List of Figures and Tables
List of Abbreviations and Acronyms
CHAPTER ONE: INTRODUCTION AND BACKGROUND
1. Introduction and background
1.1. Monitoring and Evaluation
1.2. Monitoring and Evaluation in South Africa
1.3. Human Capacity for Monitoring and Evaluation
1.4. Monitoring and Evaluation in the City of Johannesburg
1.5. Research problem
1.6. Research purpose
1.7. Research objectives
1.8. Research questions
1.9. Limitations of the study
1.10. Justification of the research
1.11. Chapter outline
1.11.1. Chapter 1: Introduction and background
1.11.2. Chapter 2: Literature review
1.11.3. Chapter 3: Research procedure, methods and design
1.11.4. Chapter 4: Presentation of findings
1.11.5. Chapter 5: Discussion of findings
1.11.6. Chapter 6: Conclusion
CHAPTER TWO: LITERATURE REVIEW
2. Introduction
2.1. Physical research context
2.2. The City of Johannesburg's Monitoring and Evaluation Framework
2.3. Research knowledge gap
2.4. Human Capacity for M&E in the GSPCR-M&E Unit
2.5. Monitoring and Evaluation
2.6. The emergence of M&E in South Africa
2.7. Theoretical and Conceptual Frameworks
2.7.1. Results-Based M&E
2.7.2. Elements of Results-Based Monitoring and Evaluation
2.7.3. 12 components of effective M&E systems
2.7.4. Skills and Competencies for M&E
2.7.5. M&E Capacity Building and Development
2.7.6. Theory of Change
2.8. M&E professionalisation debates
CHAPTER THREE: RESEARCH PROCEDURE, METHODS AND DESIGN
3. Introduction
3.1. Research strategy
3.2. Research design
3.3. Research procedure and methods
3.3.1. Data collection instruments
3.3.1.1. Interviews
3.3.1.2. Document analysis
3.4. Target population and sampling
3.5. Process of data analysis
3.6. Research limitations and positionality
3.7. Ethical considerations
CHAPTER FOUR: PRESENTATION OF FINDINGS
4. Introduction
4.1. Presentation of findings
4.1.1. Theme 1: City-wide Monitoring and Evaluation Framework
4.1.2. Theme 2: Human Capacity for M&E
4.1.3. Theme 3: Evaluation practice
4.1.4. Theme 4: Skills and Competencies for M&E
4.1.5. Theme 5: M&E Capacity building and development
4.2. Conclusion
CHAPTER FIVE: DISCUSSION OF FINDINGS
5. Introduction
5.1. The implementation of the City-wide M&E framework
5.2. Human Capacity for Monitoring and Evaluation
5.3. Evaluation Practice
5.4. Capacity Building and Development
5.5. Skills and Competencies for M&E
CHAPTER SIX: CONCLUSION
6. Introduction
6.1. Conclusion
List of References
Appendices
Appendix A: Information sheet
Appendix B: Informed consent
Appendix C: Interview guide

CHAPTER ONE: INTRODUCTION AND BACKGROUND

1. Introduction and background

This paper reports on a research project undertaken at the City of Johannesburg Metropolitan Municipality. The research project was tailored to assess the human capacity levels for monitoring and evaluation in the M&E unit within the City of Johannesburg's Group Strategy, Policy Coordination and Relations department. The study is therefore located within the Monitoring and Evaluation field.

1.1. Monitoring and Evaluation

Monitoring and Evaluation are distinct, yet related and complementary concepts or functions whose juxtaposition is crucial for clarity. Monitoring refers to a "continuous function that uses the systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds" (OECD, 2002, p. 27, cited in Kusek & Rist, 2004, p. 12). Evaluation, on the other hand, refers to "the systematic, periodic and objective assessment of an ongoing or completed project, program, or policy, including its design, implementation, and results" (OECD, 2002, p. 21, cited in Kusek & Rist, 2004, p. 12). These two related functions, which have collectively become known as M&E, have gained popularity over the last few decades. Over the years, M&E has become what is arguably a powerful public management tool, used to manage performance in projects, programmes and policies alike (Masuku & Ijeoma, 2015; Kusek & Rist, 2004). Among many other purposes, the monitoring function provides stakeholders with information about the progress of an ongoing intervention, while evaluation provides scientific evidence for the achievement of project objectives or lack thereof (Kusek & Rist, 2004).

Owing largely to the well-canvassed benefits of their use, which include the "provision of evidence base for public resource allocation decisions", M&E as management tools have over the past few decades made notable inroads in international non-governmental organizations, governments and private companies the world over (Presidency, 2007, p. 1; Kusek & Rist, 2004). Whilst growing in popularity, these tools have equally, although somewhat covertly, undergone what one can call 'evolution stages', notably the evident transition from "traditional Implementation-based M&E to Results-based M&E" (Kusek & Rist, 2004, p. 11; Masuku & Ijeoma, 2015). For the purpose of a clear delineation, a traditional implementation-based M&E system can be characterised as an M&E system that is "designed to address compliance - the 'did they do it' question", with a focus limited primarily to inputs, activities and outputs (Kusek & Rist, 2004, p. 15). A results-based M&E system, on the other hand, answers the 'so what' question by focusing on the results (outcomes and impact) of an intervention (Kusek & Rist, 2004). Implementing an effective results-based M&E system is an inherently challenging task.
It is essentially a challenge because it requires, among others, a great deal of institutional reform, political will and, especially, adequate human capacity (both in terms of quality and quantity), which are challenging to fulfil in most government institutions (Masuku & Ijeoma, 2015). As a result, and as reported by some authors, there is a prevalence of traditional implementation-focused M&E and not so much of results-based M&E (Kusek & Rist, 2004; Masuku & Ijeoma, 2015).

1.2. Monitoring and Evaluation in South Africa

South Africa, like many other countries, institutionalised M&E as a result of internal and external pressures. Internally, the pressure was occasioned by persistent levels of poverty and inequality, corruption, as well as high rates of service delivery demonstrations, among others (Goldman, Engela, Akhalwaya, Gasa, Leon, Mohamed, & Phillips, 2012, p. 3). As a result of this pressure, the government adopted a plan for the establishment and institutionalization of M&E in its governance as a mechanism for planning, accountability and good governance after the 2009 general elections, from which the African National Congress (ANC) emerged victorious and its President, Mr. Jacob Zuma, was elected President of the country (Goldman et al., 2012, p. 3).

Prior to the 2009 elections, however, there were persistent attempts to institutionalise M&E in the South African government. Masuku and Ijeoma (2015) report early attempts at establishing Government-Wide M&E (GWM&E) systems dating back as far as the late 1990s. It was only around 2005 that a plan was approved for the development of a Government-Wide M&E framework. The policy framework, the Government-Wide M&E system, was subsequently adopted two years later, in 2007 (Goldman et al., 2012; Masuku & Ijeoma, 2015). The adopted framework remained on the government's radar for about two years, until after the 2009 general elections.

The Government-Wide M&E framework provides policy guidance to all three spheres of government: national, provincial and local (Goldman et al., 2012). The policy imperative rests with the Ministry which was created in 2009 and strategically located in the Office of the Presidency, following which the National Department of Planning, Monitoring and Evaluation (DPME) was created in 2010, together with the National Planning Commission (NPC), which serves as an M&E advisory body focusing on the long-term 2030 National Development Plan (Goldman et al., 2012). Considering the semi-federal nature of the South African government system, the GWM&E led by the DPME had to be cascaded down to the nine provinces as well as to local government municipalities (Presidency, 2007). To that effect, the Presidency states explicitly that every accounting officer of a department or a municipality should, as a matter of statutory requirement, establish an M&E system for their respective institution, be it a department, state entity or a municipality (Presidency, 2007, p. 4).

1.3. Human Capacity for Monitoring and Evaluation

Implementing a Monitoring and Evaluation system for an institution as big and as complex as the City of Johannesburg metropolitan municipality is a complex task which demands, among others, functional institutional structures to be in place. According to Görgens and Kusek (2009), making effective use of an M&E system requires twelve interconnected components to be in place.
The 12 components comprise components relating to "people, partnership and planning", components relating to "collecting, capturing and verifying of data", as well as components relating to "using data for decision making" (Görgens & Kusek, 2009, p. 7). Within the components relating to people, partnership and planning is the second component, human capacity for M&E, which is equally crucial for the effective functioning of an M&E system. This component, which is the central feature of this study, emphasizes the importance of having an adequate number of personnel in place to ensure the functionality and effectiveness of an M&E system. This component further makes it a requirement that the personnel be adequately skilled and competent in undertaking monitoring and evaluation functions (Maphunye, 2013).

Several research studies conducted in different government departments and municipalities have reported human capacity to be a prominent challenge in the implementation of M&E systems (Dube, 2015; Maepa, 2015; Maphunye, 2013). As a result of these widely reported human capacity constraints, government has not derived sufficient benefit from the use of M&E systems as guided by the GWM&E. Human capacity can be measured both in terms of quality and quantity. Whereas quantity refers to the number of officials in a department responsible for M&E, quality can be determined by the "existence of properly and highly skilled personnel who perform their M&E functions effectively, efficiently and sustainably" (Görgens & Kusek, 2009, as cited in Maphunye, 2013, p. 22). An adequately skilled M&E official would typically be one who has technical M&E skills, such as the skills to design a logical framework with all its elements clearly defined, to design a theory of change, to monitor performance using the M&E plan, and to evaluate interventions using a variety of evaluation methods. Competence in undertaking qualitative and quantitative research is equally crucial, together with analytical skills, strategic thinking as well as good report writing, among others.

1.4. Monitoring and Evaluation in the City of Johannesburg

The City of Johannesburg is the largest metropolitan municipality in South Africa, both in terms of population size and economic power (Statistics South Africa, 2018). Located in Gauteng, the most populous province in the country, the COJ is confronted with a myriad of challenges, which include rapid urbanization, high unemployment, poverty and extreme levels of inequality (City of Johannesburg Integrated Development Plan, 2019). In response to these challenges, the City launched its long-term strategic planning document, the Growth and Development Strategy, also known as the Joburg 2040 GDS. As the City's long-term plan, the Joburg 2040 GDS hinges on four key outcomes, and it is supported by a medium-term planning document, the five-year Integrated Development Plan (IDP), which is given effect by Chapter 5, Section 25 of the Municipal Systems Act 32 of 2000 (COJ IDP, 2019). The COJ's IDP has nine strategic priorities, which are cascaded into short-term planning documents, the annual institutional Service Delivery and Budget Implementation Plan (SDBIP) as well as departmental business plans (COJ IDP, 2019). To systematically measure the progress made toward the Joburg 2040 GDS outcomes, and as a statutory requirement, the City developed an M&E framework in 2012 and subsequently established an M&E system (COJ M&E Framework, 2012; Presidency, 2007).
The framework is overseen by a central M&E office in the City, the Group Strategy, Policy Coordination and Relations (GSPCR) department, which is strategically located in the Office of the City Manager, the accounting officer of the City (Ndhlovu et al., 2017). The development of the M&E framework meant that all City departments (18) and entities (13) had to establish M&E units and align their annual SDBIP programmes with the City's strategic priorities as documented in the IDP. As a result, M&E tools find expression in the IDP and SDBIPs; hence, departments report on Key Performance Indicators which include quantitative indicators, baseline values and targets (COJ M&E Framework, 2012).

Monitoring and Evaluation functions in the City of Johannesburg are decentralised, and City departments and entities therefore have their own M&E personnel who report to their respective Heads of Departments and/or Chief Executive Officers of the respective entities. In respect of M&E and performance management, departments and entities are grouped into four clusters, each headed by an M&E specialist from the GSPCR-M&E unit who is referred to as a 'cluster champion'. A cluster champion provides support and technical guidance to M&E personnel in the departments and entities. Furthermore, a cluster champion receives monthly and quarterly performance reports from members of their respective cluster, makes input into the consolidation of the annual report, and briefs the Members of the Mayoral Committee (MMC) about their respective departments' performance.

1.5. Research problem

Various studies have reported human capacity constraints to be a prominent challenge for the effective use of M&E in some South African government departments, as well as in municipalities (Dube, 2015; Maepa, 2015; Masuku & Ijeoma, 2015). These challenges manifest in several forms, from a lack of M&E technical skills and high staff turnover of M&E personnel to a lack of training, among others (Kusek & Rist, 2004). More specifically, a study conducted in 2015 by the Centre for Learning on Evaluation and Results in Anglophone Africa (CLEAR-AA) reported that the GSPCR does not have "capacity both in terms of number of staff and technical skills or capability to advise departments appropriately" (Ndhlovu, Smith, & Narsoo, 2017, p. 8), thus attributing the ineffective use of M&E to the lack of capacity in the GSPCR-M&E unit. One Senior Director mentioned that there is an unresolved debate about whether the GSPCR-M&E unit has adequate capacity for the effective discharge of M&E functions (Senior M&E Director, personal communication, March 23, 2020).

Considering the decentralised nature of M&E in the City and the number of departments (18) and entities (13) for which GSPCR M&E specialists are responsible, the question of the individual skills and competencies required to execute their mandate remains crucial. It was essential for the study to home in on the GSPCR-M&E unit to solicit thoughts and perceptions about the availability of the required skills and competencies in the unit from the perspective of the M&E specialists themselves, as well as their perceptions about the overall capacity of the unit to oversee the City-wide M&E framework within the cluster system.

1.6. Research purpose

The purpose of this research was to assess the human capacity levels for coordinating the M&E framework in the COJ's Group Strategy, Policy Coordination and Relations - Monitoring and Evaluation unit.
Human capacity constraints have been reported to be amongst the major causes of the ineffective implementation of the M&E system in the City (Ndhlovu et al., 2017). Thus, the purpose of this study was to assess the existing human capacity levels and to establish the ideal skills and competencies required for the effective discharge of M&E functions by the M&E unit, from the vantage point of the M&E specialists.

1.7. Research objectives

Broadly speaking, the study sought to investigate human capacity levels as they relate to M&E tasks and responsibilities in the COJ's Group Strategy, Policy Coordination and Relations - Monitoring and Evaluation unit.

1.8. Research questions

Primary research question: What are the human capacity levels for carrying out M&E work in the COJ's Group Strategy, Policy Coordination and Relations unit?

Secondary research question: What are the required skills and competencies for coordinating the M&E framework in the GSPCR-M&E unit?

1.9. Limitations of the study

This study was conducted within a very limited time frame. Only seven interviews were conducted in collecting primary data. This was because the unit is small, with only four M&E specialist positions in its organogram. At the time of data collection, two specialists had just left the unit to join other units in the COJ; these specialists were identified and asked to participate in the research. To make up the numbers, the researcher also interviewed a participant who had just joined the unit from another department, and this specialist was not in a suitable position to comment on the unit's capacity. Furthermore, the study was conducted at the height of the COVID-19 pandemic in South Africa, when 'non-essential services workers' were working from home; as a result, some interviews were held through Microsoft Teams, a virtual online platform. Where connectivity issues made the use of Microsoft Teams impossible, telephone interviews were held instead. This made it impossible for the researcher to be in the field and make observations which could have been recorded as field notes.

1.10. Justification of the research

This research provides an elaborate picture of the existing human capacity for M&E in the City of Johannesburg in general, and in the GSPCR-M&E unit in particular. The study puts in context the roles and duties of an M&E specialist in an organization as complex as the COJ. The existing human capacity cannot yield effective results without a functional M&E system and, conversely, an M&E system cannot function effectively without adequate human capacity. It was therefore pivotal to assess the existing human capacity, identify gaps, if any, and draw up recommendations to address them. The long-term plan, the Joburg 2040 GDS, which the City-wide M&E framework is responsible for tracking, is steadily nearing its deadline, yet its crucial outcomes are not being achieved. The study focused on human capacity as a crucial component for the effective use of an M&E system in the COJ, which should in turn support the realisation of the Joburg 2040 GDS outcomes.

1.11. Chapter outline

This research report is composed of six interlinked chapters, outlined as follows:

1.11.1. Chapter 1: Introduction and background

This chapter lays down the foundation of the study by capturing the introduction and locating the study within the broader context of the M&E field, as well as within its physical research context, the City of Johannesburg.
1.11.2. Chapter 2: Literature review

This chapter discusses the relevant literature in M&E and ongoing debates in the field. It also interrogates the theoretical framework to point to the foundational theories of the multidisciplinary field that is M&E, and interrogates the conceptual framework to build interpretive frameworks for analysis in the study.

1.11.3. Chapter 3: Research procedure, methods and design

This chapter puts in context the research strategy, research design and methods used for the study. It also spells out and justifies the choice of the sample, the sampling method, the data collection methods and the sources of data.

1.11.4. Chapter 4: Presentation of findings

This chapter presents the findings of this research from the voices of the research participants.

1.11.5. Chapter 5: Discussion of findings

This chapter presents a thorough discussion of the findings presented in the previous chapter and interprets these findings to derive meaning therefrom.

1.11.6. Chapter 6: Conclusion

This chapter presents the conclusions arrived at, following the discussion in the preceding chapter.

CHAPTER TWO: LITERATURE REVIEW

2. Introduction

This chapter presents and discusses relevant literature in relation to the concepts and tools of monitoring and evaluation in general, and human capacity for monitoring and evaluation in particular. As a starting point, the chapter commences with a brief description of the physical research context, the City of Johannesburg, to provide the context of the study. Within this frame, the study outlines the COJ M&E framework as an important tool against which a reflection is made, followed by a discussion of key aspects of M&E in relation to the history of M&E in South Africa's public sector.

The chapter further presents theoretical as well as conceptual frameworks to spell out the underlying theories and concepts underpinning the monitoring and evaluation system. In this case, concepts such as results-based M&E and its elements are described, followed by the key components of a functional or effective M&E system as put forward by the World Bank handbook, with a specific focus on the first two components. In the same section, the chapter proceeds to discuss the concepts and functions of human capacity assessment and capacity development. Furthermore, the chapter refers to the theory of change as a lens through which to view development interventions, as well as fundamental considerations underpinning the functions of monitoring and evaluation and, in particular, results-based monitoring and evaluation systems. The underlying frameworks which accompany the theory of change, i.e. the results chain and the results framework, are equally described. Lastly, the chapter refers to the construct of evaluative thinking to paint a picture of the reportedly missing evaluation functions in the City, followed by a brief outline of the professionalisation debates, which have significance in the broader discourse of M&E human capacity.

2.1. Physical research context

The City of Johannesburg is the largest metropolitan municipality in South Africa, both in terms of population size and the economy (Statistics South Africa, 2018). Located in Gauteng, the most populous province in the country, the COJ is confronted with a myriad of challenges, which include, but are not limited to, rapid urbanization, high unemployment, poverty and extreme levels of inequality (City of Johannesburg Integrated Development Plan, 2019).
To address these challenges, the City developed a long-term strategy, the Growth and Development Strategy, which articulates the City's development path toward the year 2040 (Growth and Development Strategy, 2011). The development of the Joburg 2040 GDS is consistent with the Gauteng provincial strategy, Growing Gauteng Together (GGT2030), as well as the South African government's National Development Plan (NDP 2030). The Joburg 2040 GDS incorporates numerous strategies which preceded it, including the Human Development Strategy, the Integrated Transport Plan and the City Safety Strategy, among others (GDS, 2011). As articulated in the GDS (2011, p. 9), the Joburg 2040 GDS is underpinned by four key outcomes:

(i) improved quality of life and development-driven resilience for all;
(ii) a resilient, liveable, sustainable urban environment - underpinned by infrastructure support of a low-carbon economy;
(iii) an inclusive, job-intensive, resilient and competitive economy that harnesses the potential of citizens; and
(iv) a high-performing metropolitan government that proactively contributes to and builds a sustainable, socially inclusive, locally integrated and globally competitive Gauteng City region.

Being a long-term strategy, the Joburg 2040 GDS has, for operational and strategic reasons, been cascaded down to five-year medium-term plans, the Integrated Development Plans (IDPs), as required by the Municipal Systems Act 32 of 2000. These five-year IDPs coincide with mayoral terms of office, build cumulatively towards the achievement of the outcomes of the Joburg 2040 GDS, and translate these outcomes into implementable programmes (City of Johannesburg Integrated Annual Report, 2012/2013). These implementable programmes are further cascaded down to the annual institutional and departmental Service Delivery and Budget Implementation Plans (SDBIPs) as well as the municipal entities' business plans (COJ Integrated Annual Report, 2012/2013). The SDBIPs are short-term planning strategies which are drafted and adopted annually for each department (COJ Integrated Annual Report, 2012/2013). Likewise, business plans are short-term planning documents which are drafted and adopted annually for each municipal entity. Therefore, departments and municipal entities are responsible for the implementation of the City's medium-term strategy, the IDP, and long-term strategy, the Joburg 2040 GDS, through their various programmes, which are informed by and reflected in their SDBIPs and business plans respectively.

With the need to achieve the objectives set out in the Joburg 2040 GDS and IDPs, it had become prudent for the City to improve its internal processes to be more effective (City of Johannesburg M&E framework, 2012, p. 4). This effectiveness would ensure that there is alignment between the SDBIPs and business plans, the IDP and the Joburg 2040 GDS. As a result, the core priorities/outcomes underpinning the Joburg 2040 GDS have been used as a foundation for a refined governance arrangement in the City, which has taken the form of a cluster sub-mayoral committee system (COJ M&E Framework, 2012). Four sub-mayoral committees were subsequently established in 2011, comprising the sustainable services, economic growth, good governance, and human and social development clusters respectively (COJ M&E framework, 2012, p. 4).
2.2. The City of Johannesburg's Monitoring and Evaluation Framework

To systematically measure the progress made toward the Joburg 2040 GDS outcomes, and as mandated by the Presidency's M&E unit, the National Department of Planning, Monitoring and Evaluation, the City of Johannesburg developed a City-wide Monitoring and Evaluation framework in 2012 and thereby established an M&E system (COJ M&E Framework, 2012). The M&E framework aims to achieve the following objectives, as stated in the COJ M&E framework (2012, p. 4):

➢ A City-wide understanding of M&E;
➢ A common, standardised language and approach for the application of M&E principles across the city as a whole;
➢ Enhanced M&E practices - in terms of M&E methodology and tools, and the quality, frequency and application of findings;
➢ Clarity in relation to the roles and responsibilities of all those who are directly or indirectly involved in M&E activities;
➢ The means through which to institutionalise M&E - and ensure application of learnings arising from analysis, for improved delivery;
➢ A mechanism for greater integration of M&E practice within the City's public participation, planning, budgeting, delivery, policy development, oversight, reporting and governance related processes; and
➢ Greater transparency and accountability, through the generation of sound information to be used in reporting, communication and the improvement of service delivery.

Whereas all stakeholders have a role to play in the effective functioning of the M&E framework, it is the Office of the City Manager which has the ultimate responsibility in this regard. Hence, the City's central M&E unit is strategically located in the Office of the City Manager, the accounting officer of the City, just as the national M&E system is strategically located in the Presidency (COJ M&E Framework, 2012). Mirroring the national M&E arrangement, the City-wide M&E system is overseen by the planning and strategic hub of the City, the Group Strategy, Policy Coordination and Relations department's M&E unit (Ndhlovu et al., 2017).

The development of the City-wide M&E system meant that all City departments and municipal entities had to establish their 'domestic' M&E units and align their annual business plans and SDBIP programmes with the City's strategic priorities as documented in the IDP. The development of M&E units in the City departments and entities reflects the decentralised nature of M&E functions in the City. The departments and entities are expected and mandated to develop their strategic plans in such a way that M&E tools and functions find expression in the business plans and SDBIPs, and therefore to report on Key Performance Indicators which include quantitative indicators, baseline values and targets (COJ M&E Framework, 2012). In formulating their annual targets and the indicators thereof, the delivery agents are mandated to ensure synergy and alignment with the City's strategic medium- and long-term objectives (M&E Specialist Job Description, 2020). Although the M&E personnel in the delivery agents report to their respective HODs and CEOs, they receive technical M&E support and guidance from the M&E specialists based in the GSPCR-M&E unit.
This support, and by extension oversight, takes the form of a cluster system wherein the delivery agents, as per their respective programmes, are assigned to different clusters: the sustainable services cluster, the economic growth cluster, the good governance cluster, and the human and social development cluster (COJ M&E framework, 2012). From an M&E perspective, these clusters are headed by M&E specialists, commonly known as cluster champions.

Each cluster champion is required, as per a signed job description, to ensure that a strategic plan is properly developed for their cluster by providing quarterly input into the five-year cluster strategic plans and into the SDBIPs of their respective cluster (M&E Specialist Job Description, 2020). On a weekly basis, a cluster champion has a duty to develop, execute and monitor operational plans for their cluster, approve and oversee the implementation of their cluster's operational plans, and perform regular reviews and analyses of trends and performance related to their respective cluster (M&E Specialist Job Description, 2020). Furthermore, a cluster champion is mandated, on a daily basis, to lead and manage programme evaluations by circulating calls for the submission of evaluation proposals; developing cluster evaluation proposals and selecting projects to be evaluated; facilitating the appointment of external evaluators to undertake studies; reviewing evaluators' reports; and disseminating evaluation findings to relevant stakeholders (M&E Specialist Job Description, 2020).

By the fourth anniversary of the COJ M&E framework, it had become clear that the framework was not fully implemented, and that short- to medium-term progress towards the Joburg 2040 GDS could therefore still not be measured (Ndhlovu et al., 2017). Hence, in 2016, the City approached the Centre for Learning on Evaluation and Results Anglophone Africa (CLEAR-AA) to assist in understanding the policies, practices and use of M&E in tracking the performance of the City towards meeting the goals of Joburg 2040 (Ndhlovu et al., 2017). Using mixed research methods, CLEAR-AA collected primary data from 54 M&E officials across the City through an online organizational survey, key informant interviews and focus group discussions, and secondary data through a desktop review. The CLEAR-AA study made several findings, ranging from misaligned and poorly phrased indicators and unclear targets to poor internal horizontal communication between departments and entities (Ndhlovu et al., 2017). Key among these findings was the issue of M&E human capacity constraints across the City, but especially in the GSPCR, to which line departments and entities look up (Ndhlovu et al., 2017). In this case, some respondents went as far as to submit that the GSPCR does not have capacity "both in terms of number of staff and technical skills or capability to advise departments appropriately" (Ndhlovu et al., 2017, p. 8).

2.3. Research knowledge gap

Against the background of the CLEAR-AA study, and following a problem analysis of the perceived ineffective implementation of the City-wide M&E system, it was prudent for this study to conduct a human capacity assessment.
The human capacity assessment was meant to address the knowledge gap and therefore establish an understanding of the existing capacity against the ideal capacity, the required skills against the prevalent skills, the required experience against the prevalent experience, as well as the prevalent human capacity development initiatives. Succinctly, there has never been a case study conducted in the City which directly sought to conduct a qualitative M&E human capacity assessment in the GSPCR-M&E unit from the perspective and lived experiences of M&E specialists as a unit of analysis. Thus, there exists both a knowledge and a methodological gap which this study sought to fill.

The human capacity assessment was conducted using a qualitative case study method wherein the M&E specialists, who are experts in their own right, were given a platform through semi-structured interviews to provide a detailed account of the unit's capacity from their vantage point, and not that of 'outsiders'. Using a qualitative research method served its purpose and enabled this study to fill the knowledge gap, as the specialists were able to express themselves and provide a painstaking account and analysis of the unit's overall capacity, as well as their own strengths and weaknesses in respect of the skills and competencies required to perform M&E functions.

2.4. Human Capacity for M&E in the GSPCR-M&E Unit

Considering the duties of the cluster champions briefly outlined in section 2.2, it is pivotal to provide a snapshot of the personnel responsible for discharging M&E functions in the GSPCR-M&E unit and, by extension, for ensuring the effective functioning of the City-wide M&E system. Below is the organisational structure of the M&E department as of 24 February 2016, which is still applicable to date.

Figure 1: GSPCR-M&E Organizational Structure [organogram showing: Monitoring and Evaluation Director (1); Monitoring, Evaluation and Reporting Deputy Director (1); Corporate, Compliance, Research and Reporting Specialist (1); Monitoring, Evaluation and Reporting Specialists (2 posts in each of two boxes); Corporate Performance Management Deputy Director (1); Corporate Performance Support Specialist (2); Data Analytics and Modelling Deputy Director (1); Data Analytics and Modelling Specialists (2 posts in each of two boxes); Programme Administrator (1); IT Administrator Specialist SAP (1)]

As the organogram shows, the personnel specifically responsible for overseeing the M&E functions are only four, marked as Monitoring, Evaluation and Reporting Specialist in the two boxes at the bottom left of the diagram, circled in red. Each M&E specialist is responsible for overseeing the M&E functions of their respective cluster, which is composed of a combination of about six core departments and entities, depending on the cluster size (see Table 1 below). To put it succinctly, each cluster champion is expected to monitor, report on and evaluate a diversity of programmes in each department and entity grouped together as a cluster.

At the time of data collection, the unit had four occupied M&E specialist positions. However, the specialist for the economic development cluster had just started in their role, having joined the unit in June after a specialist who had served in that cluster since 2016 took a lateral transfer to another department. The specialist for the sustainable services cluster was on secondment in another department, meaning he was in the process of a lateral transfer to another unit.
The position of a Deputy Director had recently been filled, after being vacant for a considerable amount of time. Below is a breakdown of the clusters and their members.

Human and Social Development Cluster: Health; Social Development; Community Development; City Parks and Zoo; Joburg Theatre; Public Safety.
Sustainable Services Cluster: Environment and Infrastructure Services Development; Housing; Pikitup; Joburg Water; City Power; Johannesburg Social Housing Company.
Good Governance Cluster: Group Finance; Group Forensic Investigations; GSPCR; GRAS; GCSS; Legislature; Private Office of the Mayor; Group Communications; CRUM; Group Legal & Contracts.
Economic Growth Cluster: Department of Economic Development; Development Planning & Urban Management; Department of Transport; Joburg Market; Joburg Road Agency; Joburg Development Agency; Metrobus; Joburg Property Company; Joburg Tourism.

Table 1: Breakdown of cluster composition.

2.5. Monitoring and Evaluation

There are ongoing debates about the origins of monitoring and evaluation. Western literature, characterised as 'modernist', suggests that M&E is a fairly new concept introduced by the World Bank in the early 2000s (Kusek & Rist, 2004). However, some African scholars, characterised as 'traditionalists', hold a different view and contend that M&E is not new in Africa. Whereas 'modernists' suggest that M&E originates in the global North, 'traditionalists' contend that M&E in fact originates in Africa, specifically in Egypt, as "ancient Egyptians regularly monitored their country's outputs in grain and livestock production more than 5000 years ago" (Masuku & Ijeoma, 2015, p. 10).

Against the background of these contestations, current literature and empirical evidence suggest that developed countries have achieved some remarkable successes in institutionalizing and using M&E effectively for over 30 years (Kusek & Rist, 2004). Developing countries have in the recent past followed suit, drawing lessons from developed countries about mainstreaming M&E in their governance frameworks. This is the case with South Africa: prior to the development of the GWM&E, the country deployed senior government officials and specialists to visit countries such as Canada, Chile and Australia, among others, to draw lessons from these countries (Goldman et al., 2012).

Monitoring and Evaluation are two interrelated and complementary concepts and functions, defined as follows. Monitoring refers to a "continuous function that uses the systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds" (OECD, 2002, p. 27, cited in Kusek & Rist, 2004, p. 12). Evaluation, on the other hand, refers to "the systematic, periodic and objective assessment of an ongoing or completed project, program, or policy, including its design, implementation, and results" (OECD, 2002, p. 21, cited in Kusek & Rist, 2004, p. 12). Whereas these two concepts have been practised in different forms over decades, they have become more popular and more intentional as a management tool in recent years and have come to be known by the acronym M&E.
The rapid resurgence of M&E as a management tool is largely a result of new challenges and pressures under which governments find themselves, in respect of transparency, accountability and good governance, among others (Kusek & Rist, 2004). The levels of accountability, good governance and transparency have dropped significantly over the years and, as such, regular management and governance systems have become less and less effective in dealing with complex societal issues (Farrell & Commonwealth of Learning, 2009). Thus, it became necessary to find additional, more effective tools or governance systems through which these key principles of accountability, transparency and good governance would find expression in government institutions and development organizations (Masuku & Ijeoma, 2015). It is for this reason that M&E as a management tool gained prominence. Formulated into a system, M&E can, among other things, "provide the information needed to assess and guide project strategy, ensure effective operations, meet internal and external reporting requirements, and inform future programming" (Chaplowe, 2008, p. 3).

The backlash against the challenges of poor accountability, lack of transparency and poor governance, as well as internal and external pressures directed at governments and development organizations, has prompted these institutions to account to their citizens and stakeholders by demonstrating the actual results of their interventions (policies, programmes and projects) on people and communities (Kusek & Rist, 2004). Hence, many governments and institutions have been persuaded to design, implement and mainstream Monitoring and Evaluation systems in their governance structures. Broadly speaking, managing for results refers to the function of properly defining the anticipated project outcomes, monitoring and evaluating performance, and ensuring that the necessary changes are made where required, in order to accomplish project effectiveness (Lahey, 2015).

2.6. The emergence of M&E in South Africa

There are reports which suggest that M&E in South Africa has been known since the early 1990s; however, it was known only by a few government employees who became familiar with it through aid agencies as well as international practice (Masuku & Ijeoma, 2015). Hence, initial attempts at M&E practice in South Africa are reported to date back to the late 1990s, although those attempts were generally unsuccessful as there was no centrally driven system (Goldman et al., 2012, p. 12). Following its global popularity, interest in the practice resurfaced in South Africa around 2004, a period during which the Presidency was given the task of leading the effort of mainstreaming M&E in the public sector (Goldman et al., 2012). The Presidency led this effort together with other role players in the South African government, including the National Treasury, the Department of Public Service and Administration, the Department of Provincial and Local Government, the Public Service Commission as well as Statistics South Africa (Bester, 2009, in Masuku & Ijeoma, 2015, p. 12). Subsequent to these efforts, and as a result of multiple pressures occasioned by persistent levels of poverty and inequality, corruption, as well as high rates of service delivery demonstrations, among others, the South African government adopted a plan for the establishment and institutionalization of M&E in its governance as a mechanism for planning, accountability and good governance (Goldman et al., 2012, p. 3).
Following the adoption of this plan, a policy framework, the Government-Wide Monitoring and Evaluation system, was drafted and adopted in 2007 (Presidency, 2007). The Government-Wide M&E framework provides policy guidance and technical support to all three spheres of government: national, provincial and local (Presidency, 2007). The policy imperative rests with the Ministry which was created in 2009 and strategically located in the Office of the Presidency, following which the National Department of Planning, Monitoring and Evaluation (DPME) was created in 2010 (Goldman et al., 2012).

Considering the semi-federal nature of the South African government system, the GWM&E led by the DPME had to be cascaded down to the nine provinces and their respective departments, as well as to local government municipalities (Presidency, 2007). To that effect, the Presidency states explicitly that every accounting officer of a department or a municipality should, as a matter of statutory requirement, establish an M&E system for their respective institution, be it a department, state entity or a municipality (Presidency, 2007, p. 4). In so far as the M&E system is concerned, the government adopted the outcomes approach to M&E and, accordingly, adopted 12 key outcomes for the 2009 administration, which included a focus on health, education and crime, among others (Goldman et al., 2012). Hence, ministers had to sign performance contracts with the Presidency, and their departments had to report on the achievement of their respective outcomes (Goldman et al., 2012).

The introduction of the Government-Wide M&E system was met with some unwillingness and resistance from the bureaucracy and the broader body politic (Goldman et al., 2012). Given the persistent culture of corruption, maladministration and overall lack of transparency in the public sector, this resistance was to be expected (Manyaka & Nkuna, 2014). Over and above the unwillingness was the fact that government did not have adequate capacity, both human and institutional, to implement the M&E system (Presidency, 2007).

Bearing in mind everything that has been said about M&E in the preceding sections, it is paramount to provide a critical reflection by way of concluding this section. Central to these reflections is that M&E is not a panacea for poor governance, nor for challenges relating to, but not limited to, poor service delivery and corruption, which plague many countries, including South Africa, as stated by Manyaka and Nkuna (2014). The year 2020 marked exactly ten years since the DPME was established and M&E was mainstreamed in the governance structures (Goldman et al., 2012). Despite its well-canvassed benefits, M&E cannot be 'copied' from one country and 'pasted' into another (Kusek & Rist, 2004; Mackay, 2007). It is crucial that the tools be used and implemented in accordance with the context of a specific country or institution, and therefore acknowledge the complex challenges with which developing countries, for example, are confronted.
Many developing countries have relatively ineffective governance and organizational structures, and as such, it is nearly impossible for a system as complex and detailed as an M&E system to fit without the necessary reforms being made before attempting to institutionalise M&E (Abrahams, 2015). South Africa, for example, has been widely reported to have some of the most progressive policies; however, the country is often seen to fall short in implementing those policies (Tebele, 2016). The lack of, or inability to achieve, policy implementation has widely been reported to be caused by, among others, a lack of skills, knowledge, experience and expertise (McLaughlin, 1987, p. 172; Barrett, 2004, p. 159; Lasswell, 2003, p. 85, in Tebele, 2016, p. 15). Hence, an M&E system may not function effectively, as the policies and programmes which the system is set up to monitor and ultimately evaluate quite often do not get implemented (Tebele, 2016). Where there is implementation, one frequently finds that there is a lack of 'proper' data systems to enable the M&E system to function, considering that data is the backbone of M&E (Görgens & Kusek, 2009). In such instances, M&E gets reduced to a malicious compliance exercise, instead of meaningful outcomes-oriented or results-based monitoring, evaluation and reporting (Mataka, 2015).

It is worth noting, against this background, that the implementation of results-based M&E is in itself a "process of continuous improvement" (Farrell & Commonwealth of Learning, 2009, p. 7); thus, it takes time for some institutions to get it right. The success of an M&E system is marked by, among others, the creation of a sustainable, well-functioning M&E system in an institution whereby M&E information of good quality is utilised (Mackay, 2007).

2.7. Theoretical and Conceptual Frameworks

2.7.1. Results-Based M&E

Whereas many institutions, including governments and development organizations, have over the years designed and implemented Monitoring and Evaluation systems, many of them have not been able to derive much benefit from these systems (Farrell & Commonwealth of Learning, 2009). Some of these institutions have been accused of, among others, "reporting more about programme activities and processes than the results achieved" (Farrell & Commonwealth of Learning, 2009, p. 7). This type of system is characteristic of implementation-focused M&E, whose focus is limited to inputs, activities and outputs (Kusek & Rist, 2004). In the context of governments, citizens demand that government officials account for the use of their allocated resources, and therefore expect these officials to demonstrate the actual results or impact of their interventions (Kusek & Rist, 2004). This demand for results and accountability has prompted many institutions to attempt to demonstrate results and therefore adopt models of Results-Based Management (Farrell & Commonwealth of Learning, 2009, p. 7). Farrell and Commonwealth of Learning (2009, p. 7) posit that Results-Based Management (RBM) requires that the intended results of an intervention be described in a "sequential hierarchy, beginning with shorter-term results that, when achieved, lead to achievement of broader long-term results". Results-Based M&E (RB-M&E) is derived from Results-Based Management (Farrell & Commonwealth of Learning, 2009).
1) define results-based M&E as a “powerful public management tool” that can be used to measure progress and evaluate outcomes of an intervention and, subsequently, give feedback to the ongoing routine management system. Measuring progress includes the tasks of tracking the progress of an intervention from inputs and activities to outputs through their defined indicators. Inputs are defined as the needed resources (human, financial, etc.) which must be in place in order for certain tasks or duties of an intervention to take place (Caldwell, 2002 in Chaplowe, 2008). Activities are the actual duties or regular efforts needed to produce the outputs; when completed, some changes occur (Caldwell, 2002 in Chaplowe, 2008). Outputs are the immediate products or services needed to achieve the outcomes (Caldwell, 2002 in Chaplowe, 2008). Finally, indicators are tracking mechanisms through which to measure progress or completion of stated activities (Caldwell, 2002 in Chaplowe, 2008). The above are typically what characterise monitoring.

Evaluating outcomes, on the other hand, involves a periodic assessment of a programme conducted at different intervals of the intervention to ascertain its relevance, efficiency, effectiveness, impact and sustainability (Kusek & Rist, 2004, p. 114). There are many different types of evaluations, which typically include formative evaluations, summative evaluations, midterm evaluations and impact evaluations, as well as final evaluations (International Federation of Red Cross, 2011). Some methods are more prevalent than others, and these are implementation evaluation, process evaluation, outcome evaluation and impact evaluation (Clifton, 2003; Kusek & Rist, 2004). The choice of an evaluation method depends on a number of factors which include, but are not limited to, the purpose of the evaluation, the objective of the evaluation and the specific evaluation domain (Greene, Mark & Shaw, 2006).

The literature reviewed on the City of Johannesburg, together with the current appetite to accomplish the 2040 GDS objectives, suggests that the City is equally attempting to implement a results-based M&E system through its City-wide M&E framework. Evidence, however, points to the fact that the City has not been able to make effective use of its M&E system thus far, owing to numerous reasons, with human capacity being one of them (Ndhlovu et al., 2017). This corroborates the view that the strengths and capacities of institutions differ from one institution to the next, and as such, some institutions with limited capacity are still ‘trapped’ in implementation-focused M&E and have not moved towards results-based M&E (Masuku & Ijeoma, 2015).

2.7.2. Elements of Results-Based Monitoring and Evaluation

According to Kusek and Rist (2004, p. 17), as adapted from Fukuda-Parr, Lopes, and Malik (2002, p. 11), the following are key elements of a results-based monitoring system:

➢ Baseline data to describe the problem or situation before the intervention, used as the first critical measures. Kusek and Rist (2004, p. 81) define baseline data as qualitative or quantitative information that provides data at the beginning of, or just prior to, the monitoring period. The desktop review of literature on the City suggests that this element of RB-M&E finds expression in relevant documents such as the IDP and SDBIPs (COJ Service Delivery and Budget Implementation Plan 2020/21; COJ Integrated Development Plan Review, 2019/20; Annual Report 2018/19).
The City attempts to demonstrate its starting point in terms of its planned interventions, with the intention to show results once interventions are complete.

➢ Indicators for outcomes. Indicators are defined as the “quantitative or qualitative variables that provide a simple and reliable means to measure achievement, to reflect the changes connected to an intervention, or to help assess the performance of an organization against the stated outcomes” (Kusek & Rist, 2004, p. 65). Outcome indicators are therefore critical for answering the questions of “how will we know success or achievement when we see it and, are we moving towards our desired outcomes?” (Kusek & Rist, 2004, p. 65). The review of literature shows that this element of RB-M&E finds expression in relevant City documents such as the IDP and SDBIPs. Hence, the City’s system and employees are expected to report on key performance indicators as well as outcome indicators (COJ SDBIP, 2020/21; COJ IDP Review, 2019/20; Annual Report 2018/19). However, it appears that the indicators in these documents are primarily at output level, as opposed to outcome level.

➢ Data collection on outputs and how and whether they contribute towards the achievement of outcomes. This element equally finds expression in the City, as shown in relevant documents, especially the integrated annual reports of the City (COJ SDBIP, 2020/21; COJ IDP Review, 2019/20; Annual Report 2018/19). However, there is fairly minimal indication of a “contribution or attribution” (Rogers, 2008) of these outputs to the broader outcomes of the City as enshrined in the 2040 GDS.

➢ Systematic reporting with more qualitative and quantitative information on the progress toward outcomes. This element equally finds expression in the City. There is a clearly defined reporting system in the City, from programme implementers to data capturers, M&E officials in delivery agents and cluster officials, all the way to sub-mayoral structures and, ultimately, council (COJ SDBIP, 2020/21; COJ IDP Review, 2019/20; Annual Report 2018/19).

All indications point to the fact that the City’s M&E framework attempts to take the shape of a results-based M&E system, and therefore mirrors the Government-Wide M&E system, which is referred to as outcomes-based (COJ M&E framework, 2012; Presidency, 2007).

2.7.3. 12 components of effective M&E systems

Having outlined that the City of Johannesburg has adopted a results-based approach in its M&E framework, it is pivotal to hone in on the critical components of an effective results-based M&E system. Making effective use of a results-based M&E system is a challenging task, as reported by several researchers, and it requires a great deal of institutional reform as well as different forms of institutional capacity (Kusek & Rist, 2004). To that effect, Görgens and Kusek (2009) outline 12 components of an effective M&E system, listed below (see Figure 2):

1. Structures and organizational alignment for the M&E system
2. Human capacity for the M&E system
3. M&E partnerships
4. M&E plans
5. Costed M&E workplans
6. Advocacy, communication and culture for M&E
7. Routine monitoring
8. Periodic surveys
9. Databases useful for M&E
10. Supporting supervision and data auditing
11. Evaluation and research
12. Using data to improve results

Figure 2: The 12 components of an effective M&E system

According to Görgens and Kusek (2009), these components are interlocking and interdependent parts of a larger whole.
These components link together to form three subsets. The first comprises components relating to people, partnerships and planning (the outer ring): (1) organizational structure and alignment for M&E, (2) human capacity for M&E systems, (3) M&E partnerships, (4) M&E plans, (5) costed M&E workplans, and (6) M&E advocacy, communication and culture (Görgens & Kusek, 2009). The second subset includes components relating to collecting, capturing and verifying data (the middle ring): (7) routine programme monitoring, (8) surveys and surveillance, (9) M&E databases, (10) supervision and data auditing, and (11) evaluation and research (Görgens & Kusek, 2009). Lastly, the component relating to using data for decision-making (the inner ring) is (12) data dissemination and use (Görgens & Kusek, 2009). Although all these components are important, this chapter pays attention to the first two, structures and organizational alignment for M&E and human capacity for M&E systems, for their particular relevance to this paper.

2.7.3.1. Component 1: Structures and organizational alignment for M&E

Görgens and Kusek (2009) suggest that it is crucial for an organisation to have an established organizational structure which can be clearly identified. An organizational structure provides for a clear definition of the hierarchy and reporting lines within the organization (Görgens & Kusek, 2009). The layout of these structures differs from one organization to another. There are several types of organizational structure, and they include the traditional structure, divisional structure, team structure, matrix structure and hybrid structure (Görgens & Kusek, 2009). Reviewed literature suggests that the COJ has adopted a hybrid structure (COJ SDBIP 2020/21). It is crucial for M&E to fit within the structure, and for the role and mandate of M&E to be clearly spelt out. Görgens and Kusek (2009) further amplify the need to properly locate the M&E unit within the organizational structure. As previously stated, M&E finds expression within the COJ organizational structure in a decentralised manner; however, the principal coordination office is located in the strategic hub of the City (COJ M&E Framework, 2012). Within the organizational structure, Görgens and Kusek (2009) submit that M&E roles and duties ought to be officially allocated to specific individual posts, and this is evident in the City, as articulated in the M&E specialists’ job descriptions.

2.7.3.2. Component 2: Human Capacity for M&E systems

Human capacity for M&E systems is an important component, for it speaks directly to the focus of this study: an assessment of existing and ideal human capacity to coordinate the M&E framework. This component looks at capacity development at three levels: system capacity, organizational capacity and human capacity (Görgens & Kusek, 2009). At the human capacity level, it emphasizes the importance of having technically skilled personnel to discharge M&E functions effectively and efficiently (Görgens & Kusek, 2009). For conceptual clarity, it is prudent that the concept of capacity be put in context. As Peisah (2016, p. 5) puts it, capacity is not a unitary concept, but rather a contextual or domain-specific concept. Thus, when one suggests, for example, that an individual or institution is “lacking capacity”, one should qualify the assertion by specifying the context or domain in which that individual or institution is purported to lack capacity (Peisah, 2016).
In the context of this paper, human capacity refers to the presence of employees who possess specific skills and/or competence in a specific area of work, in this case monitoring and evaluation (Görgens & Kusek, 2009). Against this brief background, human capacity can be measured both in terms of quality and quantity. Whereas quantity can be determined in terms of the number of M&E officials in a department, quality can be determined by the “existence of properly and highly skilled personnel who perform their M&E functions effectively, efficiently, and sustainably” (Görgens & Kusek, 2009 as cited in Maphunye, 2013, p. 22). It is well canvassed that an M&E system or framework cannot function effectively or efficiently without skilled people who effectively perform the M&E tasks for which they are responsible.

The number of M&E officials required in an M&E department or unit is neither a normative nor a prescriptive issue; it largely depends on, among others, the size of the organisation, the structure of the organisation and the objectives of the system. As stated earlier, the City of Johannesburg, for example, has a decentralised M&E system that is overseen by only four M&E specialists in a cluster governance system (COJ M&E Framework, 2012). Kusek and Rist (2004, p. 22) postulate that an M&E system has to include, at a minimum, “the ability to successfully construct indicators; means to collect, aggregate, analyse and report on performance data in relation to the indicators and their baseline; and managers with the skill and understanding to know what to do with information once it arrives”. The monitoring and evaluation functions in the City of Johannesburg are decentralised and, therefore, different departments and entities conduct their own M&E in line with the City-wide M&E framework under the tutelage of the GSPCR-M&E unit (COJ M&E Framework, 2012). The M&E skills and competencies required in the GSPCR are therefore largely supervisory.

2.7.4. Skills and Competencies for M&E

For conceptual clarity, a skill is defined as “an ability which can be developed, not necessarily inborn, and which is manifested in performance, and not necessarily potential” (Moore and Ruud, 2004, p. 23). Competence, on the other hand, is defined as an “ability of an individual to perform a task using his/her knowledge, education, skills and experience” (Moore and Ruud, 2004, p. 23).

In the main, there is no universal consensus about the specific skills and competencies required of an M&E specialist (Morkel and Ramasobana, 2017). However, some of the common essential skills and competencies include: the ability to conduct quantitative and qualitative research; competence in the collection, analysis and interpretation of data; the ability to compile evaluation reports; the ability to manage projects; the ability to supervise others conducting evaluations; the ability to serve intended users of M&E reports; the application of evaluation standards; the ability to develop recommendations from evaluation reports; and the ability to manage evaluation reports (Stevahn, King, Ghere and Minnema, 2005 as cited in Maphunye, 2013, p. 23). Within this frame, Görgens and Kusek (2009, p. 111) argue that the skills profile of an M&E specialist is centred around four critical domains: institutional analysis; systems design and application; methodological tools; and information and knowledge utilization.
Beyond the technical skills, which alone are not sufficient (Segone, 2010), an M&E specialist should, at a bare minimum, possess interpersonal skills including, but not limited to, the ability to write fluently, work independently, use sound and sensitive negotiation skills, demonstrate cultural/gender sensitivity and nurture professional relationships (Görgens & Kusek, 2009, p. 121). Although not exhaustive, these skills and competencies, both technical and interpersonal, form the yardstick against which M&E human capacity levels are assessed in this study. The findings of this study provide an elaborate description of the skills and competence prevalent in the unit of analysis.

2.7.5. M&E Capacity Building and Development

It is insufficient to point out that an institution or an individual displays a shortfall in capacity to perform certain tasks without also highlighting some of the ways in which that capacity can be developed. Developing capacity, in this case, includes the ways and mechanisms by which M&E specialists can advance their skills to undertake some of their duties more effectively (Maphunye, 2013). Capacity building (and development) is defined as a “process by which employees (at an individual level) and organizations attain or improve their existing skills and knowledge to work better in a sustainable environment with proper tools and equipment needed to complete their jobs” (McKegg, Wehipeihana & Pipi, 2016 in Matshiliza, 2019, p. 494). Segone (2010) contends that capacity development implies an intentionality to strengthen capacities. Typically, capacity development is preceded by a human capacity assessment, which should identify capacity gaps so that capacity development interventions can be implemented in response (Segone, 2010; Simister and Smith, 2010).

Görgens and Kusek (2009, p. 11) submit that there are two broad approaches to M&E capacity assessment: the bottom-up and top-down approaches. In the bottom-up approach, M&E specialists are asked to list the areas of their capacity (skills) which need to be developed for them to perform their M&E functions. Although it is seemingly less favoured by the authors, this approach views stakeholders, in this case M&E specialists, as ‘experts’ who have the ability to “gauge their own level of knowledge and capacity development needs” (Görgens and Kusek, 2009, p. 100).

At the outset, one of the crucial ways through which one can develop technical M&E capacity is through training, such as an external formal M&E qualification from a higher education institution (Presidency, 2007). There are several reputable and accredited higher learning institutions that offer M&E qualifications at certificate, degree and post-graduate diploma level (Presidency, 2007). Acquiring such a qualification can expose one to essential skills such as constructing an indicator, formulating an outcome, setting targets, setting baseline values and conducting evaluations (both formative and summative), as stated by Kusek and Rist (2004) and Görgens and Kusek (2009). These are very basic, yet essential, skills that one can acquire through an accredited and reputable training institution. Other training modalities include on-the-job training and mentoring, through which technical capacity can be increased (Görgens & Kusek, 2009; Presidency, 2007).
This would be preceded by, for example, a capacity assessment (bottom-up or top-down) through which one determines the training needs of employees and, upon conclusion, draws up a report on the common skills in which employees need to be trained (Görgens & Kusek, 2009). Thereafter, a training service provider can be appointed to respond to the identified need or skills gap by conducting the training at the premises of the employer or off-site.

Beyond qualifications, there are several ways of developing capacity. The establishment of the South African Monitoring and Evaluation Association (SAMEA), as an example, is but one platform through which M&E capacity can be developed (Presidency, 2007). Through the biennial SAMEA conferences, M&E professionals exchange experiences and contemporary knowledge regarding new developments (theoretical and practical) in the field of M&E (SAMEA Conference Report, 2009). By and large, attendance at these conferences has the potential to introduce an M&E professional to different ways of approaching their duties in their respective organizations.

Lastly, the South African government has engaged in the process of Evaluation Capacity Building (ECB) in an effort to enhance evaluation skills, which have been reported to be in short supply in the public sector and, evidently, in the City of Johannesburg (DPME Evaluation Capacity Development Strategy, 2014). Evaluation capacity building is defined as “an organisation’s ability to bring about, align and sustain its objectives, structure, processes, culture, human capital and technology to produce evaluative knowledge that informs on-going practices and decision making in order to improve organisational effectiveness” (Mackay, 2002, p. 83 in Maphunye, 2013). Morkel and Ramasobana (2017) argue that ECB should primarily aim to achieve an evaluation practice that is sustainable, wherein evaluators have an appetite to ask relevant questions and use the right information and findings to inform decision-making. The development of evaluation capacity has the potential of improving critical evaluation competencies such as the ability to appreciate the social and political role played by evaluation, the ability to understand and make use of the various evaluation methods and approaches, and the ability to use evaluation tools of data collection and analysis (Görgens & Kusek, 2009). However, without leadership support, incentives and resources, participants may find it difficult to ensure the sustainability of evaluation practice, despite ECB efforts (Morkel & Ramasobana, 2017).

2.7.6. Theory of Change

Successful implementation of results-based M&E hinges on the use of theoretical frameworks such as the results chain, the results framework and the theory of change (COJ M&E Framework, 2012). The latter, the theory of change (TOC), is defined as a “pictorial description of how an intervention is supposed to deliver the desired results” (Gertler, Martinez, Premand, Rawlings, & Vermeersch, 2010, p. 22). The theory of change is one of the two components of programme theory (Rogers & Funnell, 2011). Programme theory is defined as “an explicit theory or model of how an intervention contributes to a set of specific outcomes through a series of intermediate results” (Rogers & Funnell, 2011, p. 31). Programme theory is therefore pivotal to understanding the results of an intervention and how they were achieved, which is a central feature of results-based M&E.
A theory of change can be perceived as both a product and a process. As a product, it is a pictorial description which shows a link between activities and the intended outputs and, ultimately, outcomes (Rogers, 2008). Thus, as Gertler et al. (2012) purport, it depicts a sequence of events leading to specified outcomes. A TOC can be modelled in various forms. In its simplest form, a TOC can be presented in the form of a logframe or a results chain. A results chain sets out a “logical, plausible outline of how sequences of inputs, activities and outputs for which a project is directly responsible interact with behaviour to establish pathways through which impacts are achieved” (Gertler et al., 2012, p. 23). To illustrate with a generic example: in a housing intervention, budget and personnel (inputs) enable construction activities, which produce housing units (outputs) that are, in turn, expected to contribute to improved living conditions (outcomes) and, ultimately, a better quality of life (impact). A depiction of a simple logic model or results chain is provided in Figure 3.

Figure 3: A simple logic model (W.K. Kellogg Foundation, 2004)

2.8. M&E professionalisation debates

There is an ongoing scholarly debate around the professionalisation of M&E, not only in South Africa but around the world. The debate is centred around the need to have commonly agreed and “clearly stated codes and standards of practice” (Abrahams, 2015, p. 4). Abrahams submits that professionalisation involves “the development of skills, identities, norms and values associated with being part of a professional group” (Abrahams, 2015, p. 4). It is difficult to agree on common practice and competencies in M&E because it is a diverse field which attracts experts from other established fields, with varied professional competencies relevant to their respective fields (Stevahn et al., 2015). Thus, M&E experts, especially evaluators, in different sectors have varied concerns and, therefore, it is difficult to find common ground on which M&E as a profession could stand.

Smith and Morkel (2018) argue that the challenges in the professionalisation discourse include divergent views on the specific competencies to standardise in order to cater for the different transdisciplinary M&E professionals. Some practitioners are evaluators, others are evaluation managers, and others specialise in data management; as such, there is no agreement about which competencies to standardise in the course of M&E professionalisation. According to Abrahams (2015), there have been some attempts to professionalise and build M&E capacity in South Africa, and these include the availability of M&E courses at different institutions, the availability of credit-bearing M&E courses offered by private training providers, and the establishment of the South African Monitoring and Evaluation Association (SAMEA). SAMEA holds biennial conferences wherein experiences are shared and the discourse around professional and ethical standards for M&E finds expression (SAMEA Conference Report, 2019). Other efforts to strengthen capacity and professionalise M&E include government documents such as the National Evaluation Policy Framework (NEPF) and the establishment of the African Evaluation Journal (Abrahams, 2015, p. 4).

This chapter commenced with the presentation and discussion of the physical research context, in which the City of Johannesburg, and particularly the GSPCR, was described. This was followed by an elaborate description of the City’s long-term strategic document, the Joburg 2040 GDS, as an important strategy which gave rise to the demand for M&E tools and systems in the City. Following this description, the City-wide M&E framework was discussed as an important tool for tracking and measuring the success and achievement of the Joburg 2040 GDS.
Subsequently, the chapter discussed human capacity for M&E in the City of Johannesburg. This discussion put in context the knowledge gap that this research aims to fill. The discussion was followed by a brief description of the concepts of monitoring and evaluation, as well as the journey of M&E in the South African government. Furthermore, the chapter discussed the theoretical and conceptual frameworks which inform this study. Importantly, this chapter provided a thorough discussion of concepts such as results-based monitoring and evaluation and the elements thereof, the key components of an effective M&E system, and the skills and competencies required for the effective discharge of M&E functions in this research’s unit of analysis.

CHAPTER THREE: RESEARCH PROCEDURE, METHODS AND DESIGN

3. Introduction

This chapter describes the research strategy, procedures and methods used in conducting this study. The chapter starts by defining the research strategy or method chosen for the study, followed by the suitable research design to support the research strategy and a justification thereof; the study used a qualitative case study research design. The chapter also discusses the research procedure and methods. In this section, the paper outlines the different data collection instruments and indicates the instruments used in this research, in this case semi-structured interviews for the collection of primary data and documents for the collection of secondary data. The subsequent section identifies and discusses the target population and the sample thereof, followed by the chosen sampling method: the study used purposive non-probability sampling in the selection of research participants, together with the selection of the unit of analysis. The next section describes in detail the process of data analysis as well as the specific data analysis method used, in this case thematic content analysis. In its penultimate section, the chapter discusses the limitations of this research as well as the ethical principles which had to be considered in the process of undertaking this research.

3.1. Research strategy

Research can be undertaken through various methods, approaches or strategies. Research strategy refers to the general orientation around the conduct of social research (Bryman, 2016). Creswell (2003, p. 3) uses the concept of research approach and defines it as a “plan and the procedure for research that span the steps from broad assumptions to detailed methods of data collection, analysis and interpretation”. Broadly speaking, there are three approaches or strategies through which social research can be conducted: a quantitative, a qualitative and a mixed strategy (Creswell, 2003; Bryman, 2016). Several authors, including Leavy (2017) and Bryman (2016), refer to a quantitative research strategy as one that emphasizes quantification in the collection and analysis of data. The authors further describe the quantitative research strategy as characterized by a deductive, as opposed to an inductive, approach, which is aimed at, among others, measuring variables, testing relationships between variables, correlation and causal relationships, and testing theories (Leavy, 2017; Bryman, 2016).
Contrary to a quantitative strategy, a qualitative research strategy places an emphasis on words and narratives in the collection and analysis of research data, using an inductive approach to research in which theory is generated rather than tested (Bryman, 2016). The differences go beyond data collection and analysis to include paradigms and philosophical assumptions. Qualitative research is defined as an “approach for exploring and understanding the meaning individuals or groups ascribe to a social or human problem” (Creswell, 2003, p. 3). Lastly, a mixed research strategy is a combination and integration of quantitative and qualitative research methods in the collection and analysis of data in a single research project (Bryman, 2016; Leavy, 2017). Leavy (2017) suggests that a mixed research strategy is generally appropriate when the purpose of the research project is to describe, explain or evaluate.

This study sought to explore and assess human capacity levels for monitoring and evaluation in the City of Johannesburg’s Group Strategy, Policy Coordination and Relations M&E unit. To achieve this exploratory function, a qualitative case study research design was used. A case study is described as a research design that provides a detailed and intensive analysis of one or more robust cases (Johnson & Christensen, 2013). A qualitative case study method was chosen as it would enable the researcher to collect rich and in-depth data with a “strong potential for revealing complexity” and provide “thick descriptions” that are “nested in real context” (Miles, Huberman & Saldana, 2013, p. 11). Furthermore, the choice of this research approach was influenced by the fact that, when collecting data, the researcher asked participants to narrate their subjective views as they relate to human capacity for M&E in the GSPCR, which is an important characteristic of qualitative research. Another important consideration which made qualitative research suitable for this study is its focus on learning the meaning that the participants hold about the problem under study, in this case the problem of human capacity for M&E in the COJ’s GSPCR (Creswell, 2003).

This qualitative case study sought to hone in on the GSPCR-M&E unit as a single case by exploring underlying context-specific factors in detail so as to ‘lift up the voices of the participants’ and, subsequently, obtain a thorough understanding of their perceptions of the required as well as prevalent skills and competencies for coordinating the M&E framework (Wagner et al., 2012). The key feature of the GSPCR-M&E unit as a case is that it is the principal M&E unit which oversees the implementation of the COJ M&E framework, and it therefore employs M&E specialists who are well positioned to share narrative views about human capacity levels for discharging M&E functions (GSPCR-M&E Organizational Structure, 2016). As stated, this qualitative research relied on participants’ perspectives of their context and the different meanings they construct about this context. These meanings are multifaceted, and they lead the researcher to “look for complexity of views”; hence, the study leaned towards a social constructivist paradigm (Creswell, 2014, p. 8). This paradigm allowed the researcher to understand the world or research subject as others experienced it (Wagner et al., 2012). This study benefitted from using a qualitative research strategy.
Using a qualitative strategy enabled the participants to provide thick descriptions of their context in detail, from their own subjective perspectives. This was particularly important because, often, when such assessments are made, they are facilitated by outsiders who determine the required capacity and barely give room for participants to define and assess their capacity from their own vantage point.

3.2. Research design

Research design is quite often confused with research strategy. Research design refers to a framework for the generation of evidence that is chosen to answer the research question in which the investigator is interested (Bryman, 2016, p. 39). Five prominent research designs have been outlined: experimental design, cross-sectional design, longitudinal design, comparative design and case study design (Bryman, 2016). As stated, this research project used a case study as its primary design, combined with a qualitative research strategy, to make a qualitative case study design. This is explained by the fact that the study chose the City of Johannesburg’s GSPCR-M&E unit as its unit of analysis. A case study is described as a research design that provides a detailed and intensive analysis of one or more robust cases (Johnson and Christensen, 2016). Congruent with this definition, conducting an M&E human capacity assessment in the GSPCR-M&E unit as a single case enabled the researcher to provide and establish an intensive analysis of the unit.

3.3. Research procedure and methods

3.3.1. Data collection instruments

Data collection is the hallmark of any research project. Data collection is described as a process of gathering data from the sample so that the research question can be answered (Bryman, 2012). Like Bryman (2012), Johnson and Christensen (2016, p. 338) describe data collection as a procedure that a researcher uses to obtain data from research participants. Broadly speaking, there are several methods through which data can be collected. Typically, data can be collected using two types of data collection processes: structured and semi-structured (Bryman, 2016). The type of data collection instrument to use is generally determined by the research strategy, research design, purpose of the research, theoretical framework and so on (Bryman, 2012; Neuman, 2014). In a structured data collection process, the researcher “establishes in advance the broad contours of what he or she needs to find out about and designs the research instruments to implement what needs to be known” (Bryman, 2012, p. 12). In such a case, the researcher does not leave room for additional data outside the structured guideline.

Contrary to a structured data collection process, semi-structured data collection “emphasizes a more open-minded view of the research process so that there is less restriction on the kind of things that can be found about” (Bryman, 2012, p. 12). In semi-structured data collection, the researcher leaves room for additional and important data which may emerge outside of the structured guideline. There are several examples of data collection instruments through which data collection can be undertaken, and these generally fall within the broad categories of structured and semi-structured data collection processes (Brancati, 2018). A questionnaire is a popular example of a structured data collection instrument, predominantly used in quantitative research methods (Wagner et al., 2012).
An interview, on the other hand, is a popular example of a semi-structured data collection instrument, predominantly, although not exclusively, used in qualitative research methods (Leavy, 2017). Wotela (2017) suggests the following basic selection criteria for choosing an appropriate data collection instrument: “Use fully structured data collection instrument when there is sufficient knowledge on the research topic; semi-structured instrument when there is sufficient knowledge on the topic, but the researcher has an inclination to accommodate new data that may emerge during data collection” (Wotela, 2017, p. 230). Because this research was conducted using a qualitative case study method, semi-structured interview instruments were used in the collection of primary data. A detailed description of a semi-structured interview is provided below.

3.3.1.1. Interviews

Social research authors such as Brancati (2018) and Bryman (2016) have defined an interview as a purposeful conversation and a method of collecting both qualitative and quantitative data through a series of questions, typically asked of individuals face-to-face, by telephone or by other means, to collect data about the ideas, experiences, beliefs, views and behaviours of participants (Wagner et al., 2012). Interviews are predominantly used in qualitative research, but they can be used in quantitative research as well (Bryman, 2016). As previously stated, this research used semi-structured interview instruments in the collection of primary data. The aim of using interviews was to obtain rich descriptive data that would help the researcher to “see the world through the eyes of the participants” (Wagner et al., 2012).

Bryman (2016) suggests that a semi-structured interview is an interview type which contains a list of questions that can be asked in any sequence, without following any specific order. Furthermore, Wotela (2017) submits that semi-structured interviews are predominantly used where a lot is known about the subject, but the researcher or interviewer chooses to leave room for interviewees to share additional views. Hence, the researcher used semi-structured interviews to direct the flow of the conversations, while still allowing room for the participants to share important views beyond and outside of what was prepared. Moreover, semi-structured interviews are especially appropriate when less is known about a contemporary phenomenon, hence they are structured to leave room for the researcher to “probe and explore new emerging lines of inquiry that are relevant to the unit of analysis” (Brancati, 2018, p. 147).

There are several methods through which interviews can be conducted, of which face-to-face and telephone interviews are predominant (Wagner et al., 2012). The study’s primary aim was to use face-to-face interviews for their relevance to the research strategy (qualitative) and design (case study), as well as for their advantages, which include the highest response rate, the opportunity to create rapport with the interviewees, and the opportunity for the interviewer to probe for responses and seek clarity when the need arises (Neuman, 2014; Wagner et al., 2012). The nature of the study required that in-depth information be collected so that widespread