4. Electronic Theses and Dissertations (ETDs) - Faculties submissions
Permanent URI for this community: https://hdl.handle.net/10539/37773
Search Results
11 results
Item The factors influencing the adoption of Machine Learning for regulation by central banks in SADC (University of the Witwatersrand, Johannesburg, 2024) Kunene, Sibusiso; Totowa, Jacques

The study investigates SADC central banks' readiness to adopt machine learning technologies, with raw data collected through an online survey. The raw data was transformed into modellable data using principal component analysis and then fitted to the proposed logistic regression model design. The data underwent reliability and validity tests, which confirmed that the measurements of the constructs were consistent, reliable, and appropriately represented the intended constructs. Correlation analysis was employed to examine the hypotheses of the model, and multiple and stepwise regression were performed as additional tests of the model. The results show that IT infrastructure is instrumental in enabling SADC central banks to implement machine learning capabilities. Top management support is crucial for implementing ML, but adequate IT infrastructure is also essential: while top management has a direct impact on readiness, the regulatory environment and IT infrastructure influence it indirectly. The policy implication of these results is that working groups among the sampled SADC central banks should be formed to address the noted shortcomings in the IT infrastructure and regulatory aspects of this adoption holistically.
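As a rough sketch of the modelling pipeline this abstract describes (principal component analysis feeding a logistic regression), the following minimal Python example uses synthetic placeholder data in place of the survey responses; the dimensions, labels and component count are illustrative assumptions, not the study's actual design:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder stand-in for the survey: 120 respondents answering 12
    # Likert-scale items, with a binary "ready to adopt ML" label.
    rng = np.random.default_rng(0)
    X = rng.integers(1, 6, size=(120, 12)).astype(float)
    y = (X[:, :4].mean(axis=1) > 3).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Standardise, compress correlated survey items into principal
    # components, then fit the logistic regression on the component scores.
    model = make_pipeline(StandardScaler(), PCA(n_components=3), LogisticRegression())
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))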
Item Imputation of missing values and the application of transfer machine learning to predict water quality in acid mine drainage treatment plants (University of the Witwatersrand, Johannesburg, 2024) Hasrod, Taskeen

Access to clean water is one of the most difficult challenges of the 21st century. Natural unpolluted water bodies are among the most dramatically declining resources due to environmental pollution. In countries like South Africa, which has a mining-centred economy, toxic pollution from mine tailings dumps and disused mines leaches into the underground water table and contaminates it. This is known as Acid Mine Drainage (AMD) and poses a grave threat to humans, animals and the environment due to its toxic element and acidic content. It is therefore imperative that sustainable wastewater treatment procedures be put in place to decrease the toxicity of the AMD so that clean water may be recovered. An efficient circular economy is created in the process, since the original wastewater can be recycled to provide not only clean water but also valuable byproducts such as sulphur (from the elevated sulphate content) and other important minerals. Traditional analytical chemistry methods used to measure sulphate are usually time-consuming, expensive and inefficient, leading to incomplete analytical results being reported. To address this, this study aimed at imputing missing values for sulphate concentrations in one AMD treatment plant dataset and then using that to conduct transfer learning to predict concentrations in two other AMD treatment plant datasets. The approach involved using historical water data and applying geochemical modelling as a thermodynamic tool to assess the water chemistry and conduct preliminary data cleaning. Based on this, Machine Learning (ML) was then used to predict the sulphate concentrations, thus addressing the limited data on this parameter in the datasets.

With complete and accurate sulphate concentrations, it is possible to conduct further modelling and experimental work aimed at recovering important minerals such as octathiocane, S8 (a commercial form of sulphur), gypsum and metals. Historical data from the three AMD treatment plants in Johannesburg, South Africa (viz., Central Rand, East Rand and West Rand) were obtained, and the larger Central Rand dataset was split into smaller untreated AMD (Pump A and Pump B) subsets. Thermodynamic and solution equilibria aspects of the water were assessed using the PHREEQC geochemical modelling code, which served as a preliminary data cleanup step. Eight baseline and three ensemble machine learning regression models were trained on the Central Rand subsets and compared to find the best performing model, which was then used to conduct Transfer Learning (TL) onto the East Rand and West Rand datasets to predict their sulphate levels. The findings pointed to a high correlation of sulphate with temperature (°C), Total Dissolved Solids (mg/L) and, most importantly, iron (mg/L). The linear correlation between iron and sulphate substantiated pyrite (FeS2) as their source following weathering. Water quality parameters were found to depend on factors such as weather and geography; this was evident in the treated water, which had quite different chemistry to that of the untreated AMD. The neutralisation agents used were based on those parameters, further delineating the chemistry of the treated and untreated water. The best performing ML model was a Stacking Ensemble (SE) regressor trained on Pump B's data, which combined the best performing models, namely the Linear Regressor (LR), Ridge Regressor (RD), K-Nearest Neighbours Regressor (KNNR), Decision Tree Regressor (DT), Extreme Gradient Boosting Regressor (XG), Random Forest Regressor (RF) and Multi-Layer Perceptron Artificial Neural Network Regressor (MLP) as the level 0 models, with LR as the level 1 model. Level 0 consisted of training heterogeneous base models to extract the crucial features from the dataset; these individual predictions and features were then fed to a single meta-learner model in the next layer (level 1) to generate a final prediction. The stacking ensemble model performed well, achieving a Mean Squared Error (MSE) of 0.000011, a Mean Absolute Error (MAE) of 0.002617 and an R2 of 0.999737 in under 2 minutes, and was selected for TL to the East Rand and West Rand datasets. Ensemble methods (bagging, boosting and stacking) outperformed the individual baseline models. However, when comparing a stacking ensemble that combined all the baseline models with one that combined only the best performing models, there was no significant improvement from excluding the bad models from the stack as long as the good models were included; in one case, it was actually beneficial to include the poorly performing models. All models were trained in under 2 minutes, which demonstrated the benefit of ML approaches compared to traditional approaches. The treated water data was so poorly correlated that model training was unsuccessful, with the highest achievable R2 value being 0.14; thus, no treated water model was available for TL.
TL was successfully conducted on the cleaned and modelled East Rand AMD dataset using the Central Rand (Pump B) stacking regressor, and a high level of accuracy between the predicted and true sulphate values was achieved (MSE: 0.00124, MAE: 0.0290 and R2: 0.963). This was achieved despite a marked difference in the distributions of the Central Rand and East Rand datasets, which further demonstrated the power of ML for water data. TL was also successful in imputing missing values in the West Rand dataset following prediction of sulphate levels in the cleaned and modelled West Rand AMD and treated water datasets. No true values for sulphate levels in the West Rand dataset were given, so accuracy comparisons could not be made; however, a general baseline for the amount of sulphate present in the West Rand treatment plant could now be established. The sulphate levels in the three treatment plants (Central Rand, East Rand and West Rand) were found to differ greatly from each other, with the Central Rand having the most normal distribution, the East Rand the most precise distribution and the West Rand the most variable distribution. Whilst the sulphate levels in the treated effluent waters could not be reliably predicted due to inherent issues (e.g., analytical inaccuracies and inconsistencies) and poor correlations within the treated water datasets, sulphate levels in all three of the untreated AMD datasets were successfully predicted with a high degree of accuracy. This underpinned the observation made previously about the discrepancies between treated and untreated water. The study has shown that it is possible to impute missing values in one water dataset and use transfer learning to complete and consolidate other similar, but scarce, datasets. This approach has been lacking in the water industry, resulting in reliance on traditional methods that are expensive and inadequate; water practitioners have consequently abandoned scarce datasets, losing potentially valuable information that could be useful for water remediation and for recovery of valuable resources from the water. As a spin-off from the study, it was shown that automation of such data analysis is possible: a Graphical User Interface (GUI) was developed for ease of use of the SE-ML model by those with little to no programming background or ML knowledge, e.g., the laboratory staff at the AMD treatment plants. This can also be used for teaching purposes in academia.
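The level 0 / level 1 design described above can be sketched with scikit-learn's StackingRegressor. Everything below is a simplified illustration on synthetic placeholder data: the feature columns, dataset sizes and the naive transfer step (applying the fitted stack directly to a second plant's data) are assumptions, not the thesis's exact procedure:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.tree import DecisionTreeRegressor
    from xgboost import XGBRegressor

    # Placeholder water-quality data standing in for Pump B (training) and
    # East Rand (transfer target); columns might be pH, Fe, TDS, temperature, EC.
    rng = np.random.default_rng(0)
    coef = np.array([0.1, 0.6, 0.2, 0.05, 0.05])
    X_pump_b = rng.normal(size=(500, 5))
    y_pump_b = X_pump_b @ coef + rng.normal(0, 0.05, 500)
    X_east_rand = rng.normal(size=(200, 5))
    y_east_rand = X_east_rand @ coef + rng.normal(0, 0.05, 200)

    # Level 0: the seven heterogeneous base models named in the abstract.
    level0 = [
        ("lr", LinearRegression()),
        ("rd", Ridge()),
        ("knnr", KNeighborsRegressor()),
        ("dt", DecisionTreeRegressor(random_state=0)),
        ("xg", XGBRegressor(random_state=0)),
        ("rf", RandomForestRegressor(random_state=0)),
        ("mlp", MLPRegressor(max_iter=2000, random_state=0)),
    ]
    # Level 1: a linear meta-learner combines the base predictions.
    stack = StackingRegressor(estimators=level0, final_estimator=LinearRegression())
    stack.fit(X_pump_b, y_pump_b)

    # Naive transfer step: apply the fitted stack to the other plant's data.
    pred = stack.predict(X_east_rand)
    print("MSE:", mean_squared_error(y_east_rand, pred))
    print("MAE:", mean_absolute_error(y_east_rand, pred))
    print("R2: ", r2_score(y_east_rand, pred))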
Item Regularized Deep Neural Network for Post-Authorship Attribution (University of the Witwatersrand, Johannesburg, 2024) Modupe, Abiodun; Celik, Turgay; Marivate, Vukosi

Post-authorship attribution is the computational process of determining the legitimate author of an online text snippet, such as an email, blog, forum post, or chat log, by employing stylometric features. The process consists of analysing various linguistic and writing patterns, such as vocabulary, sentence structure, punctuation usage, and even the use of specific words or phrases. By comparing these features to a known set of writing pieces from potential authors, investigators can make educated hypotheses about the true authorship of a text snippet. Post-authorship attribution also has applications in fields like forensic linguistics and cybersecurity, where determining the source of a text can be crucial for investigations or for identifying potential threats.

In a verification procedure to proactively uncover misogynistic, misandrist, xenophobic, and abusive posts on the internet or social networks, finding a suitable text representation to adequately symbolise and capture an author's distinctive writing is, from a computational linguistics perspective, typically known as stylometric analysis. Most posts on social media or elsewhere online are rife with ambiguous terminology that can compromise the precision of earlier authorship attribution models. The majority of extracted stylistic elements are idioms, onomatopoeias, homophones, phonemes, synonyms, acronyms, anaphora, and polysemy, which are fundamentally difficult for most existing natural language processing (NLP) systems to interpret, making it hard to correctly identify the true author of a given text. Further advancements in NLP systems are therefore necessary to handle these complex linguistic elements and improve the accuracy of authorship attribution models. In this thesis, we introduce a regularised deep neural network (RDNN) model to address the challenges of post-authorship attribution. The proposed method utilises a convolutional neural network, a bidirectional long short-term memory encoder, and a distributed highway network. The convolutional network was used to generate lexical stylometric features, which were fed into the bidirectional encoder to produce a syntactic feature vector representation. The feature vector was then passed through the distributed highway network for regularisation, reducing network generalisation errors, and given to the bidirectional decoder to learn the author's writing style. The feature classification layer consists of a fully connected network and a softmax function for prediction. The RDNN method outperformed existing state-of-the-art methods in terms of accuracy, precision, and recall on the majority of the benchmark datasets. These results highlight the potential of the proposed method to significantly improve classification performance in various domains. An interactive system to visualise the performance of the proposed method would further enhance its usability and effectiveness in quantifying the contribution of an author's writing characteristics in both online text snippets and literary documents. It is useful in processing the evidence needed to support claims or draw conclusions about an author's writing style or intent during pre-trial investigation by law enforcement agents. Incorporating this method into the pre-trial stage strengthens the credibility and validity of the findings presented in court, has the potential to advance the field of authorship attribution and the accuracy of forensic investigations, and supports a fair and just legal process for all parties by providing concrete evidence to support or challenge claims. We are also aware of the limitations of the proposed methods and recognise the need for additional research to overcome these constraints and improve the overall reliability and applicability of post-authorship attribution of online text snippets and literary documents for forensic investigations.
Even though the proposed methods have revealed some notable differences in author writing style, such as how influential writers, ordinary people, and suspected authors use language, the evidence from the results, based on the features extracted from the texts, has shown promise for identifying authorship patterns and aiding forensic analyses. However, much work remains to validate the methodologies' usefulness and dependability as effective authorship attribution procedures. Further research is needed to determine the extent to which external factors, such as the context in which a text was written or the author's emotional state, may affect the identified authorship patterns. It is also crucial to establish a comprehensive dataset that includes a diverse range of authors and writing styles, to ensure the generalizability of the findings and enhance the reliability of forensic analyses. The dataset used in this thesis does not include such a variety, for example impostors attempting to impersonate another author, which limits the generalizability of the conclusions and weakens the credibility of forensic analysis. Further studies could broaden the proposed strategy to detect and distinguish impostors' writing styles from those of authentic authors in both online and literary documents. It is also conceivable for numerous criminals to collaborate to perpetrate a crime; the proposed methods could be extended to detect the presence of multiple impostors, or the contribution of each criminal's writing style based on the person they are attempting to mimic. The likelihood of numerous offenders working together complicates the investigation and necessitates advanced procedures for identifying their individual contributions, as well as both authentic and manufactured impostor content within the text. This is especially difficult on social media, where fake accounts and anonymous profiles make it hard to determine the true identity of those involved; evidence can come from a variety of sources, including text, WhatsApp messages, chat images and videos, and can feed the spread of misinformation and manipulation. Promoting a hybrid approach that goes beyond text as evidence could help address some of these concerns; for example, integrating audio and visual data may provide a more complete perspective of the scenario. Such an approach compounds the limitations noted in the data distribution and may require more storage and analytical resources, but it can also lead to a more accurate and nuanced analysis of the situation.
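A simplified sketch of the CNN + BiLSTM + softmax pipeline described in this abstract is given below in Keras. The vocabulary size, sequence length and author count are hypothetical, and the thesis's distributed highway regularisation stage is approximated here by a plain dense layer with dropout rather than highway gating:

    from tensorflow.keras import layers, models

    # Hypothetical sizes: vocabulary, sequence length, candidate authors.
    VOCAB, MAXLEN, NUM_AUTHORS = 20000, 400, 10

    model = models.Sequential([
        layers.Input(shape=(MAXLEN,)),
        layers.Embedding(VOCAB, 128),
        layers.Conv1D(64, 5, activation="relu"),   # lexical stylometric feature maps
        layers.MaxPooling1D(2),
        layers.Bidirectional(layers.LSTM(64)),     # syntactic feature vector from the BiLSTM encoder
        layers.Dense(64, activation="relu"),       # stand-in for the distributed highway stage
        layers.Dropout(0.5),                       # generic regularisation in place of highway gating
        layers.Dense(NUM_AUTHORS, activation="softmax"),  # fully connected + softmax author prediction
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()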
Item Using Machine Learning to Estimate the Photometric Redshift of Galaxies (University of the Witwatersrand, Johannesburg, 2023-08) Salim, Shayaan; Bau, Hairong; Komin, Nukri

Machine learning has emerged as a crucial tool in cosmology and astrophysics, leading to extensive research in this area. This study uses machine learning models to estimate the redshift of galaxies, focusing on photometric data to obtain accurate results. Five machine learning algorithms (XGBoost, Random Forests, K-nearest neighbours, Artificial Neural Networks, and Polynomial Regression) are employed to estimate the redshifts, trained on photometric data derived from the Sloan Digital Sky Survey (SDSS) Data Release 17 database.

Various input parameters from the SDSS database are explored to achieve the most accurate redshift values. The research incorporates a comparative analysis, using different evaluation metrics and statistical tests to determine the best-performing algorithm. The results indicate that the XGBoost algorithm achieves the highest accuracy, with an R2 value of 0.94, a Root Mean Square Error (RMSE) of 0.03, and a Mean Absolute Percentage Error (MAPE) of 12.04% when trained on the optimal feature subset. In comparison, the base model achieved an R2 of 0.84, an RMSE of 0.05, and a MAPE of 20.89%. The study contributes to the existing literature by utilizing photometric data during model training and comparing different high-performing algorithms from the literature.
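The best-performing setup in this abstract (an XGBoost regressor evaluated with R2, RMSE and MAPE) might look roughly as follows; the photometric features and redshift targets below are synthetic placeholders rather than SDSS DR17 data, and the hyperparameters are illustrative:

    import numpy as np
    from sklearn.metrics import (mean_absolute_percentage_error,
                                 mean_squared_error, r2_score)
    from sklearn.model_selection import train_test_split
    from xgboost import XGBRegressor

    # Placeholder photometric table standing in for SDSS: five magnitudes
    # (u, g, r, i, z) per galaxy, with a synthetic redshift as the target.
    rng = np.random.default_rng(0)
    mags = rng.uniform(15, 22, size=(2000, 5))
    z = 0.3 + 0.05 * (mags[:, 1] - mags[:, 3]) + 0.02 * mags[:, 2] \
        + rng.normal(0, 0.02, 2000)

    X_train, X_test, z_train, z_test = train_test_split(mags, z, test_size=0.2, random_state=0)

    model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
    model.fit(X_train, z_train)
    pred = model.predict(X_test)

    print("R2:  ", r2_score(z_test, pred))
    print("RMSE:", mean_squared_error(z_test, pred) ** 0.5)
    print("MAPE:", 100 * mean_absolute_percentage_error(z_test, pred), "%")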
Item Business model innovation in South African companies under the changing post-COVID-19 world of work (University of the Witwatersrand, Johannesburg, 2021) Hlabathi, Katekani; Mzyece, Mjumo

Businesses that have survived pandemics and other major global disruptions have demonstrated the importance of continually re-evaluating their business models. Implementing business model innovation has been shown to significantly enhance a business's chances of surviving major global disruptions. This study aims to determine how the application of business model innovation, particularly in South African enterprises, has enabled these businesses to survive and remain profitable in a changing work environment, especially during the COVID-19 pandemic. In this context, business model innovation refers to the creative introduction of new ways for a business to provide value to its customers through the products it sells or the services it provides. A qualitative study with ten (10) respondents from South African enterprises was conducted to test the proposition that businesses that apply business model innovation in pandemics, such as the COVID-19 pandemic, will survive and become even more profitable. The study was conducted in several enterprises from different industries, using interviews and questionnaires, and aims to provide a possible framework for use by businesses during pandemics as well as a basis for further research on the subject. The study's key findings show that both internal and external factors influence the implementation of an innovative business model. COVID-19 was rated highly as a top-of-mind influence affecting how firms conduct their business today. The study also revealed that customers and stakeholders are key to developing an innovative business model. The limitations of the study relate to the number of respondents and their location, a direct effect of the qualitative nature of the study and the physical and other restrictions due to COVID-19; thus, the results may not be widely representative or fully replicable. Nevertheless, overall, the study indicates that business model innovation could give businesses the competitive advantage and differentiation needed to succeed during times of uncertainty.

Item A Data Science Framework for Mineral Resource Exploration and Estimation Using Remote Sensing and Machine Learning (University of the Witwatersrand, Johannesburg, 2023-08) Muhammad Ahsan, Mahboob; Celik, Turgay; Genc, Bekir

Exploring mineral resources and transforming them into ore reserves is imperative for sustainable economic growth, particularly in low-income developing economies. Limited exploration budgets, inaccessible areas, and long data processing times necessitate the use of advanced multidisciplinary technologies for minerals exploration and resource estimation. Conventional methods of mineral resource exploration require expertise in, and understanding of, spatial statistics, resource modelling, geology and mining engineering, as well as clean, validated data to build accurate estimations. In the past few years, data science has become increasingly important in the field of minerals exploration and estimation, and this study is a step forward in integrating the two fields. The research developed a state-of-the-art data science framework that can effectively combine limited field data with remotely sensed satellite data for efficient mineral exploration and estimation, validated through case studies. Satellite remote sensing has emerged as a powerful technology for mineral resources exploration and estimation, used to map and identify minerals, geological features, and lithology. Using digital image processing techniques (band ratios, spectral band combinations, spectral angle mapper and principal component analysis), the hydrothermal alteration of potential mineralization was mapped and analysed. Advanced machine learning and geostatistical models were used to evaluate and predict mineralization using field-based geochemical samples, drillhole samples, and multispectral satellite remote sensing based hydrothermal alteration information. Several machine learning models were applied, including Convolutional Neural Networks (CNN), Random Forest (RF), Support Vector Machine (SVM), Support Vector Regression (SVR), Generalized Linear Model (GLM), and Decision Tree (DT). The geostatistical models comprised Inverse Distance Weighting (IDW) and Kriging with different semivariogram models: IDW interpolates between known data points to predict mineralization, while Kriging uses spatial autocorrelation to make predictions. To assess the performance of the machine learning and geostatistical models, a variety of predictive accuracy metrics were used, including the confusion matrix, the receiver operating characteristic (ROC) curve, the success-rate curve, Mean Absolute Error, Mean Squared Error, and root mean square prediction error. The results based on the 10 m spatial resolution show that Zn is best predicted with RF, with significant R2 values of 0.74 (p < 0.01) and 0.70 (p < 0.01) during training and testing, while for Pb the best prediction is made by SVR, with significant R2 values of 0.72 (p < 0.01) and 0.64 (p < 0.01) for training and testing, respectively. Overall, SVR and RF outperform the other machine learning models, with the highest testing R2 values. The experimental results also showed that no single method can be used independently to predict the spatial distribution of geochemical elements in streams; instead, a combined approach of IDW and Kriging is advised to generate more accurate predictions. For the copper case study, the RF model exhibited the highest predictive accuracy, consistency and interpretability among the three ML models evaluated, and achieved the highest predictive efficiency in capturing known copper (Cu) deposits within a small prospective area.
Compared to the SVM and CNN models, the RF model performed better in terms of predictive accuracy and interpretability. The evaluation results showed that the data science framework is able to deliver highly accurate results in minerals exploration and estimation. The results of the research were published in several peer-reviewed journal and conference articles. The innovative aspect of the research is the application of machine learning models to both satellite remote sensing and field data, which allows for the identification of highly prospective mineral deposits. The framework developed in this study is cost-effective and time-saving and can be applied to inaccessible and/or new areas with limited ground-based knowledge to obtain reliable and up-to-date mineral information.
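Of the two geostatistical interpolators mentioned above, Inverse Distance Weighting is simple enough to sketch directly; the sample coordinates and Zn concentrations below are hypothetical, and the study's Kriging and semivariogram modelling are not reproduced here:

    import numpy as np

    def idw_predict(xy_known, values, xy_query, power=2, eps=1e-12):
        """Inverse Distance Weighting: each prediction is a distance-weighted
        average of the known sample values."""
        preds = []
        for q in np.atleast_2d(xy_query):
            d = np.linalg.norm(xy_known - q, axis=1)
            if np.any(d < eps):                # query coincides with a sample
                preds.append(values[np.argmin(d)])
                continue
            w = 1.0 / d ** power               # closer samples weigh more
            preds.append(np.sum(w * values) / np.sum(w))
        return np.array(preds)

    # Hypothetical stream-sediment samples: coordinates and Zn concentrations.
    xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    zn = np.array([30.0, 45.0, 38.0, 52.0])
    print(idw_predict(xy, zn, np.array([[0.5, 0.5]])))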
Item Mean-Variance Optimisation of A South African Index Based Portfolio Using Machine Learning (University of the Witwatersrand, Johannesburg, 2021) Makgoale, Katlego; Jakubose, Sibanda

This study compared the effectiveness of Markowitz Mean-Variance Portfolio Optimisation against a machine learning technique for constructing an optimal portfolio. The study aimed to construct an optimal portfolio using the Mean-Variance Analysis framework, construct an optimal portfolio using a machine learning technique (Support Vector Regression), and contrast the results of the Minimum-Variance Portfolio and the Machine Learning Portfolio. The stocks of the FTSE/JSE FIN15 index were chosen to construct the portfolio. The historical returns of the stocks in the index were used to train (December 2014 to June 2019) and test (June 2019 to December 2020) the models. The Mean-Variance Analysis and Minimum-Variance Portfolio were constructed using Python code compiled by the author; the Support Vector Regression model was likewise built in Python. The weights for the Machine Learning portfolio were calculated using the pseudo-inverse matrix and the predicted values of the regression model. The Minimum-Variance and Machine Learning approaches produced different portfolios, both containing fewer holdings than the original index. The performance of the Minimum-Variance Portfolio exceeded that of the index and the Machine Learning Portfolio with regard to relative (excess) returns and total returns in the out-of-sample period. The Machine Learning portfolio replicated the index returns well but failed to exceed them and typically carried higher risk. It was concluded that the Minimum-Variance portfolio would be the most attractive to a risk-averse investor, while the Machine Learning portfolio underperformed both the Minimum-Variance portfolio and the index, confirming the effectiveness of Mean-Variance Optimisation in a South African context against a machine learning technique.
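The minimum-variance leg of this comparison has a standard closed form, w = S^-1 1 / (1' S^-1 1), where S is the covariance matrix of asset returns and 1 is a vector of ones. The sketch below uses simulated returns and ignores any long-only or other constraints the study may have imposed:

    import numpy as np

    def min_variance_weights(returns):
        """Unconstrained minimum-variance weights: w = S^-1 1 / (1' S^-1 1),
        with S the sample covariance matrix of the return series."""
        cov = np.cov(returns, rowvar=False)
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)   # solves S w = 1 without inverting S
        return w / w.sum()               # normalise so the weights sum to 1

    # Placeholder daily returns for four hypothetical FIN15 constituents
    # (rows are trading days, columns are stocks).
    rng = np.random.default_rng(0)
    returns = rng.normal(0.0005, 0.01, size=(250, 4))
    print(min_variance_weights(returns))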
Item Investigate the role of skills development hubs in equipping disadvantaged communities in South Africa to gain competencies required for the Fourth Industrial Revolution (4IR) (University of the Witwatersrand, Johannesburg, 2020) Desai, Mohsin; Sibanda, Tonderai

South Africa's participation in the global trend of the Fourth Industrial Revolution (4IR) has grown to include almost every business segment and is set to influence every conceivable aspect of all industries. The 4IR era, which is blurring the lines between the digital, physical, and biological spheres, began as an initiative to combat challenges faced by the manufacturing sector. Today, however, it is characterized by a blend of technologies and can be somewhat daunting to many organisations, not to mention individuals in general. South Africa's National Development Plan (NDP) highlights that, together with social development, there is a dire need to bridge the gap of skills shortages, especially in disadvantaged communities (Kraak, 2004). This social entrepreneurship research investigates the extent to which skills development hubs in disadvantaged communities can assist in the alleviation of poverty by bridging the skills gap in 4IR areas that will be essential for equipping Africans to be at the forefront of technological advancements. The research focused on the development of Africa 4IR training hubs, targeting initially the main economic hubs of Gauteng province and then expanding throughout South Africa. Technological skills are in short supply in South Africa, and filling this skills gap could alleviate unemployment and poverty, especially among disadvantaged communities. The projections and proposal for training hubs in this research are based on findings drawn from existing literature and from interviews with young professionals, university students, corporate managers and entrepreneurs. Using institutional theory as a lens, this research investigated the role of skills development hubs in equipping disadvantaged communities in South Africa. It also provides a collaborative framework involving all relevant stakeholders from the context of social entrepreneurship, and proposes starting low-cost training hubs to develop the competencies required in the era of the Fourth Industrial Revolution through public-private partnerships.

Item The perceived impact of Emerging Technologies on Cybersecurity in the South African financial sector (University of the Witwatersrand, Johannesburg, 2022) Philips, Denzil; Pillay, Kilu

This study investigates the perceived impact of emerging technologies on cybersecurity in South African financial institutions. New and emerging technologies have made significant advancements in many industries and can be very disruptive in nature, and the majority of these technologies have changed the cyber threat landscape as well. These include, among other things, cloud computing, artificial intelligence, and machine learning. The study offers insight into how these emerging technologies affect the cybersecurity of financial institutions in South Africa. The study participants were information technology risk and cybersecurity professionals. The sample size of 11 individuals was considered sufficient based on the spread across the financial sector and the participants' experience within the various industries. The individuals were from banks, insurers and market infrastructures within the South African financial sector. The sample focused on key financial institutions, specifically banks, insurers, and market infrastructures, based in different provinces of South Africa, in centres such as Johannesburg and Cape Town, where the impact could be systemic for the country. A qualitative study was adopted by the researcher, based on systems theory, to determine the relationship between the adoption of emerging or new technologies and the impact this has on cybersecurity.

There were various responses from the different institutions, focusing on the adoption of emerging technologies, the effects of this adoption on the cybersecurity environment, the risk and vulnerability management processes, and the ability to adapt and respond to new cybersecurity risks introduced by emerging technologies. The results of the study found a clear link between the adoption of emerging technologies and the increase in cybersecurity requirements, with emerging technologies significantly impacting the cybersecurity domain/function.

Item Use of Artificial Intelligence, Machine Learning and Autonomous Technologies in Mining Industry, South Africa (University of the Witwatersrand, Johannesburg, 2023) Nong, Setshaba; Sethibe, Tebogo

The mining industry plays a significant role globally in driving various industries and contributing to economic prosperity. Locally, South Africa is known for having some of the largest mineral reserves in the world, although it is burdened with challenges inhibiting its progress and competitiveness. It is expected, however, that the application of AI, ML and autonomous technologies (AT) will revolutionise the industry, changing its fortunes, increasing its global competitiveness, attracting investment and contributing to its longevity. Given these benefits, this research sought to investigate the implications of implementing AI, ML and AT in the mining industry of South Africa. These technologies are considered novel, especially in the mining industry, making a qualitative study appropriate for assessing how their implementation is received by the industry, including perceptions and potential impacts. Key findings of the study indicate that these technologies have the capacity to change the trajectory of the South African mining industry by dealing with issues of safety, costs, labour and efficiency. There is also an opportunity to pursue additional resources locked in pillars, at depth and in dangerous working conditions arising from geological complexities. However, capital costs, the nature of narrow tabular ore bodies and the variability of conditions are found to be some of the factors inhibiting implementation of these technologies; as a result, no mine has yet implemented any of them as a primary means of production. This research measured current perceptions and insights of industry stakeholders, the role of government, and the responses of mining companies and equipment manufacturers. It highlights areas of impact and challenges that will contribute to strategy development, in the process contributing to the industry's sustainability. It is important to consider the application of the Theory of Constraints, a detailed analysis that can assist mining companies in identifying inherent challenges so as to respond appropriately with the solutions offered by AI, ML and AT.