ETD Collection

Permanent URI for this collection: https://wiredspace.wits.ac.za/handle/10539/104


Please note: Digitised content is made available at the best possible quality, taking into consideration file size and the condition of the original item. These restrictions may sometimes affect the quality of the final published item. For queries regarding the content of the ETD collection, please contact the IR specialists by email or by telephone: 011 717 4652 / 1954

Follow the link below for important information about Electronic Theses and Dissertations (ETDs):

Library Guide about ETD


Search Results

Now showing 1 - 6 of 6
  • Item
    Effective impact prediction: how accurate are predicted impacts in EIAs?
    (2017) Molefe, Noella Madalo
An Environmental Impact Assessment (EIA) is an instrument used to limit unexpected and negative effects of proposed developments on the environment. Much experience has been gained internationally, but the lack of follow-up after the EIA is prepared is one of the major weaknesses of these assessments. It is therefore very important to follow up on development projects and observe their effects on the environment after the go-ahead has been given, so that EIA quality may be improved. There is often a significant difference between predicted impacts and actual impacts: sometimes the predicted impacts do not occur, or new impacts arise which were not predicted in the Environmental Impact Assessment Reports (EIRs). The aim of this study was to assess the accuracy of the impacts predicted in the EIRs compiled for three large-scale Eskom projects currently under execution in the Mpumalanga, Limpopo and KwaZulu-Natal provinces, by comparing them to the actual impacts that occurred on site. The EIA follow-up process was used to assess the influence that the EIA may have on large-scale projects and ultimately the effectiveness of the EIA process as a whole. A procedure developed by Wilson (1998) was used to follow up on the selected projects because the method allowed comparisons between the actual and predicted impacts to be made and discrepancies in the EIRs to be identified. Recent audit reports, aerial photographs and interviews were all used to identify actual impact occurrence. Of the impacts which actually occurred, 91% occurred as predicted (OP) and 9% occurred but were not predicted (ONP). The majority of impacts omitted from the reports were hydrological (27%) and air quality impacts (25%). These unexpected impacts were most probably overlooked because they are site-specific, temporary in nature and would not cause significant environmental damage.
Of all the impacts predicted in the reports, 85% were accurately predicted and 15% were not. The impacts inaccurately predicted were hydrological impacts (27%), flora and fauna impacts (7%) and other impacts (30%), which included soil pollution, fires and loss of agricultural potential. The inaccuracies could be a result of Environmental Impact Assessment Practitioners (EAPs) predicting a large number of impacts in the hope of lowering the risk of omitting impacts; sometimes, however, the impacts predicted do not occur in reality. Overall it can be concluded that the impact prediction accuracy of the three EIRs compiled for Eskom exceeds that reported in previous studies conducted nationally. The Eskom EIRs are highly accurate with regard to impact prediction, with minor discrepancies which can easily be rectified. Key words: Environmental Impact Assessment (EIA), Environmental Impact Assessment Reports (EIRs), Environmental Impact Assessment Practitioners (EAPs), EIA follow-up, discrepancies.
  • Item
    Modelling and forecasting volatility of JSE sectoral indices: a Model Confidence Set exercise
(2014-07-29) Song, Matthew
Volatility plays an important role in option pricing and risk management, so it is crucial that volatility is modelled as accurately as possible in order to forecast with confidence. The challenge lies in selecting the ‘best’ model from the many available models and selection criteria. The Model Confidence Set (MCS) solves this problem by choosing a group of models that are equally good. A set of GARCH models was estimated for several JSE indices, and the MCS was used to trim the group of models to a subset of equally superior models. Using the Mean Squared Error to evaluate the relative performance of the MCS, GARCH (1,1) and Random Walk, it was found that the MCS, with an equally weighted combination of models, performed better than the GARCH (1,1) and Random Walk in instances where volatility in the returns data was high. In instances of low volatility in the returns, the GARCH (1,1) had superior 5-day forecasts, but the MCS performed better at horizons of 10 days and longer. The EGARCH (2,1) volatility model was selected by the MCS as the most superior model for five of the six indices. The Random Walk was shown to have better long-term forecasting performance.
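As a rough illustration of the class of models being compared, the GARCH (1,1) conditional variance follows the recursion sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}. The minimal sketch below filters a simulated return series through this recursion and scores it with a Mean Squared Error loss against squared returns as the volatility proxy; the parameter values and data are hypothetical illustrations, not estimates from the thesis.

```python
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """Filter a return series through the GARCH(1,1) variance recursion.

    Parameter values are illustrative, not fitted to JSE data.
    """
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # initialise at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(0)
r = rng.normal(0, 0.01, 500)      # simulated daily returns, not index data
sigma2 = garch11_variance(r)

# MSE against squared returns, the (noisy) proxy commonly used to rank
# volatility forecasts under a squared-error loss.
mse = np.mean((sigma2 - r ** 2) ** 2)
```

A real exercise would estimate (omega, alpha, beta) by maximum likelihood per index and feed the competing models' losses into the MCS procedure.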
  • Item
    Explaining returns in property markets using Taylor rule fundamentals: Evidence from emerging markets
(2014-07-15) Gumede, Ofentse
This study set out to investigate the relationship between returns in the residential property markets and two key economic variables: output and interest rates. The main focus was on the short-term rates path, how it is influenced by the Taylor rule fundamentals and, in turn, its effect on the returns in the property markets within the developing countries of South Africa, Bulgaria, Lithuania and the Czech Republic. A secondary focus was on building a model that can be further developed into a full forecasting model of returns in the residential property markets. Output was found to be a strong driver of returns in the residential property markets across all four countries: real changes in economic activity feed into the residential property markets and drive returns. Output can therefore be incorporated into a forecasting framework for returns in the residential property markets within these countries. The short-term rate paths within the countries studied were found to be consistent with the Taylor rule, but with heavy short-run deviations from the rule: short-term rates deviated from the rule in the short run but showed a tendency to revert to it in subsequent periods. Returns and prices in the property markets were driven by the short-term rates in only two of the emerging markets. For these countries, the link between rates and returns meant there was also a link between monetary policy and returns in the property sector. Similar to the Taylor rule process, property returns in these two emerging markets were found to have short-run deviations which could not be explained by interest rates and output. For the purposes of building a fully fledged forecasting model, this model must be expanded to include other explanatory factors; adding the risk premium as an explanatory variable could be the starting point.
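For context, the Taylor rule benchmarked against here sets the short-term rate from inflation and the output gap. The sketch below uses Taylor's (1993) original illustrative coefficients (0.5 on each gap, a 2% equilibrium real rate and a 2% inflation target); these are not the coefficients estimated for the four countries in the study.

```python
def taylor_rule_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Implied short-term nominal rate under the Taylor (1993) rule.

    All inputs and the result are in percent. Coefficients are Taylor's
    original illustrative values, not estimates from this study.
    """
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# e.g. 5% inflation with a -1% output gap implies
# 2 + 5 + 0.5*(5 - 2) + 0.5*(-1) = 8.0%
rate = taylor_rule_rate(inflation=5.0, output_gap=-1.0)
```

The study's finding of heavy short-run deviations amounts to the observed policy rate differing from this implied rate period by period while reverting to it over time.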
  • Item
    An empirical evaluation of the Altman (1968) failure prediction model on South African JSE listed companies
    (2013-03-18) Rama, Kavir D.
Credit has become very important in the global economy (Cynamon and Fazzari, 2008). The Altman (1968) failure prediction model, or derivatives thereof, is often used in the identification and selection of financially distressed companies, as it is recognized as one of the most reliable models for predicting company failure (Eidleman, 1995). Failure of a firm can cause substantial losses to creditors and shareholders; it is therefore important to detect company failure as early as possible. This research report empirically tests the Altman (1968) failure prediction model on 227 South African JSE listed companies, using data from the 2008 financial year to calculate the Z-score within the model and measuring success or failure of firms in the 2009 and 2010 years. The results indicate that the Altman (1968) model is a viable tool in predicting company failure for firms with positive Z-scores, and where Z-scores do not fall into the range of uncertainty as specified. The results also suggest that the model is not reliable when the Z-scores are negative or when they are in the range of uncertainty (between 1.81 and 2.99). If one is able to predict firm failure in advance, it should be possible for management to take steps to avert such an occurrence (Deakin, 1972; Keasey and Watson, 1991; Platt and Platt, 2002).
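The original Altman (1968) Z-score is a fixed linear combination of five accounting ratios, with the cut-offs quoted in the abstract. The sketch below uses the published coefficients; the sample ratio values are invented for illustration and are not drawn from the 227 JSE companies studied.

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman (1968) Z-score.

    Inputs are working capital, retained earnings, EBIT and sales, each
    scaled by total assets, plus market value of equity over total
    liabilities. Coefficients are the original published values.
    """
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 0.999 * sales_ta)

def classify(z):
    """Apply the cut-offs used in the study."""
    if z > 2.99:
        return "safe"
    if z < 1.81:
        return "distressed"
    return "grey zone"  # range of uncertainty: model unreliable here

# Hypothetical firm, ratios invented for illustration.
z = altman_z(wc_ta=0.2, re_ta=0.3, ebit_ta=0.15, mve_tl=1.2, sales_ta=1.5)
```

The report's conclusion maps onto the `classify` branches: predictions are viable for clearly positive scores outside the grey zone and unreliable inside it or for negative scores.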
  • Item
    Can forward interest rates predict future spot rates in South Africa? A test of the pure expectations hypothesis and market efficiency in the South African government bond market
    (2012-07-04) Loukakis, Andrea
The pure expectations hypothesis says that forward rates, implied off a yield curve, are unbiased predictors of future spot rates, which implies that forward rates should provide reliable forecasts of future spot rates. This study set out to see if the theory behind the pure expectations hypothesis holds in a South African context. If it does hold, it can have an impact on real-world applications such as bond trading strategies and the setting of monetary policy. To test the theory behind the pure expectations hypothesis, South African government bond data for the short end of the yield curve was used. Various regression tests were run, testing mainly for forward rate forecast accuracy, the relationship between forecast errors and changes in the spot rate, the presence of liquidity premiums and market efficiency. The results indicated that forecast accuracy and the relationship between forecast errors and changes in the spot rate were contrary to the theory behind the pure expectations hypothesis. A liquidity premium was found to exist, and there appeared to be weak-form market efficiency. These results led to the conclusion that there is very little evidence to support the theory behind the pure expectations hypothesis, mainly due to the presence of a liquidity premium. The pure expectations hypothesis does not seem to be of any significant use within real-world applications.
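The forward rate being tested is the one implied by no-arbitrage from two points on the zero curve. A minimal sketch, with hypothetical annualised zero rates rather than the South African bond data used in the study:

```python
def implied_forward(spot_short, spot_long, t_short, t_long):
    """Forward rate from t_short to t_long implied by two zero rates.

    Annual compounding; times in years. Under the pure expectations
    hypothesis this forward rate would be an unbiased predictor of the
    spot rate realised at t_short.
    """
    growth = (1 + spot_long) ** t_long / (1 + spot_short) ** t_short
    return growth ** (1.0 / (t_long - t_short)) - 1

# Hypothetical curve: 1y spot at 6%, 2y spot at 7%
# -> the implied 1-year rate starting in 1 year is roughly 8%.
f = implied_forward(0.06, 0.07, 1, 2)
```

The study's regressions then compare forwards like `f` against the spot rates subsequently realised; a systematic positive gap is the liquidity premium it finds.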
  • Item
    Time series analysis using fractal theory and online ensemble classifiers with application to stock portfolio optimization
    (2007-10-10T07:55:20Z) Lunga, Wadzanai Dalton
The neural network method is heavily researched and used in applications within the engineering field for various purposes, ranging from process control to biomedical applications. The success of neural networks (NN) in engineering applications, e.g. object tracking and face recognition, has motivated their application to the finance industry. In the financial industry, time series data is used to model economic variables. As a result, finance researchers, portfolio managers and stockbrokers have taken an interest in applying NN to model the non-linear problems they face in their practice. NN facilitate the prediction of stocks through their ability to accurately learn complex patterns and characterize these patterns as simple equations. In this research, a methodology that uses fractal theory and an NN framework to model stock market behaviour is proposed and developed. The time series analysis is carried out using the proposed approach, with application to modelling the future directional movement of the Dow Jones Average Index. A methodology is developed to establish self-similarity of time series and long-memory effects, classifying the time series signal as persistent, random or non-persistent using the rescaled range analysis technique. A linear regression technique is used for the estimation of the required parameters, and an incremental online NN algorithm is implemented to predict the directional movement of the stock. An iterative fractal analysis technique is used to select the required signal intervals using the approximated parameters. The selected data is later combined to form a signal of interest and then passed to the ensemble of classifiers. The classifiers are modelled using a neural-network-based algorithm. The performance of the final algorithm is measured based on accuracy in predicting the direction of movement and on the algorithm's confidence in its decision-making.
The improvement within the final algorithm is assessed by comparing results from two different models: the first model is implemented without fractal analysis and the second model is implemented with the aid of the fractal analysis technique. The results of the first NN model were published in the Lecture Notes in Computer Science 2006 by Springer. The second NN model incorporated a fractal theory technique; the results from this model show a great deal of improvement when classifying the next day's stock direction of movement. A summary of these results was submitted to the Australian Joint Conference on Artificial Intelligence 2006 for publishing. Limitations on the sample size, as well as problems encountered with the proposed approach, are also outlined in subsequent sections. This document also outlines recommendations that can be implemented as further steps to advance and improve the proposed approach in future work.
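The rescaled range step described above can be sketched as follows: compute the R/S statistic over windows of increasing size and regress log(R/S) on log(n) to estimate the Hurst exponent H, classifying the series as persistent (H > 0.5), random (H near 0.5) or anti-persistent (H < 0.5). This is a simplified estimator without the small-sample corrections a full implementation would apply, run on simulated white noise rather than the Dow Jones series used in the thesis.

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic for one window of the series."""
    y = x - x.mean()            # mean-adjusted series
    z = np.cumsum(y)            # cumulative deviate series
    r = z.max() - z.min()       # range of cumulative deviations
    s = x.std()                 # standard deviation of the window
    return r / s

def hurst(x, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate H as the slope of log(R/S) against log(n)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        windows = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs = np.mean([rescaled_range(w) for w in windows])
        log_n.append(np.log(n))
        log_rs.append(np.log(rs))
    slope, _ = np.polyfit(log_n, log_rs, 1)  # linear regression step
    return slope

rng = np.random.default_rng(1)
h = hurst(rng.normal(size=2048))  # white noise: H should be near 0.5
```

In the thesis's pipeline, windows classified as persistent by this kind of analysis are the ones selected and passed on to the ensemble of NN classifiers.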