ETD Collection
Permanent URI for this collection: https://wiredspace.wits.ac.za/handle/10539/104
Please note: Digitised content is made available at the best possible quality, taking into consideration file size and the condition of the original item. These restrictions may sometimes affect the quality of the final published item. For queries regarding the content of the ETD collection, please contact the IR specialists by email or on Tel: 011 717 4652 / 1954.
For important information about Electronic Theses and Dissertations (ETD), see the Library Guide about ETD.
Search Results
11 results
Item: Prediction of Blast Vibrations from Quarries using Machine Learning Algorithms and Empirical Formulae (2019). Morena, Badisheng Isaac.
The aim of this study was, firstly, to use machine learning algorithms to predict Peak Particle Velocity (PPV) in order to optimise blasting layouts and reduce the risk of damaging surface structures. Empirical models developed by the United States Bureau of Mines (USBM) (1963) and by Ambraseys and Hendron (1968) were compared with the machine learning algorithms. The tests conducted were interpolation and extrapolation. Most of the data used in this report was obtained from the USBM's Bulletin 656. The data was analysed using qualitative and quantitative research methods. The Cubist machine learning model (Kuhn, 2018) performed best in the interpolation test, with a coefficient of determination (R²) of 83.39% and a root mean squared error (RMSE) and mean absolute error (MAE) of 10.64 and 7.30 respectively. The empirical models performed best in the extrapolation test, with an average R² of 88% and RMSE and MAE of 9.17 and 6.59 respectively. This research has shown the effectiveness of machine learning algorithms and empirical formulae in predicting PPV using historical data from different sites. However, explosive and geotechnical information was not available in the dataset, and it is therefore recommended that further research be conducted with this data.
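To make the empirical side of the item above concrete: the USBM predictor is conventionally written PPV = K(D/√Q)^(-β), with D the distance to the monitoring point, Q the charge mass per delay, and K and β site constants fitted by regression. Below is a minimal sketch, assuming NumPy, that fits the constants in log-log space; the monitoring records are hypothetical stand-ins, not data from Bulletin 656.

```python
# Fit the USBM square-root scaled-distance model PPV = K * SD**(-beta)
# by ordinary least squares in log-log space (SD = D / sqrt(Q)).
import numpy as np

def fit_usbm(distance_m, charge_kg, ppv_mm_s):
    """Return site constants (K, beta)."""
    sd = distance_m / np.sqrt(charge_kg)              # scaled distance
    A = np.vstack([np.ones_like(sd), np.log(sd)]).T   # ln PPV = ln K - beta ln SD
    coeffs, *_ = np.linalg.lstsq(A, np.log(ppv_mm_s), rcond=None)
    return np.exp(coeffs[0]), -coeffs[1]

def predict_ppv(K, beta, distance_m, charge_kg):
    return K * (distance_m / np.sqrt(charge_kg)) ** (-beta)

# hypothetical records: distance [m], charge per delay [kg], PPV [mm/s]
d = np.array([100.0, 150.0, 200.0, 300.0, 450.0])
q = np.array([50.0, 50.0, 75.0, 75.0, 100.0])
v = np.array([24.0, 13.5, 9.8, 5.1, 2.7])

K, beta = fit_usbm(d, q, v)
print(f"K = {K:.1f}, beta = {beta:.2f}")
print("PPV at 250 m with 60 kg/delay:", predict_ppv(K, beta, 250.0, 60.0))
```

The Ambraseys and Hendron variant differs only in using cube-root charge scaling (SD = D / Q^(1/3)), so the same fitting routine applies with one line changed.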
Item: Parallel algorithms for querying spatial properties in the protein data bank (2019). Selvan, Joshua.
Searching large protein databases for proteins with certain structural properties is expensive. This research explored the use of GPGPUs (General Purpose Graphical Processing Units) to speed up such structural queries. Brute-force and kd-tree spatial data structure algorithms were compared and benchmarked against non-GPU parallel algorithms to assess the effectiveness of using GPGPUs. This was done with the aim of increasing the speed at which queries against large protein databases can be completed, to help mitigate the effect of the increasing dataset sizes of current protein databases [56]. A set of parallel variations of range search algorithms was developed and implemented in the GPU programming language CUDA, and their performance times in completing batch range search jobs were compared against other parallel approaches such as multi-threading and message passing, to see whether the GPU approaches completed notably faster or slower than more traditional parallelised approaches. The results showed that GPGPUs can construct kd-trees far faster than other parallelised implementations, and that in most scenarios (excluding specific cases such as very low or zero result searches) the GPGPU approaches either matched or performed far better than the other parallelised approaches. When comparing different GPU algorithms, the complex GPU-based kd-tree algorithm performed similarly to a simple GPU brute-force range search. This highlighted the benefits of writing code which makes the most of the GPU's parallel architecture, as opposed to modifying efficient (recursive) algorithms to fit into those same GPU architectures. This implied that even though spatial data structures are effective ways of dealing with protein data, there are better returns on effort in writing code specifically for the GPU's inherently parallel architecture for initiatives which require algorithms to be developed from scratch.
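A minimal sketch of the brute-force baseline that the GPU approaches above are benchmarked against: batch range search over 3-D points (standing in for atom coordinates), written here with NumPy. The point counts, query centres and radii are illustrative assumptions; the thesis's CUDA kd-tree kernels are not reproduced.

```python
# Brute-force batch range search: for each query sphere, return the
# indices of all 3-D points that fall inside it.
import numpy as np

def batch_range_search(points, centres, radii):
    """points: (n, 3); centres: (m, 3); radii: (m,)."""
    results = []
    for c, r in zip(centres, radii):
        d2 = np.sum((points - c) ** 2, axis=1)   # squared distance to centre
        results.append(np.nonzero(d2 <= r * r)[0])
    return results

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 100.0, size=(100_000, 3))  # stand-in atom positions
centres = rng.uniform(0.0, 100.0, size=(8, 3))
radii = np.full(8, 5.0)

for i, hits in enumerate(batch_range_search(points, centres, radii)):
    print(f"query {i}: {hits.size} points in range")
```

Each query is O(n), so a batch costs O(mn); kd-tree approaches trade a build phase for much cheaper individual queries, which is exactly the trade-off the dissertation measures on GPU hardware.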
Item: Improving the convergence rate of the iterative parity check transformation algorithm decoder for Reed-Solomon codes (2018). Brookstein, Peter C.
This master's by research dissertation contributes to research in the field of telecommunications, with a focus on forward error correction and on improving an iterative Reed-Solomon decoder known as the Parity-check Transformation Algorithm (PTA). Previous work in this field has focused on improving the runtime parameters and stopping conditions of the algorithm in order to reduce its computational complexity. In this dissertation, a different approach is taken by modifying the algorithm to more effectively utilise the soft-decision channel information provided by the demodulator. Modifications drawing inspiration from the Belief Propagation (BP) algorithm used to decode Low-Density Parity-Check (LDPC) codes are successfully implemented and tested. In addition to the selection of potential codeword symbols, these changes make use of soft channel information to calculate dynamic weighting values. These dynamic weights are further used to modify the intrinsic reliability of the selected symbols after each iteration. Improvements to both the Symbol Error Rate (SER) performance and the rate of convergence of the decoder are quantified using computer simulations implemented in MATLAB and GNU Octave. A deterministic framework for executing these simulations is created and utilised to ensure that all results are reproducible and can be easily audited. Comparative simulations are performed between the modified algorithm and the PTA in its most effective known configuration (with its parameter set to 0.001). Results of simulations decoding half-rate RS(15,7) codewords over a 16-QAM AWGN channel show a more than 50-fold reduction in the number of operations required by the modified algorithm to converge on a valid codeword. This is achieved while simultaneously observing a coding gain of 1 dB for symbol error rates between 10⁻² and 10⁻⁴.

Item: Threshold based multi-bit flipping decoding of binary LDPC codes (2017). Masunda, Kennedy Tohwechipi Fudu.
There has been a surge in the demand for high-speed, reliable communication infrastructure in the last few decades. Advanced technology, namely the internet, has transformed the way people live and how they interact with their environment. The Internet of Things (IoT) has been a very big phenomenon and continues to transform infrastructure in the home and workplace. All these developments are underpinned by the availability of cost-effective, reliable and error-free communication services. A perfect and reliable communication channel through which to transmit information does not exist. Telecommunication channels are often characterised by random noise and unpredictable disturbances that distort information or result in the loss of information. The need for reliable, error-free communication has resulted in advanced research work in the field of Forward Error Correction (FEC). Low-density parity-check (LDPC) codes, discovered by Gallager in 1963, provide excellent error correction performance which is close to the vaunted Shannon limit when used with long block codes and decoded with the sum-product algorithm (SPA).

However, long block code lengths increase the decoding complexity exponentially, and this problem is exacerbated by the intrinsic complexity of the SPA and its approximate derivatives. This makes it impossible for the SPA to be implemented in any practical communication device. Bit flipping LDPC decoders, whose error correction performance pales in comparison to that of the SPA, have been devised to counter the disadvantages of the SPA. Even though the bit flipping algorithms do not perform as well as the SPA, their exceedingly low complexity makes them attractive for practical implementation in high-speed communication devices. Thus, a lot of research has gone into the design and development of improved bit flipping algorithms. This research work analyses and focuses on the design of improved multi-bit flipping algorithms, which converge faster than single-bit flipping algorithms. The aim of the research is to devise methods with which to obtain thresholds that can be used to identify erroneous sections of a given codeword so that they can be corrected. Two algorithms that use multiple thresholds are developed during the course of this research. The first algorithm uses multiple adaptive thresholds, while the second uses multiple near-optimal SNR-dependent fixed thresholds to identify erroneous bits in a codeword. Both algorithms use soft information modification to further improve the decoding performance. Simulations show that the use of multiple adaptive or near-optimal SNR-dependent fixed thresholds improves the bit error rate (BER) and frame error rate (FER) correcting performance and also decreases the average number of iterations (ANI) required for convergence. The proposed algorithms are also investigated in terms of quantisation for practical applications in communication devices. Simulations show that the bit length of the quantiser as well as the quantisation strategy (uniform or non-uniform quantisation) is very important, as it affects the decoding performance of the algorithms significantly.
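A toy sketch of the threshold idea in the item above: every bit whose count of failed parity checks reaches a threshold is flipped in the same iteration, which is what lets a multi-bit flipping decoder converge in fewer iterations than single-bit flipping. The small Hamming parity-check matrix, the fixed threshold and the fallback rule are illustrative stand-ins, not the dissertation's adaptive or SNR-dependent thresholds.

```python
# Threshold-based multi-bit flipping over GF(2): flip all bits whose
# number of unsatisfied parity checks meets the threshold.
import numpy as np

def multi_bit_flip_decode(H, y, threshold, max_iter=50):
    """H: (m, n) parity-check matrix; y: hard-decision received word."""
    x = y.copy()
    for it in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x, it                              # valid codeword found
        fail_counts = H.T.dot(syndrome)               # failed checks per bit
        flip = fail_counts >= threshold               # possibly several bits
        if not flip.any():
            flip = fail_counts == fail_counts.max()   # fallback: worst bits
        x[flip] ^= 1
    return x, max_iter

# (7,4) Hamming code standing in for a real sparse LDPC matrix; with a
# long sparse H, several bits typically cross the threshold per iteration.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[3] ^= 1                      # one bit error in the all-zero codeword
decoded, iters = multi_bit_flip_decode(H, received, threshold=3)
print(decoded, "corrected after", iters, "iteration(s)")
```

Choosing the threshold is the crux: too low and correct bits are flipped along with the wrong ones, too high and nothing is flipped, which is why the dissertation derives adaptive and SNR-dependent values instead of a fixed constant.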
Item: Soft-decision decoding of permutation codes in AWGN and fading channels (2017). Kolade, Oluwafemi Ibrahim.
Permutation codes provide the required redundancy for error correction in a noisy communication channel. Combined with MFSK modulation, the outcome produces an efficient system reliable in combating background and impulse noise in the communication channel. Part of this can be associated with how the redundancy scales up the number of frequencies used in transmission. Permutation coding has also been shown to be a good candidate for error correction in harsh channels such as the powerline communication channel. Extensive work has been done to construct permutation codebooks, but existing decoding algorithms become impractical for large codebook sizes. This is because the algorithms need to compare the received codeword with all the codewords in the codebook used in encoding. This research therefore designs an efficient soft-decision decoder for permutation codes. The decoder's decision mechanism does not require lookup comparison with all the codewords in the codebook. The code construction technique that derives the codebook is also irrelevant to the decoder. Results compare the decoding algorithm with hard-decision decoding plus envelope detection in Additive White Gaussian Noise (AWGN) and Rayleigh fading channels. The results show that, with fewer iterations, improved error correction performance is achieved for high-rate codes. Lower-rate codes require additional iterations for significant error correction performance. The decoder also requires much less computational complexity compared with existing decoding algorithms.

Item: An automatic controller tuning algorithm (1991). Christodoulou, Michael A.
The report describes the design of an algorithm which can be used for automatic controller tuning purposes. It uses an on-line parameter estimator and a pole assignment design method. The resulting control law is formulated to approximate a proportional-integral (PI) industrial controller. The development of the algorithm is based on the delta operator. Some implementation aspects such as covariance resetting, dead zone and signal conditioning are also discussed. Robust stability and performance are two issues that govern the design approach. Additionally, transient and steady-state system response criteria from the time and frequency domains are utilised. The design work is substantiated with the use of simulation and real plant tests.

Item: Raster to vector conversion in a local, exact and near optimal manner (1991). Carter, John Andrew.
Remote sensing can be used to produce maps of land cover, but to be of use to the GIS community these maps must first be vectorized in an intelligent manner. Existing algorithms suffer from the defects of being slow, memory intensive and producing vast quantities of very short vectors. Furthermore, if these vectors are thinned via standard algorithms, errors are introduced. The process of vectorizing raster maps is subject to major ambiguities; thus an infinite family of vector maps corresponds to each raster map. This dissertation presents an algorithm for converting raster maps in a rapid manner to accurate vector maps with a minimum of vectors. The algorithm converts raster maps to vector maps using local information only (a two-by-two neighbourhood). The method is "exact" in the sense that rasterizing the resulting polygons would produce exactly the same raster map, pixel for pixel. The method is "near optimal" in that it produces, in a local sense, the "exact" vector map having the least number of vectors. The program is built around a home-grown Object Oriented Programming System (OOPS) for the C programming language. The main features of the OOPS system (called OopCdaisy) are virtual and static methods, polymorphism, generalised containers, container indices and thorough error checking. The following general-purpose objects are implemented with a large number of sophisticated methods: stacks, LIFO lists, scannable containers with indices, trees, and 2D objects like points, lines, etc.
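A much-simplified sketch of the local, exact flavour of the raster-to-vector item above: "crack" edges between unlike pixels are found with purely local comparisons and merged into maximal straight segments, so the extracted boundary matches the raster pixel for pixel. Assembling closed polygons and minimising the global vector count, the substance of the dissertation, is omitted, and the demo raster is invented.

```python
# Extract pixel-boundary ("crack") segments from a label raster and merge
# collinear unit edges into maximal straight vectors.
import numpy as np

def merged_runs(mask):
    """Yield (row, start, end) for each maximal True-run in each row of mask."""
    for i, row in enumerate(mask):
        c, w = 0, len(row)
        while c < w:
            if row[c]:
                start = c
                while c < w and row[c]:
                    c += 1
                yield i, start, c
            else:
                c += 1

def boundary_vectors(raster):
    """Segments in (row, col) crack coordinates (grid lines between pixels)."""
    r = np.asarray(raster)
    horiz = [((i + 1, a), (i + 1, b))                 # crack below pixel row i
             for i, a, b in merged_runs(r[:-1, :] != r[1:, :])]
    vert = [((a, j + 1), (b, j + 1))                  # crack right of pixel col j
            for j, a, b in merged_runs((r[:, :-1] != r[:, 1:]).T)]
    return horiz + vert

demo = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [2, 2, 2, 1]])
for seg in boundary_vectors(demo):
    print(seg)
```

Because each comparison looks at only two adjacent pixels, the pass is local in the same spirit as the dissertation's two-by-two neighbourhood, and run-merging is what keeps the vector count down.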
Item: An online adaptive learning algorithm for optimal trade execution in high-frequency markets (2016). Hendricks, Dieter.
Automated algorithmic trade execution is a central problem in modern financial markets; however, finding and navigating optimal trajectories in this system is a non-trivial task. Many authors have developed exact analytical solutions by making simplifying assumptions regarding governing dynamics; however, for practical feasibility and robustness, a more dynamic approach is needed to capture the spatial and temporal system complexity and adapt as intraday regimes change. This thesis aims to consolidate four key ideas: 1) the financial market as a complex adaptive system, where purposeful agents with varying system visibility collectively and simultaneously create and perceive their environment as they interact with it; 2) spin glass models as a tractable formalism to model phenomena in this complex system; 3) the multivariate Hawkes process as a candidate governing process for limit order book events; and 4) reinforcement learning as a framework for online, adaptive learning. Combined with the data and computational challenges of developing an efficient, machine-scale trading algorithm, we present a feasible scheme which systematically encodes these ideas.

We first determine the efficacy of the proposed learning framework, under the conjecture of approximate Markovian dynamics in the equity market. We find that a simple lookup-table Q-learning algorithm, with discrete state attributes and discrete actions, is able to improve post-trade implementation shortfall by adapting a typical static arrival-price volume trajectory with respect to prevailing market microstructure features streaming from the limit order book. To enumerate a scale-specific state space whilst avoiding the curse of dimensionality, we propose a novel approach to detect the intraday temporal financial market state at each decision point in the Q-learning algorithm, inspired by the complex adaptive system paradigm. A physical analogy to the ferromagnetic Potts model at thermal equilibrium is used to develop a high-speed maximum likelihood clustering algorithm, appropriate for measuring critical or near-critical temporal states in the financial system. State features are studied to extract time-scale-specific state signature vectors, which serve as low-dimensional state descriptors and enable online state detection. To assess the impact of agent interactions on the system, a multivariate Hawkes process is used to measure the resiliency of the limit order book with respect to liquidity-demand events of varying size. By studying the branching ratios associated with key quote replenishment intensities following trades, we ensure that the limit order book is expected to be resilient with respect to the maximum permissible trade executed by the agent. Finally, we present a feasible scheme for unsupervised state discovery, state detection and online learning for high-frequency quantitative trading agents faced with a multi-featured, asynchronous market data feed. We provide a technique for enumerating the state space at the scale at which the agent interacts with the system, incorporating the effects of a live trading agent on limit order book dynamics into the market data feed, and hence the perceived state evolution.
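A minimal lookup-table Q-learning sketch of the kind named in the item above: discrete states, discrete actions, epsilon-greedy exploration and the standard one-step update. The state tuples, action set and reward below are placeholder assumptions; in the thesis the states are built from limit order book features and the actions adapt the volume trajectory of child orders.

```python
# Tabular Q-learning with an epsilon-greedy policy.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = [0, 1, 2]                       # e.g. trade less / as planned / more

Q = defaultdict(float)                    # Q[(state, action)] -> value estimate

def choose_action(state):
    if random.random() < EPSILON:         # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# hypothetical episode: a state is a (time-bucket, spread-bucket) tuple
state = (0, 1)
for t in range(1, 10):
    a = choose_action(state)
    reward = -abs(a - 1) * 0.01           # toy cost for deviating from plan
    next_state = (t, random.randint(0, 2))
    update(state, a, reward, next_state)
    state = next_state
```

The lookup table is exactly why the thesis needs a compact, scale-specific state space: every extra state attribute multiplies the number of (state, action) cells that must be visited before the value estimates become reliable.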
Item: A fully three-dimensional heuristic algorithm for container packing (2016). Aspoas, Sean Graham.
We present a new three-dimensional container-packing algorithm. The algorithm is truly three-dimensional, thus overcoming the limitations of layering algorithms, especially when a large number of parcel types is used. The algorithm sequentially places parcels into the container using localised heuristic information, and makes use of a balanced tree to store potential packing positions. The result is an algorithm with time complexity O(kn log n), where k is the number of parcel types and n is the maximum number of parcels that can be placed. Test results, including a comparative test, are very favourable, and show that the algorithm's performance actually increases as the number of parcel types is increased. This is a direct result of the three-dimensional algorithm facilitating the utilisation of all useful packing positions using the variety of parcel sizes available.

Item: Modifications to the symbol wise soft input parity check transformation decoding algorithm (2016). Genga, Yuval Odhiambo.
Reed-Solomon codes are very popular codes used in the field of forward error correction due to their error-correcting capabilities. Thus, a lot of research has been dedicated to the development of decoding algorithms for this class of code. [Abbreviated abstract. Open document to view full version.]
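Returning to the container-packing item above, a greedy sketch of the sequential-placement idea: candidate positions sit in a priority queue (Python's heapq standing in for the balanced tree), each parcel is placed at the first feasible position, and the corners of a placed parcel spawn new candidates. The heuristic scoring, parcel rotations and parcel-type handling of the actual algorithm are omitted, and the container and parcel sizes are invented.

```python
# Greedy sequential placement of axis-aligned parcels, no rotation.
import heapq

def overlaps(a, b):
    """Boxes are (x, y, z, dx, dy, dz); strict AABB overlap test."""
    return all(a[i] < b[i] + b[i + 3] and b[i] < a[i] + a[i + 3] for i in range(3))

def pack(container, parcels):
    """container: (DX, DY, DZ); parcels: list of (dx, dy, dz)."""
    placed = []
    heap = [(0, 0, 0)]                    # (z, y, x): lowest positions pop first
    for dx, dy, dz in parcels:
        rejected = []
        while heap:
            z, y, x = heapq.heappop(heap)
            box = (x, y, z, dx, dy, dz)
            inside = (x + dx <= container[0] and y + dy <= container[1]
                      and z + dz <= container[2])
            if inside and not any(overlaps(box, p) for p in placed):
                placed.append(box)
                # a placed parcel's corners become new candidate positions
                for cand in ((z, y, x + dx), (z, y + dy, x), (z + dz, y, x)):
                    heapq.heappush(heap, cand)
                break
            rejected.append((z, y, x))    # may still suit a later parcel
        for pos in rejected:
            heapq.heappush(heap, pos)
    return placed

container = (10, 10, 10)
parcels = [(5, 5, 5), (5, 5, 5), (10, 5, 5), (5, 10, 5)]
for box in pack(container, parcels):
    print(box)
```

The balanced tree in the dissertation plays the role of this queue, giving logarithmic insertion and lookup of packing positions, which is what underlies the O(kn log n) complexity quoted above.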