3. Electronic Theses and Dissertations (ETDs) - All submissions
Permanent URI for this community: https://wiredspace.wits.ac.za/handle/10539/45
Search Results: 10 items
Item: Techniques to improve iterative decoding of linear block codes (2019-10)
Author: Genga, Yuval Odhiambo

In the field of forward error correction, the development of decoding algorithms with high error correction performance and tolerable complexity has been of great interest for the reliable transmission of data through a noisy channel. The focus of the work in this thesis is to exploit techniques used in forward error correction to develop an iterative soft-decision decoding approach that yields high error correction performance at a tolerable computational complexity cost compared to existing decoding algorithms. The decoding technique developed in this research takes advantage of the systematic structure exhibited by linear block codes to implement an information set decoding approach that corrects errors in the received vector output by the channel. The proposed approach improves the iterative performance of the algorithm because the decoder is only required to detect and correct a subset of the symbols from the received vector. These symbols are referred to as the information set. The information set, which matches the length of the message, is then used to decode the entire codeword. The decoding approach presented in the thesis is tested on both Reed-Solomon and Low Density Parity Check codes. The implementation of the decoder differs for the two classes of linear block code because of their different structural properties. Reed-Solomon codes have the advantage of a row rank inverse property, which enables the construction of a partial systematic structure using any set of columns in the parity check matrix. This property provides a more direct implementation for finding the information set required by the decoder based on the soft reliability information. However, the dense structure of the parity check matrix of Reed-Solomon codes presents challenges in terms of error detection and correction for the proposed decoding approach. To counter this problem, a bit-level implementation of the decoding technique for Reed-Solomon codes is presented in the thesis. A parity check matrix extension technique is also proposed. This technique adds low weight codewords from the dual code, with weight matching the minimum distance of the code, to the parity check matrix during the decoding process. This adds sparsity to the symbol-level implementation of the proposed decoder, which helps with the efficient exchange of soft information during the message passing stage. Most high performance Low Density Parity Check codes proposed in the literature lack a systematic structure, which presents a challenge for the proposed decoding approach in obtaining the information set. A systematic construction for a Quasi-Cyclic Low Density Parity Check code is therefore also presented in this thesis so as to allow for information set decoding. The proposed construction matches the error correction performance of a high performance Quasi-Cyclic Low Density Parity Check matrix design, while having the benefit of a low complexity construction for the encoder. In addition, this thesis proposes a stopping condition for iterative decoding algorithms based on the information set decoding technique. This stopping condition is applied to other high performance iterative decoding algorithms for both Reed-Solomon codes and Low Density Parity Check codes so as to improve their iterative performance. This improves the overall efficiency of the decoding algorithms.
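The abstract above describes selecting an information set (the k most reliable positions from which the full codeword can be reconstructed) and re-encoding from it. The sketch below illustrates that core idea in a deliberately simplified form: a single hard re-encoding pass for a binary linear code with a 0/1 generator matrix G, rather than the thesis's iterative, symbol-level soft-decision algorithm. The names `G` and `llr` and the greedy position selection are illustrative assumptions, not the thesis's notation.

```python
import numpy as np

def gf2_solve(A, b):
    """Solve x @ A = b over GF(2) for an invertible k x k matrix A."""
    k = A.shape[0]
    M = np.hstack([A.T % 2, (b % 2).reshape(-1, 1)]).astype(int)
    for col in range(k):
        piv = col + int(np.argmax(M[col:, col]))   # first row with a 1 in this column
        M[[col, piv]] = M[[piv, col]]
        for r in range(k):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return M[:, -1]

def information_set_decode(llr, G):
    """Hard re-encoding from the k most reliable independent positions.
    Simplified illustration only; the thesis algorithm is iterative and
    soft-decision, which is not reproduced here."""
    k, n = G.shape
    hard = (llr < 0).astype(int)           # hard decision: LLR >= 0 taken as bit 0
    order = np.argsort(-np.abs(llr))       # positions, most reliable first

    # Greedily pick k reliability-ordered positions whose columns of G are
    # linearly independent over GF(2).
    work, used, chosen = G % 2, [], []
    for pos in order:
        rows = [r for r in range(k) if r not in used and work[r, pos]]
        if not rows:
            continue                       # dependent on the columns already chosen
        p = rows[0]
        for r in range(k):
            if r != p and work[r, pos]:
                work[r] ^= work[p]
        used.append(p)
        chosen.append(pos)
        if len(chosen) == k:
            break

    m = gf2_solve(G[:, chosen], hard[chosen])  # recover the message bits
    return (m @ G) % 2                         # re-encode the full codeword estimate
```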
Item: Improving the convergence rate of the iterative parity check transformation algorithm decoder for Reed-Solomon codes (2018)
Author: Brookstein, Peter C.

This master's by research dissertation contributes to research in the field of telecommunications, with a focus on forward error correction and on improving an iterative Reed-Solomon decoder known as the Parity-check Transformation Algorithm (PTA). Previous work in this field has focused on improving the runtime parameters and stopping conditions of the algorithm in order to reduce its computational complexity. In this dissertation, a different approach is taken by modifying the algorithm to more effectively utilise the soft-decision channel information provided by the demodulator. Modifications drawing inspiration from the Belief Propagation (BP) algorithm used to decode Low-Density Parity-Check (LDPC) codes are successfully implemented and tested. In addition to the selection of potential codeword symbols, these changes make use of soft channel information to calculate dynamic weighting values. These dynamic weights are further used to modify the intrinsic reliability of the selected symbols after each iteration. Improvements to both the Symbol Error Rate (SER) performance and the rate of convergence of the decoder are quantified using computer simulations implemented in MATLAB and GNU Octave. A deterministic framework for executing these simulations is created and utilised to ensure that all results are reproducible and can be easily audited. Comparative simulations are performed between the modified algorithm and the PTA in its most effective known configuration (with its parameter set to 0.001). Results of simulations decoding half-rate RS(15,7) codewords over a 16-QAM AWGN channel show a more than 50-fold reduction in the number of operations required by the modified algorithm to converge on a valid codeword. This is achieved while simultaneously observing a coding gain of 1 dB for symbol error rates between 10^-2 and 10^-4.

Item: Threshold based multi-bit flipping decoding of binary LDPC codes (2017)
Author: Masunda, Kennedy Tohwechipi Fudu

There has been a surge in the demand for high speed, reliable communication infrastructure in the last few decades. Advanced technology, namely the internet, has transformed the way people live and how they interact with their environment. The Internet of Things (IoT) has been a very big phenomenon and continues to transform infrastructure in the home and workplace. All these developments are underpinned by the availability of cost-effective, reliable and error-free communication services. A perfect and reliable communication channel through which to transmit information does not exist. Telecommunication channels are often characterised by random noise and unpredictable disturbances that distort information or result in the loss of information. The need for reliable error-free communication has resulted in advanced research work in the field of Forward Error Correction (FEC). Low density parity check (LDPC) codes, discovered by Gallager in 1963, provide excellent error correction performance, close to the vaunted Shannon limit, when used with long block codes and decoded with the sum-product algorithm (SPA). However, long block code lengths increase the decoding complexity exponentially, and this problem is exacerbated by the intrinsic complexity of the SPA and its approximate derivatives. This makes it impossible for the SPA to be implemented in any practical communication device. Bit flipping LDPC decoders, whose error correction performance pales in comparison to that of the SPA, have been devised to counter the disadvantages of the SPA. Even though the bit flipping algorithms do not perform as well as the SPA, their exceedingly low complexity makes them attractive for practical implementation in high speed communication devices. Thus, a lot of research has gone into the design and development of improved bit flipping algorithms. This research work analyses and focuses on the design of improved multi-bit flipping algorithms, which converge faster than single-bit flipping algorithms. The aim of the research is to devise methods with which to obtain thresholds that can be used to determine erroneous sections of a given codeword so that they can be corrected. Two algorithms that use multiple thresholds are developed during the course of this research. The first algorithm uses multiple adaptive thresholds, while the second uses multiple near-optimal SNR-dependent fixed thresholds to identify erroneous bits in a codeword. Both algorithms use soft information modification to further improve the decoding performance. Simulations show that the use of multiple adaptive or near-optimal SNR-dependent fixed thresholds improves the bit error rate (BER) and frame error rate (FER) performance and also decreases the average number of iterations (ANI) required for convergence. The proposed algorithms are also investigated in terms of quantisation for practical applications in communication devices. Simulations show that the bit length of the quantiser, as well as the quantisation strategy (uniform or non-uniform), is very important, as it significantly affects the decoding performance of the algorithms.
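As background to the multi-bit flipping approach described above, the sketch below shows a generic threshold-based multi-bit flipping iteration: count the unsatisfied parity checks touching each bit, penalise by the bit's channel reliability, and flip every bit whose metric reaches a threshold. The fixed `threshold` argument and the particular flipping metric are simplifying assumptions; the thesis's adaptive and SNR-dependent thresholds and its soft information modification are not reproduced here.

```python
import numpy as np

def multi_bit_flip_decode(H, llr, threshold, max_iter=50):
    """Generic threshold-based multi-bit flipping for a binary LDPC code.
    H   : m x n parity-check matrix with 0/1 entries
    llr : channel log-likelihood ratios, positive values favour bit 0."""
    hard = (llr < 0).astype(int)            # initial hard decisions
    rel = np.abs(llr)                       # per-bit channel reliability
    for _ in range(max_iter):
        syndrome = (H @ hard) % 2
        if not syndrome.any():              # all parity checks satisfied: stop
            break
        # Flipping metric: unsatisfied checks touching the bit, penalised by
        # the bit's own channel reliability.
        unsat = syndrome @ H
        metric = unsat - rel
        flip = metric >= threshold          # flip every bit at or above threshold
        if not flip.any():                  # avoid stalling when nothing qualifies
            flip = metric == metric.max()
        hard[flip] ^= 1
    return hard
```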
Item: Soft-decision decoding of permutation codes in AWGN and fading channels (2017)
Author: Kolade, Oluwafemi Ibrahim

Permutation codes provide the required redundancy for error correction in a noisy communication channel. Combined with MFSK modulation, the result is an efficient system that is reliable in combating background and impulse noise in the communication channel. Part of this can be attributed to how the redundancy scales up the number of frequencies used in transmission. Permutation coding has also been shown to be a good candidate for error correction in harsh channels such as the powerline communication channel. Extensive work has been done on constructing permutation codebooks, but existing decoding algorithms become impractical for large codebook sizes. This is because the algorithms need to compare the received codeword with all the codewords in the codebook used in encoding. This research therefore designs an efficient soft-decision decoder for permutation codes. The decoder's decision mechanism does not require lookup comparison with all the codewords in the codebook, and the code construction technique that derives the codebook is irrelevant to the decoder. Results compare the decoding algorithm with hard-decision plus envelope detection in the Additive White Gaussian Noise (AWGN) and Rayleigh fading channels. The results show that, with fewer iterations, improved error correction performance is achieved for high-rate codes; lower rate codes require additional iterations for significant error correction performance. The decoder also requires much lower computational complexity than existing decoding algorithms.
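For context on why full-codebook comparison becomes impractical, the sketch below shows a baseline exhaustive soft decoder for permutation codes over M-FSK: every codeword is scored against the received envelope matrix and the best-scoring one is returned. This is the kind of codebook-wide lookup the thesis's decoder avoids; the toy codebook of all permutations of four frequencies and the variable names are illustrative assumptions.

```python
import numpy as np
from itertools import permutations

def codebook_ml_decode(envelopes, codebook):
    """Exhaustive soft decoding of a permutation code over M-FSK.
    envelopes : n x M matrix of detected envelope magnitudes
                (rows = time slots, columns = frequencies)
    codebook  : iterable of length-n tuples of frequency indices."""
    best_word, best_score = None, -np.inf
    for word in codebook:
        # Sum the envelopes at the frequencies this codeword would transmit.
        score = sum(envelopes[t, f] for t, f in enumerate(word))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Toy usage: all permutations of 4 frequencies as a (hypothetical) codebook.
M = 4
codebook = list(permutations(range(M)))
rng = np.random.default_rng(0)
envelopes = rng.random((M, M))
print(codebook_ml_decode(envelopes, codebook))
```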
Item: Analysis of bounded distance decoding for Reed Solomon codes (2017)
Author: Babalola, Oluwaseyi Paul

Bounded distance decoding of Reed Solomon (RS) codes involves finding a unique codeword if there is at least one codeword within the given distance. A corrupted message with a number of errors less than or equal to half the minimum distance corresponds to a unique codeword, and will therefore be decoded correctly by the minimum distance decoder. However, increasing the decoding radius to slightly more than half of the minimum distance may result in multiple codewords within the Hamming sphere. The list decoding and syndrome extension methods provide a maximum error correcting capability whereby the radius of the Hamming ball can be extended for low rate RS codes. In this research, we study the probability of having unique codewords for (7, k) RS codes when the decoding radius is increased from the error correcting capability t to t + 1. Simulation results show a significant effect of the code rate on the probability of having unique codewords. They also show that the probability of having a unique codeword for low rate codes is close to one.

Item: Investigation of the use of infinite impulse response filters to construct linear block codes (2016)
Author: Chandran, Aneesh

The work presented extends and contributes to research in error-control coding and information theory. The work focuses on the construction of block codes using an IIR filter structure. Although previous work in this area used FIR filter structures for error detection, these were inherently used in conjunction with other error-control codes; there has been no investigation into using IIR filter structures to create codewords, let alone into justifying their validity. In the research presented, linear block codes are created using IIR filters, and their error-correcting capabilities are investigated. The construction of short codes that achieve the Griesmer bound is shown. The potential to construct long codes is discussed, and it is shown how the construction is constrained by high computational complexity. The G-matrices for these codes are also obtained from a computer search; they are shown not to have a Quasi-Cyclic structure, and the codewords have been tested to show that they are not cyclic. Further analysis shows that IIR filter structures implement truncated cyclic codes, which are shown to be implementable using an FIR filter. The research also shows that the codewords created from IIR filter structures are valid, by decoding them with an existing iterative soft-decision decoder. This represents a unique and valuable contribution to the field of error-control coding and information theory.

Item: Comparison of code rate and transmit diversity in MIMO systems (2016)
Author: Churms, Duane

In order to compare low rate error correcting codes to MIMO schemes with transmit diversity, two systems with the same throughput are compared. A VBLAST MIMO system with (15, 5) Reed-Solomon coding is compared to an Alamouti MIMO system with (15, 10) Reed-Solomon coding. The latter is found to perform significantly better, indicating that transmit diversity is a more effective technique for minimising errors than reducing the code rate. The Guruswami-Sudan/Koetter-Vardy soft-decision decoding algorithm was implemented to allow decoding beyond the conventional error correcting bound of RS codes, and VBLAST was adapted to provide reliability information. Analysis is also performed to find the optimal code rate when using various MIMO systems.
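The claim above that the two systems have the same throughput can be checked with a short calculation, assuming two transmit antennas for both schemes (the antenna configuration is an assumption, not stated in the abstract): VBLAST carries two coded symbols per channel use at code rate 5/15, while Alamouti carries one coded symbol per channel use at code rate 10/15.

```python
from fractions import Fraction

vblast_streams = 2                     # VBLAST: one symbol per antenna per channel use
alamouti_symbol_rate = Fraction(1)     # Alamouti: 2 symbols over 2 uses -> rate 1

rs_rate_vblast = Fraction(5, 15)       # (15, 5) Reed-Solomon code
rs_rate_alamouti = Fraction(10, 15)    # (15, 10) Reed-Solomon code

throughput_vblast = vblast_streams * rs_rate_vblast           # info symbols per use
throughput_alamouti = alamouti_symbol_rate * rs_rate_alamouti

assert throughput_vblast == throughput_alamouti == Fraction(2, 3)
print(throughput_vblast, throughput_alamouti)                  # 2/3 and 2/3
```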
Item: Symbol level decoding of Reed-Solomon codes with improved reliability information over fading channels (2016)
Author: Ogundile, Olanyika Olaolu

Reliable and efficient data transmission has been the subject of much current research, especially in realistic channels such as Rayleigh fading channels. The focus of every new technique is to improve the transmission reliability and to increase the transmission capacity of communication links so that more information can be transmitted. Modulation schemes such as M-ary Quadrature Amplitude Modulation (M-QAM) and Orthogonal Frequency Division Multiplexing (OFDM) were developed to increase the transmission capacity of communication links without additional bandwidth expansion, and to reduce the design complexity of communication systems. However, due to the varying nature of communication channels, message transmission reliability is subject to a number of factors. These factors include the channel estimation techniques and the Forward Error Correction (FEC) schemes used in improving the message reliability. Innumerable channel estimation techniques have been proposed, independently and in combination with different FEC schemes, in order to improve the message reliability. The emphasis has been on improving channel estimation performance, bandwidth and power consumption, and the implementation time complexity of the estimation techniques. Of particular interest, FEC schemes such as Reed-Solomon (RS) codes, Turbo codes, Low Density Parity Check (LDPC) codes, Hamming codes, and permutation codes have been proposed to improve the message transmission reliability of communication links. Turbo and LDPC codes have been used extensively to combat the varying nature of communication channels, most especially in joint iterative channel estimation and decoding receiver structures. In this thesis, attention is focused on using RS codes to improve the message reliability of a communication link, because RS codes have a good capability of correcting random and burst errors and are useful in different wireless applications. This study concentrates on symbol level soft decision decoding of RS codes. In this regard, a novel symbol level iterative soft decision decoder for RS codes based on parity-check equations is developed. This Parity-check matrix Transformation Algorithm (PTA) is based on the soft reliability information derived from the channel output, which is used to perform syndrome checks in an iterative process. Performance analysis verifies that the developed PTA outperforms conventional RS hard decision decoding algorithms and the symbol level Koetter and Vardy (KV) RS soft decision decoding algorithm. In addition, this thesis develops an improved Distance Metric (DM) method of deriving reliability information over Rayleigh fading channels for combined demodulation with symbol level RS soft decision decoding algorithms. The newly proposed DM method incorporates the channel state information in deriving the soft reliability information over Rayleigh fading channels. Analysis verifies that this metric enhances the performance of symbol level RS soft decision decoders in comparison with the conventional method. Although the performance of the developed DM method is only verified in this thesis for symbol level RS soft decision decoders, it is applicable to any symbol level soft decision decoding FEC scheme. Furthermore, the performance of all FEC decoding schemes plummets over Rayleigh fading channels. This has engendered the development of joint iterative channel estimation and decoding receiver structures in order to improve the message reliability, most especially with Turbo and LDPC codes as the FEC schemes. As such, this thesis develops the first joint iterative channel estimation and Reed-Solomon decoding receiver structure. Essentially, the joint iterative channel estimation and RS decoding receiver is developed based on the existing symbol level soft decision KV algorithm. Consequently, the joint iterative channel estimation and RS decoding receiver is extended to the developed RS parity-check matrix transformation algorithm. The PTA provides design ease and flexibility, and lower computational time complexity, in an iterative receiver structure in comparison with the KV algorithm. Generally, the findings of this thesis are relevant to improving the message transmission reliability of a communication link with RS codes. For instance, they are pertinent to numerous data transmission technologies such as Digital Audio Broadcasting (DAB), Digital Video Broadcasting (DVB), Digital Subscriber Line (DSL), WiMAX, and long distance satellite communications. Equally, the developed, less computationally intensive and performance-efficient symbol level decoding algorithm for RS codes can be used in consumer technologies such as compact discs and digital versatile discs.
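The Distance Metric idea above, deriving symbol reliabilities that account for the channel state, can be illustrated with a generic distance-based reliability computation for a flat fading channel with a known gain. This is a textbook-style sketch under assumed names (`y`, `h`, `n0`, a 4-QAM constellation); the specific DM formulation proposed in the thesis is not reproduced here.

```python
import numpy as np

def symbol_reliabilities(y, h, constellation, n0):
    """Distance-based symbol reliabilities for one received sample over a
    flat fading channel with known complex gain h and noise variance n0."""
    d2 = np.abs(y - h * constellation) ** 2      # distances to faded candidates
    metric = np.exp(-d2 / n0)                    # Gaussian likelihoods
    return metric / metric.sum()                 # normalised reliability vector

# Toy usage with a unit-energy 4-QAM constellation (hypothetical values).
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
h = 0.8 * np.exp(1j * 0.3)                       # known fading coefficient
rng = np.random.default_rng(1)
y = h * constellation[2] + 0.1 * (rng.standard_normal() + 1j * rng.standard_normal())
print(symbol_reliabilities(y, h, constellation, n0=0.02))
```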
Item: Applicability of network coding with location based addressing over a simplified VANET model (2016)
Author: Hudson, Ashton

The design and implementation of network coding within a location based addressing algorithm for VANET has been investigated. Theoretical analysis of the network coding algorithm was carried out using a simplified topology called the ladder topology. The theoretical models were shown to describe the way that network coding and standard location based addressing work over the VANET network. All tests were performed in simulation. Network coding was shown to improve performance by a factor of 1.5 to 2 in both the simulations and the theoretical models. The theoretical models demonstrate a fundamental limit on how much network coding can improve performance, and this was confirmed by the simulations. Network coding does have a susceptibility to interference, but the other benefits of the technique are substantial despite this. Network coding demonstrates strong possibilities for future development of VANET protocols, and the ladder topology is an important tool for future analysis. (A minimal packet-level sketch of the basic network coding idea follows the final item below.)

Item: Modifications to the symbol wise soft input parity check transformation decoding algorithm (2016)
Author: Genga, Yuval Odhiambo

Reed-Solomon codes are very popular codes used in the field of forward error correction due to their error correcting capabilities. Thus, a lot of research has been dedicated to the development of decoding algorithms for this class of code. [Abbreviated Abstract. Open document to view full version]
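As referenced in the VANET item above, the snippet below shows the basic packet-level XOR relaying idea from which network coding gains arise: two nodes exchanging packets through a relay need one coded broadcast instead of two separate forwards. This is a generic textbook example, not the thesis's location based addressing protocol, and the packet contents are made up.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two nodes each send a packet towards the other via a relay.
packet_a = b"position update from A"
packet_b = b"position update from B"
length = max(len(packet_a), len(packet_b))
pa, pb = packet_a.ljust(length, b"\0"), packet_b.ljust(length, b"\0")

coded = xor_bytes(pa, pb)            # single broadcast transmission from the relay

# Each node removes its own packet to recover the other node's packet.
recovered_at_a = xor_bytes(coded, pa).rstrip(b"\0")
recovered_at_b = xor_bytes(coded, pb).rstrip(b"\0")
assert recovered_at_a == packet_b and recovered_at_b == packet_a
```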