Faculty of Engineering and the Built Environment (ETDs)
Permanent URI for this community: https://hdl.handle.net/10539/37934
Item: Model Propagation for High-Parallelism in Data Compression (University of the Witwatersrand, Johannesburg, 2023-10) Lin, Shaw Chian; Cheng, Ling

Recent data compression research focuses on parallelising existing algorithms (LZ77, BZIP2, etc.) by exploiting their inherent parallelism. Little work has been done on parallelising highly sequential algorithms, whose slow compression speeds would benefit the most from parallelism. This dissertation presents a generalised parallelisation approach, designed with model sequentiality in mind, that can potentially be adopted by any compression algorithm. The scheme, branching propagation, takes a novel divide-and-conquer approach to dividing the data stream into smaller blocks for parallelisation. It is implemented with prediction by partial matching (PPM), a statistical-modelling algorithm known for its serial nature, which is shown to suffer compression ratio increases when parallelised. A speedup of 5.2-7x was achieved at 16 threads, with at most a 6.5% increase in size relative to serial performance, while the conventional approach showed up to a 7.5x speedup with an 8.0% increase. Branching propagation has been shown to offer better compression ratios than conventional approaches as parallelism increases (a difference of 11% in ratio increase at 256 threads), albeit at slightly slower speeds. To quantify the speedup against the ratio penalty, an alternative metric called speedup-to-ratio increase (SRI) is used. By this metric, when serial dependency is maintained, branching propagation is superior in standard configurations, offering substantial speedup while minimising the compression ratio penalty relative to the speedup gained. At lower serial dependency, however, the conventional approach is generally preferable, with a 9-16x speedup per 1% increase in compression ratio at maximal speed, compared with branching propagation's 6-13x speedup per 1%.
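A note on the baseline being compared: the conventional approach above amounts to cutting the stream into independent blocks and compressing each with a fresh model, which is what erodes the compression ratio as parallelism grows. Below is a minimal sketch of that baseline together with one plausible reading of the SRI metric (speedup divided by the percentage size increase); zlib stands in for PPM purely for illustration, and the block sizing and the sri() definition are assumptions, not the dissertation's implementation.

```python
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_block(block: bytes) -> bytes:
    # Each block is compressed from an empty model/dictionary; this restart
    # is exactly what erodes the ratio as the number of blocks grows.
    return zlib.compress(block, 9)

def parallel_compress(data: bytes, num_blocks: int) -> list[bytes]:
    # Conventional scheme: split the stream into contiguous blocks and
    # compress them independently in parallel.
    size = -(-len(data) // num_blocks)  # ceiling division
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(compress_block, blocks))

def sri(speedup: float, ratio_increase_pct: float) -> float:
    # Assumed reading of speedup-to-ratio increase (SRI): speedup gained
    # per 1% growth in compressed size.
    return speedup / ratio_increase_pct

if __name__ == "__main__":
    data = b"the quick brown fox jumps over the lazy dog. " * 20000
    serial_size = len(zlib.compress(data, 9))
    parallel_size = sum(len(b) for b in parallel_compress(data, 16))
    increase = 100.0 * (parallel_size - serial_size) / serial_size
    print(f"size increase with 16 independent blocks: {increase:.2f}%")
    # Using the abstract's 16-thread figures (7x speedup, 6.5% increase):
    print(f"SRI: {sri(7.0, 6.5):.2f}x speedup per 1% size increase")
```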
Item: Peak-to-average power ratio reduction in optical-OFDM systems using lexicographical permutations (University of the Witwatersrand, Johannesburg, 2023) Niwareeba, Roland; Cox, Mitchell A.; Cheng, Ling

The work presented in this thesis extends and contributes to research on reducing the high Peak-to-Average Power Ratio (PAPR) in optical Orthogonal Frequency Division Multiplexing (OFDM) systems using probabilistic and hybrid techniques. Whereas the high-PAPR problem has been studied extensively and many solutions exist for conventional Radio Frequency (RF)-OFDM systems, only a few solutions have been proposed specifically for PAPR reduction in optical-OFDM systems. Although probabilistic techniques such as Conventional Selected Mapping (CSLM) and Data Position Permutation (DPP) achieve significant PAPR reduction with negligible Bit Error Rate (BER) degradation, the accompanying increase in hardware and computational complexity, caused by the large number of Inverse Fast Fourier Transform (IFFT) operations required to generate the candidate signals, remains a major drawback. To reduce this complexity, this research proposes two techniques for optical-OFDM systems. The first is a hybrid method combining a modified CSLM with µ-law companding, called Low Complexity Hybrid Selected Mapping (LCHSLM). The proposed method achieves almost a 50% reduction in complexity compared with CSLM, with less BER degradation.

The second technique, based on lexicographical permutations and called Lexicographical Symbol Position Permutation (LSPP), works by dividing the optical-OFDM symbol into a number of sub-blocks and performing lexicographical permutations to obtain the candidate signals after the IFFT operations. Unlike DPP, where the number of candidate permutation sequences grows factorially with the number of sub-blocks, resulting in a more complex system, the proposed LSPP does not generate all the candidate permutation sequences at once. Additionally, the research proposes an algorithm in which a threshold PAPR value is introduced and candidate signals are generated until a candidate with a PAPR less than or equal to the threshold is obtained. The results show that the number of IFFT operations can be reduced substantially, depending on the selected threshold and the number of candidate signals. Furthermore, the research introduces a new algorithm based on the global gain (net gain) to determine the most suitable number of candidate permutation sequences, achieving reasonable PAPR reduction without increasing the time and hardware complexity beyond levels the system can tolerate.
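To make the threshold-driven search concrete, the following is a minimal sketch under stated assumptions: PAPR is taken as max|x|²/mean|x|² of the time-domain signal, a plain random shuffle of sub-blocks stands in for the lexicographical permutation step, and the Hermitian-symmetry constraint that real-valued optical-OFDM transmitters impose is omitted for brevity. All names and parameter values are illustrative, not the thesis's implementation.

```python
import numpy as np

def papr_db(x: np.ndarray) -> float:
    # PAPR of a discrete-time signal: peak power over average power, in dB.
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def threshold_search(symbols, num_subblocks, threshold_db, max_candidates, rng):
    # Generate permuted candidates until one meets the PAPR threshold,
    # saving IFFT operations whenever an early candidate qualifies.
    sub = np.split(symbols, num_subblocks)
    best_sig, best_papr = None, np.inf
    for i in range(max_candidates):
        order = rng.permutation(num_subblocks)   # stand-in for a lexicographic step
        candidate = np.concatenate([sub[k] for k in order])
        sig = np.fft.ifft(candidate)             # one IFFT per candidate
        p = papr_db(sig)
        if p < best_papr:
            best_sig, best_papr = sig, p
        if p <= threshold_db:                    # early exit: threshold met
            return sig, p, i + 1
    return best_sig, best_papr, max_candidates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # One 256-subcarrier QPSK OFDM symbol (illustrative parameters).
    bits = rng.integers(0, 2, size=(256, 2))
    qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    sig, p, n = threshold_search(qpsk, num_subblocks=8,
                                 threshold_db=7.0, max_candidates=32, rng=rng)
    print(f"best PAPR {p:.2f} dB after {n} candidate(s)")
```

The early exit is what delivers the complexity saving claimed above: a looser threshold is met after few candidates (and hence few IFFTs), while a tighter threshold trades more IFFT operations for lower PAPR.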