Electronic Theses and Dissertations (ETDs) - All submissions
Permanent URI for this community: https://wiredspace.wits.ac.za/handle/10539/45
Search Results (3 results)
Item: Measuring concurrency in CCS (1993), Galpin, Vashti Christina

This research report investigates the application of Charron-Bost's measure of concurrency m to Milner's Calculus of Communicating Systems (CCS). The aim is twofold: first, to evaluate the measure m in terms of criteria gathered from the literature; and second, to determine the feasibility of measuring concurrency in CCS and hence provide a new tool for understanding concurrency using CCS. The approach taken is to identify the differences between the message-passing formalism in which the measure m is defined and CCS, and to modify this formalism to enable the mapping of CCS agents to it. A software tool, the Concurrency Measurement Tool, is developed to permit experimentation with chosen CCS agents. These experiments show that the measure m, although intuitively appealing, is defined by an algebraic expression that is ill-behaved. A new measure is defined, and it is shown to match the evaluation criteria better than m, although it is still not ideal. This work demonstrates that it is feasible to measure concurrency in CCS, and it develops a methodology for evaluating concurrency measures.

Item: Massive parallelism for combinatorial problems by hardware acceleration with an application to the label switching problem (2016), Steere, Edward

This dissertation proposes an approach to solving hard combinatorial problems on massively parallel architectures using parallel metaheuristics. Combinatorial problems are common in many scientific fields, and scientific progress is constrained by the fact that, even with state-of-the-art algorithms, solving hard combinatorial problems can take days or weeks. This is the case with the Label Switching Problem (LSP) in Bioinformatics, where prior work on the LSP produced the program CLUMPP (CLUster Matching and Permutation Program). CLUMPP relies solely on a sequential, classical heuristic and has had success on smaller, low-complexity problems. By contrast, this dissertation proposes the Parallel Solvers model for the acceleration of hard combinatorial problems, a model that draws on the commonalities evident in metaheuristic algorithms and strategies. After investigating the effectiveness of the mechanisms in the Parallel Solvers model with respect to the LSP, the author developed DePermute, an algorithm that solves the LSP significantly faster. Results were generated from time-based testing of simulated data, as well as data freely available on the Internet as part of various projects. An investigation into the effectiveness of DePermute was carried out on a CPU (Central Processing Unit) based computer; the time-based testing was carried out on a CPU-based computer and on a Graphics Processing Unit (GPU) attached to a CPU host computer. The dissertation also proposes the design of a Field Programmable Gate Array (FPGA) based implementation of DePermute. Using Parallel Solvers in the DePermute algorithm, the time taken for population group sizes K ranging from 5 to 20 was improved by up to two orders of magnitude by the GPU implementation, relative to CLUMPP with aggressive settings. The CPU implementation, while slower than the GPU implementation, still marginally outperforms CLUMPP at aggressive settings, usually with better quality, and outperforms CLUMPP by at least an order of magnitude when CLUMPP is set to higher-quality settings. Parallel Solvers has proved effective in solving the LSP in Bioinformatics, and this dissertation proposes that it may also assist in the reasoning about and design of algorithms in other fields.
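For the first item above: the abstract does not define Charron-Bost's measure m, but as it is commonly presented, m compares the number of consistent cuts of a message-passing computation against the totally sequential and fully concurrent extremes, m = (mu(E) - mu_seq) / (mu_conc - mu_seq). The following minimal Python sketch is written under that assumption; it does not reproduce Galpin's Concurrency Measurement Tool or the mapping from CCS agents.

```python
from itertools import product

def count_consistent_cuts(events, messages):
    """events[p] is the number of events of process p; messages is a list of
    ((p, i), (q, j)) pairs meaning the i-th event of p is the send matching
    the j-th event of q (1-based). A cut fixes a prefix length per process;
    it is consistent iff every included receive has its send included too."""
    return sum(
        all(cut[p] >= i or cut[q] < j for (p, i), (q, j) in messages)
        for cut in product(*(range(k + 1) for k in events))
    )

def measure_m(events, messages):
    """Charron-Bost's m, normalised so that a totally sequential computation
    scores 0 and a fully concurrent (message-free) one scores 1."""
    mu = count_consistent_cuts(events, messages)
    mu_seq = sum(events) + 1        # one consistent cut per global prefix
    mu_conc = 1
    for k in events:
        mu_conc *= k + 1            # any combination of local prefixes
    return (mu - mu_seq) / (mu_conc - mu_seq)

# Two processes with two events each; the first event of process 0 is a send
# received as the first event of process 1.
print(measure_m([2, 2], [((0, 1), (1, 1))]))   # 0.5
```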
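For the second item above: the abstract does not describe DePermute in reproducible detail, but the Label Switching Problem it targets can be illustrated compactly. In the toy sketch below, one clustering run is aligned to a reference run by permuting its cluster labels with the Hungarian algorithm (scipy's linear_sum_assignment); the function and variable names are hypothetical, and this pairwise baseline is neither CLUMPP's nor DePermute's algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_to_reference(reference, run):
    """Permute the cluster columns of `run` so they best match `reference`.
    Both arguments are (individuals x K) membership matrices; the Hungarian
    algorithm picks the label permutation maximising column similarity."""
    cost = -reference.T @ run            # negate: the assignment minimises
    _, perm = linear_sum_assignment(cost)
    return run[:, perm]

rng = np.random.default_rng(0)
ref = rng.dirichlet(np.ones(4), size=100)   # 100 individuals, K = 4
relabelled = ref[:, [2, 0, 3, 1]]           # the same run, labels switched
assert np.allclose(align_to_reference(ref, relabelled), ref)
```

Aligning every run to a single reference in this way is quadratic in K per pair and greedy across runs, which hints at why large K and many runs make the full problem expensive enough to motivate GPU and FPGA acceleration.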
Item: Chunked extendible arrays and its integration with the global array toolkit for parallel image processing (2016), Nimako, Gideon

Several meetings of the Extremely Large Databases Community for large-scale scientific applications have advocated the use of multidimensional arrays as the appropriate model for representing scientific databases. Scientific databases gradually grow to massive sizes of the order of terabytes and petabytes, so their storage requires efficient dynamic storage schemes in which the array is allowed to arbitrarily extend the bounds of its dimensions. Conventional multidimensional array representations in today's programming environments cannot extend or shrink their bounds without relocating elements of the data set; in general, extendibility is limited to only one dimension. This thesis presents a technique for storing dense multidimensional arrays by chunks such that the array can be extended along any dimension without compromising the access time of an element. This is done with a computed access mapping function that maps the k-dimensional index onto a linear index of the storage locations. The concept forms the basis for the implementation of an array file of any number of dimensions, in which the bounds of the dimensions can be extended arbitrarily. Such a feature currently exists in the Hierarchical Data Format version 5 (HDF5); however, extending the bound of a dimension in an HDF5 array file can be unusually expensive in time. In the storage scheme presented here, such extensions can be performed while elements of the array are still accessed orders of magnitude faster than in HDF5 or conventional array files. The thesis also presents the Parallel Chunked Extendible Dense Array (PEXTA), a new parallel I/O model for the Global Array Toolkit. PEXTA provides the necessary Application Programming Interface (API) for explicit data transfer between the memory-resident global array and its secondary-storage counterpart, and also allows the persistent array to be extended along any dimension without compromising the access time of an element or of sub-array elements. These APIs provide a platform for high-speed, parallel hyperspectral image processing without performance degradation, even when the imagery files undergo extensions.
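The abstract above names the key property of the storage scheme (extension along any dimension without relocating elements) but not the access mapping function itself. The toy Python sketch below illustrates only that property, using a dictionary of fixed-size chunks keyed by chunk coordinates; the class and its fields are hypothetical, and the thesis's actual scheme computes a linear index into an on-disk array file rather than hashing chunk coordinates in memory.

```python
import numpy as np

class ChunkedExtendibleArray:
    """Toy dense array stored as fixed-size chunks, so any dimension can
    grow without relocating existing elements (illustrative only)."""

    def __init__(self, ndim, chunk_side=4):
        self.ndim = ndim
        self.chunk_side = chunk_side
        self.chunks = {}            # chunk coordinates -> dense chunk
        self.shape = (0,) * ndim    # current logical bounds

    def _locate(self, index):
        # The k-dimensional index splits into chunk coordinates plus an
        # offset inside the chunk; access cost is independent of the bounds.
        return (tuple(i // self.chunk_side for i in index),
                tuple(i % self.chunk_side for i in index))

    def __setitem__(self, index, value):
        chunk, offset = self._locate(index)
        if chunk not in self.chunks:        # allocate lazily; never relocate
            self.chunks[chunk] = np.zeros((self.chunk_side,) * self.ndim)
        self.chunks[chunk][offset] = value
        self.shape = tuple(max(s, i + 1) for s, i in zip(self.shape, index))

    def __getitem__(self, index):
        chunk, offset = self._locate(index)
        return self.chunks[chunk][offset] if chunk in self.chunks else 0.0

a = ChunkedExtendibleArray(ndim=2)
a[3, 3] = 1.0
a[100, 7] = 2.0     # extends the first dimension: one new chunk, no copying
print(a.shape)      # (101, 8)
```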