3. Electronic Theses and Dissertations (ETDs) - All submissions
Permanent URI for this community: https://wiredspace.wits.ac.za/handle/10539/45
Search Results
6 results
Item: Semi-hidden Markov models for visible light communication channels (2018) Holmes, Daniel Glenn

Visible Light Communication (VLC) is an emerging field in optical wireless communication that uses light emitting diodes (LEDs) for data transmission. LEDs are being widely adopted both indoors and outdoors due to their low cost, long lifespan and high efficiency. Furthermore, LEDs can be modulated to provide both illumination and wireless communication. There is also potential for VLC to be incorporated into future smart lighting systems. One of the current challenges in VLC is dealing with noise and interference, including interference from other dimmed, Pulse-Width Modulated (PWM) LEDs. Other noise sources include natural light from the sun and artificial light from non-modulating light sources. Modelling these channels is one of the first steps towards understanding them and eventually designing techniques for mitigating the effects of noise and interference. This dissertation presents a semi-hidden Markov model, known as the Fritchman model, that discretely models the errors introduced by noise and interference in on-off keying modulated VLC channels. Models have been developed for both indoor and outdoor environments and can be used for VLC simulations and for designing error mitigation techniques. Results show that certain channels can be modelled more accurately than others. The experimental error distributions offer insight into the impact that PWM interference has on VLC channels. This can assist the development of error control codes and interference avoidance techniques in standalone VLC systems, as well as in systems where VLC and smart lighting coexist.
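A Fritchman-style model of the kind described can be sketched as a partitioned Markov chain in which error-free states emit no bit errors and error states emit errors. The state count and transition probabilities below are invented for illustration; they are not the fitted channel models from the dissertation.

```python
import random

# Illustrative 3-state Fritchman-style error model: two error-free states
# and one error state. All probabilities are made up for this sketch.
P = [
    [0.95, 0.03, 0.02],  # state 0 (error-free)
    [0.05, 0.90, 0.05],  # state 1 (error-free)
    [0.30, 0.30, 0.40],  # state 2 (error)
]
EMIT = [0, 0, 1]  # error-free states emit 0, the error state emits 1

def simulate_error_sequence(n, seed=1):
    """Generate n channel error bits (1 = bit error) from the model."""
    rng = random.Random(seed)
    state = 0
    out = []
    for _ in range(n):
        out.append(EMIT[state])
        state = rng.choices(range(3), weights=P[state])[0]
    return out

errors = simulate_error_sequence(10_000)
print("simulated bit-error rate:", sum(errors) / len(errors))
```

Such a generator reproduces the bursty error statistics of a channel without modelling the physical layer, which is what makes it useful for simulation and for designing error control codes.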
The models developed can also be used for simulations of VLC channels under different channel conditions.

Item: Accelerating decision making under partial observability using learned action priors (2017) Mabena, Ntokozo

Partially Observable Markov Decision Processes (POMDPs) provide a principled mathematical framework that allows a robot to reason about the consequences of actions and observations with respect to the agent's limited perception of its environment. They allow an agent to plan and act optimally in uncertain environments. Although they have been successfully applied to various robotic tasks, they are infamous for their high computational cost. This thesis demonstrates the use of knowledge transfer, learned from previous experiences, to accelerate the learning of POMDP tasks. We propose that in order for an agent to learn to solve these tasks more quickly, it must be able to generalise from past behaviours and transfer knowledge, learned from solving multiple tasks, between different circumstances. We present a method for accelerating this learning process by learning the statistics of action choices over the lifetime of an agent, known as action priors. Action priors specify the usefulness of actions in particular situations and allow us to bias exploration, which in turn improves the performance of the learning process. Using navigation domains, we study the degree to which transferring knowledge between tasks in this way yields a considerable speed-up in solution times. This thesis therefore makes the following contributions. We provide an algorithm for learning action priors from a set of approximately optimal value functions, and two approaches with which prior knowledge over actions can be used in a POMDP context. We show that considerable gains in speed can be achieved by learning subsequent tasks using prior knowledge rather than learning from scratch.
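The idea of action priors can be sketched very simply: tally how often each action was chosen across previously solved tasks, then bias exploration in a new task by sampling from the normalised counts instead of uniformly. The action names, toy policies and Dirichlet smoothing below are my own illustrative assumptions, not the thesis's algorithm.

```python
import random
from collections import Counter

# Hypothetical navigation-domain action set for the sketch.
ACTIONS = ["forward", "back", "left", "right"]

def learn_action_prior(past_policies, alpha=1.0):
    """Dirichlet-smoothed action counts over previously solved tasks."""
    counts = Counter()
    for policy in past_policies:          # each policy maps state -> action
        counts.update(policy.values())
    total = sum(counts[a] + alpha for a in ACTIONS)
    return {a: (counts[a] + alpha) / total for a in ACTIONS}

def sample_exploration_action(prior, rng):
    """Exploration step biased by the prior instead of a uniform choice."""
    return rng.choices(ACTIONS, weights=[prior[a] for a in ACTIONS])[0]

past = [{"s0": "forward", "s1": "forward", "s2": "left"},
        {"s0": "forward", "s1": "right"}]
prior = learn_action_prior(past)
print(prior)  # "forward" receives the largest prior mass
```

Biasing exploration this way is what reduces the cost of the early, unguided phase of learning: actions that were rarely useful in past tasks are tried less often.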
Learning with action priors can be particularly useful in reducing the cost of exploration in the early stages of the learning process, as the priors act as a mechanism that allows the agent to select more useful actions in particular circumstances. Thus, we demonstrate how the initial losses associated with unguided exploration can be alleviated through the use of action priors, which allow for safer exploration. Additionally, we illustrate that action priors can improve the speed with which feasible policies are learned.

Item: A new hybrid meta-heuristic algorithm for solving single machine scheduling problems (2017) Zlobinsky, Natasha

Numerous applications in a wide variety of fields have resulted in a rich history of research into optimisation for scheduling. Although it is a fundamental form of the problem, the single machine scheduling problem with two or more objectives is known to be NP-hard. For this reason we consider the single machine problem a good test bed for solution algorithms. While there is a plethora of research into various aspects of scheduling problems, little has been done to evaluate the performance of the Simulated Annealing algorithm on this fundamental problem, or to use it in combination with other techniques. Specifically, this has not been done for minimising total weighted earliness and tardiness, which is the optimisation objective of this work. A mere ten jobs for scheduling already yields over 3.6 million possible solution schedules. It is thus of definite practical necessity to reduce the search space in order to find an optimal or acceptable suboptimal solution in a shorter time, especially when scaling up the problem size. This is of particular importance in the application area of packet scheduling in wireless communications networks, where the tolerance for computational delays is very low.
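As context for the scale of the problem and the baseline algorithm, the sketch below checks the 10-job search-space size (10! = 3,628,800 schedules) and runs a basic Simulated Annealing pass over a toy weighted earliness-tardiness instance. The job data and annealing parameters are invented; this is the plain baseline, not the hybrid pre-sampling method of the dissertation.

```python
import math
import random

# Toy instance: (processing_time, due_date, earliness_weight, tardiness_weight)
jobs = [
    (4, 10, 1, 2), (3, 6, 2, 1), (7, 18, 1, 3), (2, 5, 1, 1), (5, 14, 2, 2),
]

def cost(order):
    """Total weighted earliness and tardiness of a job sequence."""
    t, total = 0, 0.0
    for j in order:
        p, d, we, wt = jobs[j]
        t += p
        total += we * max(d - t, 0) + wt * max(t - d, 0)
    return total

def simulated_annealing(rng, temp=50.0, cooling=0.99, iters=2000):
    order = list(range(len(jobs)))
    best = list(order)
    for _ in range(iters):
        i, k = rng.sample(range(len(jobs)), 2)
        cand = list(order)
        cand[i], cand[k] = cand[k], cand[i]      # swap two jobs
        delta = cost(cand) - cost(order)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            order = cand
            if cost(order) < cost(best):
                best = list(order)
        temp *= cooling                          # geometric cooling
    return best

print(math.factorial(10))  # 3628800 schedules for just ten jobs
best = simulated_annealing(random.Random(0))
print(best, cost(best))
```

The factorial growth of the permutation space is exactly why the dissertation prunes the search space before annealing.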
The main contribution of this work is to investigate the hypothesis that inserting a step of pre-sampling by Markov Chain Monte Carlo methods before running the Simulated Annealing algorithm on the pruned search space can result in overall reduced running times. The search space is divided into a number of sections and Metropolis-Hastings Markov Chain Monte Carlo is performed over the sections in order to reduce the search space for Simulated Annealing by a factor of 20 to 100. Trade-offs are found between the run time and number of sections of the pre-sampling algorithm, and the run time of Simulated Annealing, for minimising the percentage deviation of the final result from the optimal solution cost. Algorithm performance is determined both by computational complexity and by the quality of the solution (i.e. the percentage deviation from the optimal). We find that the running time can be reduced by a factor of 4.5 while ensuring a 2% deviation from the optimal, as compared to the basic Simulated Annealing algorithm on the full search space. More importantly, we are able to reduce the complexity of finding the optimal from O(n·n!) for a complete search, to O(n·N_S) for Simulated Annealing, to O(n(N_M·r + N_S) + m) for the input variables n jobs, N_S Simulated Annealing iterations, N_M Metropolis-Hastings iterations, r inner samples and m sections.

Item: A review and application of hidden Markov models and double chain Markov models (2016) Hoff, Michael Ryan

Hidden Markov models (HMMs) and double chain Markov models (DCMMs) are classical Markov model extensions used in a range of applications in the literature.
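As a minimal illustration of the HMM machinery such a review covers, the sketch below runs Viterbi decoding on a two-state HMM: the Baum-Welch algorithm would estimate the parameters A, B and pi from data, while Viterbi recovers the most likely hidden state path given them. All parameter values are made up for the example.

```python
# Two-state HMM with invented parameters over the symbol alphabet {0, 1}.
states = [0, 1]
pi = [0.6, 0.4]                    # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]       # transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]       # emission probabilities

def viterbi(obs):
    """Most likely hidden state sequence for an observation sequence."""
    V = [[pi[s] * B[s][obs[0]] for s in states]]
    back = []
    for o in obs[1:]:
        row, ptr = [], []
        for s in states:
            prev = max(states, key=lambda r: V[-1][r] * A[r][s])
            row.append(V[-1][prev] * A[prev][s] * B[s][o])
            ptr.append(prev)
        V.append(row)
        back.append(ptr)
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):     # follow back-pointers to the start
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi([0, 0, 1, 1, 1]))  # -> [0, 0, 1, 1, 1]
```

Long observation sequences would need log-space arithmetic to avoid underflow; the dissertation's simulation studies compare exactly this decoder (VA) against Baum-Welch (BWA) parameter estimation for both the HMM and the DCMM.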
This dissertation provides a comprehensive review of these models with focus on i) providing detailed mathematical derivations of key results - some of which, at the time of writing, were not found elsewhere in the literature, ii) discussing estimation techniques for the unknown model parameters and the hidden state sequence, and iii) discussing considerations which practitioners of these models would typically take into account. Simulation studies are performed to measure statistical properties of the estimated model parameters and the estimated hidden state path, derived using the Baum-Welch algorithm (BWA) and the Viterbi algorithm (VA) respectively. The effectiveness of the BWA and the VA is also compared between the HMM and DCMM. Selected HMM and DCMM applications are reviewed and assessed in light of the conclusions drawn from the simulation study. Attention is given to application in the field of credit risk.

Item: Modelling and control of birth and death processes (2015-01-29) Getz, Wayne Marcus

This thesis treats systems of ordinary differential equations that are extracted from the Kolmogorov forward equations of a class of Markov processes known generally as birth and death processes. In particular, we extract and analyze systems of equations which describe the dynamic behaviour of the second-order moments of the probability distribution of populations governed by birth and death processes. We show that these systems form an important class of stochastic population models and conclude that they are superior to those stochastic models derived by adding a noise term to a deterministic population model. We also show that these systems are readily used in population control studies in which the cost of uncertainty in the mean population size is taken into account. The first chapter formulates the univariate linear birth and death process in its most general form. The probability distribution for the constant parameter case is obtained exactly, which allows one to state, as special cases, results on the simple birth and death, Poisson, Pascal, Polya, Palm and Arley processes. Control of a population modelled by the linear birth and death process is considered next. Particular attention is paid to system performance indices which take into account the cost associated with non-zero variance and the cost of improving initial estimates of the size of the population under control.

Item: Response of dynamic systems to a class of renewal impulse process excitations: non-diffusive Markov approach (2008-12-02) Tellier, Matilde

The most suitable model that idealizes random sequences of shocks and impacts on vibratory systems is that of a random train of pulses (or impulses), whose arrivals are characterized in terms of stochastic point processes. Most of the existing methods of stochastic dynamics are relevant to random impulsive excitations driven by Poisson processes, and some methods exist for Erlang renewal-driven impulse processes. Herein, two classes of random impulse processes are considered. The first is the train of impulses whose interarrival times are driven by an Erlang renewal process. The second class is obtained by selecting some impulses from the train driven by an Erlang renewal process. The selection is performed with the aid of a jump, zero-one stochastic process governed by a stochastic differential equation driven by independent Erlang renewal processes. The underlying counting process, driving the arrival times of the impulses, is fully characterized. Expressions for the probability density functions of the first and second waiting times are derived, and by means of these functions it is proved that the underlying counting process is a renewal (non-Erlang) process.
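An Erlang(k) renewal process of the kind driving these impulse trains has interarrival times that are sums of k independent exponential variables, i.e. every k-th event of a Poisson process is kept. The sketch below samples such a train; the parameter values are illustrative only.

```python
import random

def erlang_interarrival(k, rate, rng):
    """One Erlang(k, rate) interarrival time: sum of k Exp(rate) steps."""
    return sum(rng.expovariate(rate) for _ in range(k))

def arrival_times(n, k, rate, seed=0):
    """First n arrival instants of an Erlang renewal impulse train."""
    rng = random.Random(seed)
    t, out = 0.0, []
    for _ in range(n):
        t += erlang_interarrival(k, rate, rng)
        out.append(t)
    return out

ts = arrival_times(5000, k=3, rate=2.0)
gaps = [b - a for a, b in zip([0.0] + ts, ts)]
print("mean interarrival:", sum(gaps) / len(gaps))  # close to k/rate = 1.5
```

The non-Erlang processes of the dissertation arise when a zero-one jump process selects only some of these impulses, which is why their interarrival densities are more flexible than the plain Erlang case sampled here.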
The probability density functions of the interarrival times are evaluated for four different cases of the driving process, and the results obtained for some example sets of parameters are shown graphically. The advantage of modelling the interarrival times using the class of non-Erlang renewal processes analyzed in the present dissertation, rather than the Poisson or Erlang distributions, is that it is possible to deal with a broader class of interarrival probability density functions. The non-Erlang renewal processes considered herein, obtained from two independent Erlang renewal processes, are characterized by four parameters that can be chosen to fit the actual data on the distribution of the interarrival times more closely. As the renewal counting process is not one with independent increments, the state vector of a dynamic system under a renewal impulse process excitation is not a Markov process. The non-Markov problem may then be converted into a Markov one at the expense of augmenting the state vector by auxiliary discrete stochastic variables driven by a Poisson process. In contrast to the technique existing in the literature (Iwankiewicz and Nielsen), a novel conversion technique is devised here, in which the auxiliary variables are all zero-one processes. In the considered class of non-Erlang renewal impulse processes, each of the driving Erlang processes is recast in terms of the Poisson process, and the augmented state vector, driven by two independent Poisson processes, becomes a non-diffusive Markov process. For a linear oscillator under the considered class of non-Erlang renewal impulse processes, the equations for the response moments are obtained from the generalized Ito differential rule, and the mean value and variance of the response are evaluated and shown graphically for some selected sets of parameters.
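The response mean and variance obtained analytically from the moment equations can be cross-checked by direct simulation. The sketch below is such a Monte Carlo check, simplified to a damped linear oscillator under a plain Poisson impulse train with Gaussian impulse magnitudes; it is not the moment-equation technique itself, and all parameters are invented.

```python
import random

def simulate(T=10.0, dt=1e-3, omega=2.0, zeta=0.05, nu=1.0, seed=0):
    """Displacement at time T of a damped oscillator hit by random impulses.

    Each impulse arriving at a Poisson rate nu instantaneously increments
    the velocity by a zero-mean Gaussian amount (unit-mass oscillator).
    """
    rng = random.Random(seed)
    x, v, t = 0.0, 0.0, 0.0
    next_impulse = rng.expovariate(nu)
    while t < T:
        if t >= next_impulse:
            v += rng.gauss(0.0, 1.0)               # random impulse magnitude
            next_impulse += rng.expovariate(nu)
        a = -2 * zeta * omega * v - omega ** 2 * x  # linear restoring force
        x, v = x + v * dt, v + a * dt               # explicit Euler step
        t += dt
    return x

samples = [simulate(seed=i) for i in range(100)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print("response mean ~", mean, " variance ~", var)
```

With zero-mean impulse magnitudes the response mean hovers near zero while the variance is driven by the impulse rate and damping, which is the qualitative behaviour the analytical moment equations quantify.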
For a non-linear oscillator under both Erlang renewal-driven impulses and the considered class of non-Erlang renewal impulse processes, a technique of equations for moments together with a modified closure technique is devised. The specific physical properties of an impulsive load process allow the classical cumulant-neglect closure scheme to be modified into a more efficient technique for the class of excitations considered. The joint probability density of the augmented state vector is expressed as a sum of contributions conditioned on the 'on' and 'off' states of the auxiliary variables. A discrete part of the joint probability density function accounts for the fact that there is a finite probability of the system being in a deterministic state (for example at rest) from the initial time to the occurrence of the first impulse. The continuous part, which is the conditional probability density given that the first impulse has occurred, can be expressed in terms of functions of the displacement and velocity of the system. These functions can be viewed as unknown probability densities of a bi-variate stochastic process, each of which originates a set of 'conditional moments'. The set of relationships between unconditional and conditional moments is derived. The ordinary cumulant-neglect closure is then performed on the conditional moments pertinent to the continuous part only. The closure scheme is formulated by expressing the 'unconditional' moments of order greater than the order of closure in terms of unconditional moments of lower order. The stochastic analysis of a Duffing oscillator under a random train of impulses driven by an Erlang renewal process and a non-Erlang renewal process R(t) is performed by applying the second-order ordinary cumulant-neglect closure and the modified second-order closure approximation, and the approximate analytical results are verified against direct Monte Carlo simulation.
The modified closure scheme proves to give better results for a highly non-Gaussian train of impulses, characterized by a low mean arrival rate.
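The discrete/continuous split behind the modified closure can be illustrated numerically: before the first impulse arrives the system sits in a deterministic rest state, so an unconditional moment mixes a discrete part (probability of no impulse yet) with a conditional moment given that an impulse has occurred. The arrival rate and conditional moment values below are invented for the arithmetic.

```python
import math

# Illustrative numbers only: Poisson arrivals at rate nu, observed at time t.
nu, t = 0.5, 2.0
p_off = math.exp(-nu * t)          # probability of no impulse in [0, t]
p_on = 1.0 - p_off                 # probability at least one impulse occurred

x_rest = 0.0                       # deterministic rest displacement ('off')
cond_mean, cond_second = 0.3, 0.5  # assumed conditional moments given 'on'

# Unconditional moments mix the discrete and continuous contributions.
mean = p_off * x_rest + p_on * cond_mean
second = p_off * x_rest ** 2 + p_on * cond_second
variance = second - mean ** 2
print(mean, variance)
```

Performing the cumulant-neglect closure on the conditional moments only, and recovering the unconditional moments through this mixture, is what lets the scheme respect the finite probability mass sitting at the deterministic rest state.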