Author: Sejeso, Matthews Malebogo
Dates: 2023-11-23; 2023-11-23; 2022
URI: https://hdl.handle.net/10539/37152
Description: A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy to the Faculty of Science, University of the Witwatersrand, 2022.

Abstract: Many applications in emerging technologies call for efficient sensing systems that acquire and process high-resolution signals. This task is impractical under the traditional sampling scheme, the Nyquist-Shannon sampling theory. Compressed sensing was developed as a new signal acquisition scheme to address this issue. The theory of compressed sensing asserts that linear encoded measurements can be used to simultaneously acquire and compress a signal, and the technique requires far fewer computational resources than traditional sampling schemes. In compressed sensing, an optimization problem comprising a data fidelity term and a nonlinear sparsity-enforcing term is used to recover sparse vectors from a few linear measurements. In most applications, these problems are large-scale, characterized by high-dimensional decision variables, and require real-time processing of data. Specialised optimization algorithms have been used to solve sparse recovery problems in compressed sensing; however, these algorithms tend to be too slow and computationally intensive for real-time recovery. In contrast, continuous-time dynamical systems have recently gained attention as efficient solvers of optimization problems, with the potential to yield significant speed and power improvements over their discrete counterparts. The focus of this thesis is to understand which continuous-time dynamical systems can be used to solve sparse recovery problems and to analyse their performance mathematically, since a dynamical system's behaviour must be understood before it can be used to solve optimization problems.

First, we present a general dynamical system modelled by the subgradient of a nonsmooth objective function coupled with a sparsity-promoting activation function. Convergence analysis of this gradient-like differential inclusion is carried out using the recently developed nonsmooth Lojasiewicz inequality. The trajectories of the dynamical system are shown to have finite length and to converge globally to equilibrium points, which correspond to the critical points of the sparse optimization problem. An estimate of the convergence rate, which depends on the Lojasiewicz exponent, is obtained.

Second, a Bregman integrated dynamical system for solving the ℓ1-minimization problem is presented. The dynamical system incorporates the Bregman distance in its design, resulting in an improved convergence rate. The proposed dynamical system fits within the differential inclusion framework presented and analysed earlier, and is therefore well suited to solving the ℓ1-minimization problem. We show that the proposed dynamical system takes an efficient path towards the optimal solution and recovers the expected support set of the sparse solution. The Bregman integrated dynamical system yields an exponential convergence rate, which significantly improves on the convergence of the previously proposed Locally Competitive Algorithm dynamical system. Computational results are presented to support the developed theory and the good performance of the proposed dynamical system.
Several comparative experiments on sparse recovery problems demonstrate that the proposed dynamical system approach is efficient and effective.

Language: en
Keywords: Dynamical Trajectory-Based Method; Sparse Recovery; Emerging technologies
Title: A dynamical trajectory-based method for sparse recovery
Type: Thesis
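The Bregman integrated system is developed in the thesis itself and is not reproduced here. As a rough illustration of the general approach the abstract describes — an internal state evolving under gradient-like continuous-time dynamics, with a sparsity-promoting activation producing the sparse output — the sketch below simulates the Locally Competitive Algorithm (LCA) dynamics, the baseline system named in the abstract, using a forward Euler discretisation on a small synthetic ℓ1 recovery problem (minimizing a least-squares data fidelity term plus an ℓ1 penalty). The problem sizes, step size, threshold parameter, and variable names are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch (not the thesis's Bregman system): LCA dynamics simulated
# with forward Euler on a small synthetic sparse recovery problem.
# All sizes and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 50, 200, 5                              # measurements, signal length, sparsity
Phi = rng.standard_normal((n, m)) / np.sqrt(n)    # random sensing matrix
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true                                  # noiseless linear measurements

lam, tau, dt, steps = 0.05, 1.0, 0.05, 4000       # threshold, time constant, Euler step, iterations

def soft_threshold(u, lam):
    """Sparsity-promoting activation: proximal map of lam * ||.||_1."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.zeros(m)                                   # internal state of the dynamical system
for _ in range(steps):
    a = soft_threshold(u, lam)                    # sparse output via the activation
    # LCA state dynamics: tau * du/dt = -u + Phi^T y - (Phi^T Phi - I) a
    du = (-u + Phi.T @ y - (Phi.T @ (Phi @ a) - a)) / tau
    u += dt * du                                  # forward Euler integration of the flow

x_hat = soft_threshold(u, lam)
print("nonzeros in estimate:", np.count_nonzero(x_hat))
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In this sketch the state trajectory u(t) plays the role of the continuously evolving decision variable, while the soft-threshold activation enforces sparsity of the output at every instant; the thesis's contribution, per the abstract, is to analyse such flows via a differential inclusion with the nonsmooth Lojasiewicz inequality and to accelerate them through a Bregman distance, which this baseline sketch does not attempt.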