On sparsity in deep learning: the benefits and pitfalls of sparse neural networks and how to learn their architectures

dc.contributor.author: Tessera, Kale-ab
dc.date.accessioned: 2022-08-10T08:08:03Z
dc.date.available: 2022-08-10T08:08:03Z
dc.date.issued: 2021
dc.description: A research report submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science, 2021
dc.description.abstract: Overparameterization in deep learning has led to many breakthroughs in the field. However, overparameterized models also have various limitations, such as high computational and storage costs, while also being prone to memorization. To address these limitations, the field of sparse neural networks has seen renewed interest. Training sparse neural networks to match the performance of dense architectures has proved elusive. Recent work suggests that initialization is the key. However, while this research direction has had some success, focusing on initialization alone appears to be inadequate. In this work, we take a broader view of training sparse networks and consider the role of regularization, optimization, and architecture choices in sparse models. We propose a simple experimental framework, Same Capacity Sparse vs Dense Comparison (SC-SDC), that allows for a fair comparison of sparse and dense networks. Furthermore, we propose a new measure of gradient flow, Effective Gradient Flow (EGF), that correlates better with performance in sparse networks. Using top-line metrics, SC-SDC, and EGF, we show that the default choices of optimizers, activation functions, and regularizers used for dense networks can disadvantage sparse networks.
Another issue with sparse networks is the lack of efficient, flexible methods for learning their architectures. Most current approaches focus only on learning convolutional architectures. This limits their application to Convolutional Neural Networks (CNNs) and results in a large search space, since each convolutional layer requires learning hyperparameters such as padding, kernel size, and stride. To address this, we leverage Neural Architecture Search (NAS) methods to learn sparse architectures in a simple, flexible, and efficient manner. We propose a simple NAS algorithm, Sparse Neural Architecture Search (SNAS), and a flexible NAS search space that we use to learn layer-wise density levels (the percentage of active weights in each layer). Due to the simplicity of our approach, we can learn most architecture types while also having a smaller search space. Our results show that we can consistently learn sparse Multilayer Perceptrons (MLPs) and sparse CNNs that outperform their dense counterparts with considerably fewer weights. Furthermore, we show that the learned architectures are competitive with state-of-the-art architectures and pruning methods. Based on these findings, we show that reconsidering aspects of sparse architecture design and the training regime, combined with simple search methods, yields promising results.
dc.description.librarian: CK2022
dc.faculty: Faculty of Science
dc.identifier.uri: https://hdl.handle.net/10539/33097
dc.language.iso: en
dc.school: School of Computer Science and Applied Mathematics
dc.title: On sparsity in deep learning: the benefits and pitfalls of sparse neural networks and how to learn their architectures
dc.type: Thesis
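
Note on the abstract above: it mentions learning layer-wise density levels (the percentage of active weights in each layer). As a purely illustrative aid, the sketch below shows one common way such densities can be realised, namely fixed binary weight masks applied to the linear layers of a small MLP in PyTorch. This is not the thesis implementation; the class name, layer sizes, and density values are assumptions chosen for the example.

    # Illustrative sketch only (not the thesis code): per-layer density via
    # fixed binary weight masks. All names and values here are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MaskedLinear(nn.Linear):
        """A linear layer that keeps only a `density` fraction of its weights."""
        def __init__(self, in_features, out_features, density):
            super().__init__(in_features, out_features)
            # Keep the largest-magnitude weights at initialization; zero the rest.
            k = max(1, int(density * self.weight.numel()))
            threshold = torch.topk(self.weight.detach().abs().flatten(), k).values.min()
            mask = (self.weight.detach().abs() >= threshold).float()
            self.register_buffer("mask", mask)  # fixed mask, not trained

        def forward(self, x):
            return F.linear(x, self.weight * self.mask, self.bias)

    # A small MLP with a (hypothetical) density level chosen per layer,
    # e.g. by a NAS-style search over density levels.
    densities = [0.5, 0.3, 1.0]
    model = nn.Sequential(
        MaskedLinear(784, 300, densities[0]), nn.ReLU(),
        MaskedLinear(300, 100, densities[1]), nn.ReLU(),
        MaskedLinear(100, 10, densities[2]),
    )

In a search of the kind the abstract describes, the per-layer density list would be the quantity being searched over, rather than the hand-picked values used here.
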
Files
Original bundle
Name: dissertation_kaleab_tessera_final_submission (1).pdf
Size: 22.89 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission