University of Twente Student Theses

Speeding Up Convergence For Sparse Training Using Feature Extraction Behaviour

Radu, A. (2021) Speeding Up Convergence For Sparse Training Using Feature Extraction Behaviour.

PDF (424 kB)
Abstract: Deep learning is a powerful subset of machine learning that uses multiple layers to learn complex patterns from large amounts of data. Because deep neural networks require huge amounts of computation, sparsity addresses this problem by removing a proportion of the connections in the network. Sparse Evolutionary Training (SET) is an approach to sparsity that removes connections based on their magnitude and adds new ones at random, allowing for sparse-to-sparse training. We introduce LiSET, a method that adds connections to the network in a less random manner: using skip-layer connections and the linear features they learn during training, LiSET adds connections based on important linear features. While this method involves trade-offs, we argue that it leads to faster convergence on certain datasets, especially when training at high levels of sparsity. An evaluation metric is proposed to estimate the speed-up in convergence. When trained with stochastic gradient descent on two datasets, MNIST and ISOLET, LiSET outperformed SET on the proposed metric and reached higher accuracy. This work contributes to research on sparse neural networks and on speeding up the convergence of such models.
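
The SET update described in the abstract (prune the smallest-magnitude connections, then regrow the same number at random) can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis implementation: the pruning fraction zeta, the dense-matrix representation in which zeros mark absent connections, and the re-initialization scale are assumptions made for the example.

    import numpy as np

    def set_prune_and_regrow(weights, zeta=0.3, rng=None):
        # One SET topology update on a layer's weight matrix, where zeros
        # mark absent connections (a representation assumed for this sketch).
        rng = rng or np.random.default_rng()
        w = weights.copy()
        active = np.flatnonzero(w)                     # existing connections
        n = int(zeta * active.size)                    # number to replace
        if n == 0:
            return w
        # Prune: zero out the fraction zeta of smallest-magnitude weights.
        weakest = active[np.argsort(np.abs(w.flat[active]))[:n]]
        w.flat[weakest] = 0.0
        # Regrow: activate the same number of randomly chosen empty positions.
        empty = np.flatnonzero(w == 0)
        grown = rng.choice(empty, size=n, replace=False)
        w.flat[grown] = rng.standard_normal(n) * 0.01  # small random init
        return w

In SET this update runs between training epochs; per the abstract, LiSET keeps the magnitude-based pruning but replaces the random regrowth with a choice guided by the linear features learned through skip-layer connections.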
Item Type: Essay (Bachelor)
Faculty: EEMCS: Electrical Engineering, Mathematics and Computer Science
Subject: 54 computer science
Programme: Business & IT BSc (56066)
Link to this item: https://purl.utwente.nl/essays/87349