Preprint / working paper, 2024

Growing Tiny Networks: Spotting Expressivity Bottlenecks and Fixing Them Optimally

Abstract

Machine learning tasks are generally formulated as optimization problems, where one searches for an optimal function within a certain functional space. In practice, parameterized functional spaces are considered, in order to be able to perform gradient descent. Typically, a neural network architecture is chosen and fixed, and its parameters (connection weights) are optimized, yielding an architecture-dependent result. This way of proceeding, however, forces the evolution of the function during training to lie within the realm of what is expressible with the chosen architecture, and prevents any optimization across architectures. Costly architectural hyper-parameter optimization is often performed to compensate for this. Instead, we propose to adapt the architecture on the fly during training. We show that the information about desirable architectural changes, due to expressivity bottlenecks when attempting to follow the functional gradient, can be extracted from backpropagation. To do this, we propose a mathematical definition of expressivity bottlenecks, which enables us to detect, quantify and solve them while training, by adding suitable neurons when and where needed. Thus, while the standard approach requires large networks, in terms of number of neurons per layer, for expressivity and optimization reasons, we are able to start with very small neural networks and let them grow appropriately. As a proof of concept, we show results on the CIFAR dataset, matching large neural network accuracy with competitive training time, while removing the need for standard architectural hyper-parameter search.
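The abstract describes growing a network mid-training by adding neurons where an expressivity bottleneck is detected. As a rough illustration only, and not the authors' method (the paper derives the new neurons' weights from its expressivity-bottleneck measure, whereas this sketch initializes them at random), a minimal function-preserving layer-widening step in PyTorch might look as follows; the helper name widen_hidden_layer and all initialization choices are assumptions made for the example.

import torch
import torch.nn as nn

def widen_hidden_layer(fan_in_layer: nn.Linear,
                       fan_out_layer: nn.Linear,
                       n_new: int):
    """Append n_new neurons to the hidden layer between two Linear maps,
    preserving the function currently computed by the network."""
    with torch.no_grad():
        # Grow the layer producing the hidden activations (more output
        # rows), copying the existing weights and biases.
        new_in = nn.Linear(fan_in_layer.in_features,
                           fan_in_layer.out_features + n_new)
        new_in.weight[:fan_in_layer.out_features] = fan_in_layer.weight
        new_in.bias[:fan_in_layer.out_features] = fan_in_layer.bias
        # Incoming weights of the new neurons: small random values here;
        # the paper instead solves for optimal directions at the detected
        # expressivity bottleneck.
        new_in.weight[fan_in_layer.out_features:].normal_(std=1e-2)
        new_in.bias[fan_in_layer.out_features:].zero_()

        # Grow the consuming layer (more input columns). Zero outgoing
        # weights keep the network's output exactly unchanged at insertion.
        new_out = nn.Linear(fan_out_layer.in_features + n_new,
                            fan_out_layer.out_features)
        new_out.weight[:, :fan_out_layer.in_features] = fan_out_layer.weight
        new_out.weight[:, fan_out_layer.in_features:].zero_()
        new_out.bias.copy_(fan_out_layer.bias)
    return new_in, new_out

# Usage: grow a 2-neuron hidden layer by 3 neurons mid-training.
l1, l2 = nn.Linear(4, 2), nn.Linear(2, 1)
l1, l2 = widen_hidden_layer(l1, l2, n_new=3)
x = torch.randn(8, 4)
print(l2(torch.relu(l1(x))).shape)  # torch.Size([8, 1])

Because the new neurons' outgoing weights start at zero, the loss is unchanged at the moment of growth; subsequent gradient steps (with optimizer state rebuilt for the new parameters) determine how the added capacity is used.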
Main file: Hal_version_Manon_Expressivity_Bottlenecks.pdf (1.42 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04591472, version 1 (29-05-2024)

Identifiers

  • HAL Id: hal-04591472, version 1

Cite

Manon Verbockhaven, Sylvain Chevallier, Guillaume Charpiat. Growing Tiny Networks: Spotting Expressivity Bottlenecks and Fixing Them Optimally. 2024. ⟨hal-04591472⟩