On The Concurrence of Layer-wise Preconditioning Methods and Provable Feature Learning
| Title | On The Concurrence of Layer-wise Preconditioning Methods and Provable Feature Learning |
|---|---|
| Authors | Zhang, Thomas T.; Moniri, Behrad; Nagwekar, Ansh; Rahman, Faraz; Xue, Anton; Hassani, Hamed; Matni, Nikolai |
| Publication Year | 2025 |
| Collection | Computer Science; Mathematics; Statistics |
| Subject Terms | Computer Science - Machine Learning; Mathematics - Optimization and Control; Statistics - Machine Learning |
| Abstract | Layer-wise preconditioning methods are a family of memory-efficient optimization algorithms that introduce preconditioners per axis of each layer's weight tensors. These methods have seen a recent resurgence, demonstrating impressive performance relative to entry-wise ("diagonal") preconditioning methods such as Adam(W) on a wide range of neural network optimization tasks. Complementary to their practical performance, we demonstrate that layer-wise preconditioning methods are provably necessary from a statistical perspective. To showcase this, we consider two prototypical models, linear representation learning and single-index learning, which are widely used to study how typical algorithms efficiently learn useful features to enable generalization. In these problems, we show SGD is a suboptimal feature learner when extending beyond ideal isotropic inputs $\mathbf{x} \sim \mathsf{N}(\mathbf{0}, \mathbf{I})$ and well-conditioned settings typically assumed in prior work. We demonstrate theoretically and numerically that this suboptimality is fundamental, and that layer-wise preconditioning emerges naturally as the solution. We further show that standard tools like Adam preconditioning and batch-norm only mildly mitigate these issues, supporting the unique benefits of layer-wise preconditioning. |
| Document Type | Working Paper |
| Access URL | http://arxiv.org/abs/2502.01763 |
| Accession Number | edsarx.2502.01763 |
| Database | arXiv |
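The abstract describes layer-wise preconditioning as applying a preconditioner per axis of each layer's weight tensor, in contrast to the entry-wise ("diagonal") scaling used by Adam(W). The sketch below illustrates that distinction on a single linear layer with anisotropic Gaussian inputs. It is a minimal illustration only: the one-sided input-whitening preconditioner, the Adam-style baseline, and all problem dimensions and step sizes are assumptions for this example, not the exact algorithm or setting analyzed in the paper.

```python
# Minimal sketch: entry-wise (Adam-style) scaling vs. a layer-wise
# preconditioned update for one linear layer with anisotropic inputs.
# The layer-wise preconditioner here whitens the input axis of the
# gradient; the paper's analyzed method may differ (assumption).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n = 20, 5, 512

# Anisotropic inputs x ~ N(0, Sigma) with an ill-conditioned Sigma.
evals = np.logspace(0, -3, d_in)
Q, _ = np.linalg.qr(rng.standard_normal((d_in, d_in)))
Sigma = Q @ np.diag(evals) @ Q.T
X = rng.multivariate_normal(np.zeros(d_in), Sigma, size=n)

W_star = rng.standard_normal((d_out, d_in))
Y = X @ W_star.T

def grad(W):
    # Gradient of 0.5/n * ||X W^T - Y||_F^2 with respect to W.
    return (X @ W.T - Y).T @ X / n

def entrywise_step(W, m, v, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    # Adam-style: one scalar preconditioner per weight entry.
    g = grad(W)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    return W - lr * m / (np.sqrt(v) + eps), m, v

def layerwise_step(W, lr=0.5, eps=1e-6):
    # Layer-wise: one preconditioner per axis of the weight matrix.
    # Right-precondition by the inverse empirical input covariance
    # (whitening the input axis); the output axis is left untouched.
    g = grad(W)
    R = X.T @ X / n + eps * np.eye(d_in)
    return W - lr * g @ np.linalg.inv(R)

W_adam = np.zeros((d_out, d_in))
m = np.zeros_like(W_adam)
v = np.zeros_like(W_adam)
W_layer = np.zeros((d_out, d_in))
for _ in range(200):
    W_adam, m, v = entrywise_step(W_adam, m, v)
    W_layer = layerwise_step(W_layer)

print("entry-wise (Adam-style) error:", np.linalg.norm(W_adam - W_star))
print("layer-wise preconditioned error:", np.linalg.norm(W_layer - W_star))
```

For this quadratic objective the whitened update is essentially a (damped) Newton step, so the layer-wise variant converges quickly regardless of the input covariance's conditioning, while the entry-wise scaling still depends on it. This is meant only as an informal reading of the abstract's claim that SGD-style updates are suboptimal feature learners under anisotropic inputs and that layer-wise preconditioning addresses this.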