Bibliographic Details
Title: $\mu$LO: Compute-Efficient Meta-Generalization of Learned Optimizers
Authors: Thérien, Benjamin; Joseph, Charles-Étienne; Knyazev, Boris; Oyallon, Edouard; Rish, Irina; Belilovsky, Eugene
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning
More Details:
Learned optimizers (LOs) can significantly reduce the wall-clock training time of neural networks, substantially reducing training costs. However, they can struggle to optimize unseen tasks (meta-generalize), especially when training networks much larger than those seen during meta-training. To address this, we derive the Maximal Update Parametrization ($\mu$P) for two popular learned optimizer architectures and propose a simple meta-training recipe for $\mu$-parameterized LOs ($\mu$LOs). Our empirical evaluation demonstrates that LOs meta-trained with our recipe substantially improve meta-generalization to wider unseen tasks when compared to LOs trained under the standard parametrization (i.e., as they are trained in existing work). When applying our $\mu$LOs, each trained for less than 250 GPU-hours, to large-width models, we are often able to match or exceed the performance of pre-trained VeLO, the most performant publicly available learned optimizer, which was meta-trained with 4000 TPU-months of compute. We also observe that LOs trained with our $\mu$LO recipe exhibit substantially improved meta-generalization to deeper networks ($5\times$ the meta-training depth) and remarkable generalization to much longer training horizons ($25\times$ the meta-training horizon).
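The abstract's core idea, $\mu$P, makes hyperparameters transfer across model widths by rescaling per-layer update sizes. A minimal sketch of the width-scaling rule for Adam-style hidden-layer updates follows; the base width, base rate, and function name are illustrative assumptions, not values or APIs from the paper.

```python
def mup_adam_lr(base_lr: float, base_width: int, width: int) -> float:
    """Illustrative muP rule: under the Maximal Update Parametrization,
    Adam's hidden-layer learning rate scales as 1/width, so a rate tuned
    at base_width transfers to wider models without re-tuning.

    base_lr and base_width are hypothetical tuning-time choices.
    """
    return base_lr * base_width / width


# Example: a rate tuned at width 256 shrinks proportionally at larger widths.
base_lr, base_width = 1e-3, 256
for width in (256, 1024, 4096):
    print(width, mup_adam_lr(base_lr, base_width, width))
```

The paper applies this kind of width-aware rescaling to the outputs of learned optimizer architectures rather than to a hand-designed optimizer, but the scaling principle sketched here is the same.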
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.00153
Accession Number: edsarx.2406.00153
Database: arXiv