LOSSGRAD: automatic learning rate in gradient descent

Bibliographic Details
Title: LOSSGRAD: automatic learning rate in gradient descent
Authors: Wójcik, Bartosz; Maziarka, Łukasz; Tabor, Jacek
Source: Schedae Informaticae, 2018, Volume 27
Publication Year: 2019
Collection: Computer Science; Mathematics; Statistics
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Mathematics - Optimization and Control, Statistics - Machine Learning
More Details: In this paper, we propose a simple, fast and easy-to-implement algorithm LOSSGRAD (locally optimal step-size in gradient descent), which automatically modifies the step-size in gradient descent during neural network training. Given a function $f$, a point $x$, and the gradient $\nabla_x f$ of $f$, we aim to find the step-size $h$ which is (locally) optimal, i.e. satisfies: $$ h = \arg\min_{t \geq 0} f(x - t \nabla_x f). $$ Making use of a quadratic approximation, we show that the algorithm satisfies the above condition. We experimentally show that our method is insensitive to the choice of the initial learning rate while achieving results comparable to other methods.
Comment: TFML 2019
Document Type: Working Paper
DOI: 10.4467/20838476SI.18.004.10409
Access URL: http://arxiv.org/abs/1902.07656
Accession Number: edsarx.1902.07656
Database: arXiv
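Illustrative sketch: the abstract above only states the goal of LOSSGRAD (pick the step-size that locally minimizes $f$ along the negative gradient, using a quadratic approximation). The Python sketch below is one way to illustrate that idea and is not the paper's exact update rule; the function name quadratic_step_size, the single probe evaluation at the previous step-size, and the step-doubling fallback are assumptions made for illustration.

import numpy as np

def quadratic_step_size(f, x, grad, h_prev):
    """Illustrative quadratic-approximation step-size (not the exact LOSSGRAD rule).

    Along the ray x - t*grad, approximate phi(t) = f(x - t*grad) by a parabola
    fitted to phi(0), phi'(0) = -||grad||^2, and one probe value phi(h_prev),
    then return the parabola's minimizer restricted to t >= 0.
    """
    g_norm_sq = float(np.dot(grad, grad))
    phi0 = f(x)
    phi_h = f(x - h_prev * grad)
    # Fit phi(t) ~ phi0 - g_norm_sq * t + a * t^2 through the probe point.
    a = (phi_h - phi0 + h_prev * g_norm_sq) / (h_prev ** 2)
    if a <= 0:                       # no positive curvature detected; grow the step (assumed fallback)
        return 2.0 * h_prev
    h_opt = g_norm_sq / (2.0 * a)    # minimizer of the fitted parabola
    return max(h_opt, 0.0)

# Toy usage: gradient descent on a quadratic bowl with automatically chosen steps.
if __name__ == "__main__":
    A = np.diag([1.0, 10.0])
    f = lambda x: 0.5 * x @ A @ x
    x, h = np.array([3.0, -2.0]), 1e-3
    for _ in range(20):
        g = A @ x                    # gradient of the toy objective
        h = quadratic_step_size(f, x, g, h)
        x = x - h * g
    print(x, f(x))                   # x approaches the minimizer at the origin

On an exactly quadratic objective the fitted parabola is exact, so the returned step equals the true line-search minimizer; for neural network losses it is only a local approximation, which is why an adaptive scheme such as the one described in the paper re-estimates the step at every iteration.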