nGPT: Normalized Transformer with Representation Learning on the Hypersphere

Bibliographic Details
Title: nGPT: Normalized Transformer with Representation Learning on the Hypersphere
Authors: Loshchilov, Ilya; Hsieh, Cheng-Ping; Sun, Simeng; Ginsburg, Boris
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence
More Details: We propose a novel neural network architecture, the normalized Transformer (nGPT), with representation learning on the hypersphere. In nGPT, all vectors forming the embeddings, MLP, attention matrices, and hidden states are normalized to unit norm. The input stream of tokens travels on the surface of a hypersphere, with each layer contributing a displacement towards the target output predictions. These displacements are defined by the MLP and attention blocks, whose vector components also reside on the same hypersphere. Experiments show that nGPT learns much faster, reducing the number of training steps required to achieve the same accuracy by a factor of 4 to 20, depending on the sequence length.
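The abstract's description of hidden states travelling on the hypersphere, with each block contributing a bounded displacement, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function names (`unit_norm`, `ngpt_layer_update`), the step size `alpha`, and the use of random stand-ins for the attention and MLP outputs are hypothetical and are not the authors' implementation.

```python
import numpy as np

def unit_norm(x, axis=-1, eps=1e-8):
    """Project vectors onto the unit hypersphere along the last axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def ngpt_layer_update(h, block_output, alpha):
    """Sketch of a normalized residual step: instead of h + block(h),
    move h toward the normalized block output by a step size alpha,
    then re-normalize so the hidden state stays on the sphere."""
    h = unit_norm(h)
    target = unit_norm(block_output)
    return unit_norm(h + alpha * (target - h))

# Toy usage: one token's hidden state of dimension 8.
rng = np.random.default_rng(0)
h = unit_norm(rng.normal(size=(1, 8)))
attn_out = rng.normal(size=(1, 8))   # stand-in for an attention block output
h = ngpt_layer_update(h, attn_out, alpha=0.1)
mlp_out = rng.normal(size=(1, 8))    # stand-in for an MLP block output
h = ngpt_layer_update(h, mlp_out, alpha=0.1)
print(np.linalg.norm(h, axis=-1))    # ~1.0: the state remains on the hypersphere
```

The point of the sketch is only the geometric picture from the abstract: each block nudges the hidden state along the sphere rather than adding an unbounded residual, and normalization after every step keeps the representation at unit norm.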
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2410.01131
Accession Number: edsarx.2410.01131
Database: arXiv