Transformers Use Causal World Models in Maze-Solving Tasks

Bibliographic Details
Title: Transformers Use Causal World Models in Maze-Solving Tasks
Authors: Spies, Alex F., Edwards, William, Ivanitskiy, Michael I., Skapars, Adrians, Räuker, Tilman, Inoue, Katsumi, Russo, Alessandra, Shanahan, Murray
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, I.2
More Details: Recent studies in interpretability have explored the inner workings of transformer models trained on tasks across various domains, often discovering that these networks naturally develop highly structured representations. When such representations comprehensively reflect the task domain's structure, they are commonly referred to as "World Models" (WMs). In this work, we identify WMs in transformers trained on maze-solving tasks. Using Sparse Autoencoders (SAEs) and analyzing attention patterns, we examine the construction of WMs and demonstrate consistency between SAE feature-based and circuit-based analyses. By subsequently intervening on isolated features to confirm their causal role, we find that it is easier to activate features than to suppress them. Furthermore, we find that models can reason about mazes involving more simultaneously active features than they encountered during training; however, when these same mazes (with greater numbers of connections) are instead provided to the models via input tokens, the models fail. Finally, we observe that the choice of positional encoding scheme influences how World Models are structured within the model's residual stream.
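The sketch below is not the authors' implementation; it is a minimal, illustrative example of the two techniques named in the abstract: a sparse autoencoder trained on residual-stream activations, and a feature-clamping intervention that activates (or suppresses) an isolated SAE feature and writes the decoded activation back. All class names, dimensions, and hyperparameters are assumptions for illustration only.

```python
# Illustrative sketch (assumed names/shapes, not the paper's code): an SAE over
# transformer residual-stream activations plus a feature-clamping intervention.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 128, d_features: int = 1024):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def encode(self, resid: torch.Tensor) -> torch.Tensor:
        # ReLU keeps feature activations non-negative; sparsity comes from the
        # L1 penalty applied during training.
        return torch.relu(self.encoder(resid))

    def decode(self, feats: torch.Tensor) -> torch.Tensor:
        return self.decoder(feats)

    def forward(self, resid: torch.Tensor):
        feats = self.encode(resid)
        recon = self.decode(feats)
        # Reconstruction loss + L1 sparsity penalty (coefficient is an assumption).
        loss = (recon - resid).pow(2).mean() + 1e-3 * feats.abs().mean()
        return recon, feats, loss

def clamp_feature(sae: SparseAutoencoder, resid: torch.Tensor,
                  feature_idx: int, value: float) -> torch.Tensor:
    """Set a single SAE feature to a chosen value and map back to the residual stream.

    This mirrors the kind of causal intervention the abstract describes: the
    decoded activation would replace the original residual-stream activation
    in the model's forward pass.
    """
    feats = sae.encode(resid)
    feats[..., feature_idx] = value   # large positive value to "activate", 0 to suppress
    return sae.decode(feats)

# Usage sketch with illustrative shapes: (batch, sequence, d_model).
if __name__ == "__main__":
    sae = SparseAutoencoder()
    resid = torch.randn(1, 16, 128)
    patched = clamp_feature(sae, resid, feature_idx=42, value=5.0)
    print(patched.shape)  # torch.Size([1, 16, 128])
```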
Comment: Main paper: 9 pages, 9 figures. Supplementary material: 10 pages, 17 additional figures. Code and data will be available upon publication. Corresponding author: A. F. Spies (afspies@imperial.ac.uk)
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2412.11867
Accession Number: edsarx.2412.11867
Database: arXiv