Sparse, Geometric Autoencoder Models of V1
| Title: | Sparse, Geometric Autoencoder Models of V1 |
|---|---|
| Authors: | Huml, Jonathan; Tasissa, Abiy; Ba, Demba |
| Publication Year: | 2023 |
| Collection: | Computer Science |
| Subject Terms: | Computer Science - Artificial Intelligence; Computer Science - Machine Learning |
| More Details: | The classical sparse coding model represents visual stimuli as a linear combination of a handful of learned basis functions that are Gabor-like when trained on natural image data. However, the Gabor-like filters learned by classical sparse coding far overpredict well-tuned simple cell receptive field (SCRF) profiles. A number of subsequent models have either discarded the sparse dictionary learning framework entirely or have yet to take advantage of the surge in unrolled, neural dictionary learning architectures. A key missing theme of these updates is a stronger notion of *structured sparsity*. We propose an autoencoder architecture whose latent representations are implicitly, locally organized for spectral clustering, which begets artificial neurons better matched to observed primate data. The weighted-$\ell_1$ (WL) constraint in the autoencoder objective function maintains core ideas of the sparse coding framework, yet also offers a promising path to describing the differentiation of receptive fields in terms of a discriminative hierarchy in future work (an illustrative sketch of the weighted-$\ell_1$ objective follows this record). Comment: Symmetry and Geometry in Neural Representations (NeurIPS) 2022 |
| Document Type: | Working Paper |
| Access URL: | http://arxiv.org/abs/2302.11162 |
| Accession Number: | edsarx.2302.11162 |
| Database: | arXiv |
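
The abstract above describes a sparse coding objective of the general form $\min_{D,a} \|x - Da\|_2^2 + \sum_i w_i |a_i|$, solved by an unrolled autoencoder. The sketch below is a minimal, hypothetical illustration of that general recipe, not the authors' implementation: the class name `UnrolledSparseAutoencoder`, the ISTA-style unrolling, the learnable per-neuron weights `w`, and all dimensions and hyperparameters are assumptions made for this example.

```python
# Minimal sketch (assumption, not the paper's code): a weighted-l1 sparse
# coding objective min ||x - D a||^2 + sum_i w_i |a_i| solved by an unrolled
# ISTA-style encoder with a linear (dictionary) decoder.
import torch
import torch.nn as nn

class UnrolledSparseAutoencoder(nn.Module):
    def __init__(self, n_pixels=256, n_neurons=512, n_iters=10, step=0.1):
        super().__init__()
        # Dictionary D: columns play the role of learned receptive fields.
        self.D = nn.Parameter(torch.randn(n_pixels, n_neurons) * 0.01)
        # Per-neuron weights for the weighted-l1 penalty sum_i w_i |a_i|
        # (learnable here; an illustrative choice, not from the paper).
        self.w = nn.Parameter(torch.ones(n_neurons))
        self.n_iters = n_iters
        self.step = step

    def forward(self, x):
        # Unrolled ISTA: a <- soft(a + eta * D^T (x - D a), eta * w),
        # i.e., soft-thresholding with a per-neuron threshold eta * w_i.
        a = torch.zeros(x.shape[0], self.D.shape[1], device=x.device)
        for _ in range(self.n_iters):
            residual = x - a @ self.D.T
            a = a + self.step * (residual @ self.D)
            thresh = self.step * self.w.abs()
            a = torch.sign(a) * torch.clamp(a.abs() - thresh, min=0.0)
        return a @ self.D.T, a  # reconstruction, sparse codes

# Usage: whitened image patches in, reconstruction and codes out; the
# dictionary and weights are trained end to end through the unrolled loop.
x = torch.randn(32, 256)
model = UnrolledSparseAutoencoder()
x_hat, a = model(x)
loss = ((x - x_hat) ** 2).sum(1).mean() + (model.w.abs() * a.abs()).sum(1).mean()
loss.backward()
```

When all $w_i$ are equal, the penalty reduces to the plain $\ell_1$ norm of classical sparse coding; unequal weights are one simple way to impose structured sparsity across the latent units.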