Neural Embedding Allocation: Distributed Representations of Topic Models.

Bibliographic Details
Title: Neural Embedding Allocation: Distributed Representations of Topic Models.
Authors: Kamrun Naher Keya (kkeya1@umbc.edu), Yannis Papanikolaou (yannis.papanikolaou@healx.io), James R. Foulds (jfoulds@umbc.edu)
Source: Computational Linguistics. Dec2022, Vol. 48 Issue 4, p1021-1052. 32p.
Subject Terms: *VOCABULARY
Company/Entity: National Endowment for the Arts
Abstract: We propose a method that uses neural embeddings to improve the performance of any given LDA-style topic model. Our method, called neural embedding allocation (NEA), deconstructs topic models (LDA or otherwise) into interpretable vector-space embeddings of words, topics, documents, authors, and so on, by learning neural embeddings to mimic the topic model. We demonstrate that NEA improves the coherence scores of the original topic model by smoothing out noisy topics when the number of topics is large. Furthermore, we show NEA's effectiveness and generality in deconstructing and smoothing LDA, author-topic models, and the recent mixed membership skip-gram topic model, achieving better performance with the resulting embeddings than several state-of-the-art models. [ABSTRACT FROM AUTHOR]
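The abstract's core idea — learning embeddings that "mimic" a fitted topic model — can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the paper's actual NEA algorithm: it assumes we already have an LDA-style topic-word distribution matrix `phi` (topics × vocabulary) and fits topic and word embeddings by gradient descent so that a per-topic softmax over topic-word dot products reconstructs `phi`. All function and variable names (`nea_sketch`, `phi`, `T`, `W`) are illustrative.

```python
import numpy as np

def nea_sketch(phi, dim=4, lr=0.1, epochs=5000, seed=0):
    """Toy mimicry objective: fit topic embeddings T (K x dim) and word
    embeddings W (V x dim) so that softmax(T @ W.T) approximates phi.
    Minimizes cross-entropy between each topic's word distribution in
    phi and the softmax reconstruction, via full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    K, V = phi.shape
    T = rng.normal(scale=0.1, size=(K, dim))  # topic embeddings
    W = rng.normal(scale=0.1, size=(V, dim))  # word embeddings
    for _ in range(epochs):
        logits = T @ W.T                              # (K, V)
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)             # softmax per topic
        grad = p - phi                                # d(cross-entropy)/d(logits)
        T -= lr * (grad @ W)
        W -= lr * (grad.T @ T)
    return T, W

# Toy "topic model": 3 topics over a 6-word vocabulary.
phi = np.array([
    [0.40, 0.40, 0.05, 0.05, 0.05, 0.05],
    [0.05, 0.05, 0.40, 0.40, 0.05, 0.05],
    [0.05, 0.05, 0.05, 0.05, 0.40, 0.40],
])
T, W = nea_sketch(phi)

# Reconstruct the topic-word distributions from the learned embeddings.
logits = T @ W.T
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
```

After training, each topic's top-ranked words under the reconstruction `p` match those of `phi`, which is the sense in which the embeddings "mimic" the topic model; the paper's actual method and its smoothing behavior are described in the full text.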
Copyright of Computational Linguistics is the property of MIT Press and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Academic Search Complete
More Details
ISSN: 0891-2017
DOI: 10.1162/coli_a_00457
Published in: Computational Linguistics
Language: English