Revisiting Neural Retrieval on Accelerators

Bibliographic Details
Title: Revisiting Neural Retrieval on Accelerators
Authors: Zhai, Jiaqi; Gong, Zhaojie; Wang, Yueming; Sun, Xiao; Yan, Zheng; Li, Fu; Liu, Xing
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Information Retrieval
More Details: Retrieval finds a small number of relevant candidates from a large corpus for information retrieval and recommendation applications. A key component of retrieval is modeling (user, item) similarity, which is commonly represented as the dot product of two learned embeddings. This formulation permits efficient inference, commonly known as Maximum Inner Product Search (MIPS). Despite its popularity, the dot product cannot capture complex user-item interactions, which are multifaceted and likely high rank. We therefore examine non-dot-product retrieval settings on accelerators and propose mixture of logits (MoL), which models (user, item) similarity as an adaptive composition of elementary similarity functions. This new formulation is expressive, capable of modeling high-rank (user, item) interactions, and further generalizes to the long tail. When combined with a hierarchical retrieval strategy, h-indexer, we are able to scale MoL up to a 100M-item corpus on a single GPU with latency comparable to MIPS baselines. On public datasets, our approach leads to uplifts of up to 77.3% in hit rate (HR). Experiments on a large recommendation surface at Meta showed strong metric gains and reduced popularity bias, validating the proposed approach's performance and improved generalization.
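For readers skimming the record, the abstract's core formulation can be read as sim(u, i) = sum_p pi_p(u, i) * <f_p(u), g_p(i)>, i.e., a gated mixture of elementary dot-product similarities. Below is a minimal PyTorch-style sketch of that idea, assuming learned projections for each component and a softmax gating head; the class name, layer choices, and dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfLogits(nn.Module):
    """Sketch of MoL: (user, item) similarity as an adaptive, gated
    combination of P elementary dot-product similarities.
    Hypothetical implementation, not the authors' released code."""

    def __init__(self, user_dim: int, item_dim: int,
                 embed_dim: int, num_components: int):
        super().__init__()
        self.num_components = num_components
        self.embed_dim = embed_dim
        # P user-side and P item-side projections -> P elementary embeddings.
        self.user_proj = nn.Linear(user_dim, num_components * embed_dim)
        self.item_proj = nn.Linear(item_dim, num_components * embed_dim)
        # Gating head producing per-component mixture weights pi_p(u, i).
        self.gate = nn.Linear(user_dim + item_dim, num_components)

    def forward(self, user: torch.Tensor, item: torch.Tensor) -> torch.Tensor:
        B = user.shape[0]
        # (B, P, d) elementary embeddings for each side.
        u = self.user_proj(user).view(B, self.num_components, self.embed_dim)
        v = self.item_proj(item).view(B, self.num_components, self.embed_dim)
        # Elementary logits: P dot products per (user, item) pair -> (B, P).
        logits = (u * v).sum(dim=-1)
        # Adaptive gates, normalized with softmax over components -> (B, P).
        pi = F.softmax(self.gate(torch.cat([user, item], dim=-1)), dim=-1)
        # MoL similarity: sum_p pi_p(u, i) * <f_p(u), g_p(i)> -> (B,).
        return (pi * logits).sum(dim=-1)

# Usage sketch: score a batch of 4 (user, item) pairs.
mol = MixtureOfLogits(user_dim=64, item_dim=64, embed_dim=32, num_components=8)
scores = mol(torch.randn(4, 64), torch.randn(4, 64))  # shape: (4,)
```

Because the gating weights depend on both the user and the item, the resulting similarity is no longer a single inner product, which is why the paper pairs MoL with a hierarchical retrieval strategy (h-indexer) rather than standard MIPS indexes.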
Comment: To appear in the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2023)
Document Type: Working Paper
DOI: 10.1145/3580305.3599897
Access URL: http://arxiv.org/abs/2306.04039
Accession Number: edsarx.2306.04039
Database: arXiv