QUOTA: Quantifying Objects with Text-to-Image Models for Any Domain

Bibliographic Details
Title: QUOTA: Quantifying Objects with Text-to-Image Models for Any Domain
Authors: Sun, Wenfang, Du, Yingjun, Liu, Gaowen, Snoek, Cees G. M.
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
More Details: We tackle the problem of quantifying the number of objects generated by a text-to-image model. Rather than retraining such a model for each new image domain of interest, which leads to high computational costs and limited scalability, we are the first to consider this problem from a domain-agnostic perspective. We propose QUOTA, an optimization framework for text-to-image models that enables effective object quantification across unseen domains without retraining. It leverages a dual-loop meta-learning strategy to optimize a domain-invariant prompt. Further, by integrating prompt learning with learnable counting and domain tokens, our method captures stylistic variations and maintains accuracy even for object classes not encountered during training. For evaluation, we adopt a new benchmark specifically designed for object quantification in domain generalization, enabling rigorous assessment of counting accuracy and adaptability across unseen domains in text-to-image generation. Extensive experiments demonstrate that QUOTA outperforms conventional models in both object quantification accuracy and semantic consistency, setting a new benchmark for efficient and scalable text-to-image generation for any domain.
Comment: 12 pages, 6 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2411.19534
Accession Number: edsarx.2411.19534
Database: arXiv
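
Illustration: The dual-loop meta-learning described in the abstract can be pictured with a minimal PyTorch sketch. This is an assumption-laden illustration, not the paper's released code: the MAML-style inner/outer split, the token shapes, and the toy quantification_loss (a stand-in for scoring object counts in generated images) are hypothetical placeholders chosen only to show the structure of optimizing learnable counting and domain tokens across domains.

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    embed_dim, n_tokens = 16, 4

    # Meta-parameters: learnable counting and domain tokens prepended to the
    # frozen text encoder's output (shapes are illustrative assumptions).
    count_tokens = torch.randn(n_tokens, embed_dim, requires_grad=True)
    domain_tokens = torch.randn(n_tokens, embed_dim, requires_grad=True)
    outer_opt = torch.optim.Adam([count_tokens, domain_tokens], lr=1e-3)

    def quantification_loss(c_tok, d_tok, text_embed, target_count):
        # Stand-in objective: the real method would generate images from the
        # prompt and penalize deviation from the requested object count.
        prompt = torch.cat([c_tok, d_tok, text_embed], dim=0)
        pred = prompt.sum()  # toy scalar "count" prediction
        return F.mse_loss(pred, target_count)

    inner_lr = 1e-2
    for step in range(100):
        # Toy source and held-out domain batches (random stand-ins for
        # text embeddings and target counts).
        src_text, src_count = torch.randn(8, embed_dim), torch.tensor(3.0)
        tgt_text, tgt_count = torch.randn(8, embed_dim), torch.tensor(5.0)

        # Inner loop: adapt the tokens with one gradient step on the
        # source domain; create_graph=True keeps the step differentiable.
        inner = quantification_loss(count_tokens, domain_tokens, src_text, src_count)
        g_c, g_d = torch.autograd.grad(
            inner, [count_tokens, domain_tokens], create_graph=True)
        c_adapt = count_tokens - inner_lr * g_c
        d_adapt = domain_tokens - inner_lr * g_d

        # Outer loop: meta-update the original tokens so that the adapted
        # tokens also quantify objects correctly on the held-out domain,
        # pushing the prompt toward domain invariance.
        outer = quantification_loss(c_adapt, d_adapt, tgt_text, tgt_count)
        outer_opt.zero_grad()
        outer.backward()
        outer_opt.step()

The create_graph=True flag makes the outer gradient flow back through the inner update (second-order MAML style); in practice the loss would come from a differentiable counting signal on images generated by the frozen text-to-image model rather than the toy regression shown here.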