Unveiling Provider Bias in Large Language Models for Code Generation

Bibliographic Details
Title: Unveiling Provider Bias in Large Language Models for Code Generation
Authors: Zhang, Xiaoyu, Zhai, Juan, Ma, Shiqing, Bao, Qingshuang, Jiang, Weipeng, Shen, Chao, Liu, Yang
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Software Engineering, Computer Science - Artificial Intelligence, Computer Science - Cryptography and Security
More Details: Large Language Models (LLMs) have emerged as the new recommendation engines, outperforming traditional methods in both capability and scope, particularly in code generation applications. Our research reveals a novel provider bias in LLMs: without explicit input prompts, these models show systematic preferences for services from specific providers in their recommendations (e.g., favoring Google Cloud over Microsoft Azure). This bias has significant implications for market dynamics and societal equilibrium, potentially promoting digital monopolies. It may also deceive users and violate their expectations, leading to various consequences. This paper presents the first comprehensive empirical study of provider bias in LLM code generation. We develop a systematic methodology encompassing an automated pipeline for dataset generation that incorporates 6 distinct coding task categories and 30 real-world application scenarios. Our analysis covers over 600,000 LLM-generated responses across seven state-of-the-art models, utilizing approximately 500 million tokens (equivalent to $5,000+ in computational costs). The study evaluates both the generated code snippets and their embedded service provider selections to quantify provider bias. Additionally, we conduct a comparative analysis of seven debiasing prompting techniques to assess their efficacy in mitigating these biases. Our findings demonstrate that LLMs exhibit significant provider preferences, predominantly favoring services from Google and Amazon, and can autonomously modify input code to incorporate their preferred providers without users' requests. Notably, we observe discrepancies between the providers recommended in conversational contexts and those implemented in generated code. The complete dataset and analysis results are available in our repository.
Comment: 21 pages, 15 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2501.07849
Accession Number: edsarx.2501.07849
Database: arXiv