Combinatorial Optimization for All: Using LLMs to Aid Non-Experts in Improving Optimization Algorithms

Bibliographic Details
Title: Combinatorial Optimization for All: Using LLMs to Aid Non-Experts in Improving Optimization Algorithms
Authors: Sartori, Camilo Chacón; Blum, Christian
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning, Computer Science - Software Engineering
More Details: Large Language Models (LLMs) have shown notable potential in code generation for optimization algorithms, unlocking exciting new opportunities. This paper examines how LLMs, rather than creating algorithms from scratch, can improve existing ones without the need for specialized expertise. To explore this potential, we selected 10 baseline optimization algorithms from various domains (metaheuristics, reinforcement learning, deterministic, and exact methods) to solve the classic Travelling Salesman Problem. The results show that our simple methodology often produces LLM-generated algorithm variants that improve on the baseline algorithms in terms of solution quality, computational time, and code complexity, all without requiring specialized optimization knowledge or advanced algorithmic implementation skills. (An illustrative code sketch of this idea follows the record below.)
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2503.10968
Accession Number: edsarx.2503.10968
Database: arXiv
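
The abstract does not describe the prompting pipeline in detail, so the following is only a minimal sketch of the general idea it summarizes: embed the source code of an existing baseline TSP solver in a natural-language request and ask an LLM for an improved drop-in replacement. The names `nearest_neighbour_tsp`, `build_improvement_prompt`, and `query_llm`, as well as the prompt wording, are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch only -- NOT the paper's actual pipeline or prompts.
# Idea: feed the source of an existing baseline TSP solver to an LLM and ask
# for an improved drop-in replacement, then validate the returned candidate.
import inspect
import math


def nearest_neighbour_tsp(coords):
    """Baseline heuristic: start at city 0 and always visit the closest unvisited city."""
    unvisited = set(range(1, len(coords)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(coords[last], coords[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour


PROMPT_TEMPLATE = (
    "Below is a baseline Python solver for the Travelling Salesman Problem.\n"
    "Return a drop-in replacement with the same function signature that\n"
    "improves solution quality and/or runtime, using only the standard library.\n\n"
    "{source}"
)


def build_improvement_prompt(func):
    """Embed the baseline's source code in a natural-language improvement request."""
    return PROMPT_TEMPLATE.format(source=inspect.getsource(func))


def query_llm(prompt):
    """Hypothetical placeholder: substitute any LLM chat API of your choice."""
    raise NotImplementedError("plug in an LLM client here")


if __name__ == "__main__":
    prompt = build_improvement_prompt(nearest_neighbour_tsp)
    print(prompt)
    # The code returned by query_llm(prompt) would then be executed in a
    # sandbox and compared against the baseline on benchmark TSP instances.
```

In practice, any candidate returned by the LLM would need to be validated (compiled, sandboxed, and benchmarked against the baseline) before being accepted, which is the kind of comparison the abstract reports across solution quality, runtime, and code complexity.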