Learning to Learn to Predict Performance Regressions in Production at Meta

Bibliographic Details
Title: Learning to Learn to Predict Performance Regressions in Production at Meta
Authors: Beller, Moritz; Li, Hongyu; Nair, Vivek; Murali, Vijayaraghavan; Ahmad, Imad; Cito, Jürgen; Carlson, Drew; Aye, Ari; Dyer, Wes
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Software Engineering, Computer Science - Machine Learning
Abstract: Catching and attributing code change-induced performance regressions in production is hard; predicting them beforehand, even harder. A primer on automatically learning to predict performance regressions in software, this article gives an account of the experiences we gained when researching and deploying an ML-based regression prediction pipeline at Meta. In this paper, we report on a comparative study with four ML models of increasing complexity, from (1) code-opaque, through (2) Bag of Words and (3) off-the-shelf Transformer-based, to (4) a bespoke Transformer-based model, coined SuperPerforator. Our investigation shows the inherent difficulty of the performance prediction problem, which is characterized by a large imbalance of benign over regressing changes. Our results also call into question the general applicability of Transformer-based architectures for performance prediction: an off-the-shelf CodeBERT-based approach had surprisingly poor performance; our highly customized SuperPerforator architecture initially achieved prediction performance that was merely on par with the simpler Bag of Words models, and outperformed them only for downstream use cases. This ability of SuperPerforator to transfer to an application with few learning examples afforded an opportunity to deploy it in practice at Meta: it can act as a pre-filter to sort out changes that are unlikely to introduce a regression, truncating the space of changes in which to search for a regression by up to 43%, a 45x improvement over a random baseline. To gain further insight into SuperPerforator, we explored it via a series of experiments computing counterfactual explanations. These highlight which parts of a code change the model deems important, thereby validating the learned black-box model.
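The Bag of Words baselines and the pre-filter deployment described in the abstract can be illustrated with a short sketch. The snippet below is a hypothetical minimal version, not the paper's implementation: it tokenizes code diffs with a plain CountVectorizer, counters the benign-heavy label imbalance with class weighting, and keeps only changes scoring above a threshold for the regression search. The toy diffs, labels, and the 0.3 threshold are all invented for illustration.

```python
# Hypothetical sketch of a Bag of Words regression predictor used as a
# pre-filter. All identifiers and the toy data are illustrative; the paper
# does not publish its feature set or thresholds.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy diffs: in practice these would be tokenized code changes at Meta scale.
diffs = [
    "for item in cache refresh_all()",   # regressing
    "fix typo in log message",           # benign
    "add quadratic loop over all shards",# regressing
    "update copyright header",           # benign
]
labels = [1, 0, 1, 0]  # 1 = introduced a regression (rare in reality)

# class_weight="balanced" is one standard way to handle the label imbalance
# (far more benign than regressing changes) the abstract highlights.
model = make_pipeline(
    CountVectorizer(),  # Bag of Words over diff tokens
    LogisticRegression(class_weight="balanced"),
)
model.fit(diffs, labels)

# Pre-filter: keep only changes above a recall-oriented threshold, shrinking
# the set of changes a regression hunt has to search through.
threshold = 0.3
scores = model.predict_proba(diffs)[:, 1]
candidates = [d for d, s in zip(diffs, scores) if s >= threshold]
print(f"kept {len(candidates)}/{len(diffs)} changes for regression search")
```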
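The counterfactual explanations mentioned at the end of the abstract follow a simple principle: perturb a code change and observe how the model's prediction shifts; tokens whose removal sharply lowers the predicted regression probability are the ones the model relies on. Below is a minimal, assumed token-deletion variant reusing the model pipeline from the previous sketch; the paper computes counterfactuals against its SuperPerforator Transformer, and token_importance is a hypothetical name.

```python
# Minimal token-removal counterfactual probe, reusing `model` and `diffs`
# from the sketch above. An illustration of the idea only, not the paper's
# explanation method.
def token_importance(model, diff: str):
    """Score each token by how much deleting it lowers the predicted
    regression probability: large drops mark tokens the model depends on."""
    base = model.predict_proba([diff])[0, 1]
    tokens = diff.split()
    drops = []
    for i in range(len(tokens)):
        counterfactual = " ".join(tokens[:i] + tokens[i + 1:])
        prob = model.predict_proba([counterfactual])[0, 1]
        drops.append((tokens[i], base - prob))
    return sorted(drops, key=lambda t: t[1], reverse=True)

for token, drop in token_importance(model, diffs[0]):
    print(f"{token:15s} drop = {drop:+.3f}")
```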
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2208.04351
Accession Number: edsarx.2208.04351
Database: arXiv