TL;DR: We propose a method that combines conformal prediction with in-context learning to generate prediction intervals with guaranteed coverage using a single transformer forward pass. Our approach matches oracle-level accuracy, is robust to distribution shifts, and is computationally efficient, enabling scalable uncertainty quantification for regression with transformers.
Authors
Zhe Huang (PSL Research University)
Simone Rossi (EURECOM)
Rui Yuan (Stellantis)
Thomas Hannagan (Stellantis)

Published

March 27, 2025

Publication Poster

Abstract

Transformers have become a standard architecture in machine learning, demonstrating strong in-context learning (ICL) abilities that allow them to learn from the prompt at inference time. However, uncertainty quantification for ICL remains an open challenge, particularly in noisy regression tasks. This paper investigates whether ICL can be leveraged for distribution-free uncertainty estimation, proposing a method based on conformal prediction to construct prediction intervals with guaranteed coverage. While traditional conformal methods are computationally expensive due to repeated model fitting, we exploit ICL to efficiently generate prediction intervals in a single forward pass. Our empirical analysis compares this approach against ridge regression-based conformal methods, showing that conformal prediction with in-context learning (CP with ICL) achieves robust and scalable uncertainty estimates. Additionally, we evaluate its performance under distribution shifts and establish scaling laws to guide model training. These findings bridge ICL and conformal prediction, providing a new, theoretically grounded framework for uncertainty quantification in transformer-based models.
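To make the conformal machinery concrete, the sketch below shows split conformal prediction wrapped around a generic in-context point predictor. This is an illustrative assumption, not the paper's exact construction: `icl_predict` is a hypothetical stand-in (implemented here with ridge regression on the context) for the transformer's single forward pass, and the paper's method may use a different conformal scheme. The interval still carries the usual finite-sample marginal coverage guarantee of at least 1 - alpha when calibration and test points are exchangeable.

```python
import numpy as np


def icl_predict(context_x, context_y, query_x):
    """Hypothetical stand-in for a single in-context forward pass.

    A real implementation would feed the (x, y) context pairs and the query
    into a trained transformer; here ridge regression on the context plays
    that role for illustration only.
    """
    lam = 1.0
    d = context_x.shape[1]
    A = context_x.T @ context_x + lam * np.eye(d)
    w = np.linalg.solve(A, context_x.T @ context_y)
    return query_x @ w


def split_conformal_interval(x_ctx, y_ctx, x_cal, y_cal, x_test, alpha=0.1):
    """Prediction intervals with marginal coverage >= 1 - alpha,
    assuming calibration and test points are exchangeable."""
    # Nonconformity scores on a held-out calibration set.
    cal_pred = icl_predict(x_ctx, y_ctx, x_cal)
    scores = np.abs(y_cal - cal_pred)

    # Conformal quantile with the finite-sample correction.
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")

    # Symmetric interval around the in-context point prediction.
    test_pred = icl_predict(x_ctx, y_ctx, x_test)
    return test_pred - q_hat, test_pred + q_hat


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 5
    w_true = rng.normal(size=d)

    def sample(n):
        x = rng.normal(size=(n, d))
        y = x @ w_true + 0.5 * rng.normal(size=n)
        return x, y

    x_ctx, y_ctx = sample(40)     # prompt / context points
    x_cal, y_cal = sample(40)     # calibration points
    x_te, y_te = sample(200)      # test points

    lo, hi = split_conformal_interval(x_ctx, y_ctx, x_cal, y_cal, x_te, alpha=0.1)
    print("empirical coverage:", np.mean((y_te >= lo) & (y_te <= hi)))
```

In this sketch the point predictor is only queried, never refit, which mirrors the efficiency argument in the abstract: with an in-context model, a new prompt amounts to one forward pass rather than a fresh model fit per candidate.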
