@inproceedings{fang2024learning,
  title={Learning to Defer with an Uncertain Rejector via Conformal Prediction},
  author={Fang, Yizirui and Nalisnick, Eric},
  booktitle={NeurIPS 2024 Workshop on Bayesian Decision-making and Uncertainty},
  year={2024},
  abstract={Learning to defer (L2D) allows prediction tasks to be allocated to a human or machine decision maker, thus getting the best of both their abilities. Yet this allocation decision depends on a 'rejector' function, which could be poorly fit or otherwise mis-specified. In this work, we perform uncertainty quantification for the rejector sub-component of the L2D framework. In particular, we use conformal prediction to allow the rejector to output sets, instead of just the binary outcome of 'defer' or not. On tasks ranging from object to hate speech detection, we demonstrate that the uncertainty in the rejector translates to safer decisions via two forms of selective prediction.}
}