Research Papers · Tags: llm, reasoning, uncertainty, evaluation

This study examines how uncertainty estimation scales with parallel sampling in reasoning language models, finding that combining self-consistency and verbalized confidence yields up to +12 AUROC points with just two samples. The hybrid estimator outperforms either signal alone across mathematics, STEM, and humanities tasks.
How Uncertainty Estimation Scales with Sampling in Reasoning Models

Uncertainty estimation is critical for deploying reasoning language models, yet it remains poorly understood under extended chain-of-thought reasoning. We study parallel sampling as a fully black-box approach using two signals: verbalized confidence and self-consistency. Across three reasoning models and 17 tasks spanning mathematics, STEM, and humanities, we characterize how these signals scale with the sampling budget. Both improve as more samples are drawn, but self-consistency exhibits lower initial discrimination and lags behind verbalized confidence under moderate sampling. Most uncertainty gains, however, arise from combining the signals: with just two samples, a hybrid estimator improves AUROC by up to $+12$ points on average and already outperforms either signal alone, even when those signals are scaled to much larger budgets; beyond that point, returns diminish. These effects are domain-dependent: in mathematics, the native domain of RLVR-style post-training, reasoning models achieve higher uncertainty quality and exhibit both stronger complementarity and faster scaling than in STEM or humanities.
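To make the two signals and their combination concrete, here is a minimal sketch in Python. The abstract does not specify the combination rule, so the convex mixing weight `alpha`, the function names, and the use of the majority answer's correctness as the AUROC label are all illustrative assumptions, not the paper's exact method.

```python
from collections import Counter

import numpy as np
from sklearn.metrics import roc_auc_score


def hybrid_confidence(samples, alpha=0.5):
    """Combine self-consistency and verbalized confidence for one question.

    `samples` is a list of (answer, verbalized_confidence) pairs, one per
    parallel sample; confidences are assumed to lie in [0, 1]. `alpha` is
    a hypothetical mixing weight (the paper's rule may differ).
    """
    answers = [answer for answer, _ in samples]
    confidences = [conf for _, conf in samples]

    # Self-consistency: fraction of samples agreeing with the majority answer.
    _, majority_count = Counter(answers).most_common(1)[0]
    self_consistency = majority_count / len(answers)

    # Verbalized confidence: mean stated confidence across samples.
    verbalized = float(np.mean(confidences))

    # Hybrid signal: convex combination of the two (illustrative choice).
    return alpha * self_consistency + (1 - alpha) * verbalized


def evaluate_auroc(questions):
    """AUROC of the hybrid score for predicting correctness.

    `questions` is a list of (samples, is_correct) pairs, where `is_correct`
    marks whether the majority-vote answer is right.
    """
    scores = [hybrid_confidence(samples) for samples, _ in questions]
    labels = [int(is_correct) for _, is_correct in questions]
    return roc_auc_score(labels, scores)
```

The sketch also illustrates why the hybrid helps at small budgets: with only two samples, self-consistency can take just two values (the samples agree or they do not), so the continuous verbalized signal supplies most of the discrimination until the sampling budget grows.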
