
Investigating Human-Aligned Large Language Model Uncertainty

March 16, 2025
Authors: Kyle Moore, Jesse Roberts, Daryl Watson, Pamela Wisniewski
cs.AI

Abstract

Recent work has sought to quantify large language model uncertainty to facilitate model control and modulate user trust. Previous works focus on measures of uncertainty that are theoretically grounded or that reflect the average overt behavior of the model. In this work, we investigate a variety of uncertainty measures to identify those that correlate with human group-level uncertainty. We find that Bayesian measures and a variation on entropy measures, top-k entropy, tend to agree with human behavior as a function of model size. We find that some strong measures decrease in human-similarity as model size grows, but multiple linear regression shows that combining several uncertainty measures provides comparable human-alignment with reduced size-dependency.
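The abstract names "top-k entropy" as a variation on standard entropy measures. A minimal sketch of one plausible formulation is below — computing Shannon entropy over only the k most probable tokens, renormalized to sum to one. This exact definition is an assumption for illustration; the paper may define the measure differently.

```python
import math

def top_k_entropy(probs, k=10):
    """Shannon entropy over the k most probable tokens, renormalized.

    NOTE: a hedged sketch, not the authors' implementation. Assumes
    `probs` is an iterable of token probabilities from one model step.
    """
    top = sorted(probs, reverse=True)[:k]  # keep the k largest probabilities
    total = sum(top)
    if total == 0:
        return 0.0
    top = [p / total for p in top]  # renormalize the truncated distribution
    return -sum(p * math.log(p) for p in top if p > 0)
```

For example, a uniform distribution over four tokens with k=4 yields log(4), while a fully confident distribution yields 0, so the measure shrinks as the model's top candidates become more peaked.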
