arXiv:2410.02805v2 Announce Type: replace-cross Abstract: Reliable uncertainty quantification is critical in high-stakes applications, such as medical diagnosis, where confidently incorrect predictions can erode trust in automated decision-making systems. Traditional uncertainty quantification methods rely on a predefined confidence threshold to classify predictions as confident or uncertain. However, this approach assumes that predictions exceeding the threshold are trustworthy, while those below it are uncertain, without explicitly assessing the correctness of high-confidence predictions. As a result, confidently incorrect predictions may still occur, leading to misleading uncertainty assessments. To address this limitation, this study proposes an uncertainty-aware stacked neural network, which extends conventional uncertainty quantification by learning when predictions should be trusted. The framework consists of a two-tier model: the base model generates predictions with uncertainty estimates, while the meta-model learns to assign a trust flag, distinguishing confidently correct cases from those requiring expert review. The proposed approach is evaluated against the traditional threshold-based method across multiple confidence thresholds and pre-trained architectures using the COVIDx CXR-4 dataset. Results demonstrate that the proposed framework significantly reduces confidently incorrect predictions, offering a more trustworthy and efficient decision-support system for high-stakes domains.
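
For concreteness, the two-tier idea described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the abstract does not specify the base or meta-model architectures, the uncertainty estimator, or the meta-features, so the Monte Carlo dropout uncertainty, the small MLP meta-model, and the (confidence, uncertainty) feature pair below are all assumptions.

```python
# Minimal sketch of an uncertainty-aware stacked setup (illustrative only;
# MC-dropout uncertainty, model sizes, and meta-features are assumptions,
# not the paper's exact design).
import torch
import torch.nn as nn

class BaseClassifier(nn.Module):
    """Tier 1: produces class probabilities; dropout is kept active at
    inference so repeated stochastic passes yield an uncertainty estimate."""
    def __init__(self, in_dim=512, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=20):
    """Monte Carlo dropout: mean softmax = prediction, std = uncertainty."""
    model.train()  # keep dropout active
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(0), probs.std(0)

class TrustMetaModel(nn.Module):
    """Tier 2: maps (confidence, uncertainty) features to a trust flag,
    i.e. an estimate of P(base prediction is correct)."""
    def __init__(self, in_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                 nn.Linear(16, 1))

    def forward(self, feats):
        return torch.sigmoid(self.net(feats))

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(8, 512)                      # stand-in image features
    base = BaseClassifier()
    mean_p, std_p = predict_with_uncertainty(base, x)
    conf, pred = mean_p.max(dim=-1)              # top-class confidence
    unc = std_p.gather(1, pred.unsqueeze(1)).squeeze(1)
    meta = TrustMetaModel()                      # would be trained on held-out
    trust = meta(torch.stack([conf, unc], dim=1)).squeeze(1)
    for t in trust:
        print("trusted" if t > 0.5 else "refer to expert")
```

In such a setup, the meta-model would be trained on a held-out set where the target label is whether the base prediction was correct, so the trust flag replaces a fixed confidence threshold with a learned routing decision between automated acceptance and expert review.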
