PROJECT TITLE : Confidence Estimation via Auxiliary Models

ABSTRACT: Accurately and reliably quantifying the confidence that deep neural classifiers have in their predictions is a difficult but fundamental requirement for deploying such models in safety-critical applications. In this article, we introduce a novel target criterion for model confidence: the true class probability (TCP). We show that TCP provides better properties for confidence estimation than the standard maximum class probability (MCP). Because the true class is, by its very nature, unknown at test time, we propose to learn the TCP criterion from data with an auxiliary model, introducing a specific learning scheme adapted to this context. We evaluate our method on two tasks that both require accurate confidence estimates: failure prediction and self-training with pseudo-labels for domain adaptation. A large number of experiments demonstrate that the proposed method is applicable to each task. For image classification and semantic segmentation, we investigate a variety of network architectures and conduct experiments on both small and large datasets. Our method outperforms strong baselines on every evaluated metric.
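To illustrate the difference between the two confidence criteria discussed above, the sketch below computes MCP and TCP for a single softmax output. This is a minimal illustration, not the paper's method: the toy logits and the helper names (`softmax`, `mcp`, `tcp`) are assumptions chosen for clarity. The key point is that on a misclassified example, MCP can remain high while TCP is low, which is why TCP is the better failure-prediction signal.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def mcp(probs):
    # Maximum Class Probability: the probability of the *predicted* class.
    return probs.max()

def tcp(probs, true_class):
    # True Class Probability: the probability assigned to the *ground-truth*
    # class. Unknown at test time, hence learned via an auxiliary model.
    return probs[true_class]

# Toy misclassified example (hypothetical logits): the model predicts
# class 2, but the true class is 0.
probs = softmax(np.array([1.0, 0.5, 1.5]))
print("predicted class:", int(np.argmax(probs)))
print("MCP:", mcp(probs))      # stays moderately high despite the error
print("TCP:", tcp(probs, 0))   # lower than MCP, flagging the failure
```

Because the two scores diverge exactly on errors, ranking test samples by (an estimate of) TCP separates failures from correct predictions more cleanly than ranking by MCP.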