Abstract
In this paper, we discuss the conceptual and computational frameworks of information theory for decision making in speaker verification. The proposed approach departs from conventional scoring models for speaker verification in that it takes into account the quantity of `surprise', or information content. We compare the new approach with a widely used log-likelihood normalization method for speaker verification. Experimental results on a commercial speech corpus validate the theoretical foundation of the proposed method. Furthermore, we introduce an entropic measure of uncertainty into the verification scoring.