Pattern Recognition, International Conference on

Abstract

Independence between individual classifiers is typically viewed as an asset in classifier fusion. We study the limits on the majority vote accuracy when combining dependent classifiers. Q statistics are used to measure the dependence between classifiers. We show that dependent classifiers can offer a dramatic improvement over the individual accuracy. However, the relationship between the dependence and the accuracy of the pool is ambivalent. A synthetic experiment demonstrates the intuitive result that, in general, negative dependence is preferable.
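The two quantities the abstract refers to can be illustrated with a minimal sketch (not taken from the paper; function names and the toy data are illustrative). Yule's Q statistic for a pair of classifiers is computed from the counts of jointly correct/incorrect decisions, and majority vote accuracy counts the samples on which more than half of the classifiers are correct:

```python
import numpy as np

def q_statistic(correct_i, correct_j):
    """Yule's Q statistic between two classifiers.

    correct_i, correct_j: boolean arrays, True where the classifier
    labels a sample correctly.
    Q = (N11*N00 - N01*N10) / (N11*N00 + N01*N10),
    where N11 = both correct, N00 = both wrong, etc.
    Q < 0 indicates negative dependence (errors tend not to coincide).
    """
    n11 = np.sum(correct_i & correct_j)
    n00 = np.sum(~correct_i & ~correct_j)
    n01 = np.sum(~correct_i & correct_j)
    n10 = np.sum(correct_i & ~correct_j)
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

def majority_vote_accuracy(correct_matrix):
    """correct_matrix: (L, N) boolean array, L classifiers, N samples.

    The majority vote is correct on a sample when more than half
    of the L classifiers are correct on it.
    """
    votes = correct_matrix.sum(axis=0)
    return np.mean(votes > correct_matrix.shape[0] / 2)

# Toy illustration of the abstract's point: three classifiers, each
# only 2/3 accurate, but with disjoint (negatively dependent) errors.
correct = np.array([[False, True,  True],
                    [True,  False, True],
                    [True,  True,  False]])
# Pairwise Q is -1 (maximal negative dependence), yet the majority
# vote is correct on every sample.
```

In this extreme toy case the majority vote reaches 100% accuracy even though every individual classifier is weaker, which is the kind of improvement over independent combination that the paper analyzes.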