Abstract
This paper presents a general strategy for the automated generation of efficient representations in vision. The approach is highly task-oriented: what constitutes the relevant information is defined by a set of examples. The examples are pairs of situations that are dependent through the chosen feature but are otherwise independent. Particularly important concepts in this work are mutual information and canonical correlation. How visual operators and representations can be generated from examples is demonstrated for a number of features, e.g. local orientation, disparity, and motion. Interesting similarities to biological vision functions are observed. The results clearly demonstrate the potential of combining advanced filtering techniques with learning strategies based on canonical correlation analysis (CCA).