Neural Networks, IEEE - INNS - ENNS International Joint Conference on

Abstract

In this contribution, we investigate how structured information processing within a neural net can emerge through unsupervised learning from data. Our model consists of input neurons and hidden neurons, representing the thalamus and the cortex respectively, which are recurrently connected. Based on a maximum likelihood framework, the task is to generate the given input data using the code of the hidden units. The hidden neurons are fully connected, allowing them to play different roles within the unfolding time dynamics of this data generation process. One parameter, related to the sparsity of neuronal activation, varies across the hidden neurons. Through training, the net captures the structure of the data generation process. When trained on data generated by several mechanisms acting in parallel, the more active neurons come to code for the more frequent input features. When trained on hierarchically generated data, the more active neurons come to code at the higher level, where each feature integrates several lower-level features. The results imply that the division of the cortex into laterally and hierarchically organized areas can evolve, to a certain degree, as an adaptation to the environment.
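The abstract does not spell out the model's equations, so the sketch below is only a rough illustration of the kind of mechanism it describes: a linear sparse-coding generative model trained to reconstruct its input (a stand-in for the maximum likelihood objective), with a separate sparsity parameter per hidden neuron so that activation levels vary across units. The architecture, learning rule, and all names here are illustrative assumptions, not the authors' actual thalamo-cortical model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each sample mixes a few latent "features" (illustrative only;
# the paper's recurrent thalamo-cortical dynamics are not reproduced here).
n_true, n_hidden, dim, n_samples = 4, 6, 16, 300
true_W = rng.normal(size=(dim, n_true))
codes = (rng.random((n_samples, n_true)) < 0.3).astype(float)
X = codes @ true_W.T + 0.05 * rng.normal(size=(n_samples, dim))

# Generative weights with unit-norm columns; one sparsity weight per hidden
# neuron, varying across units as in the abstract's description.
W = rng.normal(size=(dim, n_hidden))
W /= np.linalg.norm(W, axis=0, keepdims=True)
lam = np.linspace(0.05, 0.5, n_hidden)

def infer(x, W, lam, steps=50, lr=0.1):
    """Infer a hidden code h minimizing 0.5*||x - W h||^2 + sum_i lam[i]*|h[i]|
    via ISTA: gradient step on the quadratic term, then soft-thresholding."""
    h = np.zeros(W.shape[1])
    for _ in range(steps):
        h -= lr * (W.T @ (W @ h - x))
        h = np.sign(h) * np.maximum(np.abs(h) - lr * lam, 0.0)
    return h

def mean_recon_error(X, W, lam):
    return np.mean([0.5 * np.sum((x - W @ infer(x, W, lam)) ** 2) for x in X])

err_before = mean_recon_error(X[:50], W, lam)
for _ in range(10):                      # learning: gradient on reconstruction
    for x in X:
        h = infer(x, W, lam)
        W += 0.01 * np.outer(x - W @ h, h)
        W /= np.maximum(np.linalg.norm(W, axis=0, keepdims=True), 1e-8)
err_after = mean_recon_error(X[:50], W, lam)
# err_after should be smaller than err_before once W captures the features
```

Consistent with the abstract's finding, one would expect the units with small `lam` (weakly penalized, hence more active) to capture the more frequent features; this toy setup only demonstrates the per-unit-sparsity learning mechanism, not that result itself.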