Neural Networks, IEEE - INNS - ENNS International Joint Conference on

Abstract

Networks of Goal Seeking Neurons (GSN) [1] are weightless models that were designed to overcome several problems with PLN networks [2]. Weightless (Boolean) models are well known for being easy to implement in both hardware and software. Several hardware implementations and general improvements have been proposed for these models [3, 4, 5], especially to improve performance on their main target task: classification. In this work, an improvement is proposed and tested on classifiers that use GSN networks as building blocks, on a handwritten digit recognition task. More specifically, the improvement concerns the expansion of GSN networks, reducing saturation problems during learning and therefore allowing more examples to be used during training. GSN uses a fast one-shot learning algorithm and is not allowed to modify memory positions that have already been used. Therefore, during training, several examples are refused by the network because they cannot be learned; this behavior indicates that the GSN network is saturated. By expanding the network when needed, it is possible to overcome this limitation; the main issue becomes where and when to expand it. A deterministic method for training the network is proposed, allowing it to expand where necessary. Simulation results show that the proposed improvement is promising, leading to significant performance gains. Moreover, some of the ideas can be applied to many other neural models.
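To make the saturation mechanism described above concrete, here is a minimal sketch of a RAM-based weightless node with GSN-style one-shot learning. The names (`RamNode`, `UNDEFINED`, `train_or_refuse`) are hypothetical and this is not the authors' implementation; it only illustrates why, once positions are written, conflicting examples are refused, and why counting refusals signals that the network should be expanded.

```python
# Hypothetical sketch of a weightless (RAM-based) node with one-shot writes.
# Positions start UNDEFINED, are written once, and never modified afterwards;
# a training example that conflicts with a written position is refused.

UNDEFINED = None  # positions start undefined, as in PLN/GSN-style models

class RamNode:
    def __init__(self, n_inputs):
        # one memory position per possible Boolean input pattern
        self.memory = [UNDEFINED] * (2 ** n_inputs)

    def address(self, bits):
        # interpret the Boolean input tuple as a RAM address
        return int("".join(str(b) for b in bits), 2)

    def train_or_refuse(self, bits, desired):
        """Write `desired` once; refuse the example on a conflict."""
        a = self.address(bits)
        if self.memory[a] is UNDEFINED:
            self.memory[a] = desired      # one-time write
            return True                   # example learned
        return self.memory[a] == desired  # conflict => example refused

node = RamNode(2)
examples = [((0, 1), 1), ((1, 0), 0), ((0, 1), 0)]  # last one conflicts
refused = sum(not node.train_or_refuse(x, y) for x, y in examples)
print("refused examples (saturation signal):", refused)  # -> 1
```

A full expansion scheme, as the paper proposes, would use this refusal signal to decide where and when to allocate additional nodes; the sketch only shows how the signal arises.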