Brazilian Symposium on Neural Networks

Abstract

We have studied the performance of the Alopex algorithm [12] and propose modifications that reduce training time and simplify the algorithm. We tested several variations; here we describe the best-performing cases and summarize the conclusions we arrived at. One of the proposed variations (99/B) trains slightly faster than the Alopex algorithm described in [12], produces fewer unsuccessful training attempts, and is simpler to implement. Like Alopex, our versions are based on local correlations between changes in individual weights and changes in the global error measure. Our algorithm is also stochastic, but it differs from Alopex in that no annealing scheme is applied during training, and hence it requires fewer parameters.
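To illustrate the kind of update the abstract describes, the following is a minimal sketch of a correlation-based, annealing-free stochastic step. It is not the paper's exact 99/B rule: the step size `step`, the fixed flip probability `p_flip`, and the function name are hypothetical choices standing in for details the abstract does not give.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_step(w, prev_dw, d_error, step=0.01, p_flip=0.35):
    """One stochastic weight update driven by local correlations.

    For each weight, compare its previous change with the change in the
    global error: if prev_dw * d_error < 0, that change helped, so keep
    stepping in the same direction; otherwise reverse it. Random flips
    with a fixed probability p_flip supply the stochasticity, in place
    of the annealing schedule that the original Alopex uses.
    """
    keep = prev_dw * d_error < 0                 # per-weight correlation test
    direction = np.where(keep, np.sign(prev_dw), -np.sign(prev_dw))
    flip = rng.random(w.shape) < p_flip          # fixed-probability exploration
    direction = np.where(flip, -direction, direction)
    dw = step * direction
    return w + dw, dw

# Hypothetical usage: minimize a simple quadratic error over a weight vector.
w = rng.normal(size=4)
dw = step_init = 0.01 * rng.choice([-1.0, 1.0], size=w.shape)
prev_error = float(np.sum(w ** 2))
for _ in range(2000):
    error = float(np.sum(w ** 2))
    w, dw = correlation_step(w, dw, error - prev_error)
    prev_error = error
```

Because the flip probability is constant rather than annealed, this sketch needs only two parameters (`step` and `p_flip`), which reflects the abstract's claim that dropping the annealing scheme leaves fewer parameters to tune.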