Abstract
In this paper, we present the results of a comparison of six weight initialization methods, using two training algorithms and six databases. The comparison is performed by measuring the following three aspects: speed of convergence, generalization, and probability of convergence. The two training algorithms are Backpropagation (BP) and an algorithm that uses conjugate gradient with dynamical learning rate adaptation (NE). We identified the best weight initialization scheme for the BP algorithm: its speed of convergence can be improved with respect to the usual initialization, while the other two aspects remain similar. For the NE algorithm, we conclude that its performance depends on the initialization much more than that of BP. Its generalization and probability of convergence are lower than those of BP, and none of the weight initialization schemes could overcome this drawback. On the other hand, NE is faster.