Abstract
We propose a new neural approach for approximating functions using reinforcement-type learning: each time the network generates an output, the environment responds with only the scalar distance between the delivered output and the expected one. This distance is thus the only information the network can use to refine its estimate of the multi-dimensional output. The reinforcement feature is embedded in a neural-gas method, taking advantage of the facilities this method offers. We detail the full algorithm and present simulation results that illustrate the behavior of the proposed method.
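The abstract does not give the update rule itself, but the core idea (the learner observes only a scalar distance, never the target vector, and uses it inside a neural-gas scheme) can be sketched as follows. This is a hypothetical one-dimensional toy, not the paper's algorithm: the target function, the rank-decay rates, the perturb-and-accept output update, and the annealing schedule are all illustrative assumptions.

```python
import math
import random

random.seed(0)

# Hypothetical 1-D sketch: the learner sees nothing but the scalar
# distance between its delivered output and the expected one.

def target_fn(x):
    return math.sin(2 * math.pi * x)   # toy target function (an assumption)

n_units = 20
W = [random.uniform(0.0, 1.0) for _ in range(n_units)]   # input prototypes
Y = [random.uniform(-1.0, 1.0) for _ in range(n_units)]  # output estimates
sigma = 0.3                                              # search scale

for step in range(20000):
    x = random.uniform(0.0, 1.0)

    # neural-gas step: every prototype moves toward x at a rank-decaying rate
    order = sorted(range(n_units), key=lambda i: abs(W[i] - x))
    for rank, i in enumerate(order):
        W[i] += 0.05 * math.exp(-rank / 2.0) * (x - W[i])

    winner = order[0]
    # reinforcement step: propose a perturbed output; the environment
    # reports only the two scalar distances, never the target itself
    y_try = Y[winner] + random.gauss(0.0, sigma)
    if abs(y_try - target_fn(x)) < abs(Y[winner] - target_fn(x)):
        Y[winner] = y_try          # keep the proposal if the distance shrank
    sigma *= 0.99985               # slowly anneal the search scale

# rough check of the learned approximation
xs = [i / 49 for i in range(50)]
preds = [Y[min(range(n_units), key=lambda i: abs(W[i] - x))] for x in xs]
err = sum(abs(p - target_fn(x)) for p, x in zip(preds, xs)) / len(xs)
```

The accept-if-better test is what makes the update usable with scalar-only feedback: since the sign and direction of the output error are never revealed, the unit can only probe randomly and keep perturbations that reduce the reported distance.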