IEEE-INNS-ENNS International Joint Conference on Neural Networks

Abstract

A variety of properties of neural approximation follows from considerations of convexity. For example, if n and d are positive integers, X is a set of functions of d variables to be approximated, and M is any given positive constant, no matter how large, then there is no continuous map Φ which associates to each element of X an input-output function of a one-hidden-layer neural network with n hidden units and one linear output unit, unless for some f in X the error ‖f − Φ(f)‖ exceeds the minimum possible error by more than M. It is also shown that the additional multiplicative factor introduced into Barron's bound by Kůrková, Savický, and Hlaváčková has an expected value of one half.
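To make the first statement concrete, the following is a minimal LaTeX sketch of the claimed non-continuity result; the symbols S_n, Φ, M, and the ambient norm are illustrative notation assumed here, not taken from the paper.

% Hedged sketch of the non-continuity statement from the abstract.
% The notation (S_n, \Phi, M, the ambient norm) is assumed for
% illustration and may differ from the paper's own.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $\mathcal{S}_n$ be the set of input--output functions of one-hidden-layer
networks with $n$ hidden units and one linear output unit, regarded as a
subset of a normed linear space containing the set $X$ of functions to be
approximated. For every constant $M > 0$, however large, there is no
continuous map $\Phi \colon X \to \mathcal{S}_n$ satisfying
\[
  \lVert f - \Phi(f) \rVert \le \inf_{g \in \mathcal{S}_n} \lVert f - g \rVert + M
  \qquad \text{for all } f \in X;
\]
that is, any continuous assignment of network approximants must, for some
$f \in X$, miss the best achievable error by more than $M$.
\end{document}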