Abstract
Various theoretical results show that the learning problem for conventional feedforward neural networks, such as multilayer perceptrons, is NP-complete. In this paper, we show that learning in min-max modular (M³) neural networks is tractable. The key to coping with NP-completeness in M³ networks is to decompose a large-scale problem into a number of manageable, independent subproblems and to let the learning of the large-scale problem emerge from the learning of these related small subproblems.
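As a rough illustration of the combination principle (a minimal sketch, not the paper's implementation), an M³ network combines the outputs of independently trained modules by taking a MIN over each group of modules and a MAX across the groups; the toy threshold rules below stand in for trained subnetworks, and all names are hypothetical:

```python
def min_max_combine(modules, x):
    """Combine module outputs: MAX across groups of the MIN within each group."""
    return max(min(m(x) for m in group) for group in modules)

# Toy modules: 1-D threshold classifiers standing in for trained two-class
# subnetworks; each inner list is one MIN group.
modules = [
    [lambda x: 1 if x > 0 else 0, lambda x: 1 if x < 5 else 0],  # region (0, 5)
    [lambda x: 1 if x > 8 else 0],                               # region (8, inf)
]

print(min_max_combine(modules, 3))   # inside (0, 5) -> 1
print(min_max_combine(modules, 6))   # outside both regions -> 0
print(min_max_combine(modules, 9))   # inside (8, inf) -> 1
```

Because each module is trained only on its own small subproblem, the overall learning cost scales with the subproblems rather than with the full problem.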