I. Guyon, B. Boser, and V. Vapnik (1993)

Automatic Capacity Tuning of Very Large VC-Dimension Classifiers

In: Advances in Neural Information Processing Systems, edited by Stephen José Hanson, Jack D. Cowan, and C. Lee Giles, vol. 5, pp. 147–155, Morgan Kaufmann, San Mateo, CA.

Large VC-dimension classifiers can learn difficult tasks, but are usually impractical because they generalize well only if they are trained with huge quantities of data. In this paper we show that even high-order polynomial classifiers in high-dimensional spaces can be trained with a small amount of training data and yet generalize better than classifiers with a smaller VC-dimension. This is achieved with a maximum margin algorithm (the Generalized Portrait). The technique is applicable to a wide variety of classifiers, including Perceptrons, polynomial classifiers (sigma-pi unit networks), and Radial Basis Functions. The effective number of parameters is adjusted automatically by the training algorithm to match the complexity of the problem. It is shown to equal the number of those training patterns closest to the decision boundary (supporting patterns). Bounds on the generalization error and on the speed of convergence of the algorithm are given. Experimental results on handwritten digit recognition demonstrate good generalization compared to other algorithms.
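The maximum-margin idea summarized above can be sketched in a few lines: train on the dual multipliers and observe that only the supporting patterns end up with non-zero weight. This is a minimal illustrative sketch, not the paper's implementation — it assumes a linear kernel, a bias-free decision function, toy 2-D data, and simple coordinate ascent on the hard-margin dual.

```python
# Hedged sketch of maximum-margin training in the dual.
# Assumptions (not from the paper): linear kernel, no bias term, toy data,
# plain coordinate ascent instead of the Generalized Portrait procedure.

def kernel(a, b):
    """Linear kernel; the method equally admits polynomial or RBF kernels."""
    return sum(ai * bi for ai, bi in zip(a, b))

# Toy 2-D training set, separable by the hyperplane x1 = 0 (assumed).
X = [(2, 1), (3, 2), (2, -1), (-2, 1), (-3, -2), (-2, -1)]
y = [1, 1, 1, -1, -1, -1]

n = len(X)
K = [[kernel(X[i], X[j]) for j in range(n)] for i in range(n)]
alpha = [0.0] * n

# Coordinate ascent on the hard-margin dual:
# maximize sum_i alpha_i - 1/2 sum_ij alpha_i alpha_j y_i y_j K_ij
# subject to alpha_i >= 0 (the equality constraint drops with no bias term).
for _ in range(200):
    for i in range(n):
        grad = 1.0 - y[i] * sum(alpha[j] * y[j] * K[i][j] for j in range(n))
        alpha[i] = max(0.0, alpha[i] + grad / K[i][i])

def decision(x):
    """f(x) = sum_i alpha_i y_i K(x_i, x); sign gives the predicted class."""
    return sum(alpha[i] * y[i] * kernel(X[i], x) for i in range(n))

# Supporting patterns: training points with non-zero multipliers.
# Their count is the "effective number of parameters" in the abstract.
support = [i for i in range(n) if alpha[i] > 1e-8]
print("supporting patterns:", len(support), "of", n)
```

Replacing `kernel` with a polynomial form such as `(1 + a·b)**d` reproduces the high-order polynomial classifiers discussed in the abstract without changing the training loop.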
