
Support Vector Tutorial


Support Vector Learning

The Support Vector (SV) learning algorithm provides a method for solving pattern recognition, regression estimation, and operator inversion problems. The method is based on results in the statistical theory of learning with finite sample sizes developed by Vapnik and co-workers. Two ideas are crucial to SV learning: automatic capacity control, and nonlinear maps into feature spaces defined via kernel functions.
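To make the second idea concrete, here is the standard formulation in illustrative notation (not taken from the tutorial itself): a Mercer kernel k implicitly evaluates an inner product in a feature space via a map \Phi, so that the SV decision function depends on the data only through kernel evaluations:

\[
k(x, x') = \langle \Phi(x), \Phi(x') \rangle,
\qquad
f(x) = \operatorname{sgn}\left( \sum_{i=1}^{\ell} \alpha_i \, y_i \, k(x, x_i) + b \right).
\]

Common examples are the polynomial kernel k(x, x') = \langle x, x' \rangle^d and the Gaussian kernel k(x, x') = \exp(-\|x - x'\|^2 / (2\sigma^2)); the coefficients \alpha_i are obtained from a convex optimization problem whose regularization realizes the capacity control mentioned above.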

The tutorial will introduce elements of statistical learning theory (learning as risk minimization, risk bounds, VC-dimension and other capacity concepts) and of functional analysis (Mercer kernels, reproducing kernel Hilbert spaces) that are helpful for understanding these ideas. It will then cover SV machines in detail, including the derivation of the algorithm, theoretical and empirical results, and a survey of the latest developments. Moreover, it will describe connections to other learning techniques, such as regularization networks and nonlinear principal component analysis using SV kernels.
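As a pointer to the flavor of the risk bounds covered (a standard result of the theory, stated here for illustration rather than as the tutorial's exact material): for a function class of VC-dimension h and \ell training examples, with probability at least 1 - \eta the expected risk R(f) is bounded by the empirical risk R_emp(f) plus a capacity term,

\[
R(f) \;\le\; R_{\mathrm{emp}}(f) + \sqrt{\frac{h \left( \ln \frac{2\ell}{h} + 1 \right) - \ln \frac{\eta}{4}}{\ell}}.
\]

Minimizing such a bound over a nested family of function classes, rather than minimizing the empirical risk alone, is the structural risk minimization principle on which SV machines are built.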

Some previous knowledge of linear algebra is required.

Bernhard Schölkopf received a Ph.D. in computer science from the Technische Universität Berlin in 1997. He wrote his thesis on SV machines at AT&T Bell Labs and at the Max-Planck-Institut für biologische Kybernetik. He holds an M.Sc. in mathematics from the University of London and a Diplom in physics from the Eberhard-Karls-Universität Tübingen. He is currently at GMD FIRST in Berlin and recently co-organized a NIPS workshop on SV machines.



