
Pending Duplicate Bibliography Entries

Properties of Support Vector Machines
 
Regularization Networks and Support Vector Machines
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples, in particular the regression problem of approximating a multivariate function from sparse data. Radial Basis Functions, for example, are a special case of both regularization and Support Vector Machines. We review both formulations in the context of Vapnik's theory of statistical learning, which provides a general foundation for the learning problem by combining functional analysis and statistics. The emphasis is on regression; classification is treated as a special case.
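To make the connection concrete, here is a minimal sketch, not taken from the paper: kernel ridge regression with a Gaussian (RBF) kernel is one simple instance of a regularization network, and the bandwidth gamma and regularization strength alpha below are illustrative choices.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(25, 1))                  # sparse sample
    y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(25)

    # RBF kernel: the Radial Basis Function special case of a
    # regularization network; alpha sets the regularization strength.
    model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1e-2)
    model.fit(X, y)

    X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
    y_hat = model.predict(X_test)                         # smooth reconstruction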
Data Discrimination via Nonlinear Generalized Support Vector Machines
 
Generalized Discriminant Analysis Using a Kernel Approach
We present a new method, which we call Generalized Discriminant Analysis (GDA), for nonlinear discriminant analysis using a kernel function operator. The underlying theory is close to that of Support Vector Machines (SVM) insofar as the GDA method provides a mapping of the input vectors into a high-dimensional feature space. In the transformed space, linear properties make it easy to extend and generalize the classical Linear Discriminant Analysis (LDA) to nonlinear discriminant analysis. The formulation is expressed as the resolution of an eigenvalue problem. By using different kernels, one can cover a wide class of nonlinearities. For both simulated data and alternate kernels, we give classification results as well as the shape of the separating function. The results are confirmed using real data to perform seed classification.
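As a rough illustration of the idea, and not the paper's exact eigenvalue formulation, one can approximate the kernel feature map explicitly and run classical LDA on top of it; the Nystroem expansion, the kernel choice, and the component count below are assumptions made for the sketch.

    from sklearn.datasets import make_moons
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.kernel_approximation import Nystroem
    from sklearn.pipeline import make_pipeline

    X, y = make_moons(n_samples=200, noise=0.1, random_state=0)

    # Approximate mapping into feature space, then linear discrimination
    # there: the "LDA after a kernel map" recipe, sketched.
    gda_like = make_pipeline(
        Nystroem(kernel="rbf", gamma=2.0, n_components=100, random_state=0),
        LinearDiscriminantAnalysis(),
    )
    gda_like.fit(X, y)
    print("training accuracy:", gda_like.score(X, y))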
Mercer Kernel Based Clustering in Feature Space
This paper presents a method for both the unsupervised partitioning of a sample of data and the estimation of the possible number of inherent clusters that generate the data. This work exploits the notion that performing a nonlinear data transformation into some high-dimensional feature space increases the probability of linear separability of the patterns within the transformed space and therefore simplifies the associated data structure. It is shown that the eigenvectors of the kernel matrix defining the implicit mapping provide a means to estimate the number of clusters inherent in the data, and a computationally simple iterative procedure is presented for the subsequent feature-space partitioning of the data.
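A minimal sketch of this recipe might look as follows; the RBF kernel, the eigenvalue-gap threshold standing in for the paper's estimator, and spectral clustering standing in for its iterative partitioning procedure are all assumptions of the sketch.

    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.datasets import make_blobs
    from sklearn.metrics.pairwise import rbf_kernel

    X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

    K = rbf_kernel(X, gamma=0.1)                  # Mercer kernel matrix
    eigvals = np.linalg.eigvalsh(K)[::-1]         # spectrum, descending
    k = int(np.sum(eigvals > 0.05 * eigvals[0]))  # dominant eigenvalues ~ clusters

    labels = SpectralClustering(
        n_clusters=k, affinity="precomputed", random_state=0
    ).fit_predict(K)
    print("estimated number of clusters:", k)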
Gaussian Processes for Machine Learning
 
A Tutorial on Support Vector Regression
This tutorial gives an overview of the basic ideas underlying Support Vector (SV) machines for regression and function estimation. Furthermore, it includes a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, some modifications and extensions that have been applied to the standard SV algorithm are mentioned, and the aspects of regularization and capacity control are discussed from an SV point of view.
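For readers who want to try the basic machinery, here is a short sketch, independent of the tutorial itself, of epsilon-SVR with an RBF kernel; C, epsilon, and gamma are illustrative settings, not recommendations from the text.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 5, size=(80, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(80)

    # Points lying farther than epsilon from the fitted function
    # become support vectors.
    svr = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=0.5)
    svr.fit(X, y)
    print("number of support vectors:", len(svr.support_))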
A Tutorial on Support Vector Machines for Pattern Recognition
A detailed tutorial on Support Vector Machines for the classification task, from background material (e.g., the VC dimension and structural risk minimization) through notes on training algorithms. Many examples are given.
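By way of a quick hands-on counterpart, here is a hedged sketch, not drawn from the tutorial, of a soft-margin classifier with an RBF kernel; the synthetic dataset and hyperparameters are illustrative assumptions.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Soft-margin SVC: C trades margin width against training errors.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))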
