Analysis of Regularized Linear Functions for Classification Problems

Recently, sample complexity bounds have been derived for problems involving linear functions, such as neural networks and support vector machines. In this paper, we extend some theoretical results in this area by providing a convergence analysis for regularized linear functions, with an emphasis on classification problems. The class of methods we study generalizes support vector machines and is conceptually very simple. To analyze these methods within the traditional PAC learning framework, we derive dimension-independent covering number bounds for linear systems under certain regularization conditions, and obtain the relevant generalization bounds. We also present an analysis of these methods from the asymptotic statistical point of view, and show that this technique provides a better description of the large-sample behavior of these algorithms. Furthermore, we investigate numerical aspects of the proposed methods and establish their relationship with ill-posed problems studied in numerical mathematics.
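As a concrete illustration of the class of methods described above (a hedged sketch, not the paper's exact formulation or algorithm), a regularized linear classifier can be obtained by minimizing an empirical loss plus a squared-norm penalty, min_w (1/n) Σ_i log(1 + exp(-y_i w·x_i)) + λ‖w‖², here solved by plain gradient descent; the loss, step size, and data are illustrative choices:

```python
import math

def train(xs, ys, lam=0.1, lr=0.5, steps=500):
    """Gradient descent on L2-regularized logistic loss (illustrative sketch)."""
    d = len(xs[0])
    w = [0.0] * d
    n = len(xs)
    for _ in range(steps):
        # Gradient of the regularizer lam * ||w||^2
        grad = [2.0 * lam * wj for wj in w]
        for x, y in zip(xs, ys):
            margin = y * sum(wj * xj for wj, xj in zip(w, x))
            # d/d(margin) of log(1 + exp(-margin)) is -1/(1 + exp(margin))
            coef = -y / (1.0 + math.exp(margin))
            for j in range(d):
                grad[j] += coef * x[j] / n
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

def predict(w, x):
    """Classify by the sign of the linear function w.x."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Toy linearly separable data with labels in {-1, +1}
xs = [(2, 1), (1, 2), (-2, -1), (-1, -2)]
ys = [1, 1, -1, -1]
w = train(xs, ys)
```

The regularization term λ‖w‖² both stabilizes the optimization (connecting to the ill-posed-problem perspective mentioned in the abstract) and controls the capacity of the function class, which is what makes dimension-independent covering number bounds possible.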

By: Tong Zhang

Published as: RC21572 in 1999

This Research Report is not available electronically. Please request a copy from the contact listed below. IBM employees should contact ITIRC for a copy.

Questions about this service can be mailed to reports@us.ibm.com.