Learning on Graph with Normalized Laplacian Regularization

We consider a general form of transductive learning on graphs with Laplacian regularization for multi-category classification, and obtain margin-based generalization bounds. Using this analysis, we establish the connection between graph cuts and margin, which enables us to express the generalization behavior of graph learning in terms of appropriate geometric properties of the graph. In particular, using a learning-theoretic definition of normalized cut, we show the importance of normalizing the graph Laplacian matrix. Under appropriate assumptions, the optimal normalization factors can be derived. The analysis reveals the limitations of the standard normalization method and provides a remedy. Experiments confirm the superiority of the learning-theoretically motivated normalization scheme on artificial and real-world datasets.
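To make the setting concrete, the following is a minimal sketch (not the authors' algorithm or their proposed normalization) of transductive learning with the standard normalized Laplacian L = I - D^{-1/2} W D^{-1/2}: the prediction vector f minimizes squared error on the labeled nodes plus a Laplacian smoothness penalty lam * f^T L f. The function names, the regularization form, and the example graph are illustrative assumptions.

```python
import numpy as np

def normalized_laplacian(W):
    # Standard symmetric normalization: L = I - D^{-1/2} W D^{-1/2},
    # where D is the diagonal degree matrix of the weight matrix W.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    return np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

def transductive_fit(W, y_labeled, labeled_idx, lam=0.1):
    # Minimize sum_{i labeled} (f_i - y_i)^2 + lam * f^T L f
    # by solving the linear system (C + lam * L) f = C y, where C is a
    # diagonal 0/1 matrix selecting the labeled nodes.
    n = len(W)
    L = normalized_laplacian(W)
    C = np.zeros((n, n))
    C[labeled_idx, labeled_idx] = 1.0
    y = np.zeros(n)
    y[labeled_idx] = y_labeled
    return np.linalg.solve(C + lam * L, C @ y)

# Toy graph: two tightly connected pairs {0,1} and {2,3} with weak
# cross-edges; labeling one node per pair propagates to its neighbor.
W = np.array([[0.0, 1.0, 0.05, 0.0],
              [1.0, 0.0, 0.0, 0.05],
              [0.05, 0.0, 0.0, 1.0],
              [0.0, 0.05, 1.0, 0.0]])
f = transductive_fit(W, np.array([1.0, -1.0]), [0, 2])
```

With this weight matrix the unlabeled nodes 1 and 3 receive scores of the same sign as their strongly connected labeled neighbors, which is the propagation behavior the Laplacian penalty enforces.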

By: Rie Kubota Ando; Tong Zhang

Published as: IBM Research Report RC24005, 2006


This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).


Questions about this service can be mailed to reports@us.ibm.com.