Sparse MRF Learning with Priors on Regularization Parameters

In this paper, we consider the sparse inverse covariance selection problem, which is equivalent to structure recovery of a Markov network over Gaussian variables. We address the selection of the regularization parameter(s) in a Bayesian way: we place a prior on the parameter(s) and use MAP optimization to estimate both the inverse covariance matrix and the unknown parameters. Our general formulation extends prior work by allowing a vector of regularization parameters, and is well suited to learning structured graphs, such as scale-free networks, in which the sparsity of nodes varies significantly. We also introduce a novel and efficient approach to solving the sparse inverse covariance problem that compares favorably to the state of the art. Our empirical results demonstrate the advantages of our approach on structured (scale-free) networks.
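For concreteness, the following is a minimal sketch of a MAP objective of the kind described above; the exact formulation, the choice of hyperprior p(λ), and the handling of the sample size are given in the report itself and may differ. The sketch assumes an independent Laplace prior with rate λ_ij on each entry of the precision matrix, together with some hyperprior p(λ) on the regularization parameters:

\[
\max_{C \succ 0,\; \lambda > 0}\;\;
\frac{n}{2}\bigl(\log\det C - \operatorname{tr}(S C)\bigr)
\;+\; \sum_{i \le j}\Bigl(\log\tfrac{\lambda_{ij}}{2} - \lambda_{ij}\,|C_{ij}|\Bigr)
\;+\; \log p(\lambda)
\]

Here n is the number of samples, S is the empirical covariance matrix, C is the inverse covariance (precision) matrix, and λ collects the regularization parameters (a single scalar, one per node, or one per entry, depending on the parameterization). With a single scalar λ and a flat hyperprior, dropping the terms that do not depend on C recovers the standard ℓ1-penalized (graphical lasso) objective; letting λ vary across nodes is what makes the formulation suitable for scale-free graphs whose node degrees differ widely.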

By: Katya Scheinberg; Narges Bani Asadi; Irina Rish

Published as: IBM Research Report RC24812, 2009

LIMITED DISTRIBUTION NOTICE:

This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).

