TY - CPAPER

T1 - Tightness results for local consistency relaxations in continuous MRFs

AU - Wald, Yoav

AU - Globerson, Amir

PY - 2014

Y1 - 2014

N2 - Finding the MAP assignment in graphical models is a challenging task that generally requires approximations. One popular approximation approach is to use linear programming relaxations that enforce local consistency. While these are commonly used for discrete variable models, they are much less understood for models with continuous variables. Here we define local consistency relaxations of MAP for continuous pairwise Markov Random Fields (MRFs), and analyze their properties. We begin by providing a characterization of models for which this relaxation is tight. These turn out to be models that can be reparameterized as a sum of local convex functions. We also provide a simple formulation of this relaxation for Gaussian MRFs. Next, we show how the above insights can be used to obtain optimality certificates for loopy belief propagation (LBP) in such models. Specifically, we show that the messages of LBP can be used to calculate upper and lower bounds on the MAP value, and that these bounds coincide at convergence, yielding a natural stopping criterion which was not previously available. Finally, our results illustrate a close connection between local consistency relaxations of MAP and LBP. They demonstrate that in the continuous case, whenever LBP is provably optimal so is the local consistency relaxation.

UR - http://www.scopus.com/inward/record.url?scp=84923314333&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84923314333

T3 - Uncertainty in Artificial Intelligence - Proceedings of the 30th Conference, UAI 2014

SP - 839

EP - 848

BT - Uncertainty in Artificial Intelligence - Proceedings of the 30th Conference, UAI 2014

A2 - Zhang, Nevin L.

A2 - Tian, Jin

PB - AUAI Press

T2 - 30th Conference on Uncertainty in Artificial Intelligence, UAI 2014

Y2 - 23 July 2014 through 27 July 2014

ER -