Explaining a Counterexample Using Causality

When a model does not satisfy a given specification, the model checker produces a counterexample to demonstrate the failure. While this service is important and useful, analyzing the counterexample in order to understand the problem it demonstrates can be difficult and time-consuming. In this paper, we address the problem of analyzing a counterexample. Using the notion of causality introduced by Halpern and Pearl, we formally define a set of pairs ⟨signal, location⟩ that cause the failure of the specification on the counterexample trace. We prove that computing the exact set of causes is intractable, and we provide a polynomial-time, model-checking-based algorithm that approximates it. Experimental results, conducted on real-life examples, suggest that the approximate algorithm computes causes that match the user's intuition on all examples. Our approach is independent of the tool that produced the counterexample and can be applied as an external layer to any model checking tool.
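To illustrate the counterfactual flavor of Halpern–Pearl causality that the abstract refers to, the following is a minimal, hypothetical sketch (not the paper's actual algorithm, and exponentially naive for non-singleton causes): given a finite counterexample trace over Boolean signals, flip each single ⟨signal, location⟩ value and re-check the failed property on the modified trace. Pairs whose flip makes the property pass satisfy the basic singleton counterfactual condition. The trace encoding and the example property are illustrative assumptions.

```python
def candidate_causes(trace, prop):
    """Return <signal, location> pairs whose single-value flip repairs the trace.

    trace: list of dicts mapping signal name -> bool, one dict per cycle.
    prop:  predicate over a trace; returns True iff the property holds.
    """
    assert not prop(trace), "trace must be a counterexample to prop"
    causes = []
    for loc, state in enumerate(trace):
        for signal in state:
            # Counterfactual test: flip exactly one value and re-evaluate.
            flipped = [dict(s) for s in trace]
            flipped[loc][signal] = not flipped[loc][signal]
            if prop(flipped):
                causes.append((signal, loc))
    return causes

# Hypothetical property: every cycle with req is followed by a cycle with ack.
def prop(trace):
    return all(trace[i + 1]["ack"]
               for i in range(len(trace) - 1) if trace[i]["req"])

# A two-cycle counterexample: req at cycle 0, but no ack at cycle 1.
trace = [{"req": True, "ack": False},
         {"req": False, "ack": False}]
print(candidate_causes(trace, prop))  # → [('req', 0), ('ack', 1)]
```

Both reported pairs match intuition: removing the request at cycle 0, or granting the acknowledgment at cycle 1, would each make the property hold.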

By: Ilan Beer, Shoham Ben-David, Hana Chockler, Avigail Orni, Richard Trefler

Published as: IBM Research Report H-0266, 2009


This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).


Questions about this service can be mailed to reports@us.ibm.com.