Learning the boundary of inductive invariants

Yotam M.Y. Feldman, Mooly Sagiv, Sharon Shoham, James R. Wilcox

Research output: Contribution to journal › Article › peer-review


We study the complexity of invariant inference and its connections to exact concept learning. We define a condition on invariants and their geometry, called the fence condition, which permits applying theoretical results from exact concept learning to answer open problems in invariant inference theory. The condition requires the invariant's boundary (the states whose Hamming distance from the invariant is one) to be backward reachable from the bad states in a small number of steps. Using this condition, we obtain the first polynomial complexity result for an interpolation-based invariant inference algorithm, efficiently inferring monotone DNF invariants with access to a SAT solver as an oracle. We further harness Bshouty's seminal result in concept learning to efficiently infer invariants of a larger syntactic class of invariants beyond monotone DNF. Lastly, we consider the robustness of inference under program transformations. We show that some simple transformations preserve the fence condition, and that it is sensitive to more complex transformations.
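To make the geometric notion concrete: viewing program states as boolean vectors, the boundary of an invariant consists of the states just outside it that differ from some state inside it in exactly one bit. The sketch below is an illustrative brute-force computation of this set over explicit state tuples; it is not the paper's inference algorithm, and the names (`hamming_boundary`, `invariant`) are ours.

```python
def hamming_boundary(invariant, n):
    """States outside `invariant` at Hamming distance exactly one from it.

    `invariant` is a set of length-n tuples over {0, 1}; the boundary is
    every state not in the invariant that is one bit-flip away from some
    state in it. (Illustrative sketch, not the paper's algorithm.)
    """
    boundary = set()
    for state in invariant:
        for i in range(n):
            # flip bit i to obtain a Hamming-distance-1 neighbor
            neighbor = state[:i] + (1 - state[i],) + state[i + 1:]
            if neighbor not in invariant:
                boundary.add(neighbor)
    return boundary

# Example: an invariant over 3 boolean variables.
inv = {(0, 0, 0), (0, 0, 1)}
print(sorted(hamming_boundary(inv, 3)))
```

The fence condition then asks that every such boundary state be reachable backwards from the bad states within a small number of transition steps, which is what lets exact-learning results apply.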

Original language: English
Article number: 15
Journal: Proceedings of the ACM on Programming Languages
Issue number: POPL
State: Published - Jan 2021


Keywords
  • Hamming geometry
  • complexity
  • exact learning
  • interpolation
  • invariant inference


