NeuroSAT is an experimental SAT solver learned from single-bit supervision only. We train it as a classifier to predict the satisfiability of random SAT problems, and it learns to search for satisfying assignments in order to explain that one bit of supervision. When it guesses sat, we can usually decode the satisfying assignment it has found from its activations. It can often find solutions to problems that are bigger, harder, and from entirely different domains than those it saw during training.
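At a high level, the model embeds literals and clauses and refines those embeddings by message passing over the clause-literal incidence graph, with a mean over per-literal "votes" producing the single sat/unsat logit. The real model uses trained LayerNorm LSTMs; the untrained tanh layers and names below are our own illustrative assumptions, not the repo's code:

```python
import numpy as np

# Toy, untrained stand-in for NeuroSAT-style message passing.
# Literal and clause embeddings exchange messages along the
# clause-literal incidence matrix; a mean over per-literal votes
# gives the single-bit prediction.
class ToyNeuroSAT:
    def __init__(self, d=16, seed=0):
        rng = np.random.default_rng(seed)
        self.d = d
        self.W_clause = rng.normal(0, 0.1, (2 * d, d))   # [C ; M @ L] -> C
        self.W_literal = rng.normal(0, 0.1, (3 * d, d))  # [L ; M.T @ C ; flip(L)] -> L
        self.w_vote = rng.normal(0, 0.1, (d, 1))         # per-literal vote

    def predict(self, M, rounds=8):
        """M: (num_clauses, 2n) 0/1 incidence matrix; columns 0..n-1
        are x_1..x_n and columns n..2n-1 are their negations."""
        m, two_n = M.shape
        n = two_n // 2
        L = np.ones((two_n, self.d))
        C = np.ones((m, self.d))
        flip = np.r_[np.arange(n, two_n), np.arange(n)]  # swap x_i <-> ~x_i
        for _ in range(rounds):
            C = np.tanh(np.hstack([C, M @ L]) @ self.W_clause)
            L = np.tanh(np.hstack([L, M.T @ C, L[flip]]) @ self.W_literal)
        logit = (L @ self.w_vote).mean()
        return 1.0 / (1.0 + np.exp(-logit)), L
```

For example, the formula (x1 ∨ x2) ∧ ¬x1 over two variables becomes the incidence matrix `[[1, 1, 0, 0], [0, 0, 1, 0]]`; with random weights the output probability is of course meaningless until trained.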
Specifically, we train it as a classifier to predict satisfiability on random problems that look like this:
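The training distribution pairs each unsat problem with a sat twin that differs in a single literal: random clauses are appended until the formula becomes unsat, and negating one literal in the final clause then yields a satisfiable problem. A minimal sketch of such a generator, with our own helper names (the clause-length distribution 1 + Bernoulli(0.7) + Geometric(0.4) is the one reported in the paper, approximated here):

```python
import itertools
import random

def is_sat(n, clauses):
    """Brute-force satisfiability check (fine for the small n used here).
    Clauses are lists of DIMACS-style literals: v means x_v, -v means not x_v."""
    return any(
        all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause) for clause in clauses)
        for bits in itertools.product([False, True], repeat=n)
    )

def sample_pair(n, rng):
    """Sample one (unsat, sat) problem pair: append random clauses until
    the formula becomes unsat, then negate one literal in the final
    clause. The satisfying assignment for the pre-final formula then
    satisfies the flipped clause, so the twin is guaranteed sat."""
    clauses = []
    while True:
        k = 2 if rng.random() < 0.7 else 1   # 1 + Bernoulli(0.7)
        while rng.random() > 0.4:            # extra draws ~ Geometric(0.4)
            k += 1
        k = min(k, n)
        variables = rng.sample(range(1, n + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
        if not is_sat(n, clauses):
            unsat = [list(c) for c in clauses]
            sat = [list(c) for c in clauses]
            sat[-1][rng.randrange(len(sat[-1]))] *= -1
            return unsat, sat
```

The brute-force checker is only for illustration; the repo's generator uses a real SAT solver to decide each prefix.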
When making a prediction about a new problem, it guesses unsat with low confidence (light blue) until it finds a satisfying assignment, at which point it guesses sat with very high confidence (red) and converges:
At convergence, the literals cluster according to the solution it finds:
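Because the literal embeddings separate into two clusters, the assignment can be read off with plain 2-means clustering: the clustering induces two candidate assignments (either cluster can play "true"), and we keep one if it satisfies the formula. A minimal sketch under those assumptions, with hypothetical helper names (not the repo's decoding code):

```python
import numpy as np

def decode_assignment(L_final, clauses, n):
    """2-means cluster the final literal embeddings and test the two
    candidate assignments the clustering induces. L_final is (2n, d),
    rows 0..n-1 for x_1..x_n and rows n..2n-1 for their negations;
    clauses use DIMACS-style literals. Returns a list of bools or None."""
    centers = L_final[[0, n]].astype(float).copy()
    labels = np.zeros(2 * n, dtype=int)
    for _ in range(20):  # plain Lloyd iterations
        d0 = ((L_final - centers[0]) ** 2).sum(axis=1)
        d1 = ((L_final - centers[1]) ** 2).sum(axis=1)
        labels = (d1 < d0).astype(int)
        for j in (0, 1):
            if (labels == j).any():
                centers[j] = L_final[labels == j].mean(axis=0)
    for true_cluster in (0, 1):  # either cluster may be the "true" one
        assignment = [bool(labels[i] == true_cluster) for i in range(n)]
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in c) for c in clauses):
            return assignment
    return None
```

If neither candidate satisfies the formula, the decode fails and `None` is returned; in practice this happens when the network has not actually converged to a solution.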
At test time it can often find solutions to:
- bigger random problems
- graph coloring problems
- clique detection problems
- dominating set problems
- and vertex cover problems
- The graph problems (color, clique, domset, and cover) are over small random graphs (~10 nodes, ~17 edges on average).
- NeuroSAT is vastly less efficient and less reliable than even the simplest traditional SAT solver.
- This is early-stage research. It is only a scientific curiosity right now and is far from being useful technology.
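Each of these graph problems reaches NeuroSAT as plain CNF. As one example, here is the textbook encoding of k-colorability, with variable v(i, c) meaning "node i receives color c"; we don't claim this is the exact reduction used to build the benchmarks above:

```python
def coloring_cnf(n_nodes, edges, k):
    """Encode k-colorability of a graph as CNF with DIMACS-style
    literals (v means the variable is true, -v means it is false)."""
    def v(i, c):
        return i * k + c + 1
    clauses = []
    for i in range(n_nodes):
        clauses.append([v(i, c) for c in range(k)])        # node i gets some color
        for c1 in range(k):
            for c2 in range(c1 + 1, k):
                clauses.append([-v(i, c1), -v(i, c2)])     # ...and at most one
    for i, j in edges:
        for c in range(k):
            clauses.append([-v(i, c), -v(j, c)])           # adjacent nodes differ
    return clauses
```

A triangle is 3-colorable but not 2-colorable, so `coloring_cnf(3, [(0, 1), (1, 2), (0, 2)], 2)` is unsatisfiable while the `k=3` encoding is satisfiable.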
As many readers know all too well, facilitating exact reproducibility in machine learning can require substantial engineering effort, and NeuroSAT is no exception. We regret that we do not currently provide a push-button way to retrain our exact model on the exact training data used in our experiments, though we may add such functionality in the future depending on the level of interest. For now, we provide our model code, a generator for the distribution of problems we trained on, and enough scaffolding to train and validate the model easily on small datasets. More utilities will be added in the coming weeks. We hope users will adapt our code to their own infrastructure, improve on our model, and train it on a greater variety of problems.
More information about NeuroSAT can be found in our paper: https://arxiv.org/abs/1802.03685.
- Daniel Selsam, Stanford University
- Matthew Lamm, Stanford University
- Benedikt Bünz, Stanford University
- Percy Liang, Stanford University
- Leonardo de Moura, Microsoft Research
- David L. Dill, Stanford University
This work was supported by Future of Life Institute grant 2017-158712.