We present experiments exploring the notions of global correctness and global robustness defined in our research paper:
Nathanaël Fijalkow and Mohit Kumar Gupta
Verification of Neural Networks: Specifying Global Robustness using Generative Models
The experiments are provided as Jupyter notebooks:
- Random walk
- Analysis of an image classifier using a generative model
- Evaluating the global correctness
- Searching for Realistic Adversarial Examples: black-box approach
- Searching for Realistic Adversarial Examples: white-box approach
- Dependence on the generative model: disjoint training sets
All experiments use TensorFlow; pre-trained models are available in /Models.
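The spirit of the black-box search (walking in the latent space of a generative model and watching for classifier label changes) can be sketched as follows. This is a minimal illustration with toy NumPy stand-ins: `generator`, `classifier`, `LATENT_DIM`, and `random_walk_search` are hypothetical names for this sketch, not the notebooks' API; the actual experiments use trained TensorFlow models.

```python
import numpy as np

# Toy stand-ins for the trained generative model and image classifier;
# in the notebooks these are TensorFlow models loaded from /Models.
rng = np.random.default_rng(0)
LATENT_DIM = 8

def generator(z):
    # Toy "decoder": deterministic map from a latent vector to a flat array.
    return np.tanh(np.outer(z, z)).ravel()

def classifier(x):
    # Toy binary classifier: label depends on the sign of the mean activation.
    return int(x.mean() > 0)

def random_walk_search(steps=200, step_size=0.3):
    """Black-box search: take random steps in latent space and record the
    pairs of latent points where the classifier's label flips, i.e. candidate
    realistic adversarial examples near the decision boundary."""
    z = rng.standard_normal(LATENT_DIM)
    label = classifier(generator(z))
    flips = []
    for _ in range(steps):
        z_next = z + step_size * rng.standard_normal(LATENT_DIM)
        next_label = classifier(generator(z_next))
        if next_label != label:
            flips.append((z.copy(), z_next.copy()))
        z, label = z_next, next_label
    return flips

candidates = random_walk_search()
print(f"found {len(candidates)} label flips along the walk")
```

Because the walk stays in the latent space, every candidate corresponds to an output of the generative model, which is what makes the resulting adversarial examples "realistic" in the sense of the paper.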