Just as program execution has hotspots that execution profilers can identify, (automatic) verification of code has hotspots that we would like to find using verification profilers.
[galea:arxiv:2018] identifies the following requirements for a verification profiler: generality (of the profiling model), explainability (independent of internal details), and actionability (it finds root causes).
Profilers in existing systems
KLEE dumps around ten statistics about each source line in the callgrind format used by KCachegrind: instructions covered, instructions uncovered, time, solver time, successful solver queries, unsuccessful solver queries, forks, etc.
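KLEE writes these per-line statistics to run.istats in its output directory, which KCachegrind can open directly. To make the format concrete, here is a minimal sketch (not part of KLEE) that sums the per-line cost columns of a callgrind-format file; the column names are read from the file's `events:` header, and the sketch assumes absolute position fields and no name compression, which may not hold for every KLEE version.

```python
#!/usr/bin/env python3
"""Sketch: sum per-line costs from a callgrind-format file (e.g. run.istats)."""

import sys
from collections import defaultdict

def per_line_costs(path):
    events = []        # cost column names, from the 'events:' header
    n_pos = 1          # number of position fields per cost line ('positions:' header)
    cur_file = "?"
    skip_next = False  # the cost line after 'calls=' is a call's inclusive cost
    totals = defaultdict(lambda: [0] * len(events))   # (file, line) -> cost vector

    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("events:"):
                events = line.split(":", 1)[1].split()
            elif line.startswith("positions:"):
                n_pos = len(line.split(":", 1)[1].split())
            elif line.startswith(("fl=", "fi=", "fe=")):
                cur_file = line.split("=", 1)[1]
            elif line.startswith("calls="):
                skip_next = True
            elif "=" in line or ":" in line:
                pass   # fn=, cfn=, version:, etc. -- ignored in this sketch
            else:
                if skip_next:               # cost of a call, not of this line itself
                    skip_next = False
                    continue
                fields = line.split()
                src_line = fields[n_pos - 1]              # last position field is the line
                bucket = totals[(cur_file, src_line)]
                for i, cost in enumerate(int(x) for x in fields[n_pos:]):
                    bucket[i] += cost
    return events, totals

if __name__ == "__main__":
    events, totals = per_line_costs(sys.argv[1])
    print("line\t" + "\t".join(events))
    for (fname, lno), costs in sorted(totals.items()):
        print(f"{fname}:{lno}\t" + "\t".join(str(c) for c in costs))
```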
SymPro [bornholt:oopsla:2018] is based on some of the same metrics as KLEE but adds more detail about the fan-in and fan-out of the graph to give more insight into splits and joins. It also tracks the number of terms created but never used (see the sketch below).
Jalangi also implements the SymPro profiler API.
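To make the fan-in/fan-out and unused-term metrics concrete, here is a minimal sketch (hypothetical, not SymPro's implementation) of the bookkeeping such a profiler needs: each term records how many operands it has (fan-in) and how many later terms consume it (fan-out); terms whose fan-out is still zero at the end were created but never used.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Term:
    op: str
    args: List["Term"] = field(default_factory=list)
    uses: int = 0                 # fan-out: how many consumers this term has

    @property
    def fan_in(self) -> int:      # fan-in: number of operand edges
        return len(self.args)

class TermProfiler:
    def __init__(self):
        self.terms: List[Term] = []

    def mk(self, op: str, *args: Term) -> Term:
        """Build a term and update the fan-in/fan-out bookkeeping."""
        t = Term(op, list(args))
        for a in args:
            a.uses += 1           # each operand gains one consumer
        self.terms.append(t)
        return t

    def assert_term(self, t: Term) -> None:
        """A term handed to the solver counts as used."""
        t.uses += 1

    def report(self) -> None:
        unused = [t for t in self.terms if t.uses == 0]
        print(f"{len(self.terms)} terms created, {len(unused)} never used")
        for t in self.terms:
            print(f"  {t.op}: fan-in={t.fan_in} fan-out={t.uses}")

# Build (x + y) and (x * y) but only ever assert a constraint on the sum.
p = TermProfiler()
x, y = p.mk("x"), p.mk("y")
s = p.mk("+", x, y)
prod = p.mk("*", x, y)          # created but never used anywhere
p.assert_term(p.mk("=", s, p.mk("0")))
p.report()
```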
Profiling verifiers seems to be a bit like profiling lazy functional languages: you are often more interested in where a symbolic term was constructed than in where it was evaluated. As with lazy evaluation, just looking at where a verifier spends its time will tend to show you where terms are evaluated (in the solver) rather than where they were built.
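One way to act on that observation is to tag every symbolic term with the program location that constructed it, and, when the solver runs, charge the solver time back to those construction sites rather than to the call site of the check. A minimal sketch of that idea (the names and the flat attribution policy are assumptions, not any particular verifier's API):

```python
import time
import traceback
from collections import defaultdict

class Term:
    def __init__(self, op, args=()):
        self.op, self.args = op, tuple(args)
        # Record where this term was built: the frame that called Term(...).
        frame = traceback.extract_stack()[-2]
        self.birthplace = f"{frame.filename}:{frame.lineno}"

def construction_sites(term, seen=None):
    """All construction sites reachable from a term (shared subterms counted once)."""
    seen = set() if seen is None else seen
    if id(term) in seen:
        return set()
    seen.add(id(term))
    sites = {term.birthplace}
    for a in term.args:
        sites |= construction_sites(a, seen)
    return sites

solver_cost = defaultdict(float)   # construction site -> attributed solver time (s)

def check(query, run_solver):
    """Run the solver and charge its time to the query's construction sites."""
    start = time.perf_counter()
    result = run_solver(query)
    elapsed = time.perf_counter() - start
    for site in construction_sites(query):
        solver_cost[site] += elapsed   # simple policy: full cost to every site
    return result

# Toy usage with a stand-in solver.
a = Term("x")
b = Term("+", (a, Term("1")))
print(check(b, run_solver=lambda q: "sat"))
print(dict(solver_cost))
```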
Papers related to Verification profiling of code
- SymNav: Visually assisting symbolic execution [angelini:vizsec:2019]
- The Axiom profiler: Understanding and debugging SMT quantifier instantiations [becker:tacas:2019]
- Finding code that explodes under symbolic evaluation [bornholt:oopsla:2018]
- Evaluating manual intervention to address the challenges of bug finding with KLEE [galea:arxiv:2018]