Swarm verification uses parallelism and (most importantly) diversity to hunt for violations with a model checker. The goal is to find violations within some limited time budget, not to prove the absence of violations by exhaustive search.
Using SPIN, they run a large number of small jobs (number × size chosen to fit in physical RAM) and create diversity by:
- varying hash polynomials
- varying the number of hash functions used in bitstate hashing (SPIN's bitstate mode is essentially a Bloom filter)
- varying random seeds for process scheduling decisions
- varying random seeds for transition selection within processes
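The diversification above can be sketched as a small job generator. This is my own illustrative sketch, not the swarm tool itself; the pan flags in the generated command strings (`-h` for hash polynomial, `-k` for hash functions per state, `-RS` for the random seed, and the `-DT_RAND`/`-DP_RAND` compile-time directives for randomized transition and process selection) are assumptions based on SPIN's documented options.

```python
import random

def swarm_jobs(n_jobs, model="model.pml", seed=0):
    """Generate diversified (compile, run) command pairs, swarm-style.

    Each job gets a different hash polynomial, bit count, and random
    seed, so the independent searches explore different regions of the
    state space with little overlap. Flag names are assumptions based
    on SPIN's documented pan options, not output of the real swarm tool.
    """
    rng = random.Random(seed)  # deterministic for reproducible scripts
    jobs = []
    for i in range(n_jobs):
        hash_poly = rng.randint(1, 32)    # -hN: alternate hash polynomial
        bits = rng.choice([1, 2, 3])      # -kN: hash functions per state (Bloom filter)
        rs = rng.randrange(1, 10**6)      # -RSn: seed for randomized search order
        compile_cmd = (f"spin -a {model} && "
                       f"cc -O2 -DBITSTATE -DT_RAND -DP_RAND -o pan{i} pan.c")
        run_cmd = f"./pan{i} -h{hash_poly} -k{bits} -RS{rs}"
        jobs.append((compile_cmd, run_cmd))
    return jobs
```

The point is only that diversity is cheap: each job differs in a few flags, yet the resulting searches are close to statistically independent.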
The result is dramatically better coverage, and much faster discovery of known defects, on a synthetic example.
The benefits come from a mixture of greater search velocity in the early stages of the search and from avoiding the need to coordinate runs, since separate runs largely avoid repeating each other's work.
There is no need for a parallelized algorithm or an SMP machine: if the runs are sufficiently independent that they repeat little work, they need no coordination at all. (I think it can also be combined with parallel algorithms if you want.)
The swarm tool uses just a few parameters to generate a verification script that controls all the separate runs. For example, SPIN's memory consumption over time is predictable, so given a time limit swarm calculates the maximum memory needed per process and, from that and the physical memory available, the number of processes to run. This makes it easy to apply swarm without needing to understand SPIN or swarm in detail.
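The sizing rule can be sketched as follows. This is a minimal sketch of the reasoning described above, assuming memory use grows at a roughly constant rate during a run; the function and parameter names are mine, not swarm's.

```python
def plan_swarm(physical_mem_mb, time_limit_s, mem_rate_mb_per_s, base_mem_mb=8):
    """Estimate how many verification processes fit in RAM.

    If a SPIN run's memory use grows at a roughly predictable rate
    (assumption: linear growth, base + rate * time), a run lasting
    time_limit_s needs about per_process_mb of memory, and the number
    of parallel processes is whatever fits in physical RAM.
    """
    per_process_mb = base_mem_mb + mem_rate_mb_per_s * time_limit_s
    n_procs = max(1, physical_mem_mb // per_process_mb)
    return int(n_procs), per_process_mb
```

For example, with 8 GB of RAM, a 60-second budget, and runs growing at 2 MB/s, each run needs about 128 MB, so 64 processes fit. The appeal is that the user supplies only the time and memory limits; the tool derives the rest.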