Issues in deploying software defect detection tools

David R. Cok
Read: 27 August 2020

Proceedings of BUGS 2005 (PLDI 2005 Workshop on the Evaluation of Software Defect Detection Tools)
Chicago, IL, USA
June 2005
Note(s): under-approximation

This one-page position paper discusses what makes a bug-finding tool useful and usable in an industrial setting. The accompanying slides are worth reading, since they cover material not present in the paper.

Usefulness is affected by three things:

  • Accuracy. In the slides, the conventional true/false positive/negative metrics are recast as follows (see the sketch after this list):

    • TP/(TP+FN) = recall = sensitivity = Pr(detect the bug | there is a bug)
    • TP/(TP+FP) = precision = predictive value positive = Pr(real bug | detected bug)
    • TN/(TN+FP) = specificity = Pr(no detection | no bug)
    • TN/(TN+FN) = predictive value negative = Pr(no bug | no detection)
  • Handling the bulk of the important defect areas for a user
  • Time and space performance consistent with the benefit provided
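
A minimal sketch (not from the paper; the function name and the example counts are illustrative assumptions) of how the four accuracy rates above fall out of a tool run's confusion matrix:

```python
# Illustrative only: express the four rates as functions of the counts of
# true/false positives/negatives produced by a defect-detection tool run.

def accuracy_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the four rates from a confusion matrix."""
    return {
        # Pr(detect the bug | there is a bug)
        "recall (sensitivity)": tp / (tp + fn),
        # Pr(real bug | detected bug)
        "precision (predictive value positive)": tp / (tp + fp),
        # Pr(no detection | no bug)
        "specificity": tn / (tn + fp),
        # Pr(no bug | no detection)
        "predictive value negative": tn / (tn + fn),
    }

# Hypothetical run: 40 real defects flagged, 10 false alarms,
# 900 clean sites left alone, 50 real defects missed.
print(accuracy_metrics(tp=40, fp=10, tn=900, fn=50))
```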

Usability has several components:

  • Out-of-the-box experience
  • Resource investment profile
  • Perspicuity

Finally, there is a need for confidence that the tool will continue to exist and be supported.