QuantifyML: How Good is my Machine Learning Model?

Muhammad Usman
(University of Texas at Austin, USA)
Divya Gopinath
(KBR Inc., CMU, NASA Ames)
Corina S. Păsăreanu
(KBR Inc., CMU, NASA Ames)

The efficacy of machine learning models is typically determined by computing their accuracy on test data sets. However, this can be misleading, since the test data may not be representative of the problem being studied. With QuantifyML we aim to precisely quantify the extent to which machine learning models have learned and generalized from the given data. Given a trained model, QuantifyML translates it into a C program and feeds it to the CBMC model checker to produce a formula in Conjunctive Normal Form (CNF). The formula is analyzed with off-the-shelf model counters to obtain precise counts of different model behaviors. QuantifyML enables i) evaluating learnability by comparing the counts for the model's outputs to the ground truth, expressed as logical predicates, ii) comparing the performance of models built with different machine learning algorithms (decision trees vs. neural networks), and iii) quantifying the safety and robustness of models.
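To illustrate the pipeline described in the abstract, the following is a minimal sketch (not taken from the paper) of how a toy decision tree might be encoded as a C program for CBMC. The tree structure, the input bounds, and the ground-truth predicate are purely illustrative assumptions; QuantifyML's actual translation of trained models may differ.

#include <assert.h>

int nondet_int(void);  /* CBMC treats this as a nondeterministic input */

/* Toy decision tree over two integer features; classes are 0 and 1. */
int tree_predict(int x, int y) {
    if (x <= 5)
        return (y <= 2) ? 0 : 1;
    return 1;
}

/* Hypothetical ground truth expressed as a logical predicate. */
int ground_truth(int x, int y) {
    return (x + y > 7) ? 1 : 0;
}

int main(void) {
    int x = nondet_int();
    int y = nondet_int();

    /* Bound the input domain so that counts are well defined. */
    __CPROVER_assume(x >= 0 && x <= 10);
    __CPROVER_assume(y >= 0 && y <= 10);

    /* CBMC turns the negation of this assertion into a CNF formula;
       its satisfying assignments (projected onto the inputs) are the
       points where the tree disagrees with the ground truth, and an
       off-the-shelf model counter can count them. */
    assert(tree_predict(x, y) == ground_truth(x, y));
    return 0;
}

The CNF can be produced with CBMC's standard DIMACS output (e.g. cbmc toy_tree.c --dimacs --outfile toy_tree.cnf, where the file name is hypothetical) and then passed to a model counter; dividing the resulting count by the size of the bounded input domain gives the fraction of inputs on which the model and the ground truth disagree.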

In Marie Farrell and Matt Luckcuck: Proceedings Third Workshop on Formal Methods for Autonomous Systems (FMAS 2021), Virtual, 21st-22nd of October 2021, Electronic Proceedings in Theoretical Computer Science 348, pp. 92–100.
Published: 21st October 2021.

ArXived at: https://dx.doi.org/10.4204/EPTCS.348.6