Verifying Machine Learning Interpretability Requirements through Provenance
Lynn Vonderhaar, Juan Couder, Daryela Cisneros, Omar Ochoa
TLDR
This paper introduces a method that uses ML provenance to verify interpretability requirements, turning an immeasurable Non-Functional Requirement (NFR) into quantifiable Functional Requirements (FRs).
Key contributions
- Identifies the challenge of verifying immeasurable ML interpretability requirements.
- Proposes using ML provenance data to make model behavior transparent and interpretable.
- Details how saving provenance data forms quantifiable Functional Requirements (FRs), as sketched in the code after this list.
- Explains that verifying these FRs effectively confirms the interpretability NFR.
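The paper does not prescribe tooling or a record format, but the core idea, capturing provenance as a concrete artifact that a requirement can be checked against, can be sketched in plain Python. Everything below (the field names such as `dataset_sha256`, the JSON layout, and the file paths) is an illustrative assumption, not the authors' specification:

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Hash the training data so the exact dataset version is traceable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def save_provenance(record_path: str, dataset_path: str, hyperparams: dict) -> None:
    """Persist a provenance record alongside the trained model.

    The fields chosen here are assumptions for illustration; the paper
    discusses model and data provenance in general terms.
    """
    record = {
        "dataset_sha256": sha256_of_file(dataset_path),
        "hyperparameters": hyperparams,
        "trained_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(record_path, "w") as f:
        json.dump(record, f, indent=2)

# Placeholder dataset so the sketch runs end to end.
with open("train.csv", "w") as f:
    f.write("x,y\n1,2\n")

save_provenance("provenance.json", "train.csv", {"learning_rate": 0.01, "epochs": 20})
```

Each saved field corresponds to a statement an FR can quantify, e.g. "the provenance record stores the hash of the training dataset", which is either satisfied or not.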
Why it matters
Verifying interpretability is a major hurdle in ML engineering. By making this crucial NFR measurable and verifiable, the paper's approach adds rigor to ML development, a prerequisite for building trustworthy and accountable AI systems.
Original Abstract
Machine Learning (ML) Engineering is a growing field that necessitates an increase in the rigor of ML development. It draws many ideas from software engineering and, more specifically, from requirements engineering. Existing literature on ML Engineering defines quality models and Non-Functional Requirements (NFRs) specific to ML, with interpretability being one such NFR. However, a major challenge arises in verifying ML NFRs, including interpretability. Although existing literature defines interpretability in terms of ML, it remains an immeasurable requirement, making it impossible to definitively confirm whether a model meets its interpretability requirement. This paper shows how ML provenance can be used to verify ML interpretability requirements. This work provides an approach by which ML engineers can save various types of model and data provenance to make the model's behavior transparent and interpretable. Saving this data forms the basis of quantifiable Functional Requirements (FRs) whose verification in turn verifies the interpretability NFR. Ultimately, this paper contributes a method to verify interpretability NFRs for ML models.
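The abstract's final step, verifying the FRs in order to verify the interpretability NFR, then reduces to a pass/fail check over the saved record. Continuing with the hypothetical field names from the sketch above:

```python
import json

# Hypothetical FR: every released model ships with a provenance record
# containing these fields. The field names are assumptions carried over
# from the earlier sketch, not the paper's specification.
REQUIRED_FIELDS = {"dataset_sha256", "hyperparameters", "trained_at"}

def verify_provenance_fr(record_path: str) -> bool:
    """The FR passes iff every required provenance field is present."""
    with open(record_path) as f:
        record = json.load(f)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        print(f"FR violated: missing provenance fields {sorted(missing)}")
        return False
    print("FR satisfied: provenance record is complete")
    return True

# Sample record so the check runs standalone; in practice this would be
# the provenance.json written at training time.
with open("provenance.json", "w") as f:
    json.dump({
        "dataset_sha256": "0" * 64,
        "hyperparameters": {"learning_rate": 0.01, "epochs": 20},
        "trained_at": "2024-01-01T00:00:00+00:00",
    }, f)

verify_provenance_fr("provenance.json")
```

Unlike "the model is interpretable", this check has a definite answer, which is exactly the move the paper makes: verifying the measurable FRs stands in for verifying the immeasurable NFR.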