A data audit is a systematic examination of each stage of the data science process. Problems can be introduced at any stage, so a full audit requires close scrutiny of every one. In another post I'll talk about tests designed to uncover problems, and possible remediations for the problems found, at each stage. For now I'll assume we have full access to the model in question, although many of these questions can be addressed even when access is limited.
At the highest level a data audit has four phases:
- DATA
- DEFINE
- BUILD
- MONITOR
To audit a given algorithm, we work through phase-specific questions.
DATA-related questions:
- What data have you collected? Is it relevant, and do you have enough of the right kind?
- What is the integrity of the data? Is it biased? Is some of it more or less accurate than the rest? How do you test for this?
- Is your data systematically missing important types of information? Is it under- or over-representing certain types of events, behaviors, or people?
- How are you cleaning the data and handling missing, outlying, or unreasonable values? What is your ground truth for settling these questions? (A sketch of some mechanical checks follows this list.)
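To make a first pass at these questions concrete, here is a minimal sketch of the kind of mechanical checks an auditor might run: missingness per column, representation against known population shares, and values outside Tukey's fences. The `loans` DataFrame, its column names, and the population shares are all hypothetical; a real audit goes much deeper.

```python
import numpy as np
import pandas as pd

def audit_data(df: pd.DataFrame, group_col: str, population_shares: dict) -> None:
    """Basic integrity checks: missingness, representation, extreme values."""
    # 1. Missingness: a column with many absent values may signal a
    #    systematic gap in collection rather than random noise.
    missing = df.isna().mean()
    print("Share missing per column:\n", missing[missing > 0])

    # 2. Representation: compare group shares in the sample against known
    #    population shares to spot under- or over-representation.
    sample_shares = df[group_col].value_counts(normalize=True)
    for group, pop_share in population_shares.items():
        print(f"{group}: sample {sample_shares.get(group, 0.0):.0%} "
              f"vs population {pop_share:.0%}")

    # 3. Unreasonable values: flag numbers outside Tukey's 1.5 x IQR fences.
    for col in df.select_dtypes(include=np.number).columns:
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        outside = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
        if outside.any():
            print(f"{col}: {outside.sum()} value(s) outside the IQR fences")

# Hypothetical loan data; the columns are illustrative only.
loans = pd.DataFrame({
    "income":  [52_000, 48_000, None, 61_000, 9_900_000],
    "borough": ["A", "A", "A", "B", "A"],
})
audit_data(loans, group_col="borough", population_shares={"A": 0.5, "B": 0.5})
```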
DEFINE-related questions:
- How do you define "success" for your algorithm? Are there other related definitions of success, and what do you think would happen if you tweaked the definition?
- Which attributes do you search through for potential associations with success or failure? To what extent are those attributes proxies rather than directly relevant to the definition of success, and what could go wrong as a result? (See the sketch after this list.)
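The sensitivity to the definition of success can be tested directly: relabel the same data under each candidate definition and see whether the picture changes. In the toy sketch below (invented repayment records, hypothetical column names), a strict definition and a loose one produce opposite rankings of the same two groups.

```python
import pandas as pd

# Hypothetical repayment records; the two definitions below disagree on
# whether a late-but-complete repayment counts as "success".
records = pd.DataFrame({
    "repaid":    [True, True, False, True, True],
    "days_late": [0,    45,   0,     10,   0],
    "zip_code":  ["A",  "B",  "A",   "B",  "A"],
})

definitions = {
    "strict": records["repaid"] & (records["days_late"] == 0),  # on time, in full
    "loose":  records["repaid"],                                # eventually, in full
}

# If success rates by group flip between definitions, the choice of
# definition is doing real work and deserves scrutiny.
for name, labels in definitions.items():
    print(name, labels.groupby(records["zip_code"]).mean().to_dict())
# strict: A scores higher than B; loose: B scores higher than A
```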
BUILD-related questions:
- What kind of algorithm should you use?
- How do you calibrate the model?
- How do you decide when the algorithm has been optimized? (One concrete check is sketched below.)
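One concrete answer to the calibration question is a reliability check on held-out data: among cases the model scores around 0.8, roughly 80% should actually be positive. The sketch below uses scikit-learn's `calibration_curve` on synthetic data as a stand-in for the real problem.

```python
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real data.
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

# For a well-calibrated model the two columns should roughly agree.
frac_pos, mean_pred = calibration_curve(y_test, probs, n_bins=10)
for pred, obs in zip(mean_pred, frac_pos):
    print(f"predicted {pred:.2f} -> observed {obs:.2f}")
```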
MONITOR-related questions:
- To what extent is the model working in production?
- Does it need to be updated over time?
- How are the errors distributed?
- Is the model creating unintended consequences?
- Is the model playing a part in a larger feedback loop? (Two simple monitoring checks are sketched below.)
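Two of these questions lend themselves to simple numeric checks: the error distribution can be broken out by group, and drift between the score distribution the model was validated on and the one it sees in production can be summarized with the Population Stability Index. Both sketches below use invented data, and the ~0.25 PSI threshold is a common rule of thumb, not a law.

```python
import numpy as np
import pandas as pd

# Error distribution: aggregate accuracy can hide a model that fails
# badly for one subpopulation. All values here are invented.
outcomes = pd.DataFrame({
    "group":      ["A", "A", "B", "B", "B"],
    "prediction": [1, 0, 1, 1, 0],
    "actual":     [1, 0, 0, 0, 0],
})
errors = outcomes["prediction"] != outcomes["actual"]
print(errors.groupby(outcomes["group"]).mean().to_dict())  # A: 0.0, B: ~0.67

def psi(expected: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between validation-time and production
    score distributions; values above roughly 0.25 are often read as
    serious drift that warrants investigation or retraining."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch scores outside the old range
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(live, edges)[0] / len(live)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
validation = rng.beta(2, 5, size=10_000)           # scores at validation time
production = rng.beta(2, 5, size=10_000) + 0.10    # same model, shifted inputs
print(f"PSI = {psi(validation, production):.3f}")
```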