Model Robustness

We can describe the robustness or stability of a model in terms of how much its performance varies under noise or small perturbations: the less it varies, the more robust the model.

A model that is sensitive to noise or small perturbations may be overly optimized to its training dataset and, in turn, fragile to changes in the data, i.e., overfit.

We assume that small changes to the model and data result in proportionally small changes to model performance on the test harness.

Is there evidence that the model is robust to perturbations?

Checks

  1. Is the variance in model performance modest when Gaussian noise is added to the input features (see the first sketch after this list)?
  2. Is the variance in model performance modest when Gaussian noise is added to the target variable (regression only)?
  3. Is the variance in model performance modest when class labels are flipped in the target variable (classification only)?
  4. Is the variance in model performance modest when noise is added to the learning algorithm, e.g. by varying random number generator seeds for algorithms that use randomness (see the second sketch after this list)?
  5. Is the variance in model performance modest when Gaussian noise is added to key learning algorithm hyperparameter(s)?
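
The first sketch below illustrates check 1: evaluate the same fitted model repeatedly while adding fresh Gaussian noise to the test inputs, then look at the spread of scores. The dataset, model, noise level, and number of repeats are illustrative assumptions, not prescriptions.

```python
# Check 1 (sketch): how much does test accuracy vary when Gaussian noise
# is added to the input features? Small variance suggests robustness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# synthetic data standing in for a real test harness (assumed)
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

model = RandomForestClassifier(random_state=1)
model.fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

rng = np.random.default_rng(1)
noise_scale = 0.1  # assumed noise level; choose relative to the scale of your features
scores = []
for _ in range(30):
    # perturb the test inputs with zero-mean Gaussian noise and re-evaluate
    X_noisy = X_test + rng.normal(loc=0.0, scale=noise_scale, size=X_test.shape)
    scores.append(accuracy_score(y_test, model.predict(X_noisy)))

print(f"baseline accuracy: {baseline:.3f}")
print(f"noisy-input accuracy: mean={np.mean(scores):.3f}, std={np.std(scores):.3f}")
```

Checks 2 and 3 follow the same pattern, except that the perturbation is applied to the target variable (Gaussian noise for regression, random label flips for classification) before refitting and scoring.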
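The second sketch illustrates check 4: refit the same learning algorithm under different random number generator seeds and examine the spread of test scores. Again, the dataset, model, and number of seeds are assumptions for illustration.

```python
# Check 4 (sketch): how much does test accuracy vary across random seeds
# of the learning algorithm, with the data held fixed?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

scores = []
for seed in range(30):
    # same algorithm and data, only the internal randomness changes
    model = RandomForestClassifier(random_state=seed)
    model.fit(X_train, y_train)
    scores.append(accuracy_score(y_test, model.predict(X_test)))

print(f"accuracy across seeds: mean={np.mean(scores):.3f}, std={np.std(scores):.3f}")
```

Check 5 can be run the same way by perturbing one or more key hyperparameters (rather than the seed) on each refit and comparing the resulting variance in scores.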