Typically, an AI system is trained offline. You collect your data, then train and evaluate your model. All offline. You then deploy and cross your fingers that the live data looks like the offline data and the model stays useful.
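A minimal sketch of that offline loop, using scikit-learn and a synthetic dataset purely for illustration (the library and dataset are my assumptions, not part of the text):

```python
# Offline workflow sketch: collect data, train, evaluate, freeze, deploy.
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# "Collected" data, all gathered ahead of time.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                                 # train offline
print("offline accuracy:", accuracy_score(y_test, model.predict(X_test)))  # evaluate offline

joblib.dump(model, "model.joblib")                          # ship the frozen artifact
# From here on, live traffic hits a model that no longer changes.
```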

When a model is wrong, it could be because of distribution shift, an outlier, or some artifact of the training process. But whatever the cause, the model itself has not changed. It is still the same model.

The model can be retrained, fine-tuned, or fed more data. Importantly, when you provide new information, the model changes. Even if a given prediction stays the same, the internals under the hood have shifted.
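A quick sketch of that point, again leaning on scikit-learn as an illustrative stand-in (the classifier and data are assumptions): updating on new data moves the weights, even when the prediction for a particular input may come out the same.

```python
# Show that an update on new data shifts the model's parameters,
# regardless of whether a particular prediction changes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_old, y_old = X[:400], y[:400]      # original training data
X_new, y_new = X[400:], y[400:]      # "new information" arriving later

model = SGDClassifier(random_state=0)
model.partial_fit(X_old, y_old, classes=np.unique(y))

probe = X_new[:1]                    # one input we keep checking
pred_before = model.predict(probe)
coef_before = model.coef_.copy()

model.partial_fit(X_new, y_new)      # provide the new information

pred_after = model.predict(probe)
coef_after = model.coef_

print("prediction unchanged:", bool(pred_before == pred_after))
print("max weight change:", np.abs(coef_after - coef_before).max())
# The prediction may well be identical, but the weights have moved.
```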

This is not necessarily true of humans. Their priors can be so strong that even compelling new evidence does not change their behavior. This can be frustrating if you are used to fixing models with more data. You often can’t “fix” people with new data.