Hey, very astute question.
And yes, you’re exactly right: if we ran the evaluations against new data, we could see a shift in precision (in either direction).
There are two ways to look at it:
– In terms of overall precision, it works out to needing around 15 straight days of 100% inaccurate predictions to drop overall precision by 1%.
– But if you looked at the precision of those 15 days alone, it would be very low.
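To make the dilution effect concrete, here’s a rough sketch of the arithmetic. The daily prediction volume and baseline precision below are hypothetical; the actual “15 days for 1%” figure depends on the real evaluation window and volumes:

```python
def blended_precision(base_precision, good_days, bad_days, preds_per_day=100):
    """Overall precision when `bad_days` worth of predictions are all wrong.

    Assumes a uniform number of predictions per day. All numbers here
    are illustrative, not the actual production volumes.
    """
    true_positives = base_precision * good_days * preds_per_day
    total_predictions = (good_days + bad_days) * preds_per_day
    return true_positives / total_predictions

# A year of history at 95% precision, followed by 15 fully wrong days:
overall = blended_precision(0.95, good_days=365, bad_days=15)
window = 0.0  # precision measured over just the 15 bad days is zero

print(f"overall precision: {overall:.4f}, 15-day window: {window:.4f}")
```

The point is that the same bad stretch looks mild when averaged into months of history, but terrible when measured on its own, which is why we watch both views.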
We run training and evaluations throughout the month to feed in new scenarios and, hopefully, head off this sort of drift. In practice, precision holds fairly constant.
But then we get curveballs like COVID-19. So it’s always an adventure. 🙂