Hi @joe1pratt, this is such a big departure from the current models that we're really putting it through rigorous testing across an expanding set of scenarios.
To be straightforward: we're finding it is extremely capable, meaning it *can* generate results that are multiples better. However, it is also less dependable; there are scenarios in which it really falters.
As an analogy, if this were autonomous driving, we'd have something that can blast through freeway traffic at 120 MPH. But when it goes to park, it keeps denting its fender on light poles. Unacceptable on a high-end car.
To address this, we're generating our biggest data set ever. It is time-consuming to get some of the historical data points at the frequency we need, and it's also a massive amount of data to process.
But the expectation is that this will train our model across a sufficiently broad set of scenarios…and we should avoid those darn fender dents.
If I step back and look at our current models, they are a little more cautious and prefer to stay closer to the speed limit. But they're dependable and get us where we want to go in a broad range of conditions. As a bonus, this massive new data set we're generating will, at minimum, make our current models stronger and more robust.
To sum up: 4.0 anomaly detection is our future and our focus. Whether it makes up the core of our models or becomes another tool for our current models to leverage remains to be seen. I expect we are weeks (not months) away from getting something live.