- April 27, 2020 at 1:29 pm #8710 by admin (Keymaster)
The ETH Neural Network has undergone retraining and also gained four new inputs. These are new volatility vectors designed to help the NN better differentiate behavior between sideways and trending markets. The BTC model underwent this same upgrade last week.
This achieves an increase in precision from 0.88 to 0.96 (learn more). That is the highest value we’ve ever been able to achieve.
When the predictions are plugged into our trading system, they generate:
▸ 4.5x better results for ETH-Aggressive
No action is needed on your part as this is an in-place upgrade.
- May 1, 2020 at 2:02 am #8768 by matthew55 (Participant)
Too bad we missed the recent bull run of ETH 🙁
Do you have any idea why the system missed it?
Also, what would it practically mean when/if the precision parameter reaches 1.00?
Thank you as always.
- May 1, 2020 at 7:35 am #8769 by Cirke (Participant)
Yes, I am also interested to know the meaning of 1.0.
- May 2, 2020 at 5:02 pm #8771 by Justin (Moderator)
The ETH model issued 4 trades in April, locking in +36%, which is pretty exceptional. It sat out this last upswing, but I assume it's looking for confirmation of a break above these levels. We want to capture most of the upswings and minimize losses on the downswings.
In terms of precision, that is referencing R-Squared, which measures the proportion of error the model removes relative to a naive baseline. We use that measure because it correlates very well with how our models perform once their output is handed off to the trading system.
It will never hit 1, and each incremental improvement will be much more challenging to achieve.
Here’s a link to read more: https://crypto-ml.com/blog/machine-learning-upgrade-to-5-0-deep-neural-networks/#R-Squared
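To make the measure concrete, here is a minimal sketch of how R-Squared is typically computed. The numbers are purely illustrative, not Crypto-ML's actual predictions; the point is that R-Squared only reaches 1.0 when every prediction is exactly right, which is why it will never get there in practice.

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 minus the model's squared error
    relative to the squared error of always predicting the mean."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Illustrative price series and model predictions (hypothetical values):
actual    = [100.0, 102.0, 101.0, 105.0, 107.0]
predicted = [100.5, 101.5, 101.5, 104.0, 106.5]
print(round(r_squared(actual, predicted), 2))  # prints 0.94

# Only perfect predictions yield exactly 1.0:
print(r_squared(actual, actual))  # prints 1.0
```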
And an image that shows why we use it as one of our main measures:
From the post…
By doubling the R-Squared value, we were able to achieve results 79 times better.
The following chart shows the difference between the 4.x and 5.0 models. By using Deep Neural Networks, we are making a large jump in precision.
- May 4, 2020 at 9:49 am #8775 by Radu43 (Participant)
You are only looking at past performance when quoting a precision value near 1. So while you may be close to 1 now, a month from now the same model could be at 0.7 for the month of May. Your models are only as good as your data, and you don't have future data :). Am I right?
- May 5, 2020 at 9:12 pm #8795 by Justin (Moderator)
Hey, very astute question.
And yes, you're exactly right: if we ran the evaluations against new data, we could see a difference in precision (either + or -).
There are two ways to look at it:
– In terms of overall precision, it would take around 15 straight days of 100% inaccurate predictions to slip overall precision by 1%.
– But if you looked at the precision of those 15 days alone, you would get a very low value.
We run training and evaluations throughout the month to feed in new scenarios and, hopefully, avoid that sort of drift. Practically speaking, we see the precision hold fairly constant.
But then we get curve balls like COVID-19. So it's always an adventure. 🙂