- October 24, 2019 at 4:56 pm #6192 jessica.crypto (Moderator)
We’ve just posted details on the new ML model. The signal itself will be deployed to the Member Dashboard within 48 hours.
Let us know your thoughts and questions.
- October 24, 2019 at 7:28 pm #6195 kw (Participant)
I thought it was a very informative and articulate article! It really shed some light on the inner workings of the model, how it generates signals, and how that is different from human decision-making. It was much appreciated. I’m bullish on the new model. Can’t wait to try it out!
- October 25, 2019 at 8:06 am #6198 martyf (Participant)
Great stuff! Glad to see it in the dashboard. Looking forward to seeing how it calls this current mess.
Can’t wait for the short side of this indicator though… this bear market probably isn’t going to be too favorable for long signals!
Am I reading the blog post right – “Also, I cannot stress this enough, the anomaly model is realizing these tighter numbers without any variation in the model or parameters. In order for our existing models to achieve their numbers, they need to continuously react and evolve.” – It seems this new model doesn’t “learn on the fly” like the 3.5 model? While this seems amazing, and congrats if this holds true, does the emergence of a completely new market behavior concern you? This was one of the things that attracted me to ML for crypto-trading – that the algos would learn and adapt to emerging market phenomena. But I guess this would manifest as the new model consistently losing its edge over time, and everyone could conclude from the change in results that the market has fundamentally changed.
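The "losing its edge over time" idea could be checked empirically with something like a rolling win-rate monitor. A minimal sketch (the function name, window size, and toy data are all my own illustrative assumptions, not anything from the platform):

```python
from collections import deque

def rolling_win_rate(outcomes, window=100):
    """Trailing win rate over the last `window` signal outcomes.

    `outcomes` is a sequence of 1 (winning signal) / 0 (losing signal).
    A sustained drop versus the long-run rate would be one hint that
    the market regime has shifted out from under a static model.
    """
    recent = deque(maxlen=window)
    rates = []
    for o in outcomes:
        recent.append(o)
        rates.append(sum(recent) / len(recent))
    return rates

# Toy data: a model that wins 60% of the time early, then degrades to 40%.
outcomes = [1, 1, 0, 1, 0] * 40 + [1, 0, 0, 1, 0] * 40
rates = rolling_win_rate(outcomes, window=50)
```

Watching a statistic like this drift downward is roughly how "everyone could conclude from the change in results" would play out in practice.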
I mean, perhaps there is no “new market behavior” and everything under the sun has already been seen in the multi-year history of bitcoin and the other markets.
For instance, one such shift in bitcoin “fundamentals”, as it were, is the potential shift to off-chain activity. I know that Blockstream’s Liquid has been providing inter-exchange service for some time now, and will probably continue to grow. Then there’s the Lightning Network, if that takes off.
Then there’s the possibility of bitcoin introducing any kind of privacy technology, such that transaction amounts could be obfuscated, but that’s probably a long time coming.
I don’t mean to be a doubting mustafa, and I definitely don’t want to discount your achievement… just curious is all.
Thanks for all you do!
- October 26, 2019 at 8:21 pm #6204 Justin (Moderator)
@martyf, this is a great call out. We need to update the article with a response to this because I think it’s important.
Short answer: our models will never stop learning and we (at least I) will never be satisfied with performance. We will keep learning and keep pushing the ball forward. Personally, I’d like to have a similar-level breakthrough this time next year. I love this stuff and I want that to shine through so strongly that it’s obvious to everyone who touches this platform.
I like to think about these things with a self-driving analogy. Ideally, you can train the car with a huge amount of data. With this, it should be able to drive in most conditions. As more and more cars running this software log hours, the system continues to learn, improve, and become more robust.
But that learning and improvement curve slows over time.
Unless the rules of the road drastically change, you’re going to end up with a very stable, robust model of driving.
This chart illustrates the concept: https://i.ytimg.com/vi/ayiV7mH2_Qg/maxresdefault.jpg
Y-axis would be effectiveness. X-axis would be time.
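The plateau that chart describes can be modeled as a saturating curve. Here is a minimal sketch; the exponential functional form, ceiling, and time constant are illustrative assumptions on my part, not parameters of the actual model:

```python
import math

def effectiveness(t, ceiling=1.0, tau=10.0):
    """Illustrative saturating learning curve: rapid early gains,
    then diminishing improvement as t grows (the flat right side
    of the chart). `ceiling` is the asymptotic effectiveness and
    `tau` controls how quickly the curve levels off."""
    return ceiling * (1.0 - math.exp(-t / tau))

# Early improvement is steep; later improvement is marginal.
early_gain = effectiveness(2) - effectiveness(1)
late_gain = effectiveness(50) - effectiveness(49)
```

On a curve like this, a mature model sits far to the right: still improving in principle, but barely moving in practice unless the underlying "rules of the road" change.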
The new model will continue to learn and we will absolutely allow it to continue to seek opportunities to improve based on new data patterns. However, I don’t expect it to change much (or at all) unless the “rules of the road” drastically change.
The new model is much farther to the right of the chart than the 3.5 model is or will ever be.
This is where the real accomplishment lies: we’re able to get this model to a point where it can drive on all sorts of roads in all sorts of places very effectively.
You may ask why our 3.5 and earlier models continue to learn and change. Why haven’t they reached this same degree of maturity? There are two reasons:
1. They are built upon a narrower set of data, which is mostly technical. No matter how much of this data there is, it probably just doesn’t paint the entire picture as well as our new, broader data set.
2. They are mathematically less capable. Creating the underlying framework for developing these systems and structuring the data sets is where the hard human work goes. We have just been able to use our experience to create an environment in which a more capable model could flourish.
With all of that said, I fully expect the crypto “rules of the road” to change. We will undoubtedly see regulation, technology, and institutional adoption changes that will upset the status quo. This is a dynamic environment.
Also, as I alluded to earlier, even though our model is robust and seems to work well in most any historical situation, there can always be a better model.