The "Latte" EA is ready to trade several symbols in the fully automated mode from 1 chart.
Love this EA? Leave a positive review and get Dark Mars as a FREE bonus!
Latte and Dark Mars are two completely different trading styles — diversify your strategy with both at no extra cost. If you are interested, write to me via Messages to claim your bonus. Limited offer, so act fast!
Signal: https://www.mql5.com/en/signals/2323484
The EA uses a "Transformer" neural network to forecast price movements. The main advantage of the Transformer over an LSTM network is its ability to find patterns even across very long sequences of data. While LSTMs often lose information when dealing with sequences longer than 2–3 months, Transformers handle sequences as long as a year with ease.
The Transformer architecture was first introduced by Google in 2017 for language translation tasks. Since then, this type of neural network has been widely adopted for building artificial intelligence systems, including ChatGPT. The key innovation of the Transformer is self-attention: each element of the input sequence is encoded into a high-dimensional space and can directly attend to every other element, allowing the model to capture complex relationships across the whole sequence. This approach sparked a revolution in machine learning, discussed at first only among experts but later driving major advances as AI became mainstream. As a result, Transformer models have increasingly replaced LSTMs in many fields, including financial market forecasting.
What impressed me the most is that the Transformer is able to keep learning even when the validation data differs from the training data. In my experience, LSTM networks often require the validation set to contain patterns similar to the training set in order to make further progress; when the validation examples are too different, LSTM training stalls completely. The Transformer, however, generalizes much better and continues to improve even on unfamiliar validation data. My tests show that the Transformer significantly outperforms the LSTM in binary classification tasks; a comparative table of performance metrics is included in the images.
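The exact figures are in the attached comparison table; as an illustration only (the model names and prediction arrays below are placeholders, not the EA's code), this is the kind of calculation such a binary-classification comparison rests on:

```python
# Hypothetical illustration: scoring two direction classifiers on the same validation set.
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def report(name, y_true, y_prob, threshold=0.5):
    """y_true: 0/1 labels, y_prob: predicted probabilities (NumPy arrays)."""
    y_pred = (y_prob >= threshold).astype(int)
    print(f"{name}: acc={accuracy_score(y_true, y_pred):.3f}  "
          f"f1={f1_score(y_true, y_pred):.3f}  "
          f"auc={roc_auc_score(y_true, y_prob):.3f}")

# report("LSTM", y_val, lstm_probs)
# report("Transformer", y_val, transformer_probs)
```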
Another key difference is that the Transformer is a much more complex architecture. While training an LSTM took me many hours, training a Transformer can take many days. For this reason, I plan to expand the capabilities of this EA gradually. And if the idea becomes popular, the very first thing I’ll do is buy the most advanced GPU to run large-scale experiments.
The Transformer neural network processes hundreds of bars and performs extensive calculations, which is why historical testing can be time-consuming. To speed up the process, here are two recommendations: 1) Test one symbol at a time - disable other symbols in the EA's settings; 2) Use "1-minute OHLC" ticks for faster backtesting. I’m currently working on further code optimizations. That said, it’s important to note that Transformer neural networks inherently require significant computation time. This delay only affects historical testing - live trading remains unaffected since signals are calculated just once per day.
Start the EA on one chart of any symbol. The EA always trades all of its symbols from that single chart, regardless of which symbol the chart shows.
The EA trades using daily data, so you can use either "Every tick" or "1 minute OHLC" tick modelling. The latter option substantially decreases testing time.
If for any reason you are not satisfied with the purchased program, you can request a refund within 30 days of the date of purchase. You can also exchange it for any other product of equal cost, or pay the difference.
Simply send a request for refund or exchange with your order number by email: support@fx-market.pro.
Refund requests received more than 30 days after purchase will be rejected.