Trading with SVMs: Performance
To get a feeling of SVM performance in trading, I ran different setups on the S&P 500 historical data from … the 50s. The main motive behind using this decade was to decide which parameters to vary and which to keep steady prior to running the more important tests. Treat it as an “in-sample” test to avoid (further;)) over-fitting. First the performance chart:
Very nice! Using the 5 lagged daily returns shows similar performance to the ARMA+GARCH strategy, which I found very promising. If you wonder why I am so excited about this fact, it’s because here we are in the area where ARMA+GARCH is best, and yet, SVMs show comparable performance.
The chart compares four methods: ARMA+GARCH, the SVM on summary statistics, the SVM on the last 5 daily returns, and the SVM on the last 5 daily returns with greedy feature selection.
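To make the setup more concrete, here is a minimal sketch of the "SVM on the last 5 daily returns" idea, assuming the e1071 package and a numeric vector of daily returns `rets` (both names are mine, not the exact code behind these tests); the real runs use a rolling window and parameter tuning that this omits.

```r
# Minimal sketch: regression SVM on the last 5 daily returns (illustrative only).
library(e1071)

make.lagged <- function(rets, lags = 5) {
  emb <- embed(rets, lags + 1)            # column 1 is the current return
  data.frame(y = emb[, 1], emb[, -1])     # remaining columns are lags 1..5
}

dd <- make.lagged(rets, lags = 5)

# Fit on all but the most recent observation, then forecast the next return
fit <- svm(y ~ ., data = head(dd, -1),
           type = "eps-regression", kernel = "radial")
next.ret <- predict(fit, tail(dd, 1))

# Position for the next day: long if the forecast is positive, short otherwise
pos <- sign(next.ret)
```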
The statistics are also impressive:
| | Buy and Hold | ARMA+GARCH | SVM Lags | SVM Lags Greedy | SVM Stats |
|---|---|---|---|---|---|
| Cumulative Return | 95.93% | 260.97% | 218.96% | 284.04% | 274.21% |
| Annualized Return | 15.14% | 30.88% | 27.53% | 32.59% | 31.87% |
| Sharpe Ratio | 1.33 | 2.86 | 2.44 | 2.89 | 2.83 |
| Winning Pct | 52.37% | 51.4% | 54.77% | 54.93% | 54.13% |
| Annualized SD | 0.1137 | 0.1078 | 0.113 | 0.1127 | 0.1127 |
| Max Drawdown | -14.82% | -15.44% | -16.18% | -11.85% | -10.27% |
| Avg Drawdown | -3.43% | -1.61% | -2.08% | -1.77% | -1.63% |
While writing this post, I found another effort to use SVMs in trading, by Quantum Financier. His approach uses RSIs of different lengths as input to the SVM, and it uses classification (mapping the returns to two classes, short or long) instead of regression. Since I was planning to try classification anyway, his post inspired me to implement it and run an additional comparison, regression vs classification:
What can I say – they both seem to work perfectly. As a reader suggested in the comments, the classification does exhibit more consistent returns.
| | Regression | Classification |
|---|---|---|
| Cumulative Return | 214.47% | 214.42% |
| Annualized Return | 26.05% | 26.05% |
| Sharpe Ratio | 2.32 | 2.32 |
| Winning Pct | 56.54% | 54.93% |
| Annualized SD | 0.1124 | 0.1124 |
| Max Drawdown | -16.18% | -8.22% |
| Avg Drawdown | -2.18% | -1.72% |
Looking at the table, classification cut the maximum drawdown in half, but, interestingly, it didn’t improve the Sharpe ratio significantly. Nothing conclusive here though; it was a quick run of the fastest (in terms of running time) strategies.
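For comparison, here is a rough sketch of how the classification variant might look under the same assumptions as the earlier sketch (the hypothetical `make.lagged` helper and e1071): the returns are mapped to two classes and a C-classification SVM replaces the regression.

```r
# Rough sketch: classification SVM on the same 5 lags (illustrative only).
library(e1071)

dd <- make.lagged(rets, lags = 5)         # from the earlier sketch
dd$y <- factor(ifelse(dd$y >= 0, "long", "short"))

fit <- svm(y ~ ., data = head(dd, -1),
           type = "C-classification", kernel = "radial")
next.pos <- predict(fit, tail(dd, 1))     # "long" or "short" for the next day
```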
There is still a long list of topics to explore, just to give you an idea, in no particular order:
- Add other features. Mostly thinking of adding some Fed-related series; this data goes back to 1960, so it’s coming soon. :)
- Try other SVM parameters: other regression types, other classification types, other kernels, etc. This is more like a stability test.
- Try other error functions. The default is the mean squared error, but in the case of regression, why not use the (in-sample) Sharpe ratio? The regression case is simpler, since we have the actual returns – check the inputs of tune.control (a sketch follows this list).
- Try longer periods instead of days. Weekly is a start, but ideally I’d like to implement two- or three-day periods.
- Vary the lookback period.
- Use more classes with classification: large days, medium days, etc.
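On the error-function point above: e1071’s tune.control accepts an error.fun(true, predicted) that the tuning minimizes, so an in-sample Sharpe ratio can be plugged in as its negative. The sketch below is purely illustrative and reuses the regression data frame `dd` from the first sketch (numeric y).

```r
# Sketch: tune an SVM by minimizing the negative in-sample Sharpe ratio.
library(e1071)

neg.sharpe <- function(true, predicted) {
  # Trade the sign of the forecast against the realized return
  strat <- sign(predicted) * true
  -sqrt(252) * mean(strat) / sd(strat)
}

tuned <- tune(svm, y ~ ., data = dd,
              ranges = list(gamma = 2^(-4:0), cost = 2^(0:4)),
              tunecontrol = tune.control(error.fun = neg.sharpe,
                                         sampling = "cross", cross = 5))
```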
This will take time. As always, feedback and comments are welcome.