Algorithmic Trading: Using Quantopian’s Zipline Python Library In R And Backtest Optimizations By Grid Search And Parallel Processing
We are ready to demo our new experimental package for algorithmic trading, `flyingfox`, which uses `reticulate` to bring Quantopian's open source algorithmic trading Python library, `Zipline`, to R. The `flyingfox` library is part of our NEW Business Science Labs innovation lab, which is dedicated to bringing experimental packages to our followers early on so they can test them out and let us know what they think before they make their way to CRAN. This article includes a long-form code tutorial on how to perform backtest optimizations of trading algorithms via grid search and parallel processing. We'll show you how to use the combination of `tibbletime` (a time-based extension of `tibble`) + `furrr` (a parallel-processing complement to `purrr`) + `flyingfox` (`Zipline` in R) to develop a backtested trading algorithm that can be optimized via grid search and parallel processing. We are releasing this article as a complement to the R/Finance Conference presentation "A Time Series Platform For The Tidyverse", which Matt will present on Saturday (June 2nd, 2018). Enjoy!
New: Business Science Labs
I (Davis) am excited to introduce a new open source initiative called Business Science Labs. A lot of the experimental work we do is done behind the scenes, and much of it you don’t see early on. What you do see is a “refined” version of what we think you need based on our perception, which is not always reality. We aim to change this. Starting today, we have created Business Science Labs, which is aimed at bringing our experimental software to you earlier so you can test it out and let us know your thoughts!
Our first initiative is to bring Quantopian's open source algorithmic trading Python library, `Zipline`, to R via an experimental package called `flyingfox` (built using the awesome `reticulate` package).
What We’re Going To Learn
Introducing Business Science Labs is exciting, but we really want to educate you on some new packages! In this tutorial, we are going to go over how to backtest algorithmic trading strategies using parallel processing and Quantopian's Zipline infrastructure in R. You'll gain exposure to `tibbletime`, `furrr`, and our experimental `flyingfox` package. The general progression is:
- `tibbletime`: What it is and why it's essential to performing scalable time-based calculations in the `tidyverse`
- `furrr`: Why you need to know this package for speeding up code by processing `purrr` in parallel
- `flyingfox`: The story behind the package, and how you can use it to test algorithmic trading strategies
- `tibbletime` + `furrr` + `flyingfox`: Putting it all together to perform parallelized algorithmic trading strategies and analyze time-based performance
Here’s an example of the grid search we perform to determine which are the best combinations of short and long moving averages for the stock symbol JPM (JP Morgan).
Here’s an example of the time series showing the order (buy/sell) points determined by the moving average crossovers, and the effect on the portfolio value.
Algorithmic Trading Strategies And Backtesting
Algorithmic trading is nothing new. Financial companies have been performing algorithmic trading for years as a way of attempting to “beat” the market. It can be very difficult to do, but some traders have successfully applied advanced algorithms to yield significant profits.
Using an algorithm to trade boils down to buying and selling. In the simplest case, when an algorithm detects an asset (a stock) is going to go higher, a buy order is placed. Conversely, when the algorithm detects that an asset is going to go lower, a sell order is placed. Positions are managed by buying and selling all or part of the portfolio of assets. To keep things simple, we’ll focus on just the full buy/sell orders.
One very basic method of algorithmic trading is using short and long moving averages to detect shifts in trend. The crossover is the point where a buy/sell order would take place. The figure below shows the price of Halliburton (symbol "HAL"), in which a trader holds an initial position of, say, 10,000 shares. In a hypothetical case, the trader could use a combination of a 20-day short moving average and a 150-day long moving average and look for buy/sell points at the crossovers. If the trader sold his/her position in full at the sell signal and bought the position back in full at the buy signal, the trader would stand to avoid a loss of approximately $5/share during the downswing, or $50,000.
Backtesting is a technique used to evaluate how a trading strategy would have performed in the past. It's impossible to know what the future will bring, but using trading strategies that have worked in the past helps instill confidence in an algorithm.
Quantopian is a platform designed to enable anyone to develop algorithmic trading strategies. To help its community, Quantopian provides several open source tools. The one we'll focus on is `Zipline` for backtesting. There's one downside: it's only available in Python.

With the advent of the `reticulate` package, which enables porting any Python library to R, we took it upon ourselves to test out the viability of porting `Zipline` to R. Our experiment is called `flyingfox`.
RStudio Cloud Experiment Sandbox
In this code-based tutorial, we'll use an experimental package called `flyingfox`. It has several dependencies, including Python, that require setup time and effort. For those that want to test out `flyingfox` quickly, we've created a FREE RStudio Cloud Sandbox for running experiments. You can access the Cloud Sandbox here for FREE: https://rstudio.cloud/project/38291
Packages Needed For Backtest Optimization
The meat of this code tutorial is the section Backtest Optimization Using tibbletime + furrr + flyingfox. However, before we get there, we'll go over the three main packages used to do high-performance backtesting optimizations:

- `tibbletime`: What it is, and why it's essential to performing scalable time-based calculations in the `tidyverse`
- `furrr`: Why you need to know this package for speeding up code by processing `purrr` in parallel
- `flyingfox`: How to use it to test algorithmic trading strategies
- `tibbletime` + `furrr` + `flyingfox`: Putting it all together for backtesting optimizations performed using parallel processing and grid search!
Install & Load Libraries
Install Packages
For this post, you'll need to install the development version of `flyingfox`.

If you are on Windows, you should also install the development version of `furrr`.

Other packages you'll need include `tibbletime`, `furrr`, and `tidyverse`. We'll also load `tidyquant`, mainly for the `ggplot2` themes, and `ggrepel` to repel overlapping plot labels. You can install these from CRAN using `install.packages()`.
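A sketch of the installation steps might look like the following. The `flyingfox` repository path comes from the GitHub link later in this post; the `furrr` development repository path is an assumption.

```r
# Development versions from GitHub (furrr dev version needed on Windows)
# install.packages("devtools")
devtools::install_github("DavisVaughan/flyingfox")
devtools::install_github("DavisVaughan/furrr")

# CRAN packages
install.packages(c("tibbletime", "furrr", "tidyverse", "tidyquant", "ggrepel"))
```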
Load Packages
We’ll cover how a few packages work before jumping into backtesting and optimizations.
1. tibbletime
The `tibbletime` package is a cornerstone for future time series software development efforts at Business Science. We have major plans for this package. Here are some key benefits:

- Time periods down to milliseconds are supported
- Because a `tbl_time` object is a `tibble` under the hood, we are able to leverage existing packages for analysis without reinventing the wheel
- Scalable grouped analysis is at your fingertips because of `collapse_by()` and integration with `group_by()`

It's best to learn now, and we'll go over the basics along with a few commonly used functions: `collapse_by()`, `rollify()`, `filter_time()`, and `as_period()`.
First, let's get some data. We'll use the FANG data set that comes with `tibbletime`, which includes stock prices for FB, AMZN, NFLX, and GOOG. We recommend using the `tidyquant` package to get this or other stock data.

Next, you'll need to convert this `tbl_df` object to a `tbl_time` object using the `tbl_time()` function.
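A minimal sketch of this setup:

```r
library(tibbletime)
library(dplyr)

data(FANG)

# Convert the tbl_df to a time-aware tbl_time, indexed by the date column,
# and group by symbol for the grouped operations below
FANG_time <- FANG %>%
  tbl_time(index = date) %>%
  group_by(symbol)

FANG_time
```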
collapse_by()
Beautiful. Now we have a time-aware tibble. Let's test out some functions. First, let's take a look at `collapse_by()`, which collapses the time index to a coarser period for grouped operations. We'll collapse by "year" and calculate the average price for each of the stocks.
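For example, assuming the grouped `FANG_time` object from above:

```r
# Collapse the daily index to yearly dates, then compute the mean
# adjusted price for each symbol-year
FANG_time %>%
  collapse_by("year") %>%
  group_by(symbol, date) %>%
  summarise(mean_adj = mean(adjusted))
```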
rollify()
Next, let's take a look at `rollify()`. Remember the chart of Halliburton prices at the beginning? It was created using `rollify()`, which turns any function into a rolling function. Here's the code for the chart. Notice how we create two rolling functions from `mean()` by supplying the appropriate `window` argument.
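A sketch along those lines (the date range is an assumption to roughly match the chart):

```r
library(tibbletime)
library(tidyverse)
library(tidyquant)

# Turn mean() into 20- and 150-day rolling functions
roll_mean_20  <- rollify(mean, window = 20)
roll_mean_150 <- rollify(mean, window = 150)

hal_ma_tbl <- tq_get("HAL", from = "2016-01-01", to = "2017-12-31") %>%
  mutate(
    ma_20  = roll_mean_20(adjusted),
    ma_150 = roll_mean_150(adjusted)
  ) %>%
  select(date, adjusted, ma_20, ma_150) %>%
  gather(key = "key", value = "value", -date)

hal_ma_tbl %>%
  ggplot(aes(date, value, color = key)) +
  geom_line() +
  labs(title = "HAL with 20 and 150 day moving averages") +
  theme_tq()
```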
filter_time()
Let's check out `filter_time()`, which enables easier subsetting of time-based indexes. Let's redo the chart above, instead focusing in on the sell and buy signals, which occur after February 2017. We can convert the previously stored `hal_ma_tbl` to a `tbl_time` object, group by the "key" column, and then filter using the time formula format, `filter_time("2017-03-01" ~ "end")`. We then reuse the plotting code above.
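Assuming the long-format `hal_ma_tbl` from the `rollify()` step, that looks roughly like:

```r
hal_ma_tbl %>%
  as_tbl_time(index = date) %>%
  group_by(key) %>%
  # Keep only March 2017 through the end of the series
  filter_time("2017-03-01" ~ "end") %>%
  ggplot(aes(date, value, color = key)) +
  geom_line() +
  theme_tq()
```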
as_period()
We can use the `as_period()` function to change the periodicity to a less granular level (e.g. going from daily to monthly). Here we convert the HAL share prices from daily periodicity to monthly periodicity.
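As a quick sketch (the date range is again an assumption):

```r
library(tidyquant)

# Daily -> monthly periodicity
tq_get("HAL", from = "2016-01-01", to = "2017-12-31") %>%
  as_tbl_time(index = date) %>%
  as_period("monthly")
```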
Next, let's check out a new package for parallel processing with `purrr`: `furrr`!
2. furrr
The `furrr` package combines the awesome powers of `future` for parallel processing with `purrr` for iteration. Let's break this up into pieces: `purrr`, `future`, and `furrr`.
purrr
The `purrr` package is used for iteration over a number of different types of generic objects in R, including vectors, lists, and tibbles. The main function is `map()`, which comes in several varieties (e.g. `map()`, `map2()`, `pmap()`, `map_chr()`, `map_lgl()`, `map_df()`, etc.). Here's a basic example: getting the `class()` of the columns of the `FANG_time` variable. Using `map()` iterates over the columns of the data frame, returning a list containing the result of the function applied to each column.
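That example is a one-liner, assuming the `FANG_time` object from earlier:

```r
library(purrr)

# A data frame is a list of columns, so map() applies class() to each column
# and returns a named list
FANG_time %>%
  map(class)
```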
future
The `future` package enables parallel processing. Here are a few important points:

- `future` is a unified interface for parallel programming in R.
- You set a "plan" for how code should be executed, call `future()` with an expression to evaluate, and call `value()` to retrieve the value. The `future()` call sends the code off to another R process and is non-blocking, so you can keep running R code in this session. It only blocks once you call `value()`.
- Global variables and packages are automatically identified and exported for you!
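A tiny sketch of that workflow (`plan("multiprocess")` was the idiom at the time of this post):

```r
library(future)

plan("multiprocess")

# Non-blocking: this expression is sent off to another R process immediately
f <- future({
  Sys.sleep(2)
  mean(1:10)
})

# We only block here, when we ask for the result
value(f)
#> [1] 5.5
```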
Now, the major point: if you're familiar with `purrr`, you can take advantage of `future`'s parallel processing with `furrr`!
furrr = future + purrr
`furrr` takes the best parts of `purrr` and combines them with the parallel processing capabilities of `future` to create a powerful package with an interface instantly familiar to anyone who has used `purrr` before. All you need to do is two things:

- Set up a `plan()`, such as `plan("multiprocess")`
- Change `map()` (or another `purrr` function) to `future_map()` (or the compatible `furrr` function)

Every `purrr` function has a compatible `furrr` function. For example, `map_df()` has `future_map_df()`. Set a plan, run `future_map_df()`, and that is all there is to it!
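As a minimal sketch, here using `future_map_dfr()` (the row-binding variant):

```r
library(furrr)

plan("multiprocess")

# Same interface as purrr::map_dfr(), but each element is processed
# in parallel
future_map_dfr(c(4, 9, 16), ~ tibble::tibble(x = .x, root = sqrt(.x)))
```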
furrr Example: Multiple Moving Averages
Say you would like to compute not a single moving average but multiple moving averages for a given data set. We can create a custom function, `multi_roller()`, that uses `map_dfc()` to iteratively map a `rollify()`-ed `mean()` over a sequence of windows. Here's the function and how it works when a user supplies a data frame, a column of the data frame to perform moving averages on, and a sequence of window sizes.
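A sketch of what `multi_roller()` might look like (the exact implementation in the original post may differ):

```r
library(tidyverse)
library(tibbletime)

# Compute a rolling mean of `col` for each window size, binding each
# result onto the original data as a new roll_<window> column
multi_roller <- function(data, col, window = c(20, 150)) {
  col_expr <- enquo(col)

  roll_tbl <- map_dfc(window, function(w) {
    roll_fun <- rollify(mean, window = w)
    tibble(!!paste0("roll_", w) := roll_fun(pull(data, !!col_expr)))
  })

  bind_cols(data, roll_tbl)
}
```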
We can test the function out with the FB stock prices from FANG. We'll ungroup, filter for FB, select the important columns, and then pass the data frame to the `multi_roller()` function with `window = seq(10, 100, by = 10)`. Great, it's working!
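Assuming the `multi_roller()` function described above and the `FANG_time` object from earlier, that test looks like:

```r
# 10 moving averages (10-day through 100-day) for FB
FANG_time %>%
  ungroup() %>%
  filter(symbol == "FB") %>%
  select(symbol, date, adjusted) %>%
  multi_roller(adjusted, window = seq(10, 100, by = 10))
```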
We can apply this `multi_roller()` function to a nested data frame. Let's try it on our `FANG_time` data set. We'll select the columns of interest (symbol, date, and adjusted), group by symbol, and use the `nest()` function to nest the data.

Next, we can perform a rowwise map using the combination of `mutate()` and `map()`. We apply `multi_roller()` as an argument to `map()`, along with `data` (the variable being mapped) and the additional static arguments, `col` and `window`.
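A sketch of those two steps, assuming `multi_roller()` as defined earlier:

```r
# Nest the per-symbol data
FANG_nested <- FANG_time %>%
  select(symbol, date, adjusted) %>%
  nest()

# Rowwise map: apply multi_roller() to each symbol's nested data
FANG_nested %>%
  mutate(ma_tbl = map(data, multi_roller,
                      col = adjusted, window = seq(10, 100, by = 10)))
```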
Great, we have our moving averages. But…
What if instead of 10 moving averages we had 500? This would take a really long time to run on many stocks. Solution: parallelize with `furrr`!
There are two ways we could do this, since there are two maps:

1. Parallelize the `map()` internal to the `multi_roller()` function
2. Parallelize the rowwise `map()` applied to each symbol

We'll choose the former (1) to show off `furrr`.
To make the `multi_roller_parallel()` function, copy the `multi_roller()` function and do two things:

1. Add `plan("multiprocess")` at the beginning
2. Change `map_dfc()` to `future_map_dfc()`

That's it!
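Sketched out, assuming the `multi_roller()` implementation shown earlier:

```r
library(furrr)

# Identical to multi_roller(), but with a plan set and future_map_dfc()
# swapped in so the windows are processed in parallel
multi_roller_parallel <- function(data, col, window = c(20, 150)) {
  plan("multiprocess")
  col_expr <- enquo(col)

  roll_tbl <- future_map_dfc(window, function(w) {
    roll_fun <- rollify(mean, window = w)
    tibble(!!paste0("roll_", w) := roll_fun(pull(data, !!col_expr)))
  })

  bind_cols(data, roll_tbl)
}
```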
In the previous rowwise map, switch out `multi_roller()` for `multi_roller_parallel()` and change the window to `window = 2:500`. Sit back and let the function run in parallel, using each of your computer's cores.

Bam! 500 moving averages run in parallel, in a fraction of the time it would take running in series.
3. flyingfox
We have one final package to demo prior to jumping into our algorithmic trading backtest optimization: `flyingfox`.
What is Quantopian?
Quantopian is a company that has set up a community-driven platform enabling everyone (from traders to home-gamers) to develop algorithmic trading strategies. The one downside is that they only use Python.
What is Zipline?
Zipline is a Python module open sourced by Quantopian to help traders backtest their trading algorithms. Here are some quick facts about Quantopian's `Zipline` Python module for backtesting algorithmic trading strategies:

- It is used to develop and backtest financial algorithms using Python.
- It includes an event-driven backtester (really good at preventing look-ahead bias).
- Algorithms consist of two main functions:
  - `initialize()`: Called once at the beginning of the backtest. This sets up variables for use in the backtest, schedules functions to be called daily, monthly, etc., and lets you set slippage or commission for the backtest.
  - `handle_data()`: Called once per day (or minute) to implement the trading logic. You can place orders, retrieve historical pricing data, record metrics for performance evaluation, and more.
- Extra fact: you can use any Python module inside the `handle_data()` function, so you have a lot of flexibility here.
What is reticulate?
The `reticulate` package from RStudio is an interface to Python. It smartly takes care of (most) conversion between R and Python objects.
Can you combine them?
Yes, and that's exactly what we did. We used `reticulate` to access the `Zipline` Python module from R!
What is the benefit to R users?
What if you could write your `initialize()` and `handle_data()` functions in R, utilizing any financial or time series R package for your analysis, and then have them called from Python and `Zipline`?
Introducing flyingfox: An R interface to Zipline
`flyingfox` integrates the `Zipline` backtesting module with R! Further, it takes care of the overhead of creating the main infrastructure functions `initialize()` and `handle_data()` by enabling the user to set up:

- `fly_initialize()`: R version of Zipline's `initialize()`
- `fly_handle_data()`: R version of Zipline's `handle_data()`

`flyingfox` takes care of passing these functions to Python and `Zipline`.
Why “Flying Fox”?
"Zipline" just doesn't quite make for a good hex sticker. A flying fox is both a majestic animal and a synonym for a zip-line rider, and it's hard to argue that it wouldn't make a killer hex sticker.
Getting Started With flyingfox: Moving Average Crossover
Let’s do a Moving Average Crossover example using the following strategy:
- Using JP Morgan (JPM) stock prices
- If the 20-day short-term moving average crosses above the 150-day long-term moving average, buy 100% into JPM
- If the 20-day crosses below the 150-day, sell all of the current JPM position
Setup
Setup can take a while and use some disk space due to ingesting data (Zipline saves every major asset to your computer). We recommend one of two options:

1. No-weight option (for people that just want to try it out): Use our `flyingfox` sandbox on RStudio Cloud. You can connect here: https://rstudio.cloud/project/38291
2. Heavyweight option (for people that want to expand and really test it): Follow the instructions on my GitHub page to `fly_ingest()` data. The setup and data ingestion process are discussed here: https://github.com/DavisVaughan/flyingfox. Keep in mind that this is still a work in progress. We recommend the no-weight option as a first start.
Initialize
First, write the R function for `initialize()`. It must take `context` as an argument. This is where you store variables used later, which are accessed via `context$variable`. We'll store `context$i = 0L` to initialize the tracking of days, and `context$asset = fly_symbol("JPM")` to trade the JPM symbol. You can select any symbol that you'd like (provided Quantopian pulls it from Quandl).
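A sketch based on that description (`flyingfox` is experimental, so details of the API may differ):

```r
# Called once at the start of the backtest; values stored on `context`
# persist across calls to the handle-data function
fly_initialize <- function(context) {
  context$i     <- 0L                 # day counter
  context$asset <- fly_symbol("JPM")  # the asset we will trade
}
```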
Handle Data
Next, write a `handle_data()` function:

- This implements the crossover trading algorithm logic
- In this example we also use `fly_data_history()` to retrieve historical data for JP Morgan each day
- We use `fly_order_target_percent()` to move to a new percentage amount invested in JP Morgan (if we order `1`, we want to move to 100% invested in JP Morgan, no matter where we were before)
- We use `fly_record()` to store arbitrary metrics for review later
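A hypothetical sketch of that logic follows. The function names come from the list above, but the exact signatures (e.g. the `bar_count` and `frequency` arguments) are assumptions modeled on Zipline's Python API and may differ in the experimental package:

```r
fly_handle_data <- function(context, data) {
  # Skip early days so the 150-day moving average has enough history
  context$i <- context$i + 1L
  if (context$i < 150L) return()

  # Retrieve daily price history and compute the two moving averages
  short_mavg <- mean(fly_data_history(data, context$asset, "price",
                                      bar_count = 20L, frequency = "1d"))
  long_mavg  <- mean(fly_data_history(data, context$asset, "price",
                                      bar_count = 150L, frequency = "1d"))

  # Crossover logic: fully invested when short > long, fully out otherwise
  if (short_mavg > long_mavg) {
    fly_order_target_percent(context$asset, 1)
  } else {
    fly_order_target_percent(context$asset, 0)
  }

  # Record metrics for performance review later
  fly_record(short_mavg = short_mavg, long_mavg = long_mavg)
}
```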
Run The Algorithm
Finally, run the algorithm from 2013 to 2016 using `fly_run_algorithm()`.
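Roughly like so (the argument names are assumptions based on the description above):

```r
performance_time <- fly_run_algorithm(
  initialize  = fly_initialize,
  handle_data = fly_handle_data,
  start       = as.Date("2013-01-01"),
  end         = as.Date("2016-01-01")
)
```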
If you got to this point, you’ve just successfully run a single backtest. Let’s review the performance output.
Reviewing The Performance
Let's glimpse `performance_time` to see what the results show. It's a `tbl_time` time series data frame organized by the "date" column, and there is a ton of information. We'll focus on:

- date: Time stamp for each point in the performance analysis
- JPM: The price of the asset
- short_mavg and long_mavg: The moving averages we are using for the buy/sell crossover signals
- portfolio_value: The value of the portfolio at each time point
- transactions: Transactions stored as a list column. Each tibble contains a bunch of information that is useful in determining what happened. More on this below.
First, let’s plot the asset (JPM) along with the short and long moving averages. We can see there are a few crossovers.
Next, we can investigate the transactions. Stored within the `performance_time` output is transaction information as nested tibbles. We can get these values by flagging which time points contain tibbles and then filtering and unnesting. The transaction type can be determined by whether the "amount" of the transaction (the number of shares bought or sold) is positive or negative.
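A sketch of that flag-filter-unnest pattern (column names inside the nested tibbles, such as `amount`, follow Zipline's transaction records):

```r
library(tidyverse)

performance_time %>%
  # Keep only the time points whose transactions entry is a tibble
  filter(map_lgl(transactions, is.data.frame)) %>%
  select(date, transactions) %>%
  unnest(transactions) %>%
  # Positive amounts are buys, negative amounts are sells
  mutate(type = ifelse(amount > 0, "buy", "sell"))
```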
Finally, we can visualize the results using `ggplot2`. We can see that the ending portfolio value is just under $11.5K.
Last, let's use `tibbletime` to see what happened to our portfolio towards the end. We'll use the portfolio value as a proxy for the stock price, visualizing the crossover of the 20- and 150-day moving averages of the portfolio. Note that the actual algorithm is run with moving averages based on the adjusted stock price, not the portfolio value.
Backtest Optimization Via Grid Search
Now for the main course: optimizing our algorithm using the backtested performance. To do so, we'll combine what we learned from our three packages: `tibbletime` + `furrr` + `flyingfox`.
Let's say we want to use backtesting to find the optimal combination, or the several best combinations, of short and long term moving averages for our strategy. We can do this using Cartesian grid search, which simply creates every combination of the "hyperparameters" (the parameters we wish to adjust). Recognizing that running multiple backtests can take some time, we'll parallelize the operation too.
Preparation
Before we can do the grid search, we need to adjust our `fly_handle_data()` function so that our parameters can be adjusted. The two parameters we are concerned with are the short and long moving averages. We'll add these as arguments of a new function, `fly_handle_data_parameterized()`.
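A sketch of the parameterized version, with the same caveats as before about the experimental `flyingfox` signatures:

```r
# Same crossover logic as before, but with the window sizes exposed as
# arguments so a grid search can vary them
fly_handle_data_parameterized <- function(context, data,
                                          short_ma = 20L, long_ma = 150L) {
  context$i <- context$i + 1L
  if (context$i < long_ma) return()

  short_mavg <- mean(fly_data_history(data, context$asset, "price",
                                      bar_count = short_ma, frequency = "1d"))
  long_mavg  <- mean(fly_data_history(data, context$asset, "price",
                                      bar_count = long_ma, frequency = "1d"))

  if (short_mavg > long_mavg) {
    fly_order_target_percent(context$asset, 1)
  } else {
    fly_order_target_percent(context$asset, 0)
  }

  fly_record(short_mavg = short_mavg, long_mavg = long_mavg)
}
```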
Making The Grid
Next, we can create a grid of values from a `list()` containing the hyperparameter values. We can turn this into a cross product as a `tibble` using the `cross_df()` function.

Now that we have the hyperparameters, let's create a new column containing the function we wish to run. We'll use the `partial()` function to partially fill the function with the hyperparameters.
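Sketched out (the candidate window values are illustrative, and `short_ma`/`long_ma` assume the parameterized function described above):

```r
library(tidyverse)

# Cartesian grid: every combination of candidate short and long windows
grid_tbl <- list(
  short_ma = seq(20, 100, by = 20),
  long_ma  = seq(150, 300, by = 50)
) %>%
  cross_df()

# Partially fill the handle-data function with each hyperparameter pair
grid_tbl <- grid_tbl %>%
  mutate(f = map2(short_ma, long_ma,
                  ~ partial(fly_handle_data_parameterized,
                            short_ma = .x, long_ma = .y)))

grid_tbl
```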
Running Grid Search In Parallel Using furrr
Now for the grid search. We use the `future_map()` function to process in parallel. Make sure to set up a `plan()` first. The following code runs `fly_run_algorithm()` for each `fly_handle_data()` function stored in the "f" column.
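Roughly like so, with the same assumed `fly_run_algorithm()` argument names as before:

```r
library(furrr)

plan("multiprocess")

# One backtest per grid row, run in parallel
grid_tbl <- grid_tbl %>%
  mutate(results = future_map(f, ~ fly_run_algorithm(
    initialize  = fly_initialize,
    handle_data = .x,
    start       = as.Date("2013-01-01"),
    end         = as.Date("2016-01-01")
  )))
```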
Inspecting The Backtest Performance Results
The performance results are stored in the "results" column as `tbl_time` objects. We can examine the first result.

We can also get the final portfolio value using a combination of `pull()` and `tail()`.
We can turn this into a function and map it to all of the columns to obtain the “final_portfolio_value” for each of the grid search combinations.
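For example, assuming the `grid_tbl` with its "results" column from the parallel run:

```r
# Extract the last portfolio value from one backtest result
get_final_portfolio_value <- function(results) {
  results %>%
    pull(portfolio_value) %>%
    tail(1)
}

# Map it over every grid combination
grid_tbl %>%
  mutate(final_portfolio_value = map_dbl(results, get_final_portfolio_value))
```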
Visualizing The Backtest Performance Results
Now let’s visualize the results to see which combinations of short and long moving averages maximize the portfolio value. It’s clear that short >= 60 days and long >= 200 days maximize the return. But, why?
Let’s get the transaction information (buy/sell) by unnesting the results and determining which transactions are buys and sells.
Finally, we can visualize the portfolio value over time for each combination of short and long moving averages. By plotting the buy/sell transactions, we can see the effect on a stock with a bullish trend. The portfolios with the optimal performance are those that were bought and held rather than sold using the moving average crossover. For this particular stock, the benefit of downside protection via the moving average crossover costs the portfolio during the bullish uptrend.
Conclusions
We’ve covered a lot of ground in this article. Congrats if you’ve made it through. You’ve now been exposed to three cool packages:
- `tibbletime`: For time series in the tidyverse
- `furrr`: Our parallel processing extension to `purrr`
- `flyingfox`: Our experimental package brought to you as part of our Business Science Labs initiative
Further, you’ve seen how to apply all three of these packages to perform grid search backtest optimization of your trading algorithm.
Business Science University
If you are looking to take the next step and learn Data Science For Business (DS4B), you should consider Business Science University. Our goal is to empower data scientists through teaching the tools and techniques we implement every day. You’ll learn:
- Data Science Framework: Business Science Problem Framework
- Tidy Eval
- H2O Automated Machine Learning
- LIME Feature Explanations
- Sensitivity Analysis
- Tying data science to financial improvement
All while solving a REAL WORLD CHURN PROBLEM: Employee Turnover!
DS4B Virtual Workshop: Predicting Employee Attrition
Did you know that an organization that loses 200 high performing employees per year is essentially losing $15M/year in lost productivity? Many organizations don’t realize this because it’s an indirect cost. It goes unnoticed. What if you could use data science to predict and explain turnover in a way that managers could make better decisions and executives would see results? You will learn the tools to do so in our Virtual Workshop. Here’s an example of a Shiny app you will create.
Shiny App That Predicts Attrition and Recommends Management Strategies, Taught in HR 301
Our first Data Science For Business Virtual Workshop teaches you how to solve this employee attrition problem in four courses that are fully integrated:
- HR 201: Predicting Employee Attrition with `h2o` and `lime`
- HR 301 (Coming Soon): Building A `Shiny` Web Application
- HR 302 (EST Q4): Data Story Telling With `RMarkdown` Reports and Presentations
- HR 303 (EST Q4): Building An R Package For Your Organization, `tidyattrition`
The Virtual Workshop is intended for intermediate and advanced R users. It’s code intensive (like these articles), but also teaches you fundamentals of data science consulting including CRISP-DM and the Business Science Problem Framework. The content bridges the gap between data science and the business, making you even more effective and improving your organization in the process.
Don’t Miss A Beat
- Sign up for the Business Science blog to stay updated
- Enroll in Business Science University to learn how to solve real-world data science problems from Business Science
- Check out our Open Source Software
Connect With Business Science
If you like our software (`anomalize`, `tidyquant`, `tibbletime`, `timetk`, and `sweep`), our courses, and our company, you can connect with us:
- business-science on GitHub
- Business Science, LLC on LinkedIn
- bizScienc on twitter
- Business Science, LLC on Facebook