Pre-computing a trading plan in parallel
R version 2.14 introduced a new package called parallel, which combines the functionality of two earlier packages: snow and multicore. Since I was using multicore to parallelise my computations, I had to migrate to the new package and decided to publish some of the code.
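For most code the migration is mechanical. A minimal sketch (assuming the work is a plain mclapply call, as it was in my case):

# Before: library( multicore )
# After:
library( parallel )

# mclapply keeps the same name and interface in the new package
res = mclapply( 1:100, function( i ) i*i, mc.cores=8 )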
Trading strategies are often tested using the daily closing price both to determine the position and to execute the trade. Since the closing price is not known until the close itself, the action plan has to be pre-computed for a range of possible closes, and parallelisation may be necessary if the computations are heavy.
The code at the end of this post pre-computes the actions for CSS Analytics’ DVI indicator. The entry point is as follows:
library( quantmod )
library( parallel )

# Load the code at the end of this post

# Get the SPY ETF from Yahoo
getSymbols( "SPY", from="1900-01-01" )

# Compute the actions
computeDVIActionPar( Cl( SPY ), range=10, cores=8 )
This requests the position for every possible closing price between -10% and +10% of the last close, parallelising the work eight-fold. The output of the command looks like this:
   Price    Pct Position
1 111.59 -10.00        1
2 127.97   3.21       -1
3 136.38  10.00       -1
This output tells us that if the SPY doesn’t advance more than 3.21%, closing below $127.97, we should establish a long position at the close, otherwise – short. With that knowledge, and depending on the current position, all that is left to do is to go to our Interactive Brokers account and place a limit-on-close order. The complete code for the missing functions follows.
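For illustration only, here is a small hypothetical helper (not part of the original code) that looks up the position implied by a guessed closing price in the pre-computed plan:

# Hypothetical helper: the plan rows mark the prices at which the position
# changes, so take the last row whose Price is <= the guessed close
positionForClose = function( plan, guessedClose ) {
   idx = findInterval( guessedClose, plan$Price )
   if( idx == 0 ) return( NA )   # below the pre-computed range
   return( plan$Position[idx] )
}

# plan = computeDVIActionPar( Cl( SPY ), range=10, cores=8 )
# positionForClose( plan, 128.50 )   # -1, i.e. short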
computeOneDVIAction = function( close, x ) {
   # Replace the last (guessed) close with the one we are testing
   x[tail( index( x ), 1 )] = close

   dvi = DVI( x )
   val = as.numeric( tail( dvi$dvi, 1 ) )

   # Short if DVI > 0.5, long otherwise
   if( is.na( val ) ) {
      return( 0 )
   } else if( val > 0.5 ) {
      return( -1 )
   }

   return( 1 )
}

computeDVIActionPar = function( x, step=0.01, range=5, cores ) {
   require( quantmod, quietly=TRUE )
   require( parallel, quietly=TRUE )

   prices = c( )
   positions = c( )

   latestClose = as.numeric( coredata( last( x ) ) )

   # Shift to the left to use the last entry as the "guessed" close
   yy = lag( x, -1 )

   # range is in percentages
   range = range / 100

   # Compute the vector with all closing prices within the range
   close = latestClose * ( 1 - range )
   lastClose = latestClose * ( 1 + range )

   close = round( close / step ) * step
   numSteps = ( close - latestClose ) / step + 1

   close = round( close, 2 )
   lastClose = ceiling( lastClose * 100 ) / 100

   closes = close

   repeat {
      if( close >= lastClose ) break
      close = round( latestClose + step*numSteps, 2 )
      numSteps = numSteps + 1
      closes = c( closes, close )
   }

   # Detect the cores if not supplied
   if( missing( cores ) ) {
      cores = parallel::detectCores( )
   }

   # Compute the action for each possible close in parallel
   res = mclapply( closes, computeOneDVIAction, x=yy, mc.cores=cores )

   # Summarize the positions: keep only the closes at which the position changes
   prices = c( )
   pcts = c( )
   positions = c( )

   # Impossible position
   lastPosition = -1e9

   len = length( closes )
   for( ii in 1:(len - 1) ) {
      if( res[[ii]] != lastPosition ) {
         positions = append( positions, res[[ii]] )
         prices = append( prices, closes[ii] )
         pcts = append( pcts, round( ( closes[ii] - latestClose ) / latestClose * 100, 2 ) )
         lastPosition = res[[ii]]
      }
   }

   # Always include the upper end of the range
   positions = append( positions, res[[len]] )
   prices = append( prices, closes[len] )
   pcts = append( pcts, round( ( closes[len] - latestClose ) / latestClose * 100, 2 ) )

   df = data.frame( prices, pcts, positions )
   colnames( df ) = c( "Price", "Pct", "Position" )

   return( df )
}
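One caveat: mclapply relies on forking, which is not available on Windows. A rough, untested sketch of a substitute using a PSOCK cluster, assuming the same closes vector, yy series and computeOneDVIAction function as above (defined in the global environment):

# Windows-friendly alternative to the mclapply call above
cl = makeCluster( cores )
clusterEvalQ( cl, library( quantmod ) )
clusterExport( cl, "computeOneDVIAction" )
res = parLapply( cl, closes, computeOneDVIAction, x=yy )
stopCluster( cl )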