dplyrXdf 0.10.0 beta prerelease
I’m happy to announce that version 0.10.0 beta of the dplyrXdf package is now available. You can get it from Github:
# install_github is in the devtools package
library(devtools)
install_github("RevolutionAnalytics/dplyrXdf", build_vignettes=FALSE)
This is a major update to dplyrXdf that adds the following features:
- Support for the tidyeval framework that powers the latest version of dplyr
- Works with Spark and Hadoop clusters and files in HDFS
- Several utility functions to ease working with files and datasets
- Many bugfixes and workarounds for issues with the underlying RevoScaleR functions
This (pre-)release of dplyrXdf requires Microsoft R Server or Client version 8.0 or higher, and dplyr 0.7 or higher. If you’re using R Server, dplyr 0.7 won’t be in the MRAN snapshot that is your default repo, but you can get it from CRAN:
install.packages("dplyr", repos="https://cloud.r-project.org")
The tidyeval framework
This completely changes the way in which dplyr handles standard evaluation. Previously, if you wanted to program with dplyr pipelines, you had to use special versions of the verbs ending with "_": mutate_, select_, and so on. You then provided inputs to these verbs via formulas or strings, in a way that was almost but not quite entirely unlike normal dplyr usage. For example, if you wanted to programmatically carry out a transformation on a given column in a data frame, you did the following:
x <- "mpg"
transmute_(mtcars, .dots=list(mpg2=paste0("2 * ", x)))
#   mpg2
#1  42.0
#2  42.0
#3  45.6
#4  42.8
#5  37.4
This is prone to errors, since it requires creating a string and then parsing it. Worse, it's also insecure, as you can't always guarantee that the input string won't be malicious.
The tidyeval framework replaces all of that. In dplyr 0.7, you call the same functions for both interactive use and programming. The equivalent of the above in the new framework would be:
# the rlang package implements the tidyeval framework used by dplyr
library(rlang)
x_sym <- sym(x)
transmute(mtcars, mpg2=2 * (!!x_sym))
#   mpg2
#1  42.0
#2  42.0
#3  45.6
#4  42.8
#5  37.4
Here, the !! symbol is a special operator that means to get the column name from the variable to its right. The verbs in dplyr 0.7 understand the special rules for working with quoted symbols introduced in the new framework. The same code also works in dplyrXdf 0.10:
# use the new as_xdf function to import to an Xdf file
mtx <- as_xdf(mtcars)
transmute(mtx, mpg2=2 * (!!x_sym)) %>% as.data.frame
#   mpg2
#1  42.0
#2  42.0
#3  45.6
#4  42.8
#5  37.4
For more information about tidyeval, see the dplyr vignettes on programming and compatibility.
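The real payoff of tidyeval comes when writing your own functions around dplyr pipelines. As a minimal sketch (the `double_col` helper below is a made-up name, not part of dplyr or dplyrXdf), you can capture an argument with enquo() and unquote it with !!:

```r
library(dplyr)

# hypothetical helper: doubles whichever column the caller names;
# enquo() captures the unevaluated argument, !! unquotes it inside transmute
double_col <- function(data, col)
{
    col <- enquo(col)
    transmute(data, out = 2 * (!!col))
}

head(double_col(mtcars, mpg)$out, 5)
# 42.0 42.0 45.6 42.8 37.4
```

Because enquo() captures the expression rather than a string, no paste-and-parse step is needed, avoiding the error-prone string handling of the old "_" verbs.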
New features in dplyrXdf
Copy, move and delete Xdf files
The following functions let you manipulate Xdf files as files:
- copy_xdf and move_xdf copy and move an Xdf file, optionally renaming it as well.
- rename_xdf does a strict rename, ie without changing the file’s location.
- delete_xdf deletes the Xdf file.
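A sketch of how these operations might fit together (the file paths and argument layout here are assumptions for illustration; check the package help for the exact signatures):

```r
library(dplyrXdf)

mtx <- as_xdf(mtcars)                         # import to an Xdf file

mtx2 <- copy_xdf(mtx, "mtcars2.xdf")          # copy, renaming in the process
mtx2 <- move_xdf(mtx2, "backup/mtcars2.xdf")  # move to another location
mtx2 <- rename_xdf(mtx2, "mtcars_old.xdf")    # strict rename, same location
delete_xdf(mtx2)                              # remove the file from disk
```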
HDFS file transfers
The following functions let you transfer files and datasets to and from HDFS, for working with a Spark or Hadoop cluster:
- copy_to uploads a dataset (a data frame or data source object) from the native filesystem to HDFS, saving it as an Xdf file.
- collect and compute do the reverse, downloading an Xdf file from HDFS.
- hdfs_upload and hdfs_download transfer arbitrary files and directories to and from HDFS.
Uploading and downloading works (or should work) both from the edge node and from a remote client. The interface is the same in both cases: no need to remember when to use rxHadoopCopyFromLocal and rxHadoopCopyFromClient. The hdfs_* functions mostly wrap the rxHadoop* functions, but also add extra functionality in some cases (eg vectorised copy/move, test for directory existence, etc).
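A rough sketch of a round trip (the HDFS paths are invented, and the exact argument order for copy_to is an assumption based on dplyr's copy_to convention; consult the "Working with HDFS" vignette for the real interface):

```r
library(dplyrXdf)

# upload a local data frame to HDFS as an Xdf file
flights_hd <- copy_to(RxHdfsFileSystem(), nycflights13::flights)

# run a pipeline on the cluster, then download the result
# to the local session as a data frame
jan <- flights_hd %>%
    filter(month == 1) %>%
    collect()

# transfer an arbitrary file, independently of any dataset
hdfs_upload("local_data.csv", "/user/me")
```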
HDFS file management
The following functions are for file management in HDFS, and mirror similar functions in base R for working with the native filesystem:
- hdfs_dir lists files in a HDFS directory, like dir() for the native filesystem.
- hdfs_dir_exists and hdfs_file_exists test for existence of a directory or file, like dir.exists() and file.exists().
- hdfs_file_copy, hdfs_file_move and hdfs_file_remove copy, move and delete files in a vectorised fashion, like file.copy(), file.rename() and unlink().
- hdfs_dir_create and hdfs_dir_remove make and delete directories, like dir.create() and unlink(recursive=TRUE).
- in_hdfs returns whether a data source is in HDFS or not.
As far as possible, the functions avoid reading the data via rxDataStep and so should be more efficient. The only times when rxDataStep is necessary are when importing from a non-Xdf data source, and converting between standard and composite Xdfs.
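These compose much like their base-R counterparts. A sketch (all paths here are invented for illustration):

```r
library(dplyrXdf)

hdfs_dir("/user/me")                      # list a directory, like dir()

# create a scratch directory if it doesn't already exist
if (!hdfs_dir_exists("/user/me/scratch"))
    hdfs_dir_create("/user/me/scratch")

# vectorised copy: several files in one call, like file.copy()
hdfs_file_copy(c("/user/me/a.xdf", "/user/me/b.xdf"), "/user/me/scratch")

hdfs_dir_remove("/user/me/scratch")       # like unlink(recursive=TRUE)
```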
Miscellaneous functions
- as_xdf imports a dataset or data source into an Xdf file, optionally as composite.
- as_standard_xdf and as_composite_xdf are shortcuts for creating standard and composite Xdfs respectively.
- is_xdf and is_composite_xdf return whether a data source is a (composite) Xdf.
- local_exec runs an expression in the local compute context: useful for when you want to work with local Xdf files while connected to a remote cluster.
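Putting a few of these together (a hedged sketch; the calls follow the descriptions above, but the exact arguments may differ from the released API):

```r
library(dplyrXdf)

# import a data frame, using the composite-Xdf shortcut
mtx <- as_composite_xdf(mtcars)

is_xdf(mtx)            # should be TRUE: it's an Xdf data source
is_composite_xdf(mtx)  # should be TRUE: created as composite

# while a remote (eg Spark) compute context is active, run a step
# against a local Xdf file without switching contexts by hand
local_exec(head(mtx))
```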
Wrapup
For more information, check out the package vignettes, in particular "Using the dplyrXdf package" and "Working with HDFS".
dplyrXdf 0.10 is tentatively scheduled for a final release at the same time as the next version of Microsoft R Server, or shortly afterwards. In the meantime, please download this and give it a try; if you run into any bugs, or if you have any feedback, you can email me or log an issue at the Github repo.