Advent of 2021, Day 13 – Spark SQL bucketing and partitioning


Series of Apache Spark posts:

Spark SQL also includes JDBC and ODBC drivers, which give it the capability to read data from other databases. Data is returned as DataFrames and can easily be processed in Spark SQL. Databases that can connect to Spark SQL include:
– Microsoft SQL Server
– MariaDB
– PostgreSQL
– Oracle DB
– DB2

JDBC can also be used with Kerberos authentication and a keytab, but before using it, make sure that the built-in connection provider for your database supports Kerberos authentication with a keytab.
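A minimal sketch from R (Spark 3.1 or later), assuming the target database has a built-in connection provider with keytab support; the URL, table, principal, and keytab path below are placeholders, not taken from the original post:

# Sketch only: JDBC read with Kerberos keytab authentication (Spark 3.1+).
# URL, table name, principal, and keytab path are illustrative placeholders.
library(SparkR)
sparkR.session()

df <- read.jdbc("jdbc:postgresql://dbserver/sales", "public.invoicehead",
                principal = "tomaz@EXAMPLE.COM",
                keytab    = "/etc/security/keytabs/tomaz.keytab")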

Using JDBC to store data using SQL:

CREATE TEMPORARY VIEW jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:mssql:dbserver",
  Dbtable "dbo.InvoiceHead",
  user 'tomaz',
  password 'pa$$w0rd'
)

INSERT INTO TABLE jdbcTable
SELECT * FROM Invoices

With R, reading and storing data:

# Loading data from a JDBC source
df <- read.jdbc("jdbc:mssql:localhost", "dbo.InvoiceHead", user = "tomaz", password = "pa$$w0rd")

# Saving data to a JDBC source
write.jdbc(df, "jdbc:mssql:localhost", "dbo.InvoiceHead", user = "tomaz", password = "pa$$w0rd")

To optimize reads, we can also store the data as persistent tables in Hive. Persistent datasource tables have per-partition metadata stored in the Hive metastore, which brings a couple of benefits: 1) the metastore can return only the partitions necessary for a query, so discovering all of the partitions on the first query to the table is no longer needed, and 2) Hive DDLs are available for tables created with the Datasource API.
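As an illustration (the table name is made up), the DataFrame read above can be saved from R as a persistent datasource table, so that its metadata is kept in the Hive metastore:

# Sketch: persist the DataFrame as a datasource table backed by the Hive metastore.
saveAsTable(df, "invoicehead_persisted", source = "parquet", mode = "overwrite")

# Later queries resolve the table through the metastore
head(sql("SELECT * FROM invoicehead_persisted"))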

Partitioning and bucketing

Partitioning and bucketing in Hive are used to improve performance by eliminating full table scans when dealing with a large set of data on the Hadoop Distributed File System (HDFS). The major difference between them is how they split the data.

A Hive partition organises a large table into smaller logical tables based on the values of one or more columns, with one logical table (partition) for each distinct value. A single table can have one or more partitions. Each partition corresponds to an underlying directory for the table stored in HDFS.

ZIP code, year, year-month, region, state, and many more are perfect candidates for partitioning a persistent table. If the data contains 150 distinct ZIP codes, partitioning by ZIP code creates 150 partitions, and queries that filter on ZIP (e.g. ZIP = 1000 or ZIP = 4283) return results much faster because only the matching partition is scanned.

Each partition also creates an underlying directory, in which the data for that partition value is stored.
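As a sketch of the resulting layout (the table name, view name, and ZIP column are made up for illustration), partitioning by ZIP from R produces one directory per distinct value:

# Sketch: create a table partitioned by ZIP, assuming a DataFrame with a ZIP column.
createOrReplaceTempView(df, "invoices")
sql("CREATE TABLE invoices_by_zip USING parquet PARTITIONED BY (ZIP) AS SELECT * FROM invoices")

# The table's directory in HDFS then contains one folder per ZIP value, e.g.:
#   .../invoices_by_zip/ZIP=1000/
#   .../invoices_by_zip/ZIP=4283/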

Bucketing splits the data into a manageable number of files; it is also called clustering. The bucket a row goes into is determined by hashing the value of the bucketing column into a user-defined number of buckets. Bucketing is defined on a single column (out of many columns in a table), and the buckets can additionally be partitioned. This splits the data further, making each piece smaller and improving query performance. Each bucket is stored as a file within the table’s root directory or within the partition directories.

Bucketing and partitioning are applicable only to persistent Hive tables. With Python, the operation is straightforward:

df.write.bucketBy(42, "name").sortBy("age").saveAsTable("people_bucketed")

and with SQL:

CREATE TABLE users_bucketed_by_name(
  name STRING,
  favorite_color STRING,
  favorite_numbers array<integer>
) USING parquet
CLUSTERED BY(name) INTO 42 BUCKETS;

And you can do both on a single table using SQL:

CREATE TABLE users_bucketed_and_partitioned(
  name STRING,
  favorite_color STRING,
  favorite_numbers array<integer>
) USING parquet
PARTITIONED BY (favorite_color)
CLUSTERED BY(name) SORTED BY (favorite_numbers) INTO 42 BUCKETS;
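To confirm the layout, the table metadata can be inspected from R through sql(); a small sketch using standard Spark SQL statements:

# Sketch: inspect bucketing and partitioning metadata of the table just created.
# DESCRIBE TABLE EXTENDED lists bucket columns, sort columns and number of buckets;
# SHOW PARTITIONS lists partition values once data has been inserted.
collect(sql("DESCRIBE TABLE EXTENDED users_bucketed_and_partitioned"))
collect(sql("SHOW PARTITIONS users_bucketed_and_partitioned"))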

Tomorrow we will look into SQL query hints and execution.

Complete set of code, documents, notebooks, and all of the materials will be available at the GitHub repository: https://github.com/tomaztk/Spark-for-data-engineers

Happy Spark Advent of 2021!
