Visualize urban growth
The General Directorate for the Cadastre of Spain provides spatial information on all buildings except those in the Basque Country and Navarra. This data set is part of the implementation of INSPIRE, the Infrastructure for Spatial Information in Europe. More information can be found here. We will use links (URLs) in ATOM format, an RSS-type web feed, which allows us to obtain the download link for each municipality.
This blog post is a condensed version of the case study you can find in our recent publication – Introduction to GIS with R – by Dominic Royé and Roberto Serrano-Notivoli (in Spanish).
Packages
Package | Description |
---|---|
tidyverse | Collection of packages (visualization, manipulation): ggplot2, dplyr, purrr, etc. |
sf | Simple Feature: import, export and manipulate vector data |
fs | Provides a cross-platform, uniform interface to file system operations |
lubridate | Easy manipulation of dates and times |
feedeR | Import RSS or ATOM feeds |
tmap | Easy creation of thematic maps |
classInt | Create univariate class intervals |
sysfonts | Loading system fonts and Google Fonts |
showtext | Using fonts more easily in R graphs |
# install the packages if necessary
if(!require("tidyverse")) install.packages("tidyverse")
if(!require("sf")) install.packages("sf")
if(!require("feedeR")) install.packages("feedeR")
if(!require("fs")) install.packages("fs")
if(!require("lubridate")) install.packages("lubridate")
if(!require("tmap")) install.packages("tmap")
if(!require("classInt")) install.packages("classInt")
if(!require("showtext")) install.packages("showtext")
if(!require("sysfonts")) install.packages("sysfonts")

# load packages
library(feedeR)
library(sf)
library(fs)
library(tidyverse)
library(lubridate)
library(classInt)
library(tmap)
Download links
The first URL gives us access to a list of territorial offices (these do not always coincide with the official provinces), each with a new RSS link that in turn contains the final download link for each municipality. In this case, we will download the buildings of Valencia. The Cadastre data is updated every six months.
url <- "http://www.catastro.minhap.es/INSPIRE/buildings/ES.SDGC.bu.atom.xml" # import RSS feed with provincial links prov_enlaces <- feed.extract(url) str(prov_enlaces) # object is a list ## List of 4 ## $ title : chr "Download service of Buildings. Territorial Office" ## $ link : chr "http://www.catastro.minhap.es/INSPIRE/buildings/ES.SDGC.BU.atom.xml" ## $ updated: POSIXct[1:1], format: "2019-10-26" ## $ items :'data.frame': 52 obs. of 4 variables: ## ..$ title: chr [1:52] "Territorial office 02 Albacete" "Territorial office 03 Alicante" "Territorial office 04 Almería" "Territorial office 05 Avila" ... ## ..$ date : POSIXct[1:52], format: "2019-10-26" ... ## ..$ link : chr [1:52] "http://www.catastro.minhap.es/INSPIRE/buildings/02/ES.SDGC.bu.atom_02.xml" "http://www.catastro.minhap.es/INSPIRE/buildings/03/ES.SDGC.bu.atom_03.xml" "http://www.catastro.minhap.es/INSPIRE/buildings/04/ES.SDGC.bu.atom_04.xml" "http://www.catastro.minhap.es/INSPIRE/buildings/05/ES.SDGC.bu.atom_05.xml" ... ## ..$ hash : chr [1:52] "d21ebb7975e59937" "bdba5e149f09e9d8" "03bcbcc7c5be2e17" "8a154202dd778143" ... # extract the table with the links prov_enlaces_tab <- as_tibble(prov_enlaces$items) prov_enlaces_tab ## # A tibble: 52 x 4 ## title date link hash ## <chr> <dttm> <chr> <chr> ## 1 Territorial o~ 2019-10-26 00:00:00 http://www.catastro.minhap.~ d21ebb7~ ## 2 Territorial o~ 2019-10-26 00:00:00 http://www.catastro.minhap.~ bdba5e1~ ## 3 Territorial o~ 2019-10-26 00:00:00 http://www.catastro.minhap.~ 03bcbcc~ ## 4 Territorial o~ 2019-10-26 00:00:00 http://www.catastro.minhap.~ 8a15420~ ## 5 Territorial o~ 2019-10-26 00:00:00 http://www.catastro.minhap.~ 7d3fd37~ ## 6 Territorial o~ 2019-10-26 00:00:00 http://www.catastro.minhap.~ 9c08741~ ## 7 Territorial o~ 2019-10-26 00:00:00 http://www.catastro.minhap.~ ff722b1~ ## 8 Territorial o~ 2019-10-26 00:00:00 http://www.catastro.minhap.~ b431aa6~ ## 9 Territorial o~ 2019-10-26 00:00:00 http://www.catastro.minhap.~ f79c656~ ## 10 Territorial o~ 2019-10-26 00:00:00 http://www.catastro.minhap.~ d702a6a~ ## # ... with 42 more rows
Now we access and download the data for Valencia. To filter the final download link we use the filter() function of the dplyr package, searching first for the name of the territorial office and then for the name of the municipality in capital letters with the str_detect() function of stringr. The pull() function allows us to extract a column from a data.frame.
# filter the province and get the RSS link
val_atom <- filter(prov_enlaces_tab, str_detect(title, "Valencia")) %>% pull(link)

# import the RSS
val_enlaces <- feed.extract(val_atom)

# get the table with the download links
val_enlaces_tab <- val_enlaces$items

# filter the table with the name of the city
val_link <- filter(val_enlaces_tab, str_detect(title, "VALENCIA")) %>% pull(link)

val_link
## [1] "http://www.catastro.minhap.es/INSPIRE/Buildings/46/46900-VALENCIA/A.ES.SDGC.BU.46900.zip"
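The same two-step lookup works for any other municipality. As a small sketch (the helper name get_link() and its arguments are ours, not from the original post), the workflow could be wrapped in a function:

# sketch: wrap the two-step lookup in a helper (hypothetical function name)
get_link <- function(prov_tab, office, municipality) {
  # RSS link of the territorial office
  atom <- filter(prov_tab, str_detect(title, office)) %>% pull(link)
  # table with the municipal download links
  mun_tab <- feed.extract(atom)$items
  # final zip link for the municipality (names are in capital letters)
  filter(mun_tab, str_detect(title, municipality)) %>% pull(link)
}

# example: the same link as above
get_link(prov_enlaces_tab, "Valencia", "VALENCIA")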
Data download
The download is done with the download.file() function, which has two main arguments: the download link and the destination path with the file name. In this case, we use the tempfile() function, which creates a temporary file, that is, a file that only exists for the duration of the R session. The file we download has the extension *.zip, so we must unzip it with the unzip() function, which requires the name of the file and the folder where we want to extract it. Finally, the URLencode() function encodes a URL that contains special characters.
# create a temporary file
temp <- tempfile()

# download the data
download.file(URLencode(val_link), temp)

# unzip to a folder called buildings
unzip(temp, exdir = "buildings")
Import the data
To import the data we use the dir_ls() function of the fs package, which lists the files and folders of a given path and can filter them with a text pattern (regexp: a regular expression). We then apply the st_read() function of the sf package to the Geography Markup Language (GML) file.
# get the path with the file
file_val <- dir_ls("buildings", regexp = "building.gml")

# import the data
buildings_val <- st_read(file_val)

## Reading layer `Building' from data source `C:\Users\xeo19\Documents\GitHub\blogR_update\content\post\en\2019-11-01-visualize-urban-growth\buildings\A.ES.SDGC.BU.46900.building.gml' using driver `GML'
## Simple feature collection with 36296 features and 24 fields
## geometry type:  MULTIPOLYGON
## dimension:      XY
## bbox:           xmin: 720608 ymin: 4351287 xmax: 734982.5 ymax: 4382906
## epsg (SRID):    25830
## proj4string:    +proj=utm +zone=30 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs
Data preparation
We only have to convert the construction-year column (beginning) into a Date class. The column contains some dates in --01-01 format, which do not correspond to any recognizable date. Therefore, we replace the leading - with 0000.
buildings_val <- mutate(buildings_val, 
                        beginning = str_replace(beginning, "^-", "0000") %>% 
                                      ymd_hms() %>% 
                                      as_date()
                        )

## Warning: 4 failed to parse.
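Four records still fail to parse; a quick check (a small sketch, not part of the original workflow) shows which buildings are left without a construction date:

# sketch: inspect the records whose construction date could not be parsed
filter(buildings_val, is.na(beginning))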
Distribution chart
Before creating the maps of the construction years, which will reflect the urban growth, we make a distribution plot of the beginning variable, in which we can clearly identify periods of urban expansion. We use the ggplot2 package with the geom_density() geometry for this purpose. The font_add_google() function of the sysfonts package allows us to download and include font families from Google Fonts.
# font download
sysfonts::font_add_google("Montserrat", "Montserrat")

# use showtext for fonts
showtext::showtext_auto()

# limit the period to after 1750
filter(buildings_val, beginning >= "1750-01-01") %>% 
  ggplot(aes(beginning)) + 
    geom_density(fill = "#2166ac", alpha = 0.7) +
    scale_x_date(date_breaks = "20 year", date_labels = "%Y") +
    theme_minimal() +
    theme(title = element_text(family = "Montserrat"),
          axis.text = element_text(family = "Montserrat")) +
    labs(y = "", x = "", title = "Evolution of urban development")
Buffer of 2.5 km for Valencia
To better visualize the distribution of urban growth, we limit the map to a radius of 2.5 km around the city centre. We use the geocode_OSM() function of the tmaptools package to obtain the coordinates of Valencia as an sf object. Then we project the point to the reference system we use for the buildings (EPSG: 25830). Finally, we create a buffer of 2500 m with the st_buffer() function and intersect it with our building data. It is also possible to create a buffer in the form of a rectangle by indicating the style with the argument endCapStyle = "SQUARE" (see the sketch after the next code block).
# get the coordinates of Valencia
ciudad_point <- tmaptools::geocode_OSM("Valencia", as.sf = TRUE)

# project the point
ciudad_point <- st_transform(ciudad_point, 25830)

# create the buffer
point_bf <- st_buffer(ciudad_point, 2500)

# get the intersection between the buffer and the buildings
buildings_val25 <- st_intersection(buildings_val, point_bf)

## Warning: attribute variables are assumed to be spatially constant
## throughout all geometries
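For the rectangular variant mentioned above, a minimal sketch (same point and distance, only the cap style changes):

# sketch: a square buffer instead of a circular one
point_bf_sq <- st_buffer(ciudad_point, 2500, endCapStyle = "SQUARE")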
Prepare data for mapping
We categorize the year into 15 groups using quantiles.
# find 15 classes
br <- classIntervals(year(buildings_val25$beginning), 15, "quantile")

## Warning in classIntervals(year(buildings_val25$beginning), 15, "quantile"):
## var has missing values, omitted in finding classes

# create labels
lab <- names(print(br, under = "<", over = ">", cutlabels = FALSE))

## style: quantile
##      < 1890 1890 - 1912 1912 - 1925 1925 - 1930 1930 - 1940 1940 - 1950 
##         940        1369         971         596        1719        1080 
## 1950 - 1957 1957 - 1962 1962 - 1966 1966 - 1970 1970 - 1973 1973 - 1977 
##        1227        1266        1233        1165        1161         932 
## 1977 - 1987 1987 - 1999      > 1999 
##        1337        1197        1190 

# categorize the year
buildings_val25 <- mutate(buildings_val25, 
                          yr_cl = cut(year(beginning), 
                                      br$brks, 
                                      labels = lab, 
                                      include.lowest = TRUE))
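If fixed periods are preferred to quantile classes, classIntervals() also accepts predefined breaks. A hedged sketch using one open early class followed by decades (the break values are illustrative, not from the original post):

# sketch: fixed breaks instead of quantile classes (illustrative values)
br_dec <- classIntervals(year(buildings_val25$beginning),
                         n = 14,
                         style = "fixed",
                         fixedBreaks = c(1700, seq(1890, 2020, by = 10)))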
Map of Valencia
For the mapping we will use the tmap package, an interesting alternative to ggplot2 that specializes in creating thematic maps. Its philosophy follows that of ggplot2: we build the map from multiple layers with different functions, which always start with tm_* and are combined with +. Building a map with tmap always starts with tm_shape(), where the data we want to draw is defined. Then we add the geometry corresponding to the data type (tm_polygons(), tm_borders(), tm_dots() or even tm_raster()). The tm_layout() function helps us configure the map style.
When we need more colors than the maximum allowed by RColorBrewer, we can pass the colors to the colorRampPalette() function, which interpolates a set of given colors.
# colours
col_spec <- RColorBrewer::brewer.pal(11, "Spectral")

# colour ramp function
col_spec_fun <- colorRampPalette(col_spec)

# create the final map
tm_shape(buildings_val25) +
  tm_polygons("yr_cl", 
              border.col = "transparent",
              palette = col_spec_fun(15),
              textNA = "Without data",
              title = "") +
  tm_layout(bg.color = "black",
            outer.bg.color = "black",
            legend.outside = TRUE,
            legend.text.color = "white",
            legend.text.fontfamily = "Montserrat",
            panel.label.fontfamily = "Montserrat",
            panel.label.color = "white",
            panel.label.bg.color = "black",
            panel.label.size = 5,
            panel.label.fontface = "bold")
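To keep a high-resolution copy of the static map, it can be written to disk with tmap_save(); a brief sketch (the file name and dimensions are ours):

# sketch: store the map in an object and save it as a PNG (illustrative file name and size)
m_static <- tm_shape(buildings_val25) +
  tm_polygons("yr_cl", border.col = "transparent",
              palette = col_spec_fun(15), textNA = "Without data", title = "")
tmap_save(m_static, "valencia_growth.png", width = 2000, height = 2000, dpi = 300)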
Dynamic map with leaflet
A very useful feature of the tmap package is the tmap_leaflet() function, which lets us pass a map built in the same framework directly to leaflet.
# tmap object
m <- tm_shape(buildings_val25) +
       tm_polygons("yr_cl", 
                   border.col = "transparent",
                   palette = col_spec_fun(15),
                   textNA = "Without data",
                   title = "")

# dynamic map
tmap_leaflet(m)
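The resulting leaflet widget can also be exported as a standalone HTML file, for instance with htmlwidgets::saveWidget(); a sketch (the file name is ours):

# sketch: export the interactive map as a self-contained HTML file
htmlwidgets::saveWidget(tmap_leaflet(m), "valencia_growth.html")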