R⁶ — Scraping Images To PDFs
I’ve been doing intermittent prep work for a follow-up to an earlier post on store closings and came across this CNN Money “article” on the subject. Said “article” is a deliberately obfuscated (or lazily crafted) series of GIF images that contains the full list of impending Radio Shack store closings. It’s the most comprehensive list I’ve found, but the format is terrible and there’s no easy, in-browser way to download them all.
CNN has a ToS that prevents automated data gathering from CNN-proper. But they hosted these images on Adobe Document Cloud, which, from a quick glance at its ToS, has no similar restrictions. That means you get an R⁶ post on how to grab the 38 individual images and combine them into one PDF. I did all of this in hopes of OCRing the text, which has not panned out well, since the image quality and font were likely chosen deliberately to thwart precisely what I’m trying to do.
If you work through the example, you’ll get a feel for:

- using `sprintf()` to take a template and build a vector of URLs
- using `dplyr` progress bars
- customizing `httr` verb options to ensure you can get to the content
- using `purrr` to iterate through a process of turning raw image bytes into image content (via `magick`) and turning a list of images into a PDF
Here’s the complete pipeline:

```r
library(httr)
library(magick)
library(tidyverse)

# Template for the 38 Document Cloud page images
url_template <- "https://assets.documentcloud.org/documents/1657793/pages/radioshack-convert-p%s-large.gif"

pb <- progress_estimated(38)

# Build the URL vector and fetch each page, mimicking the browser request
sprintf(url_template, 1:38) %>%
  map(~{
    pb$tick()$print()
    GET(
      url = .x,
      add_headers(
        accept = "image/webp,image/apng,image/*,*/*;q=0.8",
        referer = "http://money.cnn.com/interactive/technology/radio-shack-closure-list/index.html",
        authority = "assets.documentcloud.org"
      )
    )
  }) -> store_list_pages

# Turn the raw bytes into magick images and stitch them into a single PDF
map(store_list_pages, content) %>%
  map(image_read) %>%
  reduce(image_join) %>%
  image_write("combined_pages.pdf", format = "pdf")
```
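For what it’s worth, the OCR attempt looked roughly like the sketch below. It assumes the `tesseract` package (and engine) is installed, since `magick`’s `image_ocr()` wraps it; the grayscale conversion is just one modest preprocessing step, not the full set of things I tried. This is the part that did not pan out: the output on these GIFs is mostly garbage.

```r
library(magick)

# Minimal OCR sketch (assumes the tesseract package is installed;
# image_ocr() in magick wraps it). Output on these GIFs was poor.
image_read("https://assets.documentcloud.org/documents/1657793/pages/radioshack-convert-p1-large.gif") %>%
  image_convert(type = "grayscale") %>%  # modest preprocessing
  image_ocr() %>%
  cat()
```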
I figured out the Document Cloud links and the necessary `httr::GET()` options by using Chrome Developer Tools and my `curlconverter` package.
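If you haven’t used `curlconverter` before, the workflow is roughly the sketch below: right-click the request in the DevTools Network tab, “Copy as cURL”, and let the package build an `httr` request function from it. The command string here is truncated for display; assume the real one carries the full browser header set.

```r
library(curlconverter)

# "Copy as cURL" output from Chrome DevTools (truncated here; the real
# command includes the full set of browser headers)
curl_cmd <- 'curl "https://assets.documentcloud.org/documents/1657793/pages/radioshack-convert-p1-large.gif" -H "referer: http://money.cnn.com/interactive/technology/radio-shack-closure-list/index.html"'

straighten(curl_cmd) %>%  # parse the cURL command into request components
  make_req()              # returns a list of httr request functions
```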
If any academic-y folks have a ~~test subject~~ summer intern with a free hour and would be willing to have them transcribe this list and stick it on GitHub, you’d have my eternal thanks.