Sightings of Homo sapiens cognatus, or Homo sylvestris, are well-recorded, with a particular prevalence in North America. Bigfoot, otherwise known as the Sasquatch, is one of the most widely reported cryptids in the world; it is the subject of documentaries, films, music, and countless books.
The Bigfoot Field Researchers Organization (BFRO) has compiled a detailed database of Bigfoot sightings going back to the 1920s. Each sighting is dated, located to the nearest town or road, and accompanied by a full description; in many cases there is also a follow-up report from the organisation itself.
As with our previous investigations of UFO sightings and paranormal manifestations, our first step is to retrieve the data and parse it for analysis. Thankfully, the bfro.net dataset is relatively well-formatted: reports are broken down by region, and each report follows a largely standard format.
As before, we rely on the rvest package in R to explore and scrape the website. The key steps are to retrieve each state's set of report listings from the top-level page, and then the link to each individual report. Conveniently, these follow a standard format; the website also offers a printer-friendly mode that greatly simplifies scraping.
The scraping code is given here:
library(tidyverse)
library(progress)
library(rvest)
# Base URLs for scraping
index_url <- "https://www.bfro.net/GDB/"
base_url <- "https://www.bfro.net"
report_base_url_pattern <- "https:\\/\\/www.bfro.net\\/GDB\\/show_report.asp\\?id="
# Retrieve list of state-level county report pages
get_state_listing_urls <- function( url ) {
read_html( url ) %>%
html_nodes( 'a' ) %>%
html_attr( 'href' ) %>%
str_match( '.*state_listing.asp\\?state=.*' ) %>%
na.omit %>%
unique %>%
{ paste0( base_url, . ) }
}
# Return all URLs pointing to a county-level list of reports
get_county_report_urls <- function( url ) {
read_html( url ) %>%
html_nodes( 'a' ) %>%
html_attr( 'href' ) %>%
str_match( '.*show_county_reports.asp\\?state=.*' ) %>%
na.omit %>%
unique %>%
{ paste0( index_url, . ) }
}
# Return all URLs pointing to a report on this page.
get_report_urls <- function( url ) {
read_html( url ) %>%
html_nodes( 'a' ) %>%
html_attr( 'href' ) %>%
str_match( '.*show_report.asp\\?id=[0-9]+' ) %>%
na.omit %>%
unique %>%
{ paste0( index_url, . ) }
}
progressive_get_county_report_urls <- function( url, progress_bar ) {
progress_bar$tick()$print()
cat( paste(url, "\n") , file="work/progress.log", append=TRUE )
get_county_report_urls( url )
}
progressive_get_report_urls <- function( url, progress_bar ) {
progress_bar$tick()$print()
cat( paste(url, "\n") , file="work/progress.log", append=TRUE )
get_report_urls( url )
}
# Fetch a report page and store its raw HTML to disk.
store_report <- function( url ) {
report_id <-
url %>%
str_replace( report_base_url_pattern, "" ) %>%
str_replace( "/", "-" ) %>%
str_replace( ".html", "" )
#message( paste0("Processing report ", report_id, "... " ), appendLF=FALSE )
# Check if this report has already been stored.
if( file.exists( paste0( "data/", report_id, ".rds" ) ) ) {
message( paste0( "Report already retrieved: ", report_id ) )
return()
}
url_pf <- paste0( url, "&PrinterFriendly=True" )
report <-
tryCatch(
{
# Fetch and parse HTML of current page.
read_html( url_pf ) %>%
as.character
},
error=function( cond ) {
message( paste( "URL caused an error:", url ))
message( "Error message:")
message( cond )
return( NULL )
},
warning=function( cond ) {
message( paste( "URL caused a warning:", url ))
message( "Warning message:")
message( cond )
return( NULL )
},
finally={
}
)
# Write report result to disk
saveRDS( report, paste0("data/", report_id, ".rds") )
}
# Progressive version of store_report
progressive_store_report <- function( url, progress_bar ) {
progress_bar$tick()$print()
cat( paste(url, "\n") , file="work/progress.log", append=TRUE )
store_report( url )
}
# The key to this is that, from the top-level GDB page, links of the form
# <https://www.bfro.net/GDB/state_listing.asp?state=...>
# lead to each US state's listing, and each state listing links to its counties via
# <https://www.bfro.net/GDB/show_county_reports.asp?state=...>
# At each county page, the individual reports are linked as:
# <https://www.bfro.net/GDB/show_report.asp?id=...>
# We can just pull all of those for each county.
# NOTE: There are also non-US reports linked like this directly from the main GDB page.
# Finally, it seems that adding "&PrinterFriendly=True" will strip a lot of
# unnecessary formatting.
# From the top-level page, get every URL that matches 'state_listing.asp?state=', make
# the list unique, then work down through the county pages to the individual reports.
# Get all US state-level indices
message("Getting state report lists...", appendLF=FALSE )
bfro_state_indices <- get_state_listing_urls( index_url )
message("done." )
saveRDS( bfro_state_indices, "work/bfro_state_indices.rds" )
# Get all US county-level indices
message("Getting county-level report urls... " )
pb <- progress_estimated( length( bfro_state_indices ) )
bfro_county_urls <-
bfro_state_indices %>%
map( progressive_get_county_report_urls, pb ) %>%
unlist %>%
unique
saveRDS( bfro_county_urls, "work/bfro_county_urls.rds" )
# Get URLs of US reports from county pages.
# (Final subset line handles cases where pages are empty of reports, but
# contain other links such as media articles.)
pb <- progress_estimated( length( bfro_county_urls ) )
bfro_report_urls <-
bfro_county_urls %>%
map( progressive_get_report_urls, pb ) %>%
unlist %>%
unique %>%
str_subset( "GDB\\/." )
saveRDS( bfro_report_urls, "work/bfro_report_urls.rds" )
# Store all US reports
pb <- progress_estimated( length( bfro_report_urls ) )
bfro_report_urls %>%
map( progressive_store_report, pb )
# Get all non-US indices
message("Getting non-US top-level report lists...", appendLF=FALSE )
bfro_nonus_indices <-
get_county_report_urls( index_url ) %>%
str_replace( "GDB\\/\\/", "\\/" )
message("done." )
saveRDS( bfro_nonus_indices, "work/bfro_nonus_indices.rds" )
# Get URLs of non-US reports from the regional index pages
message("Getting non-US report lists...", appendLF=FALSE )
pb <- progress_estimated( length( bfro_nonus_indices ) )
bfro_nonus_report_urls <-
bfro_nonus_indices %>%
map( progressive_get_report_urls, pb ) %>%
unlist %>%
unique %>%
str_subset( "GDB\\/." )
saveRDS( bfro_nonus_report_urls, "work/bfro_nonus_report_urls.rds" )
# Store all non-US reports
pb <- progress_estimated( length( bfro_nonus_report_urls ) )
bfro_nonus_report_urls %>%
map( progressive_store_report, pb )
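Since store_report() skips any report whose .rds file already exists, the scrape can be interrupted and re-run safely. A quick sanity check after a run (a sketch, using only the directories and objects defined above):
# Number of raw report pages stored so far.
length( list.files( "data", pattern = "\\.rds$" ) )
# Compare against the number of report URLs discovered.
length( unique( c( bfro_report_urls, bfro_nonus_report_urls ) ) )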
With each page retrieved, we step through and parse each report. Again, each page is fairly well-formatted, and uses a standard set of tags for the date, location, and similar fields. The report parsing code is given here:
# NOTES
# Process entire list to check for standardised headers. These are in capitals, and located in <span class="field"> tags.
# Report starts with:
# <span class="reportheader">
# <span class=\"reportclassification\"> (Look up what these mean.)
# Some following <span class="field"> types contain general summary information, eg:
# " <span class=\"field\">Submitted by witness on Thursday, November 1, 2007.</span>"
# Following fields are in span tags separated by paragraph tags, eg:
# <p><span class=\"field\">YEAR:</span> 2007</p>
# STATE and COUNTY fields are typically links; we should pull their text. (I can't see a good reason to parse links to anything other than the link text for our purposes.)
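# For illustration, a hypothetical fragment of the structure described above
# (not taken from a real report):
#   <p><span class="field">YEAR:</span> 2007</p>
#   <p><span class="field">MONTH:</span> November</p>
#   <p><span class="field">STATE:</span> <a href="...">Ohio</a></p>
#   <p><span class="field">NEAREST TOWN:</span> ...</p>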
library(tidyverse)
library(magrittr)
library(progress)
library(rvest)
library(lubridate)
# Exploratory functions
# List all capitalised fields seen in any file
list_all_fields <- function() {
filenames <- list.files( "data", pattern="*.rds", full.names=TRUE)
# In each file, select the <span> classes, match the uppercase field names, and extract text.
all_fields <-
filenames %>%
map( list_report_fields ) %>%
unlist %>%
unique
saveRDS( all_fields, "work/fields.rds" )
return( all_fields )
}
fields_to_colnames <- function( fields ) {
# In total there are 18 fields reported in the data, with the final one
# being an apparently miscoded date and place from a report.
# Format the text appropriately for dataframe column names
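# e.g. "NEAREST TOWN:" becomes "nearest_town".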
fields %>%
head( 17 ) %>%
str_remove( ":" ) %>%
str_replace_all( " ", "_" ) %>%
str_to_lower()
}
list_report_fields <- function( report ) {
to_process <- readRDS( report )
bfro_fields <- NULL
if( !is.null( to_process ) ) {
bfro_fields <-
to_process %>%
read_html %>%
html_nodes( "span" ) %>%
html_nodes(xpath = '//*[@class="field"]') %>%
html_text %>%
str_subset( "[A-Z]+:" )
}
return( bfro_fields )
}
# Parse the metadata fields and description text from a report's nodes
parse_report_data <- function( report_data ) {
# report_data should be an xml_nodeset
# Metadata
metadata_list <-
report_data %>%
html_text %>%
str_subset( "^[A-Z ]+: " ) %>%
str_split_fixed( ": ", 2 ) %>%
as.tibble %>%
spread( key=V1, value=V2 ) %>%
set_colnames( fields_to_colnames( colnames(.) ) )
# Extra details
extra_text <-
report_data %>%
html_text %>%
str_remove( "^[A-Z ]+:.*" ) %>%
str_flatten( " " ) %>%
str_trim
# Add extra details as a column
metadata_list$extra <- extra_text
# Note whether date is rough or accurately reported
metadata_list$rough_date <- FALSE
# "Fix" missing date or month columns
if( "date" %in% colnames( metadata_list ) == FALSE ) {
metadata_list$date <- "1"
metadata_list$rough_date <- TRUE
}
if( "month" %in% colnames( metadata_list ) == FALSE ) {
metadata_list$month <- "January"
metadata_list$rough_date <- TRUE
}
# Combine date columns (year, month, date) to a true date.
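# e.g. year "2007", month "November", date "1" combine to "2007-November-1",
# which as.POSIXct() then parses with format "%Y-%B-%d".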
metadata_list <-
metadata_list %>%
mutate( year = str_replace( year, ".*([0-9]{4}).*", "\\1" ) ) %>%
mutate( date = paste0( year, "-", month, "-", date ), year=NULL, month=NULL ) %>%
mutate( date = as.POSIXct( date, format="%Y-%B-%d" ) )
}
# Read a stored report file and pass it to parse_report_data
process_file <- function( filename ) {
# Read stored file
to_process <- readRDS( filename )
# Don't process null files
if( is.null( to_process ) )
return( NULL )
if( length( to_process ) == 0 )
return( NULL )
# Produce an xml_nodeset for parsing
xml_thread <-
to_process %>%
read_html %>%
html_nodes( "p" )
parse_report_data( xml_thread )
}
# Progressive version of process_file
progressive_process_file <- function( filename, progress_bar ) {
progress_bar$tick()$print()
cat( paste(filename, "\n") , file="progress.log", append=TRUE )
report <-
tryCatch(
{
process_file( filename )
},
error=function( cond ) {
message( paste( "File caused an error:", filename ))
message( "Error message:")
message( cond )
return( NULL )
},
warning=function( cond ) {
message( paste( "URL caused a warning:", url ))
message( "Warning message:")
message( cond )
return( NULL )
},
finally={
}
)
return( report )
}
## Processing starts here
# Read all .rds data files and process into a thread
filenames <- list.files("data", pattern="*.rds", full.names=TRUE)
pb <- progress_estimated( length( filenames ) )
# Begin logging
cat( "Processing... ", file="progress.log", append=FALSE )
bfro_tbl <-
filenames %>%
map( progressive_process_file, pb ) %>%
bind_rows
saveRDS( bfro_tbl, "data/bfro_processed.rds" )
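At this point each report is a row in bfro_tbl, with columns derived from the capitalised field names, plus the extra text, the rough_date flag, and the combined date column. A quick interactive check of the result (a sketch):
glimpse( bfro_tbl )
count( bfro_tbl, rough_date )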
With each report parsed into a form suitable for analysis, the final step in scraping the site is to geolocate the reports. As in previous posts, we rely on Google's Geocoding API. For each report, we construct an appropriate address string and resolve it to latitude and longitude coordinates. For the purposes of this initial scrape we restrict ourselves to North America, which comprises the large majority of reports on `bfro.net`. The geolocation code is included below. (Note that a Google Geocoding API key is required for this code to run.)
library(googleway)
library(progress)
library(tidyverse)
library(magrittr)
# weird.data.science Google API key (Geocoding API enabled)
key <- "<INSERT API KEY HERE>"
# Load bfro tibble
bfro_data <- readRDS( "data/bfro_processed.rds" )
# Geocode entries
# BFRO entries contain some, all, or none of:
# country, province, state, county, nearest_town
# Country is only used for Canada, in which case a province is given.
# Country and province are NA for the US.
# Best plan, then is to create a string of (nearest_town, province, country) for Canadian results, and (nearest_town, county, state, "US") for US results.
# Bound the request to be only in North America.
# (Used: <https://boundingbox.klokantech.com>)
# (SW->NE latitude and longitude.)
bounding_box <- list( c( -170.3, 24.4),
c( -52.3, 83.3) )
form_location_string <- function( country, province, state, county, nearest_town ) {
    # US case (country is NA), then Canadian. if_else() keeps this vectorised,
    # so the function behaves correctly when applied to whole columns inside mutate().
    location_string <-
        if_else( is.na( country ),
                 paste0( nearest_town, ", ", county, ", ", state, ", US" ),
                 paste0( nearest_town, ", ", province, ", Canada" ) )
    # Strip double commas caused by NA values and remove bracketed clauses.
    location_string %<>%
        str_remove_all( "\\([^\\)]*\\)" ) %>%
        str_replace_all( ", , ", ", " ) %>%
        str_replace_all( " ,", "," ) %>%
        str_remove_all( "NA, " )
    location_string
}
# Create a safe version of google_geocode
safe_geocode <- safely( google_geocode )
# Wrap google_geocode with a progress bar update and logging.
progressive_geocode <- function( location_string, progress_bar ) {
# Write status to log file.
progress_bar$tick()$print()
cat( paste0( location_string, "\n" ), file="progress.log", append=TRUE )
result <-
safe_geocode(
address = location_string,
key = key,
bounds = bounding_box,
simplify = TRUE )
# Note that this can be NULL
return( result$result )
}
# Logging and output
## Clear the screen first
cat(c("\033[2J","\033[0;0H"))
cat("Geolocating entries...\n")
cat( "Geolocating entries...\n", file="progress.log", append=FALSE )
# Create the location string
bfro_data %<>%
mutate( location = form_location_string( country, province, state, county, nearest_town ) )
# Geolocate each location.
pb <- progress_estimated(nrow(bfro_data))
bfro_data$geolocation <-
map( bfro_data$location, progressive_geocode, progress_bar = pb )
cat("\nComplete.\n")
## With all values scraped, save geolocated data.
saveRDS( bfro_data, file="work/bfro_data_geolocated.rds")
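The stored geolocation column holds the raw, simplified responses from the Geocoding API. As a bridge to plotting, a sketch of pulling out plain coordinate columns, assuming the usual structure of a simplified geocoding response (a status field and results$geometry$location); the extract_coords helper and the latitude/longitude column names are our own:
# Sketch: extract latitude and longitude from each stored API response.
# Failed or empty lookups become NA coordinates.
extract_coords <- function( geo ) {
    if( is.null( geo ) || geo$status != "OK" )
        return( tibble( latitude = NA_real_, longitude = NA_real_ ) )
    location <- geo$results$geometry$location[ 1, ]
    tibble( latitude = location$lat, longitude = location$lng )
}
bfro_geo <-
    bfro_data %>%
    mutate( coords = map( geolocation, extract_coords ) ) %>%
    unnest( coords )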
With geolocated data in hand, we can now venture into the wilds. In which areas of North America are Sasquatch most commonly seen to roam? The plot below shows the overall density of Bigfoot sightings, with individual reports marked.
Density of Bigfoot sightings in North America. (PDF Version)
There are notable clusters around the Great Lakes, particularly in southern Ontario, and in the Pacific Northwest. Smaller clusters appear in Florida, centered around Orlando. As with most report-based datasets, sightings are skewed towards areas of high population density.
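The plotting code for the map above is not reproduced here, but a minimal sketch of such a density map, using ggplot2 and the maps package on the latitude and longitude columns from the extraction sketch above (the region list, bounds, and styling are assumptions, not the original settings):
library(tidyverse)
library(maps)
# Background polygons for the US and Canada.
north_america <- map_data( "world", region = c( "USA", "Canada" ) )
ggplot( drop_na( bfro_geo, latitude, longitude ),
        aes( x = longitude, y = latitude ) ) +
    geom_polygon( data = north_america,
                  aes( x = long, y = lat, group = group ),
                  fill = "grey90", colour = "grey70" ) +
    stat_density_2d( aes( fill = after_stat( level ) ),
                     geom = "polygon", alpha = 0.4 ) +
    geom_point( size = 0.3, alpha = 0.3 ) +
    coord_quickmap( xlim = c( -170, -52 ), ylim = c( 24, 83 ) ) +
    labs( fill = "Sighting density" )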
The obvious first question to ask of such data is which, if any, environmental features correlate with these sightings. Other analyses of Bigfoot sightings, such as the seminal work of Lozier et al., have approached this question through ecological niche modelling.