Introduction
Recently I’ve been interested in analyzing trends in electric vehicle (EV) charging stations, using data from the Alternative Fuels Data Center’s Alternative Fuel Stations database. In this first post I’ll go over retrieving the data via an API, getting it into a tidy format, and some initial analysis and visualization.
Data
I’ll retrieve the EV station data using the AFDC API. The documentation for the AFDC fuel-stations API can be found at: https://developer.nrel.gov/docs/transportation/alt-fuel-stations-v1/all/#station-count-record-fields
You can obtain a free API key at https://developer.nrel.gov/signup/. I’ve saved my API key in my local .Renviron file so I can load it without exposing the key in my code.
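For anyone setting this up for the first time, here is a minimal sketch of the .Renviron approach (the variable name AFDC_KEY is my own choice, not something the API requires):

```r
# One-time setup sketch: usethis::edit_r_environ() opens the user-level
# .Renviron file; add a line such as
#   AFDC_KEY=your_key_here
# then save and restart R.

# The key is then available in any session without appearing in the code:
api_key <- Sys.getenv("AFDC_KEY")

# Sys.getenv() returns "" for unset variables, so it's worth checking:
if (identical(api_key, "")) {
  message("AFDC_KEY not set; add it to .Renviron and restart R.")
}
```

Keeping the key out of scripts also keeps it out of version control.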
I will request data for all EV stations in Colorado.
I’ll retrieve the data from the API using the httr (Wickham 2023) package.
# API key is stored in my .Renviron file
api_key <- Sys.getenv("AFDC_KEY")

# base url for AFDC alternative fuel stations API
target <- "https://developer.nrel.gov/api/alt-fuel-stations/v1"

# Return data for all electric stations in Colorado
api_path <- ".json?&fuel_type=ELEC&state=CO&limit=all"

complete_api_path <- paste0(target, api_path, '&api_key=', api_key)

response <- httr::GET(url = complete_api_path)

if (response$status_code != 200) {
  print(paste('Warning, API call returned error code', response$status_code))
}
response$status_code
[1] 200
The result returned from the API is a response object, and the data is in JSON format. The response (which I’m not printing here because it would show my API key) contains a status code; a code of 200 means the API request was successful. Some of the general error codes the API might return are described here.
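As an aside, httr also provides helpers for working with status codes, which can be tidier than checking response$status_code by hand. A minimal sketch:

```r
library(httr)

# http_status() translates a status code (or a response object)
# into a human-readable summary
http_status(200)$category

# In a script, stop_for_status() turns any HTTP error (4xx/5xx) into an
# R error, so a failed request can't silently flow into the JSON parsing:
# stop_for_status(response)  # no-op for a successful (200) response
```

Both functions are part of httr's documented API.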
I’ll use the jsonlite (Ooms 2014) package to convert the JSON to R.
ev_dat <- jsonlite::fromJSON(httr::content(response, "text"))
class(ev_dat)
[1] "list"
names(ev_dat)
[1] "station_locator_url" "total_results" "station_counts" [4] "fuel_stations"
The converted response is actually a list containing the data as well as some metadata about the request.
The total_results field gives the total number of fuel station records matching your requested query (regardless of any limit applied).
ev_dat$total_results
[1] 2049
The station_counts field gives a breakdown by fuel type (here I requested only electric, so the counts for all other fuel types are zero).
Within that breakdown, total counts individual chargers/plugs, which is why it is greater than the station count.
In this case, there are 2049 stations, and a total of 4910 chargers/plugs.
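As a quick sanity check, the two counts from the response imply the average number of plugs per station:

```r
# Counts reported by the API for Colorado EV stations
n_stations <- 2049
n_plugs    <- 4910

# Average number of chargers/plugs per station
avg_plugs <- n_plugs / n_stations
round(avg_plugs, 2)
```

That works out to roughly 2.4 plugs per station.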
ev_dat$station_counts$fuels$ELEC
$total
[1] 4910

$stations
$stations$total
[1] 2049
Finally, the data we want to analyze is in the fuel_stations data frame.
ev <- ev_dat$fuel_stations
Filter out non-EV data columns
The returned data contains many non-electric fields that we don’t need (they will all be NA since we requested the electric fuel type only), so I’ll remove the non-relevant fields from the data frame to clean things up a bit, using the starts_with function from the dplyr (Wickham et al. 2023) package. I’ll also change the date column to a date type and add a variable for the year opened, since I want to look at how many stations were opened over time.
# filter out non-EV related fields
ev <- ev %>%
  select(-dplyr::starts_with("lng")) %>%
  select(-starts_with("cng")) %>%
  select(-starts_with("lpg")) %>%
  select(-starts_with("hy")) %>%
  select(-starts_with("ng")) %>%
  select(-starts_with("e85")) %>%
  select(-starts_with("bd")) %>%
  select(-starts_with("rd")) %>%
  filter(status_code == 'E')

# change date field to date type and add a year opened variable
ev$open_date <- lubridate::ymd(ev$open_date)
ev$open_year <- lubridate::year(ev$open_date)

colnames(ev)
[1] "access_code" "access_days_time" [3] "access_detail_code" "cards_accepted" [5] "date_last_confirmed" "expected_date" [7] "fuel_type_code" "groups_with_access_code" [9] "id" "open_date" [11] "owner_type_code" "status_code" [13] "restricted_access" "maximum_vehicle_class" [15] "station_name" "station_phone" [17] "updated_at" "facility_type" [19] "geocode_status" "latitude" [21] "longitude" "city" [23] "intersection_directions" "plus4" [25] "state" "street_address" [27] "zip" "country" [29] "ev_connector_types" "ev_dc_fast_num" [31] "ev_level1_evse_num" "ev_level2_evse_num" [33] "ev_network" "ev_network_web" [35] "ev_other_evse" "ev_pricing" [37] "ev_renewable_source" "nps_unit_name" [39] "access_days_time_fr" "intersection_directions_fr" [41] "groups_with_access_code_fr" "ev_pricing_fr" [43] "ev_network_ids" "federal_agency" [45] "open_year"
Analysis
< section id="station-openings-over-time" class="level3">Station Openings Over Time
< section id="look-at-how-many-stations-opened-each-year" class="level4">Look at how many stations opened each year
First I’d like to look at how many EV stations opened over time, so I’ll make a new data frame summarizing the number of stations opened by year.
ev_opened <- ev %>%
  count(open_year, name = "nopened") %>%
  filter(!is.na(open_year))
head(ev_opened)
  open_year nopened
1      2010       2
2      2011      13
3      2012      30
4      2013      18
5      2014      35
6      2015      57
Plot Number of Stations Opened Each Year
ev_opened %>%
  ggplot(aes(open_year, nopened)) +
  geom_col() +
  xlab("Year Opened") +
  ylab("# Stations Opened") +
  ggtitle('EV Stations Opened in Colorado Each Year') +
  theme_grey(base_size = 15) +
  geom_text(aes(label = nopened), vjust = 0)
Cumulative sum of stations opened over time
We can also look at the cumulative sum of stations opened over time.
#| fig-width: 8
ev_opened %>%
  ggplot(aes(open_year, cumsum(nopened))) +
  geom_line(linewidth = 1.5) +
  geom_point() +
  xlab("Year") +
  ylab("# Stations") +
  ggtitle("Cumulative sum of EV stations opened in CO") +
  theme_grey(base_size = 15)
Station openings by level/charger type
Next I want to dig a little deeper and break down the station openings by charger type and/or level. I’d expect to see more Level 2 chargers in earlier years, and an increase in DC fast charging stations in more recent years. I’ll make a new data frame with the number of chargers opened by year, grouped by charging level (Level 1, Level 2, or DC fast).
- Note here I’m working with the number of chargers of each level, not the number of stations.
ev_opened_level <- ev %>%
  select(id, open_date, open_year,
         ev_dc_fast_num, ev_level2_evse_num, ev_level1_evse_num) %>%
  group_by(open_year) %>%
  summarize(n_DC = sum(ev_dc_fast_num, na.rm = TRUE),
            n_L2 = sum(ev_level2_evse_num, na.rm = TRUE),
            n_L1 = sum(ev_level1_evse_num, na.rm = TRUE)) %>%
  filter(!is.na(open_year))
head(ev_opened_level)
# A tibble: 6 × 4
  open_year  n_DC  n_L2  n_L1
      <dbl> <int> <int> <int>
1      2010     1    21    18
2      2011     1    17     0
3      2012     9    41     0
4      2013    20    31    28
5      2014    24    62     0
6      2015    29   127     0
To make plotting easier, I’ll pivot the data frame from wide to long format so I can group by charging level:
ev_opened_level_long <- ev_opened_level %>%
  tidyr::pivot_longer(cols = c('n_DC', 'n_L2', 'n_L1'),
                      names_to = "Level",
                      names_prefix = "n_",
                      values_to = "n_opened")
head(ev_opened_level_long)
# A tibble: 6 × 3
  open_year Level n_opened
      <dbl> <chr>    <int>
1      2010 DC           1
2      2010 L2          21
3      2010 L1          18
4      2011 DC           1
5      2011 L2          17
6      2011 L1           0
Now I can go ahead and plot the number of chargers opened over time, by level.
#| fig-width: 8
ev_opened_level_long %>%
  ggplot(aes(open_year, n_opened, group = Level)) +
  geom_line(aes(col = Level), linewidth = 1.5) +
  geom_point(aes(col = Level)) +
  xlab("Year Opened") +
  ylab("# Chargers Opened") +
  ggtitle("Number of Chargers Opened Per Year By Level")
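To quantify the shift toward DC fast charging, we could also compute the DC share of each year’s new chargers. The sketch below uses a small stand-in tibble containing only the six rows of ev_opened_level shown earlier, so it runs on its own:

```r
library(dplyr)

# Stand-in for ev_opened_level: the first six rows shown above
ev_opened_level <- tibble::tibble(
  open_year = 2010:2015,
  n_DC = c(1, 1, 9, 20, 24, 29),
  n_L2 = c(21, 17, 41, 31, 62, 127),
  n_L1 = c(18, 0, 0, 28, 0, 0)
)

# Fraction of each year's new chargers that are DC fast
dc_share <- ev_opened_level %>%
  mutate(frac_DC = n_DC / (n_DC + n_L2 + n_L1))

round(dc_share$frac_DC, 2)
```

Run on the full data frame, this would make the expected trend explicit as a single fraction per year rather than three separate counts.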
Session Info
sessionInfo()
R version 4.2.3 (2023-03-15)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Big Sur ... 10.16

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices datasets  utils     methods   base     

other attached packages:
[1] dplyr_1.1.2    ggplot2_3.4.2  jsonlite_1.8.5 httr_1.4.6    

loaded via a namespace (and not attached):
 [1] pillar_1.9.0     compiler_4.2.3   tools_4.2.3      digest_0.6.31   
 [5] lubridate_1.9.2  evaluate_0.21    lifecycle_1.0.3  tibble_3.2.1    
 [9] gtable_0.3.3     timechange_0.2.0 pkgconfig_2.0.3  rlang_1.1.1     
[13] cli_3.6.1        rstudioapi_0.14  curl_5.0.1       yaml_2.3.7      
[17] xfun_0.39        fastmap_1.1.1    withr_2.5.0      knitr_1.43      
[21] generics_0.1.3   vctrs_0.6.2      grid_4.2.3       tidyselect_1.2.0
[25] glue_1.6.2       R6_2.5.1         fansi_1.0.4      rmarkdown_2.22  
[29] purrr_1.0.1      tidyr_1.3.0      farver_2.1.1     magrittr_2.0.3  
[33] scales_1.2.1     htmltools_0.5.5  colorspace_2.1-0 renv_0.17.3     
[37] labeling_0.4.2   utf8_1.2.3       munsell_0.5.0   
References