Set Operations
By Carl Goodwin in R
November 14, 2017
In Let’s Jitter I looked at a relatively simple set of sales data. G-Cloud offers a much richer source: many thousands of services, documented by several thousand suppliers, spread across myriad web pages. These services straddle many categories. I’ll use these data to explore sets and their intersections.
library(tidyverse)
library(rvest)
library(kableExtra)
library(furrr)
library(wesanderson)
library(tictoc)
library(ggupset)
library(ggVennDiagram)
library(glue)
plan(multicore)
theme_set(theme_bw())
(cols <- wes_palette("Royal2"))
I’m going to focus on the Cloud Hosting lot. Suppliers document the services they want to offer to Public Sector buyers. Each supplier is free to assign each of their services to one or more service categories. It would be interesting to see how these categories overlap when looking at the aggregated data.
I’ll begin by harvesting the URL for each category’s search results. And I’ll also capture the number of search pages for each category. This will enable me to later control how R iterates through the web pages to extract the required data.
path <-
  "https://www.digitalmarketplace.service.gov.uk/g-cloud/search?lot=cloud-"

lot_urls <-
  c(
    str_c(path, "hosting"),
    str_c(path, "software"),
    str_c(path, "support")
  )
cat_urls <- future_map_dfr(lot_urls, function(x) {
  # Category filter links in each lot's left-hand navigation
  nodes <- x |>
    read_html() |>
    html_elements(".app-lot-filter__last-list li a")

  tibble(
    url = nodes |>
      html_attr("href"),
    pages = nodes |>
      html_text()
  )
}) |>
  mutate(
    # The link text carries the results count; searches return 30 per page
    pages = parse_number(as.character(pages)),
    pages = if_else(pages %% 30 > 0, pages %/% 30 + 1, pages %/% 30),
    # Keep the lot name and just the category query string from "&" onwards
    lot = str_extract(url, "(?<=cloud-)[\\w]+"),
    url = str_remove(url, ".*(?=&)")
  )
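As a quick sanity check on that page-count arithmetic, here’s a throwaway sketch with made-up result counts: a category reporting 95 results needs four pages of 30, while 90 results needs exactly three.

# Hypothetical result counts, purely to illustrate the ceiling division
results <- c(90, 95)

if_else(results %% 30 > 0, results %/% 30 + 1, results %/% 30)
## [1] 3 4

ceiling(results / 30) # equivalent, and a little more direct
## [1] 3 4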
version <- lot_urls[[1]] |>
  read_html() |>
  html_elements(".app-search-result:first-child") |>
  html_text() |>
  str_extract("G-Cloud \\d\\d")
So now I’m all set to parallel process the data at two levels: at category level, and, within each category, across the multiple pages of search results, harvesting 30 service IDs per page.
I’ll also auto-abbreviate the category names so I have the option of more concise labels for less-cluttered plotting later on.
tic()
data_df <-
  future_pmap_dfr(
    list(
      cat_urls$url,
      cat_urls$pages,
      cat_urls$lot
    ),
    # x = category query string; y = number of pages; z = lot name
    function(x, y, z) {
      future_map_dfr(1:y, function(page) {
        # Harvest the service links from one page of search results
        refs <- str_c(
          "https://www.digitalmarketplace.service.gov.uk/g-cloud/search?page=",
          page,
          x,
          "&lot=cloud-",
          z
        ) |>
          read_html() |>
          html_elements("#js-dm-live-search-results .govuk-link") |>
          html_attr("href")

        tibble(
          lot = str_c("Cloud ", str_to_title(z)),
          id = str_extract(refs, "[[:digit:]]{15}"),
          cat = str_remove(x, "&serviceCategories=") |>
            str_replace_all("\\Q+\\E", " ") |>
            str_remove("%28[[:print:]]+%29")
        )
      })
    }
  ) |>
  select(lot:cat) |>
  mutate(
    cat = str_trim(cat) |> str_to_title(),
    abbr = str_remove(cat, "and") |> abbreviate(3) |> str_to_upper()
  )
toc()
## 5007.295 sec elapsed
# Uncached, 78 mins with multicore
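Before going further, a quick structural check of the scraped data is worthwhile (a sketch; output not shown here):

# Column names and types of the scraped tibble
data_df |>
  glimpse()

# How many service-category rows were harvested per lot
data_df |>
  count(lot)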
Now that I have a nice tidy tibble, I can start to think about visualisations.
I like Venn diagrams. But to create one I’ll first need to do a little prep, as ggVennDiagram requires a separate character vector of IDs for each set.
host_df <- data_df |>
  filter(lot == "Cloud Hosting") |>
  group_by(abbr)

keys <- host_df |>
  group_keys() |>
  pull(abbr)

all_cats <- host_df |>
  group_split() |>
  map("id") |>
  set_names(keys)
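A quick check that the prep produced what ggVennDiagram expects, i.e. a named list with one character vector of service IDs per category (a sketch; output omitted):

# Number of service IDs in each category's vector
all_cats |>
  lengths() |>
  head()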
Venn diagrams work best with a small number of sets. So we’ll select four categories.
four_cats <- all_cats[c("CAAH", "PAAS", "OBS", "IND")]

four_cats |>
  ggVennDiagram(label = "count", label_alpha = 0) +
  scale_fill_gradient(low = cols[3], high = cols[5]) +
  scale_colour_manual(values = cols[c(rep(4, 4))]) +
  labs(
    x = "Category Combinations", y = NULL, fill = "# Services",
    title = "The Most Frequent Category Combinations",
    subtitle = glue("Focusing on Four {version} Service Categories"),
    caption = "Source: digitalmarketplace.service.gov.uk\n"
  )
Let’s suppose I want to find out which service IDs lie in a particular intersection. Perhaps I want to go back to the web site with those IDs to search for, and read up on, those particular services. I could use purrr’s reduce
to achieve this. For example, let’s extract the IDs at the heart of the Venn, i.e. those common to all four categories.
four_cats |> reduce(intersect)
## [1] "616441239909908" "162482129495664" "905776993996934" "376219180313371"
## [5] "407652487989048" "225809803026529" "351972947261871" "141467821661395"
## [9] "428474094741326" "407756608170776" "124196562348938" "550690199626866"
## [13] "150223927888075" "860854746891399"
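As an aside, those IDs can be pasted straight back into service-page links, assuming the marketplace’s /g-cloud/services/<id> URL pattern (an assumption worth checking before relying on it):

# Build hypothetical service-page links from the intersecting IDs
# (assumes the /g-cloud/services/<id> URL pattern)
ids <- four_cats |> reduce(intersect)

str_c(
  "https://www.digitalmarketplace.service.gov.uk/g-cloud/services/", ids
) |>
  head(3)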
And if we wanted the IDs intersecting the “OBS” and “IND” categories?
list(
  four_cats$OBS,
  four_cats$IND
) |>
  reduce(intersect)
## [1] "616441239909908" "151022407134695" "852500112300305" "162482129495664"
## [5] "905776993996934" "376219180313371" "407652487989048" "474814461260648"
## [9] "225809803026529" "351972947261871" "576030340787556" "141467821661395"
## [13] "758507645451314" "428474094741326" "407756608170776" "774442655261342"
## [17] "853411470123271" "504203548963816" "100473171421762" "456595685823942"
## [21] "513866403672024" "110346802387127" "580227303306629" "911282351993476"
## [25] "948752760228442" "124196562348938" "496076310695431" "550690199626866"
## [29] "150223927888075" "675399761262086" "497814439666264" "615391229994933"
## [33] "860854746891399"
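The same approach extends to the other set operations. For example, a quick sketch of how many services sit in “OBS” but not in “IND”, and how many sit in either:

# Services in "OBS" but not in "IND"
setdiff(four_cats$OBS, four_cats$IND) |> length()

# Services in either category
union(four_cats$OBS, four_cats$IND) |> length()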
Sometimes we need something a little more scalable than a Venn diagram. The ggupset package provides a good solution. Before trying more than four sets, I’ll first use the same four categories so we can compare this visualisation with the Venn.
set_df <- data_df |>
  filter(abbr %in% c("CAAH", "PAAS", "OBS", "IND")) |>
  group_by(id) |>
  mutate(category = list(cat)) |>
  distinct(id, category) |>
  group_by(category) |>
  mutate(n = n()) |>
  ungroup()

set_df |>
  ggplot(aes(category)) +
  geom_bar(fill = cols[1]) +
  geom_label(aes(y = n, label = n), vjust = -0.1, size = 3, fill = cols[5]) +
  scale_x_upset() +
  scale_y_continuous(expand = expansion(mult = c(0, 0.15))) +
  theme(panel.border = element_blank()) +
  labs(
    x = "Category Combinations", y = NULL,
    title = "The Most Frequent Category Combinations",
    subtitle = glue("Focusing on Four {version} Service Categories"),
    caption = "Source: digitalmarketplace.service.gov.uk"
  )
Now let’s take a look at the intersections across all the categories. And let’s suppose that our particular interest is all services which appear in one, and only one, category.
set_df <- data_df |>
  group_by(id) |>
  filter(n() == 1, lot == "Cloud Hosting") |>
  mutate(category = list(cat)) |>
  distinct(id, category) |>
  group_by(category) |>
  mutate(n = n()) |>
  ungroup()

set_df |>
  ggplot(aes(category)) +
  geom_bar(fill = cols[2]) +
  geom_label(aes(y = n, label = n), vjust = -0.1, size = 3, fill = cols[3]) +
  scale_x_upset(n_sets = 10) +
  scale_y_continuous(expand = expansion(mult = c(0, 0.15))) +
  theme(panel.border = element_blank()) +
  labs(
    x = "Category Combinations", y = NULL,
    title = "10 Most Frequent Single-Category Services",
    subtitle = "Focused on Service Categories in the Cloud Hosting Lot",
    caption = "Source: digitalmarketplace.service.gov.uk"
  )
Suppose we want to extract the intersection data for the top intersections across all sets. I could use functions from the tidyr package to achieve this.
cat_mix <- data_df |>
  filter(lot == "Cloud Hosting") |>
  mutate(x = cat) |>
  pivot_wider(id_cols = id, names_from = cat, values_from = x, values_fill = "^") |>
  unite(col = intersect, -id, sep = "/") |>
  count(intersect) |>
  mutate(
    intersect = str_replace_all(intersect, "(?:\\Q/^\\E|\\Q^/\\E)", ""),
    intersect = str_replace_all(intersect, "/", " | ")
  ) |>
  arrange(desc(n)) |>
  slice(1:21)

cat_mix |>
  kbl(col.names = c("Intersecting Categories", "Services Count"))
Intersecting Categories | Services Count |
---|---|
Platform As A Service | 719 |
Compute And Application Hosting | 209 |
Networking | 147 |
Archiving Backup And Disaster Recovery | 126 |
Compute And Application Hosting | Nosql Database | Relational Database | Other Database Services | Networking | Platform As A Service | Search | Block Storage | Object Storage | Other Storage Services | 115 |
Logging And Analysis | 82 |
Other Storage Services | 77 |
Compute And Application Hosting | Platform As A Service | 77 |
Infrastructure And Platform Security | 71 |
Other Database Services | 66 |
Container Service | 52 |
Archiving Backup And Disaster Recovery | Compute And Application Hosting | Content Delivery Network | Data Warehousing | Distributed Denial Of Service Attack Protection | Firewall | Infrastructure And Platform Security | Intrusion Detection | Platform As A Service | Protective Monitoring | 39 |
Relational Database | 36 |
Message Queuing And Processing | 34 |
Infrastructure And Platform Security | Intrusion Detection | Logging And Analysis | Protective Monitoring | 34 |
Archiving Backup And Disaster Recovery | Compute And Application Hosting | Container Service | Distributed Denial Of Service Attack Protection | Firewall | Infrastructure And Platform Security | Load Balancing | Networking | Protective Monitoring | Block Storage | 29 |
Archiving Backup And Disaster Recovery | Compute And Application Hosting | Platform As A Service | 26 |
Archiving Backup And Disaster Recovery | Compute And Application Hosting | 24 |
Object Storage | 21 |
Protective Monitoring | 19 |
Block Storage | Object Storage | Other Storage Services | 16 |
And I can compare this table to the equivalent ggupset visualisation.
set_df <- data_df |>
  group_by(id) |>
  filter(lot == "Cloud Hosting") |>
  mutate(category = list(cat)) |>
  distinct(id, category) |>
  group_by(category) |>
  mutate(n = n()) |>
  ungroup()

set_df |>
  ggplot(aes(category)) +
  geom_bar(fill = cols[5]) +
  geom_label(aes(y = n, label = n), vjust = -0.1, size = 3, fill = cols[4]) +
  scale_x_upset(n_sets = 22, n_intersections = 21) +
  scale_y_continuous(expand = expansion(mult = c(0, 0.15))) +
  theme(panel.border = element_blank()) +
  labs(
    x = "Category Combinations", y = NULL,
    title = "Top Intersections Across all Sets",
    subtitle = "Focused on Service Categories in the Cloud Hosting Lot",
    caption = "Source: digitalmarketplace.service.gov.uk"
  )
And if I want to extract all the service IDs for the top 5 intersections, I could use dplyr and tidyr verbs to achieve this too.
I won’t print them all out though!
top5_int <- data_df |>
  filter(lot == "Cloud Hosting") |>
  select(id, abbr) |>
  mutate(x = abbr) |>
  pivot_wider(names_from = abbr, values_from = x, values_fill = "^") |>
  unite(col = intersect, -id, sep = "/") |>
  mutate(
    intersect = str_replace_all(intersect, "(?:\\Q/^\\E|\\Q^/\\E)", ""),
    intersect = str_replace_all(intersect, "/", " | ")
  ) |>
  group_by(intersect) |>
  mutate(count = n_distinct(id)) |>
  arrange(desc(count), intersect, id) |>
  ungroup() |>
  add_count(intersect, wt = count, name = "temp") |>
  mutate(temp = dense_rank(desc(temp))) |>
  filter(temp %in% 1:5) |>
  distinct(id)

top5_int |>
  summarise(ids = n_distinct(id))
## # A tibble: 1 × 1
## ids
## <int>
## 1 1316
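And to eyeball just a handful of those service IDs rather than all 1,316, a quick sketch:

# A small sample of the service IDs in the top 5 intersections
top5_int |>
  slice_head(n = 5)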
R Toolbox
The table below summarises the packages and functions used in this post. It also feeds a separate toolbox visualisation summarising package and function usage across all posts.
Package | Function |
---|---|
base | abbreviate[1]; as.character[1]; c[6]; cat[5]; conflicts[1]; cumsum[1]; function[4]; list[5]; rep[1]; search[1]; sum[1] |
dplyr | filter[12]; intersect[4]; add_count[1]; arrange[4]; count[2]; dense_rank[1]; desc[5]; distinct[4]; group_by[9]; group_keys[1]; group_split[1]; id[7]; if_else[4]; mutate[18]; n[8]; n_distinct[2]; pull[1]; select[2]; slice[1]; summarise[2]; ungroup[4] |
furrr | future_map_dfr[2]; future_pmap_dfr[1] |
future | multicore[1]; plan[1] |
ggplot2 | aes[6]; element_blank[3]; expansion[3]; geom_bar[3]; geom_label[3]; ggplot[3]; labs[4]; scale_colour_manual[1]; scale_fill_gradient[1]; scale_y_continuous[3]; theme[3]; theme_bw[1]; theme_set[1] |
ggupset | scale_x_upset[3] |
ggVennDiagram | ggVennDiagram[1] |
glue | glue[2] |
kableExtra | kbl[2] |
purrr | map[2]; map2_dfr[1]; possibly[1]; reduce[2]; set_names[2] |
readr | parse_number[1]; read_lines[1] |
rvest | html_attr[2]; html_elements[3]; html_text[2] |
stringr | str_c[10]; str_count[1]; str_detect[2]; str_extract[3]; str_remove[6]; str_remove_all[1]; str_replace[1]; str_replace_all[4]; str_starts[1]; str_to_title[2]; str_to_upper[1]; str_trim[1] |
tibble | as_tibble[1]; tibble[4]; tribble[1]; enframe[1] |
tictoc | tic[1]; toc[1] |
tidyr | pivot_wider[2]; unite[2]; unnest[1] |
wesanderson | wes_palette[1] |