Finding Happiness in ‘The Smoke’


The Smoke, to use London’s nickname, has 32 boroughs plus the central business district known as the City of London. What does Cluster Analysis tell us about the characteristics that bind them?

library(tidyverse)
library(tidymodels)
library(tidytext)
library(wesanderson)
library(glue)
library(readxl)
library(janitor)
library(ggrepel)
library(sf)
library(scales)
library(kableExtra)
theme_set(theme_bw())

(cols <- wes_palette(6, name = "GrandBudapest1", type = "continuous"))

The London Datastore provides data profiling each area.

raw_df <-
  read_xlsx("london-borough-profiles.xlsx", sheet = 2) %>%
  clean_names() %>%
  filter(str_starts(code, "E")) %>%
  mutate(across(where(is.character), ~ na_if(., ".")),
    inner_outer_london = str_remove(inner_outer_london, " London")
  )

These data include 81 numeric variables quantifying such things as population density, happiness and age, far too many to visualise in two dimensions. Principal Components Analysis can distil the bulk of that information into just two variables, making it much easier to visualise the relationships between areas.
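
As a quick check on that count (a small aside, not part of the original analysis), the numeric columns can be tallied directly; with the same download of the spreadsheet this should come out at 81.

# Count the numeric borough-profile variables fed into the PCA
raw_df %>%
  select(where(is.numeric)) %>%
  ncol()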

The City of London, aka “The Square Mile”, is quite distinct from the other 32 areas and has many NA values.

raw_df %>%
  mutate(na_count = rowSums(is.na(.))) %>%
  select(area_name, na_count) %>%
  filter(na_count != 0) %>%
  arrange(desc(na_count)) %>%
  kbl() %>%
  kable_material()
area_name na_count
City of London 27
Kensington and Chelsea 3
Barnet 1
Camden 1
Hackney 1
Haringey 1
Harrow 1
Islington 1
Lewisham 1
Merton 1
Richmond upon Thames 1
Waltham Forest 1
Wandsworth 1

Not surprisingly, the two-dimensional visualisation sets the City of London apart. And the other 32 are broadly, albeit with some mixing, divided into inner and outer London boroughs.

pca_fit <- raw_df %>%
  select(where(is.numeric)) %>%
  prcomp(scale = TRUE)

pca_augmented <-
  pca_fit %>%
  augment(raw_df)

pca_augmented %>%
  ggplot(aes(.fittedPC1, .fittedPC2, fill = inner_outer_london)) +
  geom_label(aes(label = area_name), size = 2, hjust = "inward") +
  scale_fill_manual(values = cols[c(1, 3)]) +
  labs(
    title = "33 London Areas", fill = "London",
    x = "Principal Component 1", y = "Principal Component 2",
    caption = "Source: data.london.gov.uk"
  )

After squeezing the many dimensions into two, how much of the original information was it possible to retain?
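
Before plotting the full breakdown, a quick base-R cross-check (not part of the original post): summary() on the prcomp object reports the proportion of variance explained by each component, and the cumulative proportion, directly.

# Standard deviation, proportion of variance and cumulative proportion per component
summary(pca_fit)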

pca_tidied <- pca_fit %>%
  tidy(matrix = "eigenvalues")

pct_explained <-
  pca_tidied %>%
  pluck("cumulative", 2)

pca_tidied %>%
  ggplot(aes(PC, percent)) +
  geom_col(aes(fill = if_else(PC <= 2, cols[1], cols[2])),
    alpha = 0.8, show.legend = FALSE
  ) +
  scale_y_continuous(labels = percent_format(1)) +
  scale_fill_identity() +
  coord_flip() +
  labs(
    title = glue(
      "{percent(pct_explained, 0.1)} of the ",
      "Variance Explained by Principal Components 1 & 2"
    ),
    x = "Principal Component", y = NULL
  )

Whilst we do lose ease of interpretation by distilling the information in this way, it is still possible to understand which of the original variables influenced the areas' two-dimensional positioning.

The axes depicted by the arrows below tell us that anxiety scores play a significant role in the placement of the City of London towards the upper-left. Average age pushes areas more towards the top. And happiness influences the bottom-right.

pattern <- "_\\d{4}|_st.+|_score|_rates|proportion_of_|_\\d{2}_out_of_\\d{2}"

pca_fit %>%
  tidy(matrix = "rotation") %>%
  pivot_wider(names_from = "PC", names_prefix = "PC", values_from = "value") %>%
  mutate(column = str_remove_all(column, pattern)) %>%
  ggplot(aes(PC1, PC2)) +
  geom_segment(
    xend = 0, yend = 0, colour = "grey70",
    arrow = arrow(ends = "first", length = unit(8, "pt"))
  ) +
  geom_text_repel(aes(label = column), size = 3) +
  theme_minimal() +
  labs(
    x = "PC 1", y = "PC 2",
    title = "Characteristics Influencing Area Positioning",
    caption = "Source: data.london.gov.uk"
  ) +
  theme(axis.text = element_blank())
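
As a numerical companion to the arrow plot (a sketch, not part of the original post), the variables with the largest absolute loadings on the first two components can be listed directly.

# Top five variables by absolute loading on PC1 and PC2
pca_fit %>%
  tidy(matrix = "rotation") %>%
  filter(PC <= 2) %>%
  mutate(column = str_remove_all(column, pattern)) %>%
  group_by(PC) %>%
  slice_max(abs(value), n = 5) %>%
  ungroup()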

This may be validated by ranking all 33 areas by these three original variables.

pca_long <- 
  pca_augmented %>%
  select(area_name, matches("happ|anx|average_age")) %>%
  rename_with(~ str_remove(., "_.*")) %>%
  rename("avg_age" = "average") %>%
  pivot_longer(-area, values_to = "score") %>%
  mutate(area = reorder_within(area, score, name)) 

pca_long %>%
  ggplot(aes(area, score, colour = name)) +
  geom_point(show.legend = FALSE) +
  facet_wrap(~name, scales = "free") +
  scale_x_reordered() +
  scale_colour_manual(values = cols[c(1, 3, 5)]) +
  coord_flip() +
  labs(x = NULL, caption = "Source: data.london.gov.uk")

To collect these areas into their natural groupings, a decision is needed on the desired number of clusters. We can visualise the result of dividing the areas into 1, 2, 3 and more clusters. Per the panels below, 3 appears to capture the natural grouping of the coloured points nicely.

set.seed(2022)

kclusts <-
  tibble(k = 1:6) %>%
  mutate(
    kclust = map(k, ~ kmeans(pca_augmented %>% 
                               select(".fittedPC1", ".fittedPC2"), .x)),
    tidied = map(kclust, tidy),
    glanced = map(kclust, glance),
    augmented = map(kclust, augment, pca_augmented)
  )

assignments <-
  kclusts %>%
  unnest(cols = c(augmented))

clusters <-
  kclusts %>%
  unnest(cols = c(tidied))

assignments %>%
  ggplot(aes(x = .fittedPC1, y = .fittedPC2)) +
  geom_point(aes(color = .cluster)) +
  facet_wrap(~k, nrow = 2) +
  scale_colour_manual(values = cols[c(1:6)]) +
  geom_point(data = clusters, size = 4, shape = 13) +
  labs(
    title = "How Many Clusters Best Captures the Groupings?",
    subtitle = "X Marks the Cluster Centre",
    caption = "Source: data.london.gov.uk"
  )

The elbow method provides a more quantitative basis for the choice. The compactness of the clustering, measured by the total within-cluster sum of squares, improves sharply up to 3 clusters, with diminishing returns thereafter.
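
As a quick sanity check before drawing the elbow plot (a sketch, not part of the original post; k3_fit is just an ad hoc name), the tot.withinss value that glance() surfaces is simply the sum of the per-cluster within-cluster sums of squares stored on the kmeans object.

# Pull the k = 3 kmeans fit and confirm tot.withinss equals sum(withinss)
k3_fit <- kclusts %>%
  filter(k == 3) %>%
  pluck("kclust", 1)

all.equal(k3_fit$tot.withinss, sum(k3_fit$withinss))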

kclusts %>%
  unnest(cols = c(glanced)) %>%
  ggplot(aes(k, tot.withinss)) +
  geom_line() +
  geom_point() +
  geom_label(aes(label = if_else(k == 3, "Elbow", NA_character_)),
    nudge_y = -25, fill = cols[1]
  ) +
  labs(
    title = "Elbow Method",
    x = "Clusters", y = "Within-Cluster Variance"
  )

And settling on this choice of 3 clusters, we get this split.

assignments %>%
  filter(k == 3) %>%
  ggplot(aes(.fittedPC1, .fittedPC2, fill = .cluster)) +
  geom_label(aes(label = area_name), size = 2, hjust = "inward", overlap = FALSE) +
  scale_fill_manual(values = cols[c(1, 3, 6)]) +
  labs(
    title = "Closely-Related London Areas", fill = "Cluster",
    x = "Principal Component 1", y = "Principal Component 2",
    caption = "Source: data.london.gov.uk"
  )

How does this look with geospatial data? And how do the clusters relate to inner and outer London?

shape_df <-
  st_read("statistical-gis-boundaries-london/ESRI",
    "London_Borough_Excluding_MHW",
    as_tibble = TRUE, quiet = TRUE
  ) %>%
  left_join(assignments %>% 
              filter(k == 3), by = c("GSS_CODE" = "code")) %>%
  select(.cluster, inner_outer_london, NAME, geometry) %>%
  pivot_longer(c(.cluster, inner_outer_london)) %>%
  mutate(value = recode(value, "1" = "Cluster 1", 
                        "2" = "Cluster 2", "3" = "Cluster 3"))

shape_df %>%
  mutate(name = recode(name,
    ".cluster" = "By Cluster",
    "inner_outer_london" = "By Inner/Outer"
  )) %>%
  ggplot() +
  geom_sf(aes(fill = value)) +
  geom_sf_label(aes(label = NAME), size = 2, alpha = 0.7) +
  scale_fill_manual(values = cols[c(1:4, 6)]) +
  facet_wrap(~name) +
  theme_void() +
  theme(legend.position = "none") +
  labs(fill = NULL)

Not too dissimilar, but with some notable differences.

The City of London is a cluster apart in the heart of London. Kensington and Chelsea is an inner-London borough, but exhibits outer-London characteristics. And the reverse is true of the likes of Brent and Greenwich.
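
A quick cross-tabulation (a sketch, not part of the original post) makes those mismatches easier to spot by counting inner and outer boroughs within each cluster.

# Inner/outer make-up of each of the three clusters
assignments %>%
  filter(k == 3) %>%
  count(.cluster, inner_outer_london)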

R Toolbox

Summarising below the packages and functions used in this post enables me to separately create a toolbox visualisation of their usage across all posts.

Package Function
base c[5]; conflicts[1]; cumsum[1]; function[1]; is.character[1]; is.na[1]; is.numeric[1]; rowSums[1]; search[1]; set.seed[4]; sum[1]
dplyr filter[9]; across[1]; arrange[3]; desc[3]; group_by[1]; if_else[5]; left_join[1]; mutate[11]; na_if[1]; recode[2]; rename[1]; rename_with[1]; select[6]; summarise[1]
ggplot2 aes[15]; arrow[1]; coord_flip[2]; element_blank[1]; facet_wrap[3]; geom_col[1]; geom_label[3]; geom_line[1]; geom_point[4]; geom_segment[1]; geom_sf[1]; geom_sf_label[1]; ggplot[8]; labs[8]; scale_colour_manual[2]; scale_fill_manual[4]; scale_y_continuous[1]; theme[2]; theme_bw[1]; theme_minimal[1]; theme_set[1]; theme_void[1]; unit[1]
ggrepel geom_text_repel[1]
glue glue[1]
janitor clean_names[1]
kableExtra kable_material[2]; kbl[2]
purrr map[5]; map2_dfr[1]; pluck[1]; possibly[1]; set_names[1]
readr read_lines[1]
readxl read_xlsx[1]
scales percent[2]; percent_format[1]
sf st_read[1]
stats kmeans[2]; prcomp[1]
stringr str_c[5]; str_count[1]; str_detect[2]; str_remove[4]; str_remove_all[2]; str_starts[2]
tibble as_tibble[1]; tibble[3]; enframe[1]
tidyr pivot_longer[2]; pivot_wider[1]; unnest[4]
tidytext reorder_within[1]; scale_x_reordered[1]
wesanderson wes_palette[1]
