How to download complete XML records from PubMed and extract data



Introduction

My first PubMed script (An R Script to Automatically download PubMed Citation Counts By Year of Publication) extracted yearly counts for any number of search strings, using PubMed's E-utilities. Specifically, it used the esearch function, which reports the number of hits for your search and/or the articles' PMIDs. That method is very reliable and fast if you're only interested in the number of hits for a given query. However, PubMed's E-utilities offer a lot more than that, and in this article I will use some of those features to download complete article records in XML.
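
For readers who haven't seen that earlier script, the count-only use of esearch looks roughly like this. This is a minimal sketch, assuming the RCurl and XML packages (the same packages used by the full script at the end of this article); the query string is just an illustration:
require("RCurl")
require("XML")

# Illustrative query; spaces are replaced with "+" for the URL
query <- "cognitive+behavior+therapy"
# esearch returns a small XML document that includes a <Count> tag
esearch.xml <- getURL(paste("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&term=",
                            query, sep = ""))
esearch.doc <- xmlTreeParse(esearch.xml, asText = TRUE)
# Total number of hits for the query
as.numeric(xmlValue(esearch.doc[["doc"]][["eSearchResult"]][["Count"]]))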

How it works

What’s cool about esearch is that you can tell it to save a history of the articles found by your query, and then use another function called efetch to download that history. This is done by adding &usehistory=y to your search, which will generate this XML (in addition to some other XML-tags):
<WebEnv>
NCID_1_90555773_130.14.18.48_5553_1335519406_1226217114
</WebEnv>
Once we have extracted the WebEnv string, we just tell PubMed's efetch to send us the articles saved in WebEnv. There's one complication, though. PubMed "only" allows us to fetch 10 000 articles in one go, so my code includes a loop that downloads the data in batches and pastes the pieces together to create valid XML. The XML cutting and pasting is done with gsub, since the unparsed XML data is just a long string. It's not the most beautiful solution, but it seems to work. Once all the XML data is saved in one object, we just need to parse it and extract whatever PubMed field(s) we're interested in. I've included a function that parses the XML and extracts journal counts, although you could use the same method to extract any field.
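
Condensed, the flow inside searchPubmed() (defined at the end of this article) looks something like this. The variable names here are illustrative, and pub.esearch stands for the unparsed esearch XML from a query run with &usehistory=y:
require("XML")

# Parse the esearch result and pull out the WebEnv string
doc <- xmlTreeParse(pub.esearch, asText = TRUE)
web.env <- xmlValue(doc[["doc"]][["eSearchResult"]][["WebEnv"]])

# efetch then returns the saved articles in batches of at most 10 000,
# by stepping retstart forward by 10 000 for each request
efetch.url <- paste("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&WebEnv=",
                    web.env, "&query_key=1&retmode=xml&retstart=0&retmax=10000", sep = "")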

One example run: Top 20 CBT journals in 2010, 2011 and all time

[Figure: Top 20 Cognitive Behavior Therapy journals by PubMed citation count]

These two graphs were created using the following three queries (notice that I use single quotes inside my queries). The script does not download different queries automatically for you, so I ran my three searches individually. The R code for searchPubmed() and extractJournal() is at the end of this article.
# Get data for 2011
query <- c("cbt" = "'cognitive behavior therapy' OR 'cognitive behavioral therapy' OR 'cognitive therapy' AND 2011[DP]")
pub.efetch <- searchPubmed(query)
cbt_2011 <- extractJournal()

# Get data for 2010
query <- c("cbt" = "'cognitive behavior therapy' OR 'cognitive behavioral therapy' OR 'cognitive therapy' AND 2010[DP]")
pub.efetch <- searchPubmed(query)
cbt_2010 <- extractJournal()

# Get total data for all years
query <- c("cbt" = "'cognitive behavior therapy' OR 'cognitive behavioral therapy' OR 'cognitive therapy'")
pub.efetch <- searchPubmed(query)
cbt_any <- extractJournal()

Reshaping the data and creating the plots

I needed to reshape my data a bit, and combine it into one object, before I used ggplot2 to make the graphs. I did it like this:
# Add year-column
cbt_2010$year <- "2010"
cbt_2011$year <- "2011"
cbt_any$year <- "All"

# Reorder by $freq
cbt_any <- cbt_any[order(cbt_any$freq, decreasing=TRUE),]
# keep top 20
cbt_any <- cbt_any[1:20,]
#  Reorder factor levels, this will also drop levels not used
cbt_any$x <- factor(cbt_any$x, levels=cbt_any$x)

# Only keep journals that are in the all-time top 20
cbt_2010 <- cbt_2010[cbt_2010$x %in% cbt_any$x,]
cbt_2011 <- cbt_2011[cbt_2011$x %in% cbt_any$x,]

# Combine data into one data frame
cbt_total <- rbind(cbt_2010,cbt_2011,cbt_any)

# Copy levels from cbt_any, but with levels in reverse order
# since I want the highest value at the top
cbt_total$x <- factor(cbt_total$x, levels=rev(cbt_any$x))

ggplot2 code

Now that I have all my top 20 data in one object in the long format, the ggplot2 code is pretty simple.
## Names for plot legend ##
my_labels <- c("2010", "2011", "1977-2012")

# Bar plot
ggplot(cbt_total, aes(x, percent, group=year, fill=year)) + geom_bar(stat="identity") + 
  coord_flip() +
  scale_fill_manual(values=c("All" = "#b41f5b", "2010" = "#A6CEE3", "2011" = "#1F78B4"), labels=my_labels) +
  xlab(NULL) +
  theme(legend.position = "bottom")  # theme() replaces the deprecated opts()

# Line plot
ggplot(cbt_total, aes(x, percent, group=year, color=year, linetype=year)) + geom_line(size=0.7) + 
  coord_flip() +
  scale_color_manual(values=c("All" = "#b41f5b", "2010" = "#A6CEE3", "2011" = "#1F78B4"), labels=my_labels) +
  scale_linetype_manual(values=c("All" = "dashed", "2010" = "solid", "2011" = "solid"), labels=my_labels) +
  xlab(NULL) +
  theme(legend.position = "bottom")

Reliability of the method

To check the reliability of my method I compared the number of extracted journals to the total number of hits. These are the numbers:
2010: 1487 / 1488 = 0.999328
2011: 1488 / 1493 = 0.996651
All years: 14345 / 14354 = 0.999373
Since the error is so low, I didn't bother to check why some journals were left out. My guess is that they were missing in the original data as well.
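
If you want to check the coverage for your own query, the same ratio can be computed from the objects created earlier. A small sketch; run it right after the corresponding searchPubmed()/extractJournal() pair, since pub.count is overwritten by each new search:
# Number of extracted journal entries divided by the total number of hits
sum(cbt_2010$freq) / pub.count   # e.g. 1487 / 1488 = 0.999328 for the 2010 query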

The R code for searchPubmed() and extractJournal()

Updated 2013-02-23: The script broke when the year in the DOCTYPE declaration was changed from 2012 to 2013. I've updated the code, and it should be working now.
require("RCurl")
require("XML")
require("plyr")

#######################
# Download PubMed XML #
#######################

# Search and fetch XML from PubMed
searchPubmed <- function(query.term) {
  # change spaces to + in query
  query.gsub <- gsub(" ", "+", query.term)
  # change single quotes to URL-encoded double quotes (%22), so PubMed treats them as phrase quotes
  query.gsub <- gsub("'","%22", query.gsub)
  # Perform search and save history; this stores the PMIDs on PubMed's history server
  pub.esearch <- getURL(paste("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&term=", 
                              query.gsub, "&usehistory=y", sep = ""))
  # Parse esearch XML
  pub.esearch <- xmlTreeParse(pub.esearch, asText = TRUE)
  # Save the total number of hits; <<- makes pub.count available to extractJournal()
  pub.count <<- as.numeric(xmlValue(pub.esearch[["doc"]][["eSearchResult"]][["Count"]]))
  # Save the WebEnv string; it contains "links" to all articles in my search
  pub.esearch <- xmlValue(pub.esearch[["doc"]][["eSearchResult"]][["WebEnv"]])
  # Show how many articles are being downloaded
  cat("Searching (downloading", pub.count, "articles)\n")
  
  ## We need to batch download, since efetch will cap at 10k articles ##
  # Start at 0
  RetStart <- 0
  # Maximum number of records per request (efetch's cap)
  RetMax <- 10000
  # Calculate how many iterations will be needed
  Runs <- (pub.count %/% 10000) + 1
  # Create empty object
  pub.efetch <- NULL
  # Loop to batch download
  for (i in 1:Runs) { 
        # Download XML based on hits saved in pub.esearch (WebEnv)
        x <- getURL(paste("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&WebEnv=",
                          pub.esearch,"&query_key=1&retmode=xml&retstart=", RetStart, "&retmax=", RetMax, sep = ""))
        # Remove the XML declaration and DOCTYPE, otherwise the combined string won't parse correctly later,
        # since several separate downloads are being pasted together.
        # This is quick-and-dirty; it could probably be done more elegantly with the XML package.
        x <- gsub("<.xml version=\"1\\.0\".>\n<!DOCTYPE PubmedArticleSet PUBLIC \"-//NLM//DTD PubMedArticle, 1st January 2013//EN\" \"http://www\\.ncbi\\.nlm\\.nih\\.gov/corehtml/query/DTD/pubmed_130101\\.dtd\">\n", "", x)
        x <- gsub("<PubmedArticleSet>\n", "", x)
        x <- gsub("\n</PubmedArticleSet>\n", "", x)
        # Add data to previous downloads
        pub.efetch <- paste(pub.efetch, x, sep="")
        # Move the window forward for the next batch (retmax is the batch size and stays at 10 000)
        RetStart <- RetStart + 10000
      }
  # Add tags to create valid XML
  pub.efetch <- paste("<PubmedArticleSet>\n",pub.efetch,"</PubmedArticleSet>\n")
  # Print that download is completed
  cat("Completed download from PubMed.\n")
  # Return XML
  return(pub.efetch)
}

# Function to extract the journal name from each article (uses the global pub.efetch and pub.count)
extractJournal <- function(query.term = query) {
  # Parse XML into XML Tree
  xml.data <- xmlTreeParse(pub.efetch, useInternalNodes = TRUE)
  # Use xpathSApply to extract Journal name
  journal <- xpathSApply(xml.data, "//PubmedArticle/MedlineCitation/MedlineJournalInfo/MedlineTA", xmlValue)
  # Show how many journal names were extracted
  cat("Extracted ", length(journal), " hits (",(length(journal)/pub.count)*100," %) from a total of ",
     pub.count," hits. For query named: ", query.term,"\n", sep="")
  # Create data frame with journal counts
  journal <- data.frame(count(journal))
  # Calculate percent
  journal$percent <- journal$freq / pub.count
  # return data
  return(journal)
}
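
As mentioned above, the same XPath approach works for any field. As an illustration (not part of the original script), here is a minimal sketch of a function that counts publication years instead of journals; note that some records store their date only in a MedlineDate tag rather than Year, so this simple xpath will miss those:
# Illustrative helper, same structure as extractJournal()
extractYear <- function() {
  xml.data <- xmlTreeParse(pub.efetch, useInternalNodes = TRUE)
  # Publication year of the journal issue (records with only a MedlineDate are skipped)
  year <- xpathSApply(xml.data, "//PubmedArticle/MedlineCitation/Article/Journal/JournalIssue/PubDate/Year", xmlValue)
  # Tabulate with plyr's count(), like in extractJournal()
  data.frame(count(year))
}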
