An Iterative Approach to Data Science


It is the nature of a boot camp: we drink from the firehose because we have only 12 weeks to learn what university programs would spread over two to four years. The format is intense, and it can be kept up to date in a way that conventional programs, which must submit their curricula to central committees, cannot. But there is a price: how do you get done what you need to under such pressing time constraints?

You will see many posts here that end with some flavor of the words “I wanted to do more.” I’ve decided to begin mine that way. This post is not so much about the Consumer Reports (CR) data on computers that I chose to analyze as it is about a process that, when the unexpected happened time and time again, at least allowed for the completion of something. Here’s how it worked …

A Modular Approach To Screen Scraping

[Diagram: Selenium → Scrapy → Selenium scraping workflow]

My presentation slides include some images, which you are welcome to check out, showing how the Consumer Reports site is organized. At a high level, this project targeted product specification and review data for the three computer-related product classes available on the site:

  • desktop computers,
  • laptop computers, and
  • Chromebooks.

Each product had its own unique URL. Upon initial inspection, it looked as though Selenium could scrape the pages of links to collect each product URL, and a Scrapy spider would then be enough to go after all of the data. This proved not to be the case.

While specification data loaded immediately after clicking a product link, review data only fully loaded after clicking the “Reviews” tab; it turned out to be dynamically generated by JavaScript. A conventional approach might have been to build one giant Selenium script, where you would not know whether you were truly going to get all of your data until the whole thing was built and you “let her rip.”

I took a somewhat different approach. Selenium scripts were run to scrape the URL for each product, and the results were saved to a csv file. That file was then loaded into other scripts. First, a Scrapy spider proved the path of least resistance for obtaining the specification data. Then a Selenium script iterated over the same csv file to capture reviews for each product that had them, this time saving the results to two files: one with basic fields for products with no reviews, and one with many fields of data for products that did have reviews.

As the reviews were more challenging to capture, this approach ensured that I already had all of the specification data safely stored in a csv, ready for analysis, before I even began work on capturing review data. It also made debugging easier: since each script focused on only one piece of the puzzle, it was easier to see where something was going wrong and fix it.
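
To make the hand-off concrete, here is a minimal sketch of the first step: collecting product URLs with Selenium and persisting them to a csv that the later scripts read back in. The listing URL, CSS selector, and file name are placeholders, not the ones actually used against the CR site.

```python
# Step 1 (sketch): collect product URLs with Selenium and persist them to csv.
import csv
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

LISTING_URL = "https://www.consumerreports.org/..."  # placeholder: a product listing page

driver = webdriver.Chrome()
driver.get(LISTING_URL)
time.sleep(10)  # crude wait; getting this delay right took trial and error

# Placeholder selector: grab the link element for each product on the page.
links = driver.find_elements(By.CSS_SELECTOR, "a.product-link")
urls = [link.get_attribute("href") for link in links]
driver.quit()

# Persist right away so the downstream scripts never depend on re-running this one.
with open("product_urls.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url"])
    writer.writerows([[u] for u in urls])
```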

Two points of interest came out of this process: (1) how the specification data was captured, and (2) the hard-learned lesson that what works in one part of a website does not always work in another.

1) For the specification data, a pattern was identified in the source HTML tagging that made it possible to extract 50+ variables using only two XPath rules (a sketch follows the list):

  • one for a “spec_label” and
  • another for a “spec_value”
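
As noted above, here is a sketch of what that two-rule extraction might look like in a Scrapy spider. The XPath expressions, class names, and file names are illustrative stand-ins, not the rules actually used against the CR site.

```python
# Illustrative Scrapy spider: two relative XPath rules pull every
# label/value pair, rather than one hand-written rule per field.
import csv

import scrapy


class SpecSpider(scrapy.Spider):
    name = "cr_specs"

    def start_requests(self):
        # Re-use the URLs harvested by the earlier Selenium script.
        with open("product_urls.csv") as f:
            for row in csv.DictReader(f):
                yield scrapy.Request(row["url"], callback=self.parse)

    def parse(self, response):
        # Placeholder XPaths: one rule for the label, one for the value.
        for spec in response.xpath('//div[@class="specs-table"]//tr'):
            yield {
                "product_url": response.url,
                "spec_label": spec.xpath("./td[1]/text()").get(),
                "spec_value": spec.xpath("./td[2]/text()").get(),
            }
```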

The data was saved to csv in a long stack, with a plan to then bring it into R and use the tidyr “spread” function to convert the two-column stack to a wide format with roughly 50 column variables per observation.
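
For readers following along in Python rather than R, the same long-to-wide step looks roughly like this in pandas. The column names are the hypothetical ones from the spider sketch above, not the actual field names.

```python
# Long-to-wide reshape: one row per product, one column per spec_label.
import pandas as pd

long_df = pd.read_csv("cr_specs_long.csv")  # columns: product_url, spec_label, spec_value

wide_df = (
    long_df
    .pivot_table(index="product_url",
                 columns="spec_label",
                 values="spec_value",
                 aggfunc="first")  # keep the first value if a label repeats
    .reset_index()
)

wide_df.to_csv("cr_specs_wide.csv", index=False)
```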

The review data could not be approached in this manner and had to be obtained with one coding rule per field.

2) While the first Selenium script, which captured the page URLs, was able to “see” the data with a simple “sleep” time delay (though much trial and error was required to get the timing right), the second script required experimentation with other commands in the Selenium arsenal. Both strategies had to be complemented with trial-and-error experiments to understand the random errors (unique to each process) and to build error handling into each script.
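
One likely candidate from that arsenal is an explicit wait. The post does not name the exact commands used, so the following is only a sketch of the general pattern, with placeholder locators and selectors.

```python
# Explicit wait + error handling around the dynamically loaded Reviews tab.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException, TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()


def scrape_reviews(url):
    driver.get(url)
    try:
        # Wait up to 15s for the Reviews tab, click it, then wait for the
        # JavaScript-rendered review blocks to appear.
        wait = WebDriverWait(driver, 15)
        tab = wait.until(EC.element_to_be_clickable((By.LINK_TEXT, "Reviews")))
        tab.click()
        wait.until(EC.presence_of_all_elements_located(
            (By.CSS_SELECTOR, "div.review")))  # placeholder selector
        return [el.text for el in driver.find_elements(By.CSS_SELECTOR, "div.review")]
    except (TimeoutException, NoSuchElementException):
        # Products with no reviews fall through here and get recorded separately.
        return []
```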

An Iterative Approach to Data Munging and Analysis

At a high level, if you try to do “everything” up front in what project managers might call a classical waterfall approach, you run the risk of running out of time without delivering anything useful. Although I was just one guy doing solo research, I found that an iterative approach to the project helped ensure there was something to deliver before the clock ran down. A quick summary of how to do this looks something like the following (a bare-bones sketch of one iteration follows the list):

  • Get all of the data up front
  • Integrate some data cleaning into your web scraping code for known patterns
  • Create CSV spreadsheets each step of the way to preserve what you have so far
  • Load the sheets into R or Jupyter/IPython – whichever one “feels faster” for data cleaning and preparation steps
  • Generate new csvs of the results
  • Use the cleaned and transformed data in your analysis, generating visualizations in R / IPython
  • Go back to the source, identify more fields that can yield data insights and do it all again …
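
As referenced above, here is a bare-bones sketch of one such iteration, with placeholder file and column names: load the raw scrape, apply cleaning for known patterns, and persist a new csv so the next pass can start from it rather than re-running the scrape.

```python
# One iteration: raw csv in, known cleaning applied, cleaned csv out.
import pandas as pd

raw = pd.read_csv("cr_specs_wide.csv")

cleaned = raw.copy()
# Example cleaning steps for known patterns (placeholders):
cleaned.columns = [c.strip().lower().replace(" ", "_") for c in cleaned.columns]
if "price" in cleaned.columns:
    cleaned["price"] = (
        cleaned["price"].astype(str).str.replace(r"[$,]", "", regex=True).astype(float)
    )

cleaned.to_csv("cr_specs_clean.csv", index=False)  # checkpoint for the next step
```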

You don’t build everything you want this way, but it guarantees that each iteration results in actual finished analysis.  Each sheet along the way allows you to pick up where you left off without having to re-run code.

Note: It is important during the first step to “edit” yourself. “Get all the data” could leave you writing complex extraction rules in your scraper, leaving not enough time for actual data analysis ahead of the deadline. Set reasonable goals. Get enough data so that you have choices, but don’t try to “scrape the universe” on the first go-round.

Resulting Data Analysis and Lessons Learned

I went into this with a number of ideas about what I wanted to explore. I was able to give the most coverage to my initial question of support for new features and standards in the products reviewed by Consumer Reports. This was followed by a little analysis of RAM (Random Access Memory) by brand, and a look at RAM’s potential impact on prices (in the models under review by CR). The full details of the research using the Consumer Reports specification data are provided in this Jupyter Notebook on my GitHub:

TheMitchWorksPro (on Github) -> NYCDSA_CR_WebScrape (Repo) -> CR Data Analysis Jupyter Notebook
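
This is not the notebook’s actual code, but for flavor, the brand-level RAM comparison described above boils down to a grouped summary along these lines (the column names brand, ram_gb, and price are hypothetical).

```python
# Grouped summary of RAM by brand, plus a first look at RAM vs. price.
# Column names (brand, ram_gb, price) are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("cr_specs_clean.csv")

ram_by_brand = (
    df.groupby("brand")["ram_gb"]
      .agg(["mean", "median", "count"])
      .sort_values("mean", ascending=False)
)
print(ram_by_brand)

# Simple correlation as a first pass at RAM's relationship to price.
print(df[["ram_gb", "price"]].corr())
```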

User Review Word Clouds

With respect to the data collected on user reviews, several word clouds were developed, but not much time was left either to look into a more detailed configuration or to explore better R or Python libraries for the job. The library I selected proved finicky to configure. Word clouds with none of the original words suppressed are provided near the end of these PowerPoint slides. After much tinkering, I added an exclusion list to the word cloud for positive reviews. The intent of this kind of filtering is to eliminate words that are not reflective of true sentiment so that the words that are reflective come into sharper focus. The revised result is presented here.

[Figure: word cloud of positive Consumer Reports reviews]

[Figure: the top 25 words, by frequency, shown in this word cloud]

Note:  Even now, if you look closely, you can probably see many words that might be worth considering for exclusion.  It’s a word game you can literally spend hours or days on.

It should also be noted that as you remove words from the frequency list to bring more relevant words into focus, you start getting more and more ties among words with the same frequency near the bottom. A maximum-words limit prevents the diagram from becoming too cluttered by randomly choosing which “tie words” to leave out of the word cloud, and you then start seeing warnings listing what got left out.
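
The post does not say which word cloud library was used, so purely for illustration, here is how an exclusion list and a word cap can be applied with the Python wordcloud package; the extra excluded words and file and column names are made up for the example.

```python
# Word cloud with an exclusion list and a cap on the number of words shown.
import pandas as pd
from wordcloud import STOPWORDS, WordCloud

reviews = pd.read_csv("cr_reviews.csv")  # placeholder file of scraped reviews
positive_text = " ".join(reviews.loc[reviews["rating"] >= 4, "review_text"].dropna())

# Standard stopwords plus domain words that carry no sentiment (illustrative list).
exclusions = set(STOPWORDS) | {"computer", "laptop", "one", "will", "use"}

wc = WordCloud(stopwords=exclusions,
               max_words=25,        # keep only the top words by frequency
               background_color="white",
               width=800, height=600).generate(positive_text)

wc.to_file("positive_reviews_wordcloud.png")
```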

Final Thoughts

The data used in this research is clearly a modest sample of the much larger population that is the computer market. Because more expensive model configurations do not appear among the products Consumer Reports reviews, the findings on this blog and in my project should be treated as specific to the market segments and price levels covered and cannot be generalized to the whole of the computer marketplace. Given what Consumer Reports data is and how it is used, the analysis can still be useful. If there were more time to pursue this avenue of research, I might collect more disparate samples from other sources and blend them together to see what they might tell us.
