Highlights (at least from my perspective)
- Screen scraping is not about regular expressions. Pattern matching is too brittle for these tasks: the markup changes regularly, and the expressions become a significant maintenance burden.
- BeautifulSoup is the go-to HTML parser for poor-quality source. I have used this in the past and am pleased to hear that I was not too far off the mark!
- Configuration of User-Agent settings is discussed in detail, along with other mechanisms that websites use to stop you from scraping content.
- Good description of how to use the Live HTTP Headers add-on for Firefox.
- A thought-provoking discussion about APIs, with comments suggesting that their maintenance and support are woefully inadequate. I was interested to hear his views, as they imply that scraping may be the only alternative when you really need data that is otherwise highly inaccessible.
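To make the BeautifulSoup and User-Agent points above concrete, here is a minimal Python sketch. The URL, class name, and agent string are placeholders of my own, not from the talk; note how BeautifulSoup copes with the unclosed second tag:

```python
import urllib.request
from bs4 import BeautifulSoup

# Many sites refuse requests with the default Python User-Agent, so set an
# explicit one. The URL is a placeholder; the request is built but not sent.
req = urllib.request.Request(
    "http://example.com/quotes",
    headers={"User-Agent": "Mozilla/5.0 (compatible; my-scraper)"},
)

# BeautifulSoup tolerates poor-quality markup: the second <div> is never closed
html = '<div class="price">150.25</div><div class="price">2725.60'
soup = BeautifulSoup(html, "html.parser")
prices = [d.get_text(strip=True) for d in soup.find_all("div", class_="price")]
print(prices)  # ['150.25', '2725.60']
```

A regular expression would need constant adjustment as the markup drifts; the parser recovers both values despite the broken tag.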
The mechanize package features heavily in the examples for this presentation. The following link provides some good examples of how to use mechanize to automate forms:
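For flavour, the typical mechanize form-automation flow looks something like the sketch below. The URL, field name, and agent string are my own placeholders, not taken from the presentation; the function is defined here but not run:

```python
def submit_search_form(url, query):
    """Open `url`, fill the 'q' field of the first form, and submit it.
    The field name and URL are placeholders; adapt them to the target site."""
    import mechanize  # imported inside so the sketch is easy to lift out

    br = mechanize.Browser()
    br.set_handle_robots(False)              # ignore robots.txt (use responsibly)
    br.addheaders = [("User-Agent", "Mozilla/5.0")]  # avoid the default agent
    br.open(url)
    br.select_form(nr=0)                     # select the first form on the page
    br["q"] = query                          # fill the (assumed) query field
    response = br.submit()
    return response.read()
```

The Browser object keeps cookies and handles redirects for you, which is what makes mechanize convenient for multi-step form workflows.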
Please feel free to post your comments about your experiences with screen scraping, and other tools that you use to collect web data for R.