Outlier Detection with DPM Slides from JSM 2011


Here are the 14 slides I used during my talk at the Joint Statistical Meetings 2011: shotwell-jsm-2011.pdf. I’m trying hard to minimize the text in my presentation slides, but this usually requires that I practice more. Hence, you will know which talks I have practiced thoroughly by the amount of text in the slides :). Below are a few notes to accompany the slides (in numerical order):

  1. This is the title slide. The work presented was joint with my advisor Elizabeth Slate, and was recently accepted to appear in the journal Bayesian Analysis.
  2. Dirichlet Process Mixture (DPM). This slide presents hierarchical notation for the DPM, and illustrates how (implicit) clustering occurs among draws from a DP-distributed random distribution (the first sketch after these notes simulates this clustering).
  3. Product Partition Model (PPM). This is the PPM representation of the DPM in the previous slide. The partition parameter ‘z’ makes the data partition explicit. I think this model is easier to describe and understand than the DPM representation on the previous slide. Note that PPMs are a much larger class of models than DPMs: only when the prior distribution over ‘z’ takes the form given in the slide does the PPM represent a DPM (the same sketch computes this prior).
  4. Outlier Detection Using Partitioning. When we do clustering, we can think of ‘small’ clusters as outlying, relative to other clusters. The trick is to decide what ‘small’ means. The ‘1% of n’ rule prescribes that clusters are considered small when they contain no more than 1% of the total number of observations (see the second sketch after these notes).
  5. Quantifying Evidence to Detect Outliers: Questions. Partition estimation, or clustering, isn’t enough to make inferences about outliers. These are some key unanswered questions.
  6. A Criterion for Outlier Detection: Setup. These are some candidate partitions. The first consists of three clusters, where clusters 2 and 3 consist of just one observation apiece. The remaining four candidate partitions are formed by merging one or both of clusters 2 and 3 with cluster 1, or with one another. The key point here is that outlier detection may be cast as a decision between the first candidate (the ‘outlier partition’) and the remaining four candidate partitions.
  7. A Criterion for Outlier Detection: The Trick. This slide illustrates how, under the decision principle of largest posterior mass (yes, yes, zero-one loss), the fixed-precision DPM imposes a lower bound on the Bayes factor favoring the outlier partition versus any partition formed by merging one or more outlier clusters. The inverse DPM precision parameter is then interpreted as the fold increase in this Bayes factor required for each detected outlier (the third sketch after these notes works through the arithmetic).
  8. A Criterion for Outlier Detection: How to Fix α. Since the inverse precision parameter forms a lower bound on a Bayes factor, it’s natural to consider an established scale of evidence for Bayes factors.
  9. A Criterion for Outlier Detection: Nice Properties. This slide is self-explanatory.
  10. Microarray Time Series in Cell Cycle Synchronized Yeast. The grey lines in this figure represent 297 yeast RNA microarray probes, monitored over a 120-minute time series. These probes were determined by the original authors (Spellman et al., 1998) to be regulated in the yeast cell cycle, owing to their periodic expression. Our goal was to identify the outlier probes (if any) in these data; that is, each grey line is a potential outlier. Though I didn’t mention this in the talk, the likelihood for these data was a normal linear model in which the time covariate is transformed onto a collection of periodic and non-linear basis functions, in order to capture periodic and non-linear expression (a generic example basis is sketched after these notes).
  11. Microarray Time Series in Cell Cycle Synchronized Yeast. For DPM precision fixed at 1/150, this figure represents the maximum a posteriori (MAP) data partition estimate. Using the ‘1% of n’ rule, any cluster with fewer than four observations is considered outlying. Consider, for example, the collection of partitions that might result from merging the upper rightmost cluster with one of the other clusters. By fixing the precision parameter to 1/150, we have ensured that the Bayes factor favoring the MAP partition estimate versus any such partition is at least 150. Hence, there is ‘very strong’ evidence that this cluster is outlying.
  12. MAP Estimation for ‘z’. We considered several existing methods, and proposed a new method that is free of posterior sampling. More details will be available in the forthcoming Bayesian Analysis article. The R package profdpm implements each method (an assumed usage sketch appears after these notes).
  13. Outlier Detection with Finite Mixtures. This slide mentions the comparison between outlier detection with DPMs and finite mixtures in the Fraley and Raftery framework. The DPM method is a bit more conservative than the finite mixture method. Again, more details will be available in the article.
  14. This slide is a list of references used in the presentation.
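A few of these notes lend themselves to short illustrations in R. First, for slides 2 and 3: below is a minimal sketch of the implicit clustering induced by the DP, via its Chinese restaurant process representation, together with the (unnormalized) DP partition prior p(z | α) ∝ α^K ∏_j Γ(n_j) under which a PPM coincides with a DPM. The function names rcrp and ldpprior are my own, for illustration only, not from any package.

```r
## Chinese restaurant process: draw a partition 'z' of n items with precision alpha
rcrp <- function(n, alpha) {
  z <- integer(n)
  z[1] <- 1
  for (i in 2:n) {
    sizes <- tabulate(z[1:(i - 1)])  # current cluster sizes
    probs <- c(sizes, alpha)         # join an existing cluster, or open a new one
    z[i] <- sample(length(probs), 1, prob = probs)
  }
  z
}

## Unnormalized log DP partition prior: K * log(alpha) + sum_j log Gamma(n_j)
ldpprior <- function(z, alpha) {
  sizes <- tabulate(z)
  sizes <- sizes[sizes > 0]
  length(sizes) * log(alpha) + sum(lgamma(sizes))
}

set.seed(42)
z <- rcrp(100, alpha = 1/150)  # small alpha => few clusters
table(z)
ldpprior(z, 1/150)
```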
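Second, the ‘1% of n’ rule from slide 4 in code; rounding the 1% threshold up with ceiling is my assumption (with n = 297 it gives a threshold of 3, consistent with the ‘fewer than four’ statement under slide 11):

```r
## Flag clusters as 'small' (potentially outlying) under the '1% of n' rule
small_clusters <- function(z) {
  sizes <- table(z)
  names(sizes)[sizes <= ceiling(0.01 * length(z))]  # ceiling is an assumption
}
small_clusters(c(rep(1, 297 - 3), 2, 2, 3))  # clusters 2 and 3 are 'small'
```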
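Third, for slide 7: under the DP partition prior above, merging a singleton cluster into a cluster of size m multiplies the prior mass by m/α (one fewer cluster drops a factor of α, and Γ(m) becomes Γ(m + 1)). So if the outlier partition nonetheless carries the largest posterior mass, the Bayes factor in its favor must be at least m/α ≥ 1/α. This is my reading of the bound, checked numerically with the ldpprior sketch above:

```r
alpha <- 1/150
z_outlier <- c(rep(1, 10), 2, 3)  # clusters 2 and 3 are singleton (outlier) clusters
z_merged  <- c(rep(1, 10), 2, 1)  # singleton cluster 3 merged into cluster 1
## Prior odds favoring the merged partition: m/alpha = 10 * 150 = 1500
exp(ldpprior(z_merged, alpha) - ldpprior(z_outlier, alpha))
```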
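Fourth, for slide 10: the notes mention a normal linear model with the time covariate transformed onto periodic and non-linear basis functions. The paper’s exact basis isn’t given here, but a generic Fourier-style expansion, purely as an illustration, might look like:

```r
## Illustrative periodic basis over a 120-minute series (not the paper's exact basis)
time   <- seq(0, 120, by = 7)  # observation times in minutes (assumed spacing)
period <- 60                   # assumed cell-cycle period in minutes
X <- cbind(1, time,
           sin(2 * pi * time / period), cos(2 * pi * time / period),
           sin(4 * pi * time / period), cos(4 * pi * time / period))
## Each probe's profile y would then be modeled as y = X %*% beta + Gaussian error
```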
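Finally, for slide 12: profdpm is on CRAN. The call below reflects my understanding of its profLinear interface, but treat it as an assumption and consult the package documentation for the actual arguments:

```r
library(profdpm)
## Assumed usage (hypothetical data and argument names); see ?profLinear
# fit <- profLinear(expression ~ basis, group = probe, data = yeast)
# table(fit$clust)  # cluster sizes for the MAP partition estimate
```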