I’m excited to share that we’ve started a new R users’ group at UC Davis! Right now our main purpose is to run weekly 2-hour work/hack sessions where R users can get together to work through problems. More info here.

Back in example 8.41 we showed how to make a graphic combining a scatterplot with histograms of each variable. A commenter suggested we change the R graphic to allow post-hoc plotting of, for example, lowess lines. In addition, there are further refinements to be made. In this R-only entry, we'll make the figure...

This is a short follow-up on this posting. I will briefly show how to use the dismo and googleVis packages to plot species occurrences on an interactive Google map, like the one below (here is the R script).

What is variance targeting in garch estimation? And what is its effect? Previous related posts are: A practical introduction to garch modeling, Variability of garch estimates, and garch estimation on impossibly long series. The last two of these show the variability of garch estimates on simulated series where we know the right answer. In response to …
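
The idea behind variance targeting can be sketched in a few lines of R. In a GARCH(1,1) model the unconditional variance is omega / (1 - alpha - beta), so targeting pins omega to the sample variance rather than estimating it freely. The coefficient values below are hypothetical, purely for illustration:

```r
# GARCH(1,1): sigma2_t = omega + alpha * eps2_{t-1} + beta * sigma2_{t-1}
# Unconditional variance: omega / (1 - alpha - beta).
# Variance targeting fixes omega so this matches the sample variance,
# leaving only alpha and beta to be estimated.

alpha <- 0.08        # hypothetical ARCH coefficient
beta  <- 0.90        # hypothetical GARCH coefficient
sample_var <- 4e-04  # hypothetical sample variance of the returns

# omega implied by variance targeting
omega <- sample_var * (1 - alpha - beta)

# the implied unconditional variance recovers the target exactly
omega / (1 - alpha - beta)   # equals sample_var
```

This reduces the dimension of the optimization by one, which is part of why targeting tends to stabilize estimates on short series.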

Today I want to follow up on the Minimum Correlation Algorithm Paper post, show how to incorporate the Minimum Correlation Algorithm into your portfolio construction workflow, and explain why I like the Minimum Correlation Algorithm. First, let’s load the ETF data set used in the Minimum Correlation Algorithm Paper using the Systematic

Learn how Oracle R Enterprise is used to generate new insight and new value for business, answering not only what happened, but why ...

It’s been quite a while since my last post on Euler problems. Today a visitor posted a nice solution to the second problem, which encouraged me to keep solving these problems. Just for fun! 10! = 10 * 9 * … * 3 * 2 * 1 …
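
The factorial product written out in the teaser can be checked directly in R, two ways:

```r
# 10! computed two ways, matching the product in the post
factorial(10)   # built-in factorial
prod(10:1)      # the explicit product 10 * 9 * ... * 2 * 1
# both give 3628800
```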

Numerically coded data sequences can exhibit a very wide range of distributional characteristics, including near-Gaussian (historically, the most popular working assumption), strongly asymmetric, light- or heavy-tailed, multi-modal, or discrete (e.g., count data). In addition, numerically coded values can be effectively categorical, either ordered or unordered. A specific example that illustrates the range of distributional behavior often seen in a collection...

Consider our loss-ALAE dataset and, as in Frees & Valdez (1998), let us fit a parametric model in order to price a reinsurance treaty. The dataset is the following, > library(evd) > data(lossalae) > Z=lossalae > X=Z;Y=Z The first step can be to estimate the marginal distributions independently. Here, we consider lognormal distributions for both components, > Fempx=function(x) mean(X<=x) >...
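
A minimal sketch of that marginal-fitting step, on simulated data rather than the lossalae data (so the example is self-contained; the meanlog/sdlog values below are arbitrary, not the paper's estimates):

```r
# Draw a lognormal sample standing in for one marginal of the dataset
set.seed(1)
X <- rlnorm(1000, meanlog = 8, sdlog = 1.5)

# Empirical CDF, as defined in the post
Fempx <- function(x) mean(X <= x)

# Lognormal MLE for the marginal: mean and sd of the logged data
# (sd() uses the 1/(n-1) variance; the exact MLE uses 1/n, close for n = 1000)
meanlog_hat <- mean(log(X))
sdlog_hat   <- sd(log(X))

# Compare fitted and empirical CDFs at the sample median
x0 <- median(X)
Fempx(x0)                           # 0.5 by construction
plnorm(x0, meanlog_hat, sdlog_hat)  # close to 0.5 if the fit is good
```

The same two lines, applied to each component separately, give the independent marginal fits before any dependence structure is introduced.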

I write sloppy R scripts. It is a byproduct of working with a high-level language that allows you to quickly write functional code on the fly (see this post for a nice description of the problem in Python code) and the result of my limited formal training in computer programming. The lack of formal training

This post is actually a homework assignment I did. The data file contains input use, output, quantities, costs, and prices for total U.S. nondurable manufacturing for 1949-2001. The variables are defined as follows: inputs corresponding to capital, labor, energy, materials, and purchased services; total output; the respective quantity indexes, ...

Another full day spent working with Jean-Michel Marin on the new edition of Bayesian Core (soon to be Bayesian Essentials with R!) and the remaining hierarchical Bayes chapter… I have reread and completed the regression and GLM chapters and sent them to very friendly colleagues for a last round of comments. Now I am essentially idle, waiting

Power analysis is a very useful tool for estimating the statistical power of a study. It effectively allows a researcher to determine the sample size needed to obtain the required statistical power. Clients often ask (and rightfully so) what the sample size should be for a proposed project. Sample sizes end up being

The other day Critical Juncture put up an API for the Federal Register. I thought it would be great if there were a package that could use this API to download data directly into R (much like the excellent WDI package). This would make it easier to analyze...

Over the summer I was busy collaborating with David Varadi on the Minimum Correlation Algorithm paper. Today I want to share the results of our collaboration: the Minimum Correlation Algorithm paper, back-test reports, and supporting R code. The Minimum Correlation Algorithm is fast, robust, and easy to implement. Please add it to your portfolio construction toolbox and

As advertised on the ‘Og, the ISBA mailing list, and now the birth certificate of BayesComp (!), MCMSki IV is taking place for sure in Chamonix-Mont-Blanc, January 6-8, 2014. The webpage has been started, thanks to Merrill Liechty, and should grow with information about the location, the hotels, registration, transportation, and of course skiing (check