This is a short follow-up on THIS posting. I will briefly show how to use the dismo and googleVis packages to plot species occurrences on an interactive Google map, like the one below (HERE is the R-script).
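As a rough sketch of that workflow: dismo's `gbif()` downloads occurrence records, and googleVis's `gvisMap()` renders a `"lat:long"` column on an interactive Google map. The coordinates and species below are made-up stand-ins for a real `gbif()` download, and the column names are illustrative.

```r
# Stand-in for occurrence records; in practice they would come from GBIF, e.g.
#   occ <- dismo::gbif("Puma", "concolor", geo = TRUE)
occ <- data.frame(
  lat     = c(-33.9, -34.4, -36.8),
  lon     = c(151.2, 150.9, 174.8),
  species = "Puma concolor"          # placeholder species name
)

# googleVis wants a single "lat:long" location column plus a tooltip column
occ$latlong <- paste(occ$lat, occ$lon, sep = ":")
occ$tip     <- occ$species

if (requireNamespace("googleVis", quietly = TRUE)) {
  m <- googleVis::gvisMap(occ, locationvar = "latlong", tipvar = "tip")
  # plot(m)  # opens the interactive Google map in a browser
}
```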

## Computing kook density in R

## qgraph version 1.1.0 and how to simply make a GUI using ‘rpanel’

## The fear index: is the VIX an effective warning of high volatility? (Finance & Systematic Processus)

## Simple visually-weighted regression plots

## New Zealand school performance: beyond the headlines

## Variance targeting in garch estimation

What is variance targeting in garch estimation? And what is its effect? Related previous posts are: A practical introduction to garch modeling; Variability of garch estimates; garch estimation on impossibly long series. The last two of these show the variability of garch estimates on simulated series where we know the right answer. In response to … Continue reading...
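For readers who haven't met the term: variance targeting fixes the garch constant so that the model's unconditional variance equals the sample variance, leaving one fewer parameter to estimate. A minimal base-R sketch of the reparameterization for a garch(1,1) (the function name is mine, not from the post):

```r
# Variance targeting for garch(1,1): instead of estimating omega freely, set it
# so the implied unconditional variance matches the sample variance:
#   sigma^2 = omega / (1 - alpha - beta)  =>  omega = sigma^2 * (1 - alpha - beta)
omega_targeted <- function(x, alpha, beta) {
  stopifnot(alpha + beta < 1)          # stationarity is required
  var(x) * (1 - alpha - beta)
}

set.seed(1)
r <- rnorm(1000, sd = 0.01)            # stand-in daily return series
omega <- omega_targeted(r, alpha = 0.05, beta = 0.90)
omega / (1 - 0.05 - 0.90)              # recovers the sample variance of r
```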

## Popularity indicator, with images (NFL)

## Universal portfolio, part 11

## Minimum Correlation Algorithm Example

Today I want to follow up on the Minimum Correlation Algorithm Paper post and show how to incorporate the Minimum Correlation Algorithm into your portfolio construction workflow, and also explain why I like the Minimum Correlation Algorithm. First, let’s load the ETF data set used in the Minimum Correlation Algorithm Paper using the Systematic
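The core intuition can be illustrated in a few lines of base R: down-weight assets that are, on average, highly correlated with the rest of the portfolio. This is a simplified illustration of the idea, not the exact algorithm from the paper.

```r
# Simplified minimum-correlation-style weights: assets with lower average
# correlation to the others get more weight.  (Illustration only; the paper's
# algorithm is more sophisticated.)
min_cor_weights <- function(returns) {
  C <- cor(returns)
  avg_cor <- (rowSums(C) - 1) / (ncol(C) - 1)  # mean correlation with the others
  w <- 1 - avg_cor                             # lower correlation -> more weight
  w / sum(w)                                   # normalize to sum to 1
}

set.seed(42)
R <- matrix(rnorm(300), ncol = 3)        # stand-in for the ETF return matrix
R[, 2] <- R[, 1] + rnorm(100, sd = 0.2)  # make assets 1 and 2 highly correlated
w <- min_cor_weights(R)
round(w, 3)   # the least-correlated asset (3) gets the largest weight
```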

## Video: Analyzing Big Data using Oracle R Enterprise

Learn how Oracle R Enterprise is used to generate new insight and new value for business, answering not only what happened, but why ...

## Football model; plots and usage

## Project Euler — problem 20

It’s been quite a while since my last post on Euler problems. Today a visitor nicely posted his solution to the second problem, which encouraged me to keep solving these problems. Just for fun! 10! = 10 * 9 * … * 3 * 2 * 1 … Continue reading →
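Problem 20 asks for the digit sum of 100!, which overflows double precision, so one approach is to carry the number as a vector of decimal digits. A base-R sketch (one of many possible solutions, not necessarily the post's):

```r
# Digit sum of n!, keeping the big number as a vector of decimal digits
# (least-significant digit first) because 100! does not fit in a double.
digit_sum_factorial <- function(n) {
  digits <- c(1)
  for (k in 2:n) {
    digits <- digits * k               # multiply every digit, then...
    carry <- 0
    for (i in seq_along(digits)) {     # ...propagate the carries
      v <- digits[i] + carry
      digits[i] <- v %% 10
      carry <- v %/% 10
    }
    while (carry > 0) {                # append any remaining carry digits
      digits <- c(digits, carry %% 10)
      carry <- carry %/% 10
    }
  }
  sum(digits)
}

digit_sum_factorial(10)    # 10! = 3628800, digit sum 3+6+2+8+8 = 27
digit_sum_factorial(100)   # the Euler answer
```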

## The infamous apply function

## Text Analysis Tutorial on Spam Email in R

## Spacing measures: heterogeneity in numerical distributions

Numerically coded data sequences can exhibit a very wide range of distributional characteristics, including near-Gaussian (historically, the most popular working assumption), strongly asymmetric, light- or heavy-tailed, multimodal, or discrete (e.g., count data). In addition, numerically coded values can be effectively categorical, either ordered or unordered. A specific example that illustrates the range of distributional behavior often seen in a collection...
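Spacings, the gaps between consecutive sorted values, are one simple handle on this kind of heterogeneity: the more uneven the spacings, the more irregular the distribution. A generic base-R sketch (the specific measures discussed in the post may differ):

```r
# Coefficient of variation of the spacings (gaps between consecutive order
# statistics): 0 for perfectly even data, large for heavy-tailed data.
spacing_cv <- function(x) {
  s <- diff(sort(x))                 # the spacings
  sd(s) / mean(s)
}

u <- spacing_cv(seq(0, 1, length.out = 50))  # evenly spaced: CV near 0
set.seed(7)
e <- spacing_cv(rexp(50))                    # heavy right tail: much larger CV
c(even = u, exponential = e)
```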

## Maximum likelihood estimates for multivariate distributions

Consider our loss-ALAE dataset and, as in Frees & Valdez (1998), let us fit a parametric model in order to price a reinsurance treaty. The dataset is the following,

```r
> library(evd)
> data(lossalae)
> Z=lossalae
> X=Z[,1]; Y=Z[,2]
```

The first step can be to estimate the marginal distributions independently. Here, we consider lognormal distributions for both components,

```r
> Fempx=function(x) mean(X<=x)
```

...
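For the lognormal margins, maximum likelihood has a closed form: the MLEs are the mean and the (1/n) standard deviation of log(x). A hedged sketch on simulated data (a stand-in for the lossalae margins; the function name is mine):

```r
# Closed-form lognormal MLE: fit meanlog and sdlog from log(x).
fit_lognormal <- function(x) {
  lx <- log(x)
  mu <- mean(lx)
  sigma <- sqrt(mean((lx - mu)^2))   # MLE uses the 1/n variance, not 1/(n-1)
  c(meanlog = mu, sdlog = sigma)
}

set.seed(123)
x <- rlnorm(5000, meanlog = 1, sdlog = 0.5)  # simulated "losses"
p <- fit_lognormal(x)
p   # should be close to the true (1, 0.5)
```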

## Good programming practices in R

I write sloppy R scripts. It is a byproduct of working with a high-level language that allows you to quickly write functional code on the fly (see this post for a nice description of the problem in Python code) and the result of my limited formal training in computer programming. The lack of formal training

## KLEMS (1)

This post is actually a homework I did. The data file contains input use, output, quantities, costs, and prices for total U.S. nondurable manufacturing for 1949-2001. The data are defined as follows: K, L, E, M, S = inputs corresponding to capital, labor, energy, materials, and purchased services; a further series represents total output; and the respective quantity indexes, ...

## Core [still] minus one…

Another full day spent working with Jean-Michel Marin on the new edition of Bayesian Core (soon to be Bayesian Essentials with R!) and the remaining hierarchical Bayes chapter… I have reread and completed the regression and GLM chapters, and sent them to very friendly colleagues for a last round of comments. Now, I am essentially idle, waiting

## Network of trade

## PLS2 with "R"

## Power Analysis and the Probability of Errors

Power analysis is a very useful tool to estimate the statistical power of a study. It effectively allows a researcher to determine the sample size needed to obtain the required statistical power. Clients often ask (and rightfully so) what the sample size should be for a proposed project. Sample sizes end up being
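In base R, `power.t.test()` does exactly this: it solves for whichever of sample size, power, effect size, or significance level is left unspecified. A small example with assumed inputs (a medium effect of 0.5 SD, 80% power, alpha = 0.05):

```r
# Solve for the per-group sample size of a two-sample t-test.
res <- power.t.test(delta = 0.5, sd = 1, power = 0.80, sig.level = 0.05)
ceiling(res$n)   # participants needed per group (round up)

# The same call can run in reverse: given n = 30 per group, what power?
pw <- power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.05)$power
pw
```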

## Federal Register API/R Package Ideas?

The other day Critical Juncture put up an API for the Federal Register. I thought it would be great if there were a package that could use this API to download data directly into R (much like the excellent WDI package). This would make it easier to anal...

## Minimum Correlation Algorithm Paper

Over the summer I was busy collaborating with David Varadi on the Minimum Correlation Algorithm paper. Today I want to share the results of our collaboration: Minimum Correlation Algorithm Paper Back Test reports Supporting R code The Minimum Correlation Algorithm is fast, robust, and easy to implement. Please add it to your portfolio construction toolbox and

## MCMSki IV, Jan. 6-8, 2014, Chamonix (news #1)

As advertised on the ‘Og, the ISBA mailing list and now the birth certificate of BayesComp (!), MCMSki IV is taking place for sure in Chamonix-Mont-Blanc, January 6-8, 2014. The webpage has been started, thanks to Merrill Liechty, and should grow with information about the location, the hotels, registration, transportation, and of course skiing (check

## Some helps for running and evaluating Bayesian regression models

Around two years ago, I suddenly realized my statistical training had a great big Bayes-shaped hole in it. My formal education in statistics was pretty narrow – I got my degree in anthropology, a discipline not exactly known for its rigorously systematic analytic methods. I learned the basics of linear models and principal components analysis