It looks like the next LondonR meeting on 10 September 2013 will involve a series of five-minute lightning talks rather than a few half-hour slots. I have proposed "Audiblization / sonification of data: what are people doing and is R a good launchpad for it?"
This is at such an embryonic stage that it's pretty comparable to graphs in the 1770s. There are even two terms in common usage, although it feels like consensus is moving towards 'sonification'. We don't even have a William Playfair yet, popularising it with good ideas. But some people are giving it a go and I'll show a few examples.

What I'm particularly interested in is using it as a communication tool for data and statistical patterns. There may be an exploratory role too, but I'm unconvinced at present. R is well placed for this given the ease with which it can interface to other packages. Some people are keen on Python for much the same reason.

There is at present a well-established R package, tuneR, which will handle some basic audio editing, and a package, audio, that does some I/O, but anything fancier will be best handled externally. There are, of course, powerful C++ libraries like CLAM, opening up the possibility of Rcpp-powered on-the-fly synthesis. With tuneR, for example, you work on a sound object and then end by writing it to the hard drive as a .wav file. Then, and only then, can you hear it.
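To make that workflow concrete, here is a minimal sketch of sonifying a data series with tuneR. The data-to-pitch mapping (a 200–800 Hz range, 0.2 seconds per observation) is my own arbitrary illustrative choice, not any standard convention; only `Wave()` and `writeWave()` are tuneR's.

```r
library(tuneR)

# Map a numeric series to pitches: higher values -> higher frequencies.
# Range and note length are arbitrary choices for illustration.
sonify <- function(x, samp.rate = 44100, note.secs = 0.2) {
  freqs <- 200 + 600 * (x - min(x)) / (max(x) - min(x))
  t <- seq(0, note.secs, length.out = samp.rate * note.secs)
  samples <- unlist(lapply(freqs, function(f) sin(2 * pi * f * t)))
  # Build a 16-bit mono Wave object in memory
  Wave(left = round(32000 * samples), samp.rate = samp.rate, bit = 16)
}

w <- sonify(as.numeric(AirPassengers))
writeWave(w, "airpassengers.wav")  # only now can you actually listen to it
```

Note that everything happens on an in-memory `Wave` object; nothing is audible until the final `writeWave()` call, which is exactly the limitation mentioned above.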
Open-source synthesis packages like SuperCollider and PureData are powerful, but essentially they speak a totally different language to stats people. It's a bit like the old days when programming a GPU involved pretending your data was an image. We need data-oriented interfaces, or this will remain the preserve of oddballs like me who have somehow acquired experience of stats and programming and sound art or music. It will also remain hard to convince your boss that what you're spending time on is not just messing about!