(This article was first published on **Cartesian Faith » R**, and kindly contributed to R-bloggers)

A number of people suggested that my book, Modeling Data With Functional Programming In R, be more upfront with examples since the benefits of functional programming may not be immediately obvious to readers. As a first example, I have implemented some iterative function systems for a few common fractals. Iterative function systems are an elegant approach to generating fractals as they are based on repeated application of a collection of linear transformations. Hence each application of an IFS produces more granularity for an existing set of points.

The larger motivation for using IFS is that I'm updating my aging fractalrock package to provide a generic framework for generating data via processes. The idea is to be able to plug in any process to generate a time series (or other entity) for simulation purposes. In the current version, I use a pattern-based approach for generating fractal-based time series. This is a fun and easy way to explore fractal processes, but it's a bit clunky and doesn't offer much in terms of mathematical reasoning. On the other hand, IFS and fractional Brownian motion are more rigorous techniques that achieve the same goal.

### Heighway Dragon

The first IFS we'll explore is the Heighway Dragon, which is initialized with a line segment. This system applies two transformations to each collection of points: scale by $\frac{1}{\sqrt{2}}$ and rotate by 45 degrees counter-clockwise; and scale by $\frac{1}{\sqrt{2}}$, rotate 135 degrees, and translate to $(1, 0)$. This can be accomplished by taking the union of two function applications: $heighway(x) = f_1(x) \cup f_2(x)$, where

$$f_1(x) = \begin{pmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix} x \quad\text{and}\quad f_2(x) = \begin{pmatrix} -\tfrac{1}{2} & -\tfrac{1}{2} \\ \tfrac{1}{2} & -\tfrac{1}{2} \end{pmatrix} x + \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$

Note that the translation by $(1,0)$ is there simply to join the two segments at their end point, and $f_1(x)$ is equivalent to $\frac{1}{\sqrt{2}} R(45^\circ)\, x$, which uses the more familiar rotation matrix $R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$.
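To convince ourselves that the matrix in $f_1$ really is a scaled rotation, we can check numerically. This is a quick sketch of my own, not part of the original package:

```r
# f1's matrix should equal (1/sqrt(2)) * R(45 degrees)
r <- 1 / sqrt(2)
R45 <- matrix(c(cos(pi/4), sin(pi/4), -sin(pi/4), cos(pi/4)), nrow=2)
A <- r * R45
all.equal(A, matrix(c(.5, .5, -.5, .5), nrow=2))  # TRUE
```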

Here is what the Heighway Dragon looks like for different numbers of iterations.

I wanted to do a code comparison to show the difference between a functional programming approach to this and a more procedural approach. Funnily enough, most implementations I found described the process in terms of providing a sequence of commands to a line-drawing function (like Logo when I was little, or Turtle these days), which is the ultimate imperative approach. Anyway, here is my version for a single iteration.

```r
heighway <- function(x) {
  f1 <- function(x) matrix(c(.5, .5, -.5, .5), nrow=2) %*% x
  f2 <- function(x) matrix(c(-.5, .5, -.5, -.5), nrow=2) %*% x + c(1, 0)
  cbind(f1(x), f2(x))
}
```

If we look at this function it does exactly what we described mathematically. In fact we can even transform the function back into symbolic notation since the code has no side effects and satisfies referential transparency.

Let `f1 <- function(x) matrix(c(.5,.5,-.5,.5), nrow=2) %*% x`, which is equivalent to

$$f_1(x) = \begin{pmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix} x.$$

Similarly, `f2 <- function(x) matrix(c(-.5,.5,-.5,-.5), nrow=2) %*% x + c(1,0)` is equivalent to

$$f_2(x) = \begin{pmatrix} -\tfrac{1}{2} & -\tfrac{1}{2} \\ \tfrac{1}{2} & -\tfrac{1}{2} \end{pmatrix} x + \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$

The function overall is `heighway(x) = cbind(f1(x), f2(x))`, which mirrors $heighway(x) = f_1(x) \cup f_2(x)$. Generating an explicit version of the dragon is a matter of applying an n-fold composition of the function `heighway`. Note that the terminology points the way to the implementation:

```r
x <- rbind(seq(0, 1, by=0.05), 0)
xn <- fold(1:12, function(a,b) heighway(b), x)
```
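For comparison, the same n-fold composition can be written with a plain loop in base R, avoiding `fold` entirely. This sketch reuses the `heighway` definition from above:

```r
heighway <- function(x) {
  f1 <- function(x) matrix(c(.5, .5, -.5, .5), nrow=2) %*% x
  f2 <- function(x) matrix(c(-.5, .5, -.5, -.5), nrow=2) %*% x + c(1, 0)
  cbind(f1(x), f2(x))
}

x <- rbind(seq(0, 1, by=0.05), 0)   # 21 starting points on a line segment
xn <- x
for (i in 1:12) xn <- heighway(xn)  # each iteration doubles the point count
ncol(xn)  # 21 * 2^12 = 86016
```

The loop makes the doubling explicit: each application of `heighway` replaces every point with two transformed copies, so the point count grows exponentially with the number of iterations.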

The `fold` function is a staple of functional programming: it applies a binary operator over a vector, accumulating the results into a single value. As shown above, it can also be used for straight function composition by throwing away the first argument and keeping only the accumulated result. I include the function in my upcoming lambda.tools package, which is a collection of functions that supplement the book. In the meantime, the definition is below. You will need to install and load lambda.r for this to work.

```r
fold(EMPTY, fn, acc) %as% acc
fold(x, fn, acc=0) %when% { is.null(dim(x)) } %as% fold(x[-1], fn, fn(x[[1]], acc))
fold(x, fn, acc=0) %as% fold(x[,-1,drop=FALSE], fn, fn(x[,1], acc))
```
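For readers who don't want to install lambda.r, here is a rough base-R equivalent. The names `fold_vec` and `fold_cols` are mine, not from lambda.tools; they fold over vector elements and matrix columns respectively, matching the two lambda.r clauses above:

```r
# Fold over the elements of a vector: fn(element, accumulator)
fold_vec <- function(x, fn, acc=0) {
  for (i in seq_along(x)) acc <- fn(x[[i]], acc)
  acc
}

# Fold over the columns of a matrix, matching the lambda.r clause for matrices
fold_cols <- function(m, fn, acc=0) {
  for (j in seq_len(ncol(m))) acc <- fn(m[, j], acc)
  acc
}

fold_vec(1:12, function(a, b) a + b)  # 78, same as sum(1:12)
```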

In this rather trivial example it is clear that the code implementation is equivalent to the mathematical formulas. When such an equivalence exists, this is a great boon as we can be confident that the software model has the same certainty as the mathematical model. The goal is to extend this certainty to as much of the code as is possible. That is the point of my book.

### Sierpinski Pentagon

The rationale for the implementation of the Sierpinski Pentagon is similar, albeit a little more involved. The function is three lines of code: the first two lines create matrices, and the third is a single function application. In this function `fold` makes an appearance within the body itself. Why is this? The reason is that the class of Sierpinski polygons consists of replicas of the whole that are scaled and shifted to the appropriate locations. Hence it is easy to simply create a matrix that represents the scaling and shifting for each replica and again perform a union of the results. In this implementation, `b` represents the accumulated set of points.
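As an aside (my own observation, not something stated in fractalrock), the constants 0.382 and 0.618 that appear in the scaling and offset matrices are golden-ratio quantities: the contraction ratio of the Sierpinski pentagon is $1/\varphi^2$ and the offsets are spaced by $1/\varphi$:

```r
phi <- (1 + sqrt(5)) / 2   # golden ratio, ~1.618
round(1 / phi^2, 3)        # 0.382, the scaling factor in m
round(1 / phi, 3)          # 0.618, the spacing of the offsets in o
```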

```r
sierpinski5 <- function(x) {
  m <- matrix(c(0.382,0, 0,0.382), nrow=2)
  o <- matrix(c(0,0, 0.618,0, 0.809,0.588, 0.309,0.951, -0.191,0.588), nrow=2)
  fold(o, function(a,b) cbind(m %*% x + a, b))
}
```

This function can be called in the same manner as the Heighway Dragon, with the difference being that the starting set of points is a polygon and not a line segment.

```r
x <- polygon(5)
xn <- fold(1:5, function(a,b) sierpinski5(b), x)
```

If you try this yourself, be aware that the set of points grows quite quickly, so don't use too high a value for the number of iterations (5 in this example), or you may run out of memory.

It would be remiss of me not to show the `polygon` function. Not to worry, as the same philosophy underpins its implementation. Notice that most of this implementation is also primarily building data structures. The actual creation of the polygon happens in the last line with `fold`. This works by recognizing that, just like the iterative function systems above, it is possible to generate any collection of line segments by taking the union of a collection of functions. So we fold over the number of sides, creating a rotation matrix as a multiple of the degree of each angle. The offset is simply a column vector in the offset matrix.

Not surprisingly, the offset matrix, `o`, is also constructed using functional programming principles. The function `cumsum` is naturally functional (trademark anyone?) and creates a vector of the cumulative sums of the input vector. The insight here is that each vertex of the polygon can be described as an offset from the previous vertex (moving in order counter-clockwise). The offset is simply the cosine and sine of the angle of the line segment.

```r
rotation <- function(degrees) {
  theta <- radians(degrees)
  cbind(c(cos(theta), sin(theta)), c(-sin(theta), cos(theta)))
}

radians <- function(degrees) degrees / 360 * 2 * pi

polygon <- function(sides=5, by=0.1) {
  theta <- 360 / sides
  xo <- cumsum(c(0, cos(radians((0:(sides-1)) * theta))))
  yo <- cumsum(c(0, sin(radians((0:(sides-1)) * theta))))
  o <- rbind(xo, yo)
  x <- rbind(seq(0, 1, by=by), 0)
  fold(1:sides, function(a,b) cbind(rotation((a-1)*theta) %*% x + o[,a], b))
}
```
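As a quick sanity check of the `cumsum` construction (a standalone sketch, independent of the package code): walking all five unit-length edges of the pentagon should return, up to floating-point error, to the starting vertex, since the five unit edge vectors sum to zero.

```r
radians <- function(degrees) degrees / 360 * 2 * pi
sides <- 5
theta <- 360 / sides
xo <- cumsum(c(0, cos(radians((0:(sides-1)) * theta))))
yo <- cumsum(c(0, sin(radians((0:(sides-1)) * theta))))
# The final cumulative offsets are numerically zero: the polygon closes
c(tail(xo, 1), tail(yo, 1))
```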

### Conclusion

As a first example of the benefits of functional programming for model development, I hope this gives readers a taste of what is to come. One of the remarkable things about this approach is that thinking about the program functionally gives insights into the mathematical nature of the problem. This elegant interplay between code and math is only possible because functional programming is so close to traditional mathematics.

For those wondering where the magic of lambda.r comes into play, I intentionally chose not to use any syntactic constructs from lambda.r (except for the definition of `fold`), as I wanted to illustrate what is possible with standard R.
