# Part 2 of 3: Non-linear Optimization of Predictive Models with R

[This article was first published on **Advanced Analytics Blog by Scott Mutchler**, and kindly contributed to R-bloggers.]

In my previous post, I built a predictive model (a simple linear model) to predict the gross margin % of an eCommerce site based on the promotional spend across various paid channels. I repeated the process for AOV (average order value) and conversion rate, resulting in 3 models.

Wouldn’t it be great if I could find the optimal promotional spend strategy to maximize gross margin %, AOV and conversion rate? Is there a single recipe that maximizes all 3?

Let’s use the optim() function in R to optimize each model independently and then see if we can arrive at a global strategy for our spend.
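Before diving in, here is the key mechanic we will rely on throughout (a toy example, not part of the models in this series): optim() minimizes its objective, so to maximize we hand it the negated function and negate the result back.

```r
# Toy example: maximize f(x) = 5 - (x - 3)^2, which peaks at x = 3.
# optim() minimizes, so we pass the negated objective.
neg_f <- function(x) -1 * (5 - (x - 3)^2)

res <- optim(par=0, fn=neg_f, method="L-BFGS-B", lower=-10, upper=10)

res$par     # roughly 3, the maximizer
-res$value  # roughly 5, the maximum (negate back)
```

We will use this same negation trick, plus L-BFGS-B box bounds, on the real margin model below.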

```r
SRC_PATH <- '/analytics/margin_model/'

# load existing models
load(paste(SRC_PATH, 'gm_pct_model.model', sep=''))
load(paste(SRC_PATH, 'conv_rate_model.model', sep=''))
load(paste(SRC_PATH, 'aov_model.model', sep=''))

# load the original data and use the first row as a scoring stub
data <- read.csv(file=paste(SRC_PATH, 'margin_model.csv', sep=''), header=TRUE)
stub <- data[1, ]
```

We are going to write a wrapper function to simplify the call to optim(). The optim() function minimizes its objective, but we want the maximum, so we return the negative of the prediction. We also have a constraint that all the spend numbers sum to 1 (i.e. 100%). Finally, since the call to predict() requires a full row (matching the data we trained on), we copy the inputs over top of the scoring stub record. This is required because optim() optimizes every element of the vector it is given, and it doesn’t make sense to optimize the additional, non-spend columns.

```r
# create optimization wrapper for gross margin %
opt_gm <- function(x) {
  # normalize the inputs to sum to 1
  t <- sum(x)
  x <- x / t
  z <- stub
  # copy the inputs over top of the stub record (spend columns 5 through 13)
  for (i in 1:9) {
    z[4 + i] <- x[i]
  }
  # score the record and return the negative (optim() minimizes)
  -1 * predict(gm_pct_model, z)
}
```

Now we can call the optim() function. We are going to use the quasi-Newton L-BFGS-B method with bumpers of +/- 20% on the input values. Our starting point is the mean of each spend variable.

```r
# start with mean values (mean() on a whole data frame is no longer
# supported in current R; colMeans() computes the same per-column means)
opt_start <- colMeans(data[5:13])

# optimize with box constraints of +/- 20% around the starting point
opt_results <- optim(opt_start, opt_gm, method="L-BFGS-B",
                     lower=opt_start*0.8, upper=opt_start*1.2)
```

```r
# view the optimized inputs & predicted output (gross margin %)
opt_results
```

```
> opt_results$par
PROMO_AFFILIATE_UNITS PROMO_COMP_SHOP_ENGINES_UNITS PROMO_DISPLAY_ADS_UNITS
          0.173467236                   0.012642756             0.005502173
    PROMO_EMAIL_UNITS         PROMO_LOCAL_SEM_UNITS PROMO_SEARCH_ENG_MKT_UNITS
          0.072762391                   0.237869173             0.155058253
PROMO_TELESALES_UNITS PROMO_UNPAID_UNITS
          0.327984493

$value
[1] -0.4946575
```

So here is our “recipe” that optimizes gross margin % to 49.47% (the negative of $value). In the next installment, we put a Java interface on all 3 models and try to find a global “recipe” for all 3 metrics.
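One post-processing step worth noting: because the wrapper normalizes internally, the raw opt_results$par values need a final normalization before you read them off as a spend mix. A small sketch (the vector and channel names below are illustrative stand-ins, not the real model output; in practice use opt_results$par from the call above):

```r
# Illustrative parameter vector standing in for opt_results$par;
# names and values here are hypothetical, not the real model output.
par_vec <- c(CHANNEL_A=0.18, CHANNEL_B=0.24, CHANNEL_C=0.33, CHANNEL_D=0.15)

# normalize to proportions of total spend, mirroring what opt_gm() does
spend_mix <- par_vec / sum(par_vec)

round(spend_mix, 4)  # each channel's share of total promotional spend
sum(spend_mix)       # sanity check: exactly 1
```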
