[This article was first published on Notes of a Dabbler » R, and kindly contributed to R-bloggers].

In case you missed previous parts, the links to them are listed below.

In this part, I tried to recreate the examples in section A.2.3 of the computational appendix in the reaction engineering book (by Rawlings and Ekerdt).

## Function Minimization

In part 2, the reaction equilibrium conditions were determined by solving a set of nonlinear equations. Alternately, they can be determined by minimizing the Gibbs free energy of the system. The function to be minimized is (refer to part 2 for what the variables mean): $G=-(x_1\ln K_1+x_2\ln K_2)+(1-x_1-x_2)\ln P+(y_{I0}-x_1-x_2)\ln y_I+(y_{B0}-x_1-x_2)\ln y_B+(y_{P10}+x_1)\ln y_{P1}+(y_{P20}+x_2)\ln y_{P2}$

The following constraints need to be satisfied by $x_1$ and $x_2$: \begin{aligned} 0 \leq x_1 &\leq 0.5 \\ 0 \leq x_2 &\leq 0.5 \\ x_1+x_2 &\leq 0.5 \end{aligned}

First I used the constrOptim function for minimization. We need to specify the function to be minimized:

# function to be minimized
eval_f0=function(x){
  # physical constants and conditions
  dg1=-3.72e3; dg2=-4.49e3; T=400; R=1.987; P=2.5
  K1=exp(-dg1/(R*T)); K2=exp(-dg2/(R*T))

  # initial mole fractions
  yI0=0.5; yB0=0.5; yP10=0; yP20=0

  # mole fractions as functions of the extents x[1], x[2]
  d=1-x[1]-x[2]
  yI=(yI0-x[1]-x[2])/d
  yB=(yB0-x[1]-x[2])/d
  yP1=(yP10+x[1])/d
  yP2=(yP20+x[2])/d

  # Gibbs free energy
  f=-(x[1]*log(K1)+x[2]*log(K2))+(1-x[1]-x[2])*log(P)+yI*d*log(yI)+
    yB*d*log(yB)+yP1*d*log(yP1)+yP2*d*log(yP2)

  return(f)
}


The constraints need to be specified in the form $Ax-b \geq 0$

#  constraint
A=matrix(c(-1,-1,1,0,-1,0,0,1,0,-1),ncol=2,byrow=TRUE)
> A
[,1] [,2]
[1,]   -1   -1
[2,]    1    0
[3,]   -1    0
[4,]    0    1
[5,]    0   -1

b=c(-0.5,0,-0.5,0,-0.5)
> b
[1] -0.5  0.0 -0.5  0.0 -0.5



Next, the function is minimized using constrOptim, starting from an initial guess of (0.2, 0.2). The Nelder-Mead method is used here since the BFGS method requires the user to supply the gradient of the function. The CRAN Optimization Task View lists other options.
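As an aside, BFGS can still be used if a gradient function is supplied; one way to avoid deriving it by hand is a numerical gradient from the numDeriv package. This is a sketch of mine, not from the book; the objective and constraint matrices are restated from below so the snippet runs on its own.

```r
# sketch: constrOptim with method="BFGS", using a numerical gradient
# from numDeriv (objective and constraints restated so this snippet
# is self-contained)
library(numDeriv)

eval_f0=function(x){
  dg1=-3.72e3; dg2=-4.49e3; T=400; R=1.987; P=2.5
  K1=exp(-dg1/(R*T)); K2=exp(-dg2/(R*T))
  yI0=0.5; yB0=0.5; yP10=0; yP20=0
  d=1-x[1]-x[2]
  yI=(yI0-x[1]-x[2])/d; yB=(yB0-x[1]-x[2])/d
  yP1=(yP10+x[1])/d; yP2=(yP20+x[2])/d
  -(x[1]*log(K1)+x[2]*log(K2))+(1-x[1]-x[2])*log(P)+yI*d*log(yI)+
    yB*d*log(yB)+yP1*d*log(yP1)+yP2*d*log(yP2)
}

# numerical gradient of the objective
eval_g0=function(x) grad(eval_f0,x)

A=matrix(c(-1,-1,1,0,-1,0,0,1,0,-1),ncol=2,byrow=TRUE)
b=c(-0.5,0,-0.5,0,-0.5)

xans_bfgs=constrOptim(theta=c(0.2,0.2),f=eval_f0,grad=eval_g0,
                      ui=A,ci=b,method="BFGS")
xans_bfgs$par   # should be close to the Nelder-Mead solution
```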

# initial guess
xinit=c(0.2,0.2)

# minimize function subject to bounds and constraints
xans2=constrOptim(theta=xinit,f=eval_f0,grad=NULL,ui=A,ci=b,
                  method="Nelder-Mead")
> xans2
$par
[1] 0.1331157 0.3509254

$value
[1] -2.559282

$counts
function gradient 
     100       NA 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 3

$barrier.value
[1] 8.695209e-05

The solution can be accessed from xans2$par and is (0.1331, 0.3509).
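As a quick sanity check, the solution can be verified against the constraint system $Ax-b \geq 0$. This is a sketch of mine, restating A and b from above so it runs standalone:

```r
# check that the constrOptim solution satisfies all constraints
A=matrix(c(-1,-1,1,0,-1,0,0,1,0,-1),ncol=2,byrow=TRUE)
b=c(-0.5,0,-0.5,0,-0.5)
xsol=c(0.1331157,0.3509254)   # xans2$par from above
all(A %*% xsol - b >= 0)      # TRUE: all constraints satisfied
```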

Next, I also tried function minimization with constrOptim.nl from the package alabama. Here the constraints are specified in the form $h(x) \geq 0$. Even if the gradient is not supplied, this function will estimate it using finite differences.

The constraints are defined in the format required by constrOptim.nl:

# load libraries alabama and numDeriv
library(alabama)
library(numDeriv)

h_ineq=function(x){
  h=rep(NA,5)
  h[1]=-x[1]-x[2]+0.5   # x1 + x2 <= 0.5
  h[2]=x[1]             # x1 >= 0
  h[3]=-x[1]+0.5        # x1 <= 0.5
  h[4]=x[2]             # x2 >= 0
  h[5]=-x[2]+0.5        # x2 <= 0.5
  return(h)
}

> xans3=constrOptim.nl(par=xinit,fn=eval_f0,hin=h_ineq)
Min(hin):  0.1
par:  0.2 0.2
fval:  -2.313951
Min(hin):  0.01602445
par:  0.1332729 0.3507026
fval:  -2.559282
Min(hin):  0.01597985
par:  0.1331951 0.350825
fval:  -2.559282
Min(hin):  0.01597985
par:  0.1331951 0.350825
fval:  -2.559282

> xans3
$par
[1] 0.1331951 0.3508250

$value
[1] -2.559282

$counts
function gradient 
      97       14 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 4

$barrier.value
[1] 0.0008697741

The solution can be accessed from xans3$par and is (0.1332, 0.3508).
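Since the minimum lies in the interior of the feasible region, the gradient of $G$ should vanish there. Working out the partial derivatives by hand gives $\partial G/\partial x_1 = -\ln K_1 - \ln P + \ln\left(y_{P1}/(y_I y_B)\right)$ and the analogous expression for $x_2$; setting these to zero recovers the equilibrium relations solved in part 2. This derivation is mine, not from the book, so treat the sketch below as an assumption:

```r
# sketch: hand-derived gradient of G, used to check that the reported
# solution is an interior stationary point (derivation is mine, not
# from the book)
grad_G=function(x){
  dg1=-3.72e3; dg2=-4.49e3; T=400; R=1.987; P=2.5
  K1=exp(-dg1/(R*T)); K2=exp(-dg2/(R*T))
  d=1-x[1]-x[2]
  yI=(0.5-x[1]-x[2])/d; yB=yI   # yI0 = yB0 = 0.5
  yP1=x[1]/d; yP2=x[2]/d        # yP10 = yP20 = 0
  c(-log(K1)-log(P)+log(yP1/(yI*yB)),
    -log(K2)-log(P)+log(yP2/(yI*yB)))
}
grad_G(c(0.1331951,0.3508250))   # both components are close to zero
```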
Since this is a two-dimensional problem, the solution can also be visualized with a contour plot of the function.

# Region of interest: 0.01<=x1<=0.49, 0.01<=x2<=0.49
x1=seq(0.01,0.49,by=0.01)
x2=seq(0.01,0.49,by=0.01)

# vectorize eval_f0 so that it can be evaluated in the outer function
# (contour needs a numeric matrix, so the default SIMPLIFY is kept)
fcont=function(x,y) eval_f0(c(x,y))
fcontv=Vectorize(fcont)
> z=outer(x1,x2,fcontv)
There were 50 or more warnings (use warnings() to see the first 50)


The warnings are produced in regions where $x_1+x_2 > 0.5$: there the constraints are violated, some of the mole fractions become negative, and taking their logarithm returns NaN.
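The contour plot itself can be sketched as follows. The objective and grid are restated so the snippet runs on its own; the number of contour levels and the plotting symbol for the minimizer are my own choices:

```r
# sketch: contour plot of G over the grid, with the minimizer marked
# (eval_f0 and the grid restated so this snippet is self-contained)
eval_f0=function(x){
  dg1=-3.72e3; dg2=-4.49e3; T=400; R=1.987; P=2.5
  K1=exp(-dg1/(R*T)); K2=exp(-dg2/(R*T))
  d=1-x[1]-x[2]
  yI=(0.5-x[1]-x[2])/d; yB=yI
  yP1=x[1]/d; yP2=x[2]/d
  -(x[1]*log(K1)+x[2]*log(K2))+d*log(P)+yI*d*log(yI)+
    yB*d*log(yB)+yP1*d*log(yP1)+yP2*d*log(yP2)
}

x1=seq(0.01,0.49,by=0.01)
x2=seq(0.01,0.49,by=0.01)
fcontv=Vectorize(function(x,y) eval_f0(c(x,y)))
z=outer(x1,x2,fcontv)   # NaN (with warnings) in the infeasible region

contour(x1,x2,z,nlevels=50,xlab="x1",ylab="x2")
points(0.1332,0.3508,pch=19,col="red")   # minimum found above
```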