# Maple: deriving the objective function and the Jacobian function

We can generate C code for an objective function with Maple as follows:

1. with(linalg);
2. F := …;
3. Ffunc := unapply(convert(convert(F, vector), list), PARAMETERS);
4. CodeGeneration[C](Ffunc, optimize);

I tried the following function $\frac{p_0}{1 + \exp(p_1 - p_2 x)}$ as

1. F := < p_0 / ( 1 + exp(p_1 - p_2*x) ) >;
2. Ffunc := unapply(convert(convert(F, vector), list), p_0, p_1, p_2, x);
3. CodeGeneration[C](Ffunc, optimize);

The result is as follows:

The above result looks nice. However, this style cannot scale to large numbers of parameters, such as hundreds or thousands, because every parameter becomes a separate function argument. Therefore, the variables p_* should be converted to a single dynamic array double *p so the code can handle an arbitrary number of parameters.

The revised version of Maple code is

1. F := < p[1] / ( 1 + exp(p[2] - p[3]*x) ) >;
2. Ffunc := unapply(convert(convert(F, vector), list), p, x);
3. CodeGeneration[C](Ffunc, optimize);

NOTE that the parameter indices are shifted up by one (p_0 becomes p[1], and so on) for a very important reason, explained below. The result is

The Jacobian function is then derived in a similar way.

1. J := jacobian(F, [ PARAMETERS ]);
2. Jfunc := unapply(convert(convert(J, vector), list), PARAMETERS);
3. CodeGeneration[C](Jfunc, optimize);

In this example, the commands are

1. J := jacobian(F, [ p[1], p[2], p[3] ]);
2. Jfunc := unapply(convert(convert(J, vector), list), p);
3. CodeGeneration[C](Jfunc, optimize);

Then, we obtain the following result

This looks perfect as well. Finally, I can use these objective and Jacobian functions to perform non-linear optimization.

NOTE: the reason the parameter indices must start from 1 is easy to see. Maple follows the same indexing rule as MATLAB: arrays are 1-based, whereas C arrays are 0-based, so CodeGeneration[C] shifts every index down by one. If we started the index at 0, the generated code would use p[-1], as the following result shows.
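The index shift can be summarized as a one-line rule (this helper is purely illustrative, not part of Maple's output):

```c
/* Illustration of the index shift applied by CodeGeneration[C]:
 * Maple's 1-based p[i] becomes C's 0-based p[i-1].
 * Maple p[1], p[2], p[3] -> C p[0], p[1], p[2]   (valid)
 * Maple p[0], p[1], p[2] -> C p[-1], p[0], p[1]  (out of bounds) */
int c_index_from_maple(int maple_index)
{
    return maple_index - 1;
}
```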