
Maple: deriving objective and Jacobian functions

We can generate C code for an objective function using Maple as follows:

  1. with(linalg);
  2. F := …;
  3. Ffunc := unapply(convert(convert(F, vector), list), PARAMETERS);
  4. CodeGeneration[C](Ffunc, optimize);

I tried the equation \frac{p_0}{ 1 + \exp(p_1 - p_2 x) } as

  1. F := < p_0 / ( 1 + exp(p_1 - p_2*x) ) >;
  2. Ffunc := unapply(convert(convert(F, vector), list), p_0, p_1, p_2, x);
  3. CodeGeneration[C](Ffunc, optimize);

The result is as follows:

[image: mapleCodegenVariables — generated C code with scalar parameters]

The above result looks nice. However, this style cannot handle a huge number of parameters, e.g., hundreds or thousands, because each parameter becomes a separate function argument. Therefore, the variables p_* should be replaced with a single dynamic array double *p so that the code can handle parameter vectors of arbitrary dimension.

The revised version of Maple code is

  1. F := < p[1] / ( 1 + exp(p[2] - p[3]*x) ) >;
  2. Ffunc := unapply(convert(convert(F, vector), list), p, x);
  3. CodeGeneration[C](Ffunc, optimize);

NOTE that the parameter indices now start from 1, for an important reason explained below. The result is
[image: mapleCodegenVectorCorrect — generated C code taking the array parameter p]

The Jacobian function is derived in a similar way.

  1. J := jacobian(F, [ PARAMETERS ]);
  2. Jfunc := unapply(convert(convert(J, vector), list), PARAMETERS);
  3. CodeGeneration[C](Jfunc, optimize);

In this example, the command is

  1. J := jacobian(F, [ p[1], p[2], p[3] ]);
  2. Jfunc := unapply(convert(convert(J, vector), list), p);
  3. CodeGeneration[C](Jfunc, optimize);

Then, we obtain the following result:
[image: mapleCodegenJacobian — generated C code for the Jacobian]

 

This looks perfect as well. Finally, I can use these objective and Jacobian functions to perform non-linear optimization.

 

NOTE: you can easily see why the argument indices must start from 1. Maple's indexing rule is the same as MATLAB's: it is 1-origin. If we start the index at 0, the generated code uses p[-1], as the following result shows.
[image: mapleCodegenVector — generated C code with the out-of-range index p[-1]]

 


Levenberg Marquardt algorithm in C++

For start-up projects, I need a non-linear optimization C++ library, especially one implementing the Levenberg-Marquardt algorithm. I found two candidates: levmar and Eigen::LevenbergMarquardt.

In terms of examples, levmar provides good ones, such as homest, while Eigen::LevenbergMarquardt provides only a poor one in its package, /eigen/unsupported/test/NonLinearOptimization.cpp.

I tried both of them. My first impression was that levmar is better; however, now that I understand the structure of Eigen::LevenbergMarquardt, I feel it is easier to customize.


function dlevmar_der() of levmar

int dlevmar_der(
void (*func)(double *p, double *x, int m, int n, void *adata),
void (*jacf)(double *p, double *j, int m, int n, void *adata),
double *p,
double *x,
int m,
int n,
int itmax,
double opts[4],
double info[LM_INFO_SZ],
double *work,
double *covar,
void *adata
)

Inputs:

  • void (*func): pointer to objective function
  • void (*jacf): pointer to function computing Jacobian
  • double *p: pointer to the parameter vector; must be initialized to an initial guess.
  • double *x: pointer to the observation vector. NULL implies a zero vector.
  • int m: dimension of the parameter vector.
  • int n: dimension of the observation vector.
  • int itmax: maximum number of iterations.
  • double opts[4]: options for the minimization:
    • opts[0]: scale factor for the initial \mu of J^{T}J+\mu I .
    • opts[1]: stopping threshold for || J^{T} e ||_{\infty} .
    • opts[2]: stopping threshold for || Dp ||_{2} .
    • opts[3]: stopping threshold for || e ||_{2} .
    • (An additional opts[4], the step used in the finite-difference approximation to the Jacobian, exists only in dlevmar_dif(), which takes double opts[5].)
  • double *covar: pointer to the covariance matrix corresponding to the solution; pass NULL if it is not needed.

Outputs:

  • double *p: pointer to the estimated parameter vector.
  • double info[LM_INFO_SZ]: information regarding the minimization:
    • info[0]: || e ||_2 at the initial p.
    • info[1]: || e ||_2 at the estimated p.
    • info[2]: || J^{T} e ||_{\infty} at the estimated p.
    • info[3]: || Dp ||_2 at the estimated p.
    • info[4]: \frac{\mu}{ \max_i ( J^{T} J )_{ii} } at the estimated p.
    • info[5]: number of iterations.
    • info[6]: reason for terminating, e.g., small || J^{T} e ||_{\infty}, small || Dp ||_2, itmax reached, or small || e ||_2.
    • info[7]: number of evaluations of func.
    • info[8]: number of Jacobian evaluations.
    • info[9]: number of linear systems solved.
  • double *work: working memory (an input; pass NULL to have it allocated internally).
  • double *covar: covariance matrix corresponding to the solution.
  • void *adata: pointer to possibly needed additional data, passed uninterpreted to func and jacf (an input).
  • Conceptually, x = func(p, adata) and jac = jacf(p, adata).

The pseudocode is as follows:

  1. set adata
  2. set p
  3. set x = func(p, adata);
  4. initialize p_hat
  5. perform non-linear optimization
    dlevmar_der( func, jacf, p_hat, x, m, n, itmax, opts, info, work, covar, adata );
  6. compute x_hat = func(p_hat, adata);

levmar 2.6 with Visual Studio 2010

A memo on compiling levmar 2.6 on Windows with Visual Studio 2010.

We need to modify the code a bit for compilation!

  1. .lib files
    The CMakeLists.txt of levmar 2.6 assumes that you put all LAPACK-related .lib files into one directory. In the CMake GUI this directory is called LAPACKBLAS_DIR and is pre-defined as '/usr/lib'. If you compile CLAPACK by yourself, the .lib files are generated in several different directories rather than a single one. To use the original CMakeLists.txt, gather the .lib files into one directory and set LAPACKBLAS_DIR in the CMake GUI; the link problem should then be solved.
  2. setting related to BLAS
    See the FAQ. We have to modify misc.h to match our BLAS. The original misc.h assumes that you have an F77 BLAS. If your BLAS is either an f2c'ed BLAS or CBLAS, you should modify a few lines in misc.h. If you compiled CLAPACK by yourself, your BLAS is the f2c'ed one. Then, what you should do is:

    1. Uncomment line 31: #define LM_BLAS_PREFIX f2c_
    2. Comment out line 37: #define LM_BLAS_SUFFIX _
    3. Uncomment line 35: #define LM_BLAS_SUFFIX
  3. HAVE_LAPACK
    Check the HAVE_LAPACK option in CMake, and then compile.
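After the three misc.h edits above, the relevant region of the file should look roughly like the following (a sketch based on the line numbers cited above; exact line numbers and surrounding comments may differ between levmar versions):

```c
/* misc.h, configured for an f2c'ed BLAS (e.g. the one built with CLAPACK) */
#define LM_BLAS_PREFIX f2c_   /* line 31: uncommented */
#define LM_BLAS_SUFFIX        /* line 35: uncommented (empty suffix) */
/* #define LM_BLAS_SUFFIX _ */   /* line 37: commented out */
```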

You can see how levmar works with an example program provided by the author.


Using levmar 2.6 with Visual Studio

In my previous post, I described how to compile levmar 2.5 with Visual Studio 2010. Since the procedure mentioned there is not enough to compile levmar 2.6, I write here how to solve the compilation issues, even though the tips are also in the levmar FAQ.

  1. link error of LAPACK lib files
    The CMakeLists.txt of levmar 2.6 assumes that you put all LAPACK-related .lib files into one directory. In the CMake GUI this directory is called LAPACKBLAS_DIR and is pre-defined as '/usr/lib'. If you compile CLAPACK by yourself, the .lib files are generated in several different directories rather than a single one. To use the original CMakeLists.txt, gather the .lib files into one directory and set LAPACKBLAS_DIR in the CMake GUI; the link problem should then be solved.
  2. Link error related to sgemm and dgemm
    You may face another link error with the generated solution file. The error is like:
    levmar.lib(misc.obj) : error LNK2019: unresolved external symbol _sgemm_ referenced in function _slevmar_trans_mat_mat_mult
    levmar.lib(misc.obj) : error LNK2019: unresolved external symbol _dgemm_ referenced in function _dlevmar_trans_mat_mat_mult

    This problem is mentioned in levmar FAQ 27. Following the FAQ, I could compile levmar and run the example program. What I did was:

    1. Uncomment line 31: #define LM_BLAS_PREFIX f2c_
    2. Comment out line 37: #define LM_BLAS_SUFFIX _
    3. Uncomment line 35: #define LM_BLAS_SUFFIX

    Next, I compiled levmar with the HAVE_LAPACK option enabled. Then, the compilation succeeded.

  3. Necessary .h file and .lib files
    After that, I could compile expfit.c with some modification. What I did was to specify the include file levmar.h and the library files blas.lib, lapack.lib, tmglib.lib, libf2c.lib, and levmar.lib.