Solver settings

FORCES implements a primal-dual Mehrotra predictor-corrector interior point method with the standard Nesterov-Todd scalings for the LP cone. There are not many knobs to tune; the line search and tolerance parameters are the most important ones. Other options, such as the maximum number of iterations and the print level, do not influence the convergence of the solver.

Before calling the generateCode() routine of the client, you can obtain a settings struct with default settings as follows:

codeoptions = getOptions('this_is_how_the_solver_will_be_named')

This generates a default options struct and sets the name of the solver. The available options are detailed in the following.
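A typical workflow, sketched below, is to obtain the default options, adjust a few of them, and then generate the solver. The solver name is illustrative, and the exact argument list of generateCode depends on your problem setup and FORCES version; the stages and params structures describing the multistage problem are assumed to exist already.

```matlab
% Sketch: default options -> adjust a few settings -> generate the solver
codeoptions = getOptions('myMPCsolver');   % the solver will be called 'myMPCsolver'
codeoptions.maxit      = 30;               % iteration limit
codeoptions.printlevel = 0;                % silent solver, e.g. for embedded use
generateCode(stages, params, codeoptions); % 'stages'/'params' define the problem
```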

Standard options


Printing

By default, the generated solver prints a progress line at each interior point iteration:

codeoptions.printlevel = 2;

The other levels are 0 (no printing), 1 (print a summary line after each solve), and 3 (flood the console with debug information).

Solver name

The name of the solver is used to name variables and functions, but also the MEX file and the associated help file. This allows you to use multiple solvers generated by FORCES within the same software project. As shown above, you can pass the name directly to getOptions(name), or you can set

codeoptions.name = 'but_I_want_a_different_name_for_my_solver';

to rename the solver.

Maximum number of iterations

You can set the maximum number of interior point iterations that the generated solver is allowed to perform before quitting with “Maximum iterations reached, exiting.” by using

codeoptions.maxit = 20;
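At run time, hitting the iteration limit is reported through the solver's exit flag. The sketch below assumes the common FORCES convention that an exit flag of 0 means the iteration limit was reached and that the iteration count is returned as info.it; the solver name is a placeholder.

```matlab
codeoptions.maxit = 20;   % cap on interior point iterations
% ... after code generation, at run time:
[output, exitflag, info] = myMPCsolver(problem);  % placeholder solver name
if exitflag == 0
    warning('Stopped after %d iterations without reaching optimality.', info.it);
end
```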

Solver initialization

You can set the method of initialization in FORCES by

codeoptions.init = 0; % select initialization method

There are currently two initialization methods available in FORCES:

  • 0: Set all primal variables to 0, and all dual variables to one: $z_i=0$, $s_i=1$, $\lambda_i=1$.
  • 1 (default): Set all primal variables to zero, the slacks to the RHS of the corresponding inequality, and the Lagrange multipliers associated with the inequalities such that the pairwise product between slacks and multipliers is one: $z_i=0$, $s_i=b_{ineq}$ and $s_i\lambda_i=1$.

Solution accuracy

The accuracy for which FORCES returns the OPTIMAL flag can be set as follows:

codeoptions.accuracy.ineq  = 1e-6;  % infinity norm of residual for inequalities
codeoptions.accuracy.eq    = 1e-6;  % infinity norm of residual for equalities
codeoptions.accuracy.mu    = 1e-6;  % absolute duality gap
codeoptions.accuracy.rdgap = 1e-4;  % relative duality gap := (pobj-dobj)/pobj
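Loosening these tolerances trades accuracy for speed, which can be useful on embedded hardware. The values below are illustrative, and the field name accuracy.mu for the absolute duality gap is an assumption:

```matlab
% Looser tolerances: fewer iterations, less accurate solutions (illustrative values)
codeoptions.accuracy.ineq  = 1e-4;
codeoptions.accuracy.eq    = 1e-4;
codeoptions.accuracy.mu    = 1e-4;   % absolute duality gap (assumed field name)
codeoptions.accuracy.rdgap = 1e-3;
```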

Initial State

In MPC problems, there often is a simple equality constraint on the first state, $z_1 = c_1$. If this is the case, the first equality constraint can be split off from the coupled ones. This reduces the dimension of the equality constraints and hence the computational complexity.

The equality constraints in the case of $z_1 = c_1$ are (note the shift of the $c_i$'s):

$D_1 z_1 = c_1$  (equality coupling 1 variable)
$C_{i-1} z_{i-1} + D_i z_i = c_i, \quad i=2,\dots,N$  (equalities coupling 2 variables)

Attention: This optimization currently works only if the dimensions of the first two equality constraints coincide, as in the classical MPC setting with

$x_1 = x$
$x_2 = A x_1 + B u_1$

In the example above, both equality constraints have the same dimension.

Example: The special treatment of the initial state is explained using the Simple MPC example, where $x_1 = x$. These are the changes:

%% stage 1
% dimensions
stages(1).dims.r = nx;  % number of equality constraints
% equality constraints
stages(1).eq.C = [A, B];
stages(1).eq.D = [eye(nx), zeros(nx,nu)];

%% stages along the horizon
% equality constraints
stages(i).eq.D = [-eye(nx), zeros(nx,nu)];

%% final stage
stages(N).dims.r = nx;  % number of equality constraints
% equality constraints
stages(N).eq.c = zeros(nx,1);

The size of the parameter $z_1$ has changed, so it must be adapted in the simulation:

%% simulate
problem.z1 = zeros(nx,1);
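In a closed-loop simulation, z1 is set to the measured state before every solve. A minimal sketch, assuming a generated solver named myMPCsolver that returns the first input as output u1, and known dynamics matrices A and B:

```matlab
X = zeros(nx, kmax+1);  X(:,1) = x0;    % simulated state trajectory
for k = 1:kmax
    problem.z1 = X(:,k);                % current state enters the first equality
    [solverout, exitflag, info] = myMPCsolver(problem);  % placeholder name
    u = solverout.u1;                   % first input of the planned sequence
    X(:,k+1) = A*X(:,k) + B*u;          % propagate the plant
end
```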


Parallel Computation

FORCES supports computation on multiple cores. This is implemented using OpenMP and can be switched on with

codeoptions.parallel = 1;

Parallel computation can only be used if the compiler supports OpenMP. By default, parallel computation is switched off.

Optimization Level

The compiler flag for code optimization can be set as follows:

codeoptions.optlevel = 1;

Possible options are:

  • 0: Optimization is disabled; generating the MEX file takes less time (useful for debugging).
  • 1 (default): The code is optimized so that it executes faster.

Diagonal Hessian

When the Hessians are diagonal, they are automatically handled in sparse form, which gives a speed-up. If the Hessian is diagonal but enters as a parameter, this can be specified by

par = newParam('myparam', 1:N, 'cost.H', 'diag');
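At run time, only the diagonal entries of the Hessian are then passed for this parameter. A sketch, where the field name follows from the newParam call above and nz is assumed to be the stage variable dimension (check the generated help for the exact stacking convention):

```matlab
% Supply only the nz diagonal entries instead of the full nz-by-nz matrix
problem.myparam = ones(nz, 1);   % diagonal cost weights (illustrative values)
```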


Measuring execution time

You can measure the time used for executing the generated code by using

codeoptions.timing = 1;

By default, the execution time is measured. It can be accessed in the info struct as info.solvetime, and it is printed in the console if printlevel > 0.
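For example, the measured time can be read out after a solve (the solver name is a placeholder):

```matlab
codeoptions.timing = 1;   % default: timing enabled
% ... after code generation, at run time:
[output, exitflag, info] = myMPCsolver(problem);  % placeholder solver name
fprintf('Solve time: %.3f ms\n', 1e3*info.solvetime);
```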


Datatype

The data type of the solver variables can be set by using

codeoptions.floattype = 'float';

The default value is 'double'.


Overwriting existing code

When generating new code, you often need to overwrite the old one. This behavior can be controlled by

codeoptions.overwrite = 2;

Possible options are:

  • 0: Never overwrite existing code.
  • 1: Always overwrite existing code.
  • 2: Ask before overwriting existing code.

Expert options

It is not advised to change these parameters unless you know what you are doing, and why.

Line search

If FORCES experiences convergence difficulties, you can try selecting different line search parameters.

The first two parameters, factor_aff and factor_cc, are the backtracking factors of the line search for the affine and combined search directions, respectively (if the current step length is infeasible, it is reduced by multiplication with these factors).

codeoptions.linesearch.factor_aff = 0.9;  
codeoptions.linesearch.factor_cc  = 0.95;

The remaining two parameters determine the minimum (minstep) and maximum step size (maxstep). Choosing minstep too high will cause the generated solver to quit with an exitcode saying that the line search failed, i.e. no progress could be made along the computed search direction. Choosing maxstep too close to 1 is likely to cause numerical issues, but choosing it too conservatively (too low) is likely to increase the number of iterations needed to solve a problem.

codeoptions.linesearch.minstep = 1e-8;
codeoptions.linesearch.maxstep = 0.995;


Regularization

During the factorization of supposedly positive definite matrices, FORCES applies a regularization to the $i$th pivot element if it is smaller than $\epsilon$. In this case, the pivot is set to $\delta$, which is the lower bound on pivot elements that FORCES allows to occur.

% NOTE: these are the default settings for double precision
codeoptions.regularize.epsilon = 1e-13;  % if pivot element < epsilon...
codeoptions.regularize.delta   = 1e-6;   % ...then set it to delta