CONOPT
Additional Information

Some additional details and options related to the CONOPT solution algorithm.

Error Return Codes

A zero return from coi_solve() means that CONOPT terminated properly. This does not necessarily mean that the model was solved to optimality; it means that either a mathematically well-defined solution was reached (Optimal, Infeasible, Unbounded, etc.), the solution process was terminated by some limit (iteration limit, resource limit, function evaluation error limit), or the modeler asked for termination by returning a nonzero value from Progress (labeled User interrupt). The classification of the solution is returned via the Status callback routine.

If CONOPT stops because the user has returned a nonzero ERROR value from one of the callback routines, then coi_solve() will return a negative value with the same absolute value as the user's ERROR value.

If CONOPT stops because of other errors, the error return code will be positive, with the following interpretation:
  • 1 The control vector passed to coi_solve() was corrupted.
  • 3 License problem.
  • 105 Could not allocate basic memory needed to start CONOPT.
  • 106 Too many equations. (Message contains the limit).
  • 107 Too many variables. (Message contains the limit).
  • 108 Too many equations + variables. (Message contains the limit).
  • 109 Too many nonzeros. (Message contains the limit).
  • 110 Model exceeds size limits for demonstration license. (Message contains the limits). This return code can also be caused by an incorrect license, since after a license error CONOPT will only be able to solve models below the small-scale demo limit.
  • 111 The index for the objective function defined with coidef_objvar() or coidef_objcon() is outside the legal range.
  • 112 More nonlinear nonzeros than total nonzeros.
  • 113 Insufficient memory to start CONOPT.
  • 114 Insufficient memory during the optimization. (defined from version 3.15)
  • 115 Illegal value of ThreadC: ThreadC is not greater than ThreadS.
  • 200 An internal system error has been encountered.
  • 400 The model did not call Status. This return code is used when an error or inconsistency is found in the bounds, the Jacobian matrix, or in the structure of the Hessian. It is also returned if functions or derivatives are not defined in the initial point, or if the function or derivative debuggers find an error.
  • 1002 Callback routine ReadMatrix was not registered.
  • 1003 Callback routine FDEval was not registered.
  • 1004 Callback routine ErrMsg was not registered.
  • 1005 Callback routine Message was not registered.
  • 1009 Callback routine Status was not registered.
  • 1010 Callback routine Solution was not registered.
  • 1012 coidef_base() has not been called to define a base, i.e. whether to use the Fortran calling convention with vectors starting at index 1 or the C calling convention with vectors starting at index 0.
  • 1013 Neither coidef_fortran() nor coidef_c() has been called to define an argument calling convention. Fortran means that all arguments are passed by address, while C means that scalar input arguments are passed by value following standard C conventions.
  • 2000 The Function and Derivative Debugger has found an error in the sparsity pattern of the Jacobian or in the numerical value of a Jacobian element.
  • 2001 The 2nd derivative Debugger has found an error in the sparsity pattern of the Hessian, in the numerical value of a Hessian element, or in one of the directional 2nd derivative routines.
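The numeric conventions above can be collected in a small helper. This is a sketch based only on the code ranges listed; the function name and category strings are illustrative, not part of the CONOPT API:

```c
#include <assert.h>
#include <string.h>

/* Sketch: classify coi_solve()'s return code per the conventions above.
 * Name and strings are illustrative, not CONOPT API. */
const char *coi_rc_class(int rc)
{
    if (rc == 0)
        return "terminated properly; see the Status callback for details";
    if (rc < 0)
        return "stopped by user callback; abs(rc) is the user's ERROR value";
    if (rc >= 2000)
        return "debugger found an error in derivatives or Hessian";
    if (rc >= 1002 && rc <= 1013)
        return "a required callback or calling convention was not registered";
    if (rc >= 105 && rc <= 115)
        return "setup problem: size limits, memory, or thread definitions";
    return "other CONOPT error (license, internal error, etc.)";
}
```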

File Units (Fortran)

CONOPT uses a few Fortran file units and there is a possibility of conflict on some systems. The Windows DLL seems to have its own space for file handles, and users seem to be able to use any legal Fortran file unit. The Shared Library used on some Unix systems will sometimes share file handles with the calling program, and there is a possibility of conflict. The Fortran units used by CONOPT are:

Unit 31: Used for reading an Options file if one has been defined using coidef_optfile(). The file unit is opened during the initialization of CONOPT and closed and released immediately after reading.

Quick Mode

CONOPT is designed to find a local optimum or to prove that a feasible solution does not exist by finding a locally infeasible solution. In some cases it can be useful to terminate early if there are indications that CONOPT otherwise would spend too much time finding the solution. A relevant case is inside a global optimization solver where CONOPT can be called frequently to find incumbent solutions within some restricted set of bounds. Many solutions will be infeasible and many locally optimal solutions will be worse than the best found so far. For use under these circumstances there are a few options that can be used to terminate CONOPT early, called “Quick Mode”.
LSNOP2 If defined as True there will be “No Phase 2”. The optimization is stopped before a feasible solution is found if CONOPT is about to switch to phase 2, the nonlinear infeasible mode. In order to continue it will be necessary to use second order information and progress is likely to be slow and a feasible solution may not even exist. If the solution process is stopped by this option CONOPT will return Model Status = 6 = Intermediate Infeasible, and Solver Status = 15 = Quick Mode Termination. The dual variables in the solution will be Undefined. The default value is False.
LFMXP4 A bound on the number of iterations in Phase 4 with the line search ending before a bound is reached. The initial optimization is likely to use fast Phase 3 iterations where the model appears to be locally linear. If nonlinearities become dominating CONOPT will switch to Phase 4 and accumulate 2nd order information in some form. The overall progress is likely to be slow if there are many iterations in Phase 4 in which the line search stops before a bound is reached. If Lfmxp4 is defined then CONOPT will stop if the number of consecutive line searches of this type exceeds 2+Lfmxp4. If the solution process is stopped by this option CONOPT will return Model Status = 7 = Intermediate Non Optimal, and Solver Status = 15 = Quick Mode Termination. The dual variables in the solution will be Undefined. The default value is integer Infinity (1 000 000 000).
RVOBJL

Limit on Objective in Quick Mode. If a feasible solution is found and the objective is better than Rvobjl then Quick Mode is turned off again, i.e. Lfmxp4 is reset to infinity and CONOPT will attempt to find a local optimum. The default value is Undefined.

When Rvobjl is defined from the value of the best incumbent solution found so far then the option can be used to turn Quick Mode off if we are about to find a better incumbent solution. The new solution is then likely to be a better Locally Optimal solution instead of an Intermediate Non Optimal solution.
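As a concrete illustration, the three Quick Mode options could be collected into the text of an options file to be registered with coidef_optfile(). This is a sketch: the "name = value" options-file syntax is an assumption; only the option names LSNOP2, LFMXP4 and RVOBJL come from the text.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch: build the contents of a CONOPT options file enabling Quick Mode.
 * The option-file syntax is an assumption; the file would then be
 * registered with coidef_optfile(). Returns the snprintf result. */
int quickmode_options(char *buf, size_t n, int lfmxp4, double rvobjl)
{
    return snprintf(buf, n,
                    "LSNOP2 = true\n"   /* stop before entering Phase 2 */
                    "LFMXP4 = %d\n"     /* bound on slow Phase 4 line searches */
                    "RVOBJL = %.17g\n", /* objective level that turns Quick Mode off */
                    lfmxp4, rvobjl);
}
```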


Multi Threading

CONOPT can take advantage of multiple processors or cores using the OpenMP standard (see www.openmp.org) to control multiple threads.

First of all, CONOPT is thread-safe: several copies of CONOPT can be executed in concurrent threads as long as each thread is initialized with its own control vector. The UsrMem argument on the callbacks can be used to include information that distinguishes the individual threads. This possibility can, for example, be used to implement a parallel branch and bound solver.

If only one copy of CONOPT is active in an application, this copy can use multiple threads internally, for example during various matrix operations, and it can use multiple threads during some function evaluation calls where individual constraints can be evaluated in parallel. Note that you cannot have multiple copies of CONOPT active and use multiple threads inside CONOPT at the same time.

By default, only one thread will be used. If you want CONOPT to use more threads you must use the coidef_threads() definition routine to define how many threads should be used. If you ask for multiple threads then CONOPT will assume that it can use multiple threads both internally and in the FDEval and 2DDir callback routines, where information related to different constraints but to the same point (and possibly the same direction) is evaluated in parallel.
If you would like to use multiple threads internally, but call the FDEval and/or 2DDir callback routines sequentially (for example because they use global variables), then you must also use the coidef_threadf() definition routine with ThreadF = 1 and/or the coidef_thread2d() definition routine with Thread2D = 1 to define that these routines cannot run in parallel.

Note that the optional initialization routines, FDEvalIni and 2DDirIni, are not called in parallel, and the modeler is free to implement these routines so they use multiple threads internally. The other alternative 2nd derivative evaluation routines, 2DLagr and 2DDirLag, do not have a thread argument; they evaluate information for all constraints in one call. The modeler is free to implement these routines so they use multiple threads internally to evaluate individual constraints in parallel.

Sometimes parallel FDEval or 2DDir callback routines may need scratch memory that should be allocated once. The number of scratch vectors or the overall amount of scratch memory will depend on the maximum number of threads that CONOPT will be using. This information is available via the coiget_maxthreads() call, which can be called at any time outside the parallel parts of the code, i.e. outside FDEval and 2DDir.
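A minimal sketch of the scratch-memory pattern described above, assuming the thread count has already been obtained from coiget_maxthreads(); the ThreadScratch struct and function names are illustrative, and in a real program the struct would typically be passed to the callbacks through UsrMem:

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: per-thread scratch vectors for parallel FDEval/2DDir callbacks.
 * In a real program maxthreads would come from coiget_maxthreads(); the
 * struct and function names are illustrative. */
typedef struct {
    int      maxthreads;
    double **scratch;   /* scratch[t] is private to thread t */
} ThreadScratch;

int scratch_alloc(ThreadScratch *s, int maxthreads, size_t len)
{
    s->maxthreads = maxthreads;
    s->scratch = malloc((size_t)maxthreads * sizeof *s->scratch);
    if (s->scratch == NULL)
        return 1;
    for (int t = 0; t < maxthreads; t++)
        s->scratch[t] = calloc(len, sizeof(double));
    return 0;
}

void scratch_free(ThreadScratch *s)
{
    for (int t = 0; t < s->maxthreads; t++)
        free(s->scratch[t]);
    free(s->scratch);
}
```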

Options

Lim_Err_2DDir

Limit on errors in Directional Second Derivative evaluation.

If the evaluation of Directional Second Derivatives (Hessian information in a particular direction) has failed more than Lim_Err_2DDir times CONOPT will not attempt to evaluate them any more and will switch to methods that do not use Directional Second Derivatives. Note that second order information may not be defined even if function and derivative values are well-defined, e.g. in an expression like power(x,1.5) at x=0.

Lim_Start_Degen

Limit on number of degenerate iterations before starting degeneracy breaking strategy.

The default CONOPT pivoting strategy has focus on numerical stability, but it can potentially cycle. When the number of consecutive degenerate iterations exceeds Lim_Start_Degen CONOPT will switch to a pivoting strategy that is guaranteed to break degeneracy but with slightly weaker numerical properties.

Lim_Msg_Dbg_1Drv

Limit on number of error messages from function and derivative debugger.

The function and derivative debugger (see Lim_Dbg_1Drv) may find a very large number of errors, all derived from the same source. To avoid very large amounts of output CONOPT will stop the debugger after Lim_Msg_Dbg_1Drv error(s) have been found.

Lim_Err_Fnc_Drv

Limit on number of function evaluation errors. Overwrites GAMS Domlim option

Function values and their derivatives are assumed to be defined in all points that satisfy the bounds of the model. If a function value or a derivative is not defined in a point, CONOPT will try to recover by going back to a previous safe point (if one exists), but it will do so at most Lim_Err_Fnc_Drv times. If CONOPT is stopped by functions or derivatives not being defined, it will return with an intermediate infeasible or intermediate non-optimal model status.

Lim_Msg_Large

Limit on number of error messages related to large function value and Jacobian elements.

Very large function values or derivatives (Jacobian elements) in a model will lead to numerical difficulties and most likely to inaccurate primal and/or dual solutions. CONOPT therefore imposes an upper bound on all function values and derivatives; this bound is 1.e30. If the bound is violated, CONOPT will return with an intermediate infeasible or intermediate non-optimal solution and will issue error messages for the violating Jacobian elements, up to a limit of Lim_Msg_Large error messages.

Lim_Err_Hessian

Limit on errors in Hessian evaluation.

If the evaluation of Hessian information has failed more than Lim_Err_Hessian times CONOPT will not attempt to evaluate it any more and will switch to methods that do not use the Hessian. Note that second order information may not be defined even if function and derivative values are well-defined, e.g. in an expression like power(x,1.5) at x=0.

Frq_Log_Simple

Frequency for log-lines for non-SLP/SQP iterations.

Frq_Log_Simple and Frq_Log_SlpSqp can be used to control the amount of iteration output sent to the log file. The non-SLP/SQP iterations, i.e. iterations in phase 0, 1, and 3, are usually fast, and writing a log line for each iteration may be too much, especially for smaller models. The default value for the log frequency for these iterations is therefore 10 for small models, 5 for models with more than 500 constraints or 1000 variables, and 1 for models with more than 2000 constraints or 3000 variables.
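The default rule can be written out as follows; a sketch where the thresholds come from the text but the function itself is illustrative:

```c
#include <assert.h>

/* Sketch: the default Frq_Log_Simple implied by the text, as a function
 * of model size. Thresholds are from the text; the name is illustrative. */
int default_frq_log_simple(int ncon, int nvar)
{
    if (ncon > 2000 || nvar > 3000)
        return 1;   /* very large models: log every iteration */
    if (ncon > 500 || nvar > 1000)
        return 5;
    return 10;      /* small models */
}
```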

Frq_Log_SlpSqp

Frequency for log-lines for SLP or SQP iterations.

Frq_Log_Simple and Frq_Log_SlpSqp can be used to control the amount of iteration output sent to the log file. Iterations using the SLP and/or SQP sub-solver, i.e. iterations in phase 2 and 4, may involve several inner iterations, so the work per iteration is larger than for the non-SLP/SQP iterations and it may be relevant to write log lines more frequently. The default value for the log frequency is therefore 5 for small models and 1 for models with more than 500 constraints or 1000 variables.

Lim_Iteration

Maximum number of iterations. Overwrites GAMS Iterlim option.

The iteration limit can be used to prevent models from spending too many resources. You should note that the cost of the different types of CONOPT iterations (phase 0 to 4) can be very different so the time limit (GAMS Reslim or option Lim_Time) is often a better stopping criterion. However, the iteration limit is better for reproducing solution behavior across machines.

Lim_NewSuper

Maximum number of new superbasic variables added in one iteration.

When there has been a sufficient reduction in the reduced gradient in one subspace new non-basics can be selected to enter the superbasis. The ones with largest reduced gradient of proper sign are selected, up to a limit. If Lim_NewSuper is positive then the limit is min(500,Lim_NewSuper). If Lim_NewSuper is zero (the default) then the limit is selected dynamically by CONOPT depending on model characteristics.
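The resulting limit can be sketched as follows; the -1 sentinel for "chosen dynamically" is illustrative, not part of the CONOPT API:

```c
#include <assert.h>

/* Sketch of the rule above. Returns the effective cap on new superbasics
 * per iteration; -1 stands for "CONOPT chooses dynamically". */
int effective_newsuper_limit(int lim_newsuper)
{
    if (lim_newsuper > 0)
        return lim_newsuper < 500 ? lim_newsuper : 500; /* min(500, Lim_NewSuper) */
    return -1; /* default 0: limit selected dynamically by CONOPT */
}
```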

Lim_SlowPrg

Limit on number of iterations with slow progress (relative change less than Tol_Obj_Change).

The optimization is stopped if the relative change in objective is less than Tol_Obj_Change for Lim_SlowPrg consecutive well-behaved iterations.

Lim_RedHess

Maximum number of superbasic variables in the approximation to the Reduced Hessian.

CONOPT uses and stores a dense lower-triangular matrix as an approximation to the Reduced Hessian. The rows and columns correspond to the superbasic variables. This matrix can use a large amount of memory, and computations involving it can be time consuming, so CONOPT imposes a limit on its size. The limit is Lim_RedHess if defined by the modeler, and otherwise a value determined from the overall size of the model. If the number of superbasics exceeds the limit, CONOPT will switch to a method based on a combination of SQP and Conjugate Gradient iterations, assuming some kind of second order information is available. If no second order information is available, CONOPT will use a quasi-Newton method on a subset of the superbasic variables and rotate the subset as the reduced gradient becomes small.

Frq_Rescale

Rescaling frequency.

The row and column scales are recalculated at least every Frq_Rescale new point (degenerate iterations do not count), or more frequently if conditions require it.

Lim_StallIter

Limit on the number of stalled iterations.

An iteration is considered stalled if there is no change in objective because the linesearch is limited by nonlinearities or numerical difficulties. Stalled iterations will have Step = 0 and F in the OK column of the log file. After a stalled iteration CONOPT will try various heuristics to get a better basis and a better search direction. However, the heuristics may not work as intended, or they may even return to the original bad basis, especially if the model does not satisfy standard constraint qualifications and does not have a KKT point. To prevent cycling, CONOPT will therefore stop after Lim_StallIter stalled iterations and return an Intermediate Infeasible or Intermediate Nonoptimal solution.

Lim_Pre_Msg

Limit on number of error messages related to infeasible pre-triangle.

If the pre-processor determines that the model is infeasible it tries to define a minimal set of variables and constraints that define the infeasibility. If this set is larger than Lim_Pre_Msg elements the report is considered difficult to use and it is skipped.

Lin_Method

Method used to determine if and/or which Linear Feasibility Models to use

The Linear Feasibility Model can use different objectives: Objective 1 is no objective, i.e. the first point that satisfies the Linear Feasibility Model is used as a starting point for the Full Model. Objective 2 minimizes a scaled distance from the initial point for all variables defined by the modeler. Objective 3 minimizes a scaled distance from the initial point for all variables, including those not defined by the modeler. Objective 4 minimizes a scaled distance from a random point selected away from the bounds.

Num_Rounds

Number of rounds with Linear Feasibility Model

Lin_Method defines which Linear Feasibility Models are going to be solved if the previous models end Locally Infeasible. The number of rounds is limited by Num_Rounds.

Flg_NoDefc

Flag for turning definitional constraints off. The default is false.

If Flg_NoDefc is on, the Preprocessor will not look for definitional constraints and variables.

Flg_ForDefc

Flag for forcing definitional constraints on.

If Flg_ForDefc is on, the Preprocessor will look for definitional constraints and variables. It is only relevant for models that appear to be CNS models (number of variables equal to number of constraints), where the search for definitional constraints otherwise is turned off.

Lim_Dbg_1Drv

Flag for debugging of first derivatives

Lim_Dbg_1Drv controls how often the derivatives are tested. Debugging of derivatives is only relevant for user-written functions in external equations defined with =X=. The amount of debugging is controlled by Mtd_Dbg_1Drv. See Lim_Hess_Est for a definition of when derivatives are considered wrong.

Flg_Dbg_Intv

Flag for debugging interval evaluations.

Flg_Dbg_Intv controls whether interval evaluations are debugged. Currently we check that the lower bound does not exceed the upper bound for all intervals returned, both for function values and for derivatives.

Mtd_Dbg_1Drv

Method used by the function and derivative debugger.

The function and derivative debugger (turned on with Lim_Dbg_1Drv) can perform a fairly cheap test or a more extensive test, controlled by Mtd_Dbg_1Drv. See Lim_Hess_Est for a definition of when derivatives are considered wrong. All tests are performed in the current point found by the optimization algorithm.

Mtd_RedHess

Method for initializing the diagonal of the approximate Reduced Hessian

Each time a nonbasic variable is made superbasic a new row and column is added to the approximate Reduced Hessian. The off-diagonal elements are set to zero and the diagonal to a value controlled by Mtd_RedHess:

Mtd_Step_Phase0

Method used to determine the step in Phase 0.

The steplength used by the Newton process in phase 0 is computed by one of two alternative methods controlled by Mtd_Step_Phase0:

Mtd_Step_Tight

Method used to determine the maximum step while tightening tolerances.

The steplength used by the Newton process when tightening tolerances is computed by one of two alternative methods controlled by Mtd_Step_Tight:

Mtd_Scale

Method used for scaling.

CONOPT will by default use scaling of the equations and variables of the model to improve the numerical behavior of the solution algorithm and the accuracy of the final solution (see also Frq_Rescale.) The objective of the scaling process is to reduce the values of all large primal and dual variables as well as the values of all large first derivatives so they become closer to 1. Small values are usually not scaled up, see Tol_Scale_Max and Tol_Scale_Min. Scaling method 3 is recommended. The others are only kept for backward compatibility.

Flg_SLPMode

Flag for enabling SLP mode.

If Flg_SLPMode is on (the default) then the SLP (sequential linear programming) sub-solver can be used, otherwise it is turned off.

Flg_SQPMode

Flag for enabling of SQP mode.

If Flg_SQPMode is on (the default) then the SQP (sequential quadratic programming) sub-solver can be used, otherwise it is turned off.

Flg_AdjIniP

Flag for calling Adjust Initial Point

If Flg_AdjIniP is on (the default) then the Adjust Initial Point routine is called after the pre-processor. Can be turned off if the routine is very slow.

Flg_Crash_Slack

Flag for pre-selecting slacks for the initial basis.

When turned on (1) CONOPT will select all infeasible slacks as the first part of the initial basis.

Flg_NoPen

Flag for allowing the Model without penalty constraints

When turned on (the default) CONOPT will create and solve a smaller model without the penalty constraints and variables and the minimax constraints and variables if the remaining constraints are infeasible in the initial point. This is often a faster way to start the solution process.

Rat_NoPen

Limit on ratio of penalty constraints for the No_Penalty model to be solved

The No-Penalty model can only be generated and solved if the number of penalty and minimax constraints exceeds Rat_NoPen times the number of constraints in the Full Model.

Flg_NegCurve

Flag for testing for negative curvature when apparently optimal

When turned on (the default) CONOPT will try to identify directions with negative curvature when the model appears to be optimal. The objective is to move away from saddlepoints. Can be turned off when the model is known to be convex and cannot have negative curvature.

Flg_Convex

Flag for defining a model to be convex

When turned on (the default is off) CONOPT knows that a local solution is also a global solution, whether it is optimal or infeasible, and it will be labeled appropriately. At the moment, Flg_NegCurve will be turned off. Other parts of the code will gradually learn to take advantage of this flag.

Flg_Square

Flag for Square System. Alternative to defining modeltype=CNS in GAMS

When turned on the modeler declares that this is a square system, i.e. the number of non-fixed variables must be equal to the number of constraints, no bounds must be active in the final solution, and the basis selected from the non-fixed variables must always be nonsingular.

Flg_TraceCNS

Flag for tracing a CNS solution.

When turned on the model must, for fixed value of the objective variable, be a CNS model and must satisfy the conditions of a CNS. The model is first solved as a CNS with the initial value of the objective fixed and the objective is then minimized or maximized subject to the CNS constraints.

Trace_MinStep

Minimum step between Reinversions when using TraceCNS.

The optimization is stopped with a slow convergence message if the change in trace variable or objective is less than this tolerance between reinversions for more than two consecutive reinversions. The step is scaled by the distance from the initial value to the critical bound.

Lim_Variable

Upper bound on solution values and equation activity levels

If the value of a variable, including the objective function value and the value of slack variables, exceeds Lim_Variable then the model is considered to be unbounded and the optimization process returns the solution with the large variable flagged as unbounded. A bound cannot exceed this value.

Tol_BoxSize

Initial box size for trust region models for overall model

The new Phase 0 method solves an LP model based on a scaled and linearized version of the model with an added trust region box constraint around the initial point. Tol_BoxSize defines the size of the initial trust region box. During the optimization the trust region box is adjusted based on how well the linear approximation fits the real model.

Tol_BoxSize_Lin

Initial box size for trust region models for linear feasibility model

Similar to Tol_BoxSize but applied to the linear feasibility model. Since this model has linear constraints the default initial box size is larger.

Tol_Box_LinFac

Box size factor for linear variables applied to trust region box size

The trust region box used in the new Phase 0 method limits the change of variables so 2nd order terms will not become too large. Variables that appear linearly do not have 2nd order terms, and their initial box size is therefore larger by a factor Tol_Box_LinFac.

Parameters related to growth factors and initial values in the definitional constraints

Tol_Def_Mult

Largest growth factor allowed in the block of definitional constraints

The block of definitional constraints forms a triangular matrix. This triangular matrix can hide large accumulating growth factors that can lead to increases in the initial sum of infeasibilities and to numerical instability. Tol_Def_Mult is an upper bound on these growth factors. If it is exceeded, some critical chains of definitional constraints will be broken, leading to a larger internal model that should be numerically better behaved.

Parameters related to scaling

Tol_Jac_Min

Filter for small Jacobian elements to be ignored during scaling.

A Jacobian element is considered insignificant if it is less than Tol_Jac_Min. The value is used to select which small values are scaled up during scaling of the Jacobian. It is only used with scaling method Mtd_Scale = 0.

Tol_Scale_Min

Lower bound for scale factors computed from values and 1st derivatives.

Scale factors used to scale variables and equations are projected on the range Tol_Scale_Min to Tol_Scale_Max. The limits are used to prevent very large or very small scale factors due to pathological types of constraints. The default value for Tol_Scale_Min is 1, which means that small values are not scaled up. If you need to scale small values up towards 1 then you must define a value of Tol_Scale_Min < 1.
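The projection described here amounts to a simple clamp; a sketch, with the function name being illustrative:

```c
#include <assert.h>

/* Sketch: project a raw scale factor onto [Tol_Scale_Min, Tol_Scale_Max]
 * as described above. */
double project_scale(double raw, double tol_scale_min, double tol_scale_max)
{
    if (raw < tol_scale_min) return tol_scale_min;
    if (raw > tol_scale_max) return tol_scale_max;
    return raw;
}
```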

Tol_Scale_Max

Upper bound on scale factors.

Scale factors are projected on the interval from Tol_Scale_Min to Tol_Scale_Max. This is used to prevent very large or very small scale factors due to pathological types of constraints. The upper limit is selected such that Square(X) can be handled for X close to Lim_Variable. More nonlinear functions may not be scalable for very large variables.

Tol_Scale_Var

Lower bound on x in x*Jac used when scaling.

Rows are scaled so the largest term x*Jac is around 1. To avoid difficulties with models where Jac is very large and x very small, a lower bound of Tol_Scale_Var is applied to the x-term.

Largest Jacobian element and tolerance in 2nd derivative tests:

Tol_Feas_Max

Maximum feasibility tolerance (after scaling).

The feasibility tolerance used by CONOPT is dynamic. As long as we are far from the optimal solution and make large steps it is not necessary to compute intermediate solutions very accurately. When we approach the optimum and make smaller steps we need more accuracy. Tol_Feas_Max is the upper bound on the dynamic feasibility tolerance and Tol_Feas_Min is the lower bound. It is NOT recommended to use loose feasibility tolerances since the objective, including the sum of infeasibility objective, will be less accurate and it may prevent convergence.

Tol_Feas_Min

Minimum feasibility tolerance (after scaling).

See Tol_Feas_Max for a discussion of the dynamic feasibility tolerances used by CONOPT.

Tol_Feas_Tria

Feasibility tolerance for triangular equations.

Triangular equations are usually solved to an accuracy of Tol_Feas_Min. However, if a variable reaches a bound or if a constraint only has pre-determined variables then the feasibility tolerance can be relaxed to Tol_Feas_Tria.

Tol_Optimality

Optimality tolerance for reduced gradient when feasible.

The reduced gradient is considered zero and the solution optimal if the largest superbasic component of the reduced gradient is less than Tol_Optimality.

Tol_Opt_Infeas

Optimality tolerance for reduced gradient when infeasible.

The reduced gradient is considered zero and the solution infeasible if the largest superbasic component of the reduced gradient is less than Tol_Opt_Infeas.

Tol_Opt_LinF

Optimality tolerance when infeasible in Linear Feasibility Model

This is a special optimality tolerance used when the Linear Feasibility Model is infeasible. Since the model is linear, the default value is smaller than for nonlinear submodels.

Pivot tolerances

Tol_Piv_Abs

Absolute pivot tolerance.

During LU-factorization of the basis matrix a pivot element is considered large enough if its absolute value is larger than Tol_Piv_Abs. There is also a relative test, see Tol_Piv_Rel.

Tol_Piv_Rel

Relative pivot tolerance during basis factorizations.

During LU-factorization of the basis matrix a pivot element is considered large enough relative to other elements in the column if its absolute value is at least Tol_Piv_Rel * the largest absolute value in the column. Small values of Tol_Piv_Rel will often give a sparser basis factorization at the expense of numerical accuracy. The value used internally is therefore adjusted dynamically between the user's value and 0.9, based on various statistics collected during the solution process. Certain models derived from finite element approximations of partial differential equations can give rise to poor numerical accuracy, and a larger user value of Tol_Piv_Rel may help.

Tol_Piv_Abs_NLTr

Absolute pivot tolerance for nonlinear elements in pre-triangular equations.

The smallest pivot that can be used for nonlinear or variable Jacobian elements during the pre-triangular solve. The pivot tolerance for linear or constant Jacobian elements is Tol_Piv_Abs. The value cannot be less than Tol_Piv_Abs.

Tol_Piv_Rel_Updt

Relative pivot tolerance during basis updates.

During basis changes CONOPT attempts to use cheap updates of the LU-factors of the basis. A pivot is considered large enough relative to the alternatives in the column if its absolute value is at least Tol_Piv_Rel_Updt * the other element. Smaller values of Tol_Piv_Rel_Updt will allow sparser basis updates but may cause accumulation of larger numerical errors.

Tol_Piv_Abs_Ini

Absolute Pivot Tolerance for building initial basis.

Absolute pivot tolerance used during the search for a first logically non-singular basis. The default is fairly large to encourage a better conditioned initial basis.

Tol_Piv_Rel_Ini

Relative Pivot Tolerance for building initial basis

Relative pivot tolerance used during the search for a first logically non-singular basis.

Tol_Piv_Ratio

Relative pivot tolerance during ratio-test

During ratio-tests, the lower bound on the slope of a basic variable to potentially leave the basis is Tol_Piv_Ratio * the largest term in the computation of the tangent.

Tol_Bound

Bound filter tolerance for solution values close to a bound.

A variable is considered to be at a bound if its distance from the bound is less than Tol_Bound * Max(1,ABS(Bound)). The tolerance is used to build the initial bases and is used to flag variables during output.
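The bound test amounts to the following; a sketch, with the function name being illustrative:

```c
#include <assert.h>
#include <math.h>

/* Sketch of the test above: x is "at" a bound if its distance from the
 * bound is below Tol_Bound * Max(1, ABS(Bound)). */
int at_bound(double x, double bound, double tol_bound)
{
    double scale = fabs(bound) > 1.0 ? fabs(bound) : 1.0; /* Max(1, ABS(Bound)) */
    return fabs(x - bound) < tol_bound * scale;
}
```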

Tol_Fixed

Tolerance for defining variables as fixed based on initial or derived bounds.

A variable is considered fixed if the distance between the bounds is less than Tol_Fixed * Max(1,Abs(Bound)). The tolerance is used both on the user's original bounds and on the derived bounds that the preprocessor implies from the constraints of the model.

Accuracies for linesearch and updates

Tol_Linesearch

Accuracy of One-dimensional search.

The one-dimensional search is stopped if the expected decrease in the objective, estimated from a quadratic approximation, is less than Tol_Linesearch times the decrease achieved so far in this one-dimensional search.

Tol_Obj_Acc

Relative accuracy of the objective function.

It is assumed that the objective function can be computed to an accuracy of Tol_Obj_Acc * max( 1, abs(Objective) ). Smaller changes in objective are considered to be caused by round-off errors.
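A sketch of the resulting noise filter (illustrative only; the default value shown is a placeholder):

```python
def significant_change(obj_old, obj_new, tol_obj_acc=3.0e-13):
    # Changes below Tol_Obj_Acc * max(1, |objective|) are treated as
    # round-off noise rather than real progress.
    # The default value here is a placeholder.
    noise = tol_obj_acc * max(1.0, abs(obj_new))
    return abs(obj_new - obj_old) > noise
```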

Tol_Obj_Change

Limit for relative change in objective for well-behaved iterations.

The change in objective in a well-behaved iteration is considered small and the iteration counts as slow progress if the change is less than Tol_Obj_Change * Max(1,Abs(Objective)). See also Lim_SlowPrg.
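The slow-progress test can be sketched as follows (illustrative only; the default value is a placeholder, and Lim_SlowPrg governs how many such iterations are tolerated):

```python
def is_slow_progress(obj_prev, obj_new, tol_obj_change=3.0e-12):
    # A well-behaved iteration counts as slow progress if the change in
    # the objective is below Tol_Obj_Change * max(1, |objective|).
    # The default value here is a placeholder.
    return abs(obj_new - obj_prev) < tol_obj_change * max(1.0, abs(obj_new))
```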

Tol_Zero

Zero filter for Jacobian elements and inversion results.

Contains the smallest absolute value that an intermediate result can have. If it is smaller, it is set to zero. It must be smaller than Tol_Piv_Abs / 10.

Lim_Time

Time Limit. Overwrites the GAMS Reslim option.

The upper bound on the total number of seconds that can be used in the execution phase. The time limit is only checked once per iteration. The default value is 10000. Lim_Time is overwritten by Reslim when called from GAMS.
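When CONOPT is called from GAMS, limits and tolerances such as Lim_Time are typically supplied through a solver option file using the usual name-value format; the file name and value below are illustrative examples only:

```
Lim_Time = 3600
```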

Lim_Hess_Est

Upper bound on second order terms.

The function and derivative debugger (see Lim_Dbg_1Drv) tests if derivatives computed using the modeler's routine are sufficiently close to the values computed using finite differences. The term for the acceptable difference includes a second order term and uses Lim_Hess_Est as an estimate of the upper bound on second order derivatives in the model. Larger Lim_Hess_Est values will allow larger deviations between the user-defined derivatives and the numerically computed derivatives.

HessianMemFac

Memory factor for Hessian generation: Skip if "number of Hessian elements > (number of Nonlinear Jacobian elements)*HessianMemFac", 0 means unlimited.

The Hessian of the Lagrangian is considered too dense, and therefore too expensive to evaluate and use, and it is not passed on to CONOPT, if the number of nonzero elements in the Hessian of the Lagrangian is greater than the number of nonlinear Jacobian elements multiplied by HessianMemFac. See also Flg_Hessian. If HessianMemFac = 0.0 (the default value) then there is no limit on the number of Hessian elements.
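The skip condition can be sketched as follows (illustrative only, not CONOPT's code):

```python
def use_hessian(num_hessian_nz, num_nl_jacobian_nz, hessian_mem_fac=0.0):
    # Illustrative sketch: the Hessian is used only if its nonzero count
    # does not exceed (nonlinear Jacobian nonzeros) * HessianMemFac.
    # HessianMemFac = 0.0 means no limit.
    if hessian_mem_fac == 0.0:
        return True
    return num_hessian_nz <= num_nl_jacobian_nz * hessian_mem_fac
```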

Flg_Hessian

Flag for computing and using 2nd derivatives as Hessian of Lagrangian.

If turned on, compute the structure of the Hessian of the Lagrangian and make it available to CONOPT. The default is usually on, but it will be turned off if the model has external equations (defined with =X=) or cone constraints (defined with =C=) or if the Hessian becomes too dense. See also Flg_2DDir and HessianMemFac.

Flg_2DDir

Flag for computing and using directional 2nd derivatives.

If turned on, make directional second derivatives (the Hessian matrix times a direction vector) available to CONOPT. The default is on, but it will be turned off if the model has external equations (defined with =X=) and the user has not provided directional second derivatives. If both the Hessian of the Lagrangian (see Flg_Hessian) and directional second derivatives are available, CONOPT will use both: directional second derivatives are used when the expected number of iterations in the SQP sub-solver is low, and the Hessian is used when the expected number of iterations is large.

Flg_Interv

Flag for using intervals in the Preprocessor

If turned on (default), CONOPT will attempt to use interval evaluations in the preprocessor to determine if functions are monotone or if intervals for some of the variables can be excluded as infeasible.

Flg_Prep

Flag for using the Preprocessor

If turned on (default), CONOPT will use its preprocessor to try to determine pre- and post-triangular components of the model and find definitional constraints.

Flg_Range

Flag for identifying sets of ranged constraints

If turned on (default), CONOPT will, as part of its preprocessor, look for sets of parallel linear constraints and turn each set into a single ranged constraint. There is currently a potential problem with the duals on these constraints, and if duals are important, ranges can be turned off with this flag.