CONOPT
The CONOPT algorithm

CONOPT is a solver for large-scale nonlinear optimization (NLP) problems. It is a feasible-path solver based on the proven GRG method with many newer extensions, designed to be efficient and reliable for a broad class of models. The underlying GRG method provides reliability and speed for models with a large degree of nonlinearity, i.e. difficult models, and CONOPT is often preferable for very nonlinear models and for models where feasibility is difficult to achieve. Extensions to the GRG method include preprocessing, a special phase 0, linear-mode iterations, sequential linear programming, and sequential quadratic programming. These extensions make CONOPT efficient on easier and mildly nonlinear models as well. The multi-method architecture, combined with built-in logic for dynamic algorithm selection, makes CONOPT a strong all-round NLP solver.

All components of CONOPT have been designed for large and sparse models. Models with over 10,000 constraints are routinely solved, and specialized models with up to 1 million constraints have also been solved with CONOPT. The limiting factor is difficult to define: it is a combination of the number of constraints or variables and the number of superbasic variables, a measure of the degrees of freedom around the optimal point. Models with over 500 superbasic variables can sometimes be slow.

Assumptions

CONOPT is designed to solve smooth, continuous nonlinear optimization problems: all variables must be continuous, and all constraints must be smooth (i.e. have well-defined first derivatives). The Jacobian (the matrix of first derivatives) is expected to be sparse, which allows for efficient numerical handling. Under these assumptions, CONOPT aims to find a local optimum that satisfies the Karush-Kuhn-Tucker (KKT) conditions.
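CONOPT itself is accessed through modeling systems rather than called directly; purely as an illustration of the problem class it targets, the following sketch solves a small smooth NLP with SciPy's SLSQP, a different local solver. The specific objective and constraint are invented for the example.

```python
from scipy.optimize import minimize

# A smooth, continuous NLP of the kind described above:
# minimize (x-1)^2 + (y-2)^2  subject to  x^2 + y^2 <= 4
def objective(z):
    x, y = z
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

# SLSQP "ineq" constraints require fun(z) >= 0.
cons = [{"type": "ineq", "fun": lambda z: 4.0 - z[0] ** 2 - z[1] ** 2}]

res = minimize(objective, x0=[0.0, 0.0], method="SLSQP", constraints=cons)
print(res.x)  # a KKT point; here it lies on the circle boundary
```

Because the unconstrained minimum (1, 2) is infeasible, the local optimum found satisfies the constraint with equality, which is exactly the kind of KKT point a feasible-path solver converges to.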

To ensure reliable and efficient performance, we assume:

  • The nonlinear functions and their derivatives are computed with high numerical accuracy.
  • Models are generally expected to be reasonably scaled. CONOPT includes an automatic scaling feature, but in larger models, good user-defined scaling often leads to better results.
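To make the scaling point concrete, here is a sketch of iterative geometric-mean scaling, one common automatic scheme (not necessarily the exact one CONOPT uses): each row and column of the Jacobian is divided by the geometric mean of its largest and smallest nonzero absolute entry, pulling all entries toward 1.

```python
import numpy as np

def geomean_scale(J, passes=3):
    """Alternately scale rows and columns by the geometric mean of their
    extreme nonzero magnitudes; returns the scaled matrix and the factors."""
    J = J.astype(float).copy()
    r = np.ones(J.shape[0])   # accumulated row scale factors
    c = np.ones(J.shape[1])   # accumulated column scale factors
    for _ in range(passes):
        for i in range(J.shape[0]):            # row pass
            nz = np.abs(J[i][J[i] != 0])
            if nz.size:
                s = np.sqrt(nz.max() * nz.min())
                J[i] /= s
                r[i] *= s
        for j in range(J.shape[1]):            # column pass
            nz = np.abs(J[:, j][J[:, j] != 0])
            if nz.size:
                s = np.sqrt(nz.max() * nz.min())
                J[:, j] /= s
                c[j] *= s
    return J, r, c

# A badly scaled Jacobian: entries span nearly ten orders of magnitude.
J = np.array([[1.0e6, 2.0e-4],
              [3.0e2, 5.0e-3]])
scaled, r, c = geomean_scale(J)
print(np.abs(scaled).max() / np.abs(scaled).min())  # spread shrinks by many orders of magnitude
```

User-defined scaling achieves the same effect at the modeling level, e.g. by choosing units so that variable values and Jacobian entries stay within a few orders of magnitude of 1.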

Note that second-order derivatives do not need to be provided, but if they are available they will be used internally by some solver components. For models with many degrees of freedom, providing second derivatives can significantly enhance performance.
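The effect of supplying second derivatives can be illustrated with SciPy's trust-constr method (again an analogue, not CONOPT's own interface): with an analytic Hessian the solver uses exact curvature information; without one it falls back to a quasi-Newton approximation built from gradient differences.

```python
import numpy as np
from scipy.optimize import BFGS, minimize, rosen, rosen_der, rosen_hess

x0 = np.full(5, 1.3)

# With exact second derivatives supplied:
exact = minimize(rosen, x0, method="trust-constr",
                 jac=rosen_der, hess=rosen_hess)

# Without: curvature is approximated by BFGS updates instead.
approx = minimize(rosen, x0, method="trust-constr",
                  jac=rosen_der, hess=BFGS())

print(exact.x)   # both runs reach the known minimum at (1, ..., 1)
print(approx.x)
```

Both variants converge here; on larger problems with many degrees of freedom, exact second derivatives typically reduce the iteration count, which mirrors the benefit described above.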

Considerations

While CONOPT is a powerful tool for a wide range of problems, the following should be kept in mind:

  • Discrete variables are not handled directly by CONOPT. However, modeling systems like AIMMS, AMPL, GAMS, and the LINDO API offer hybrid methods (e.g., Branch & Bound, Outer Approximation) that wrap around CONOPT to support such problems.
  • Non-differentiable functions can be submitted, but since CONOPT is optimized for smooth models, performance and convergence reliability may be reduced in these cases.
  • While CONOPT can solve dense models, it is primarily optimized for sparse structures. Solve times may be slightly longer for dense formulations compared to solvers specialized in dense linear algebra.
  • Models that rely on noisy or approximate function evaluations (e.g., functions involving iterative solvers or numerical integration) may present challenges. In particular, numerical derivatives (e.g., via finite differences) may not offer sufficient accuracy for stable solver behavior.
  • Like most local solvers, CONOPT does not guarantee a global optimum. It will converge to a local solution that satisfies the KKT conditions. For models with multiple local optima or complex infeasibility landscapes, results may vary, and user insight is essential in interpreting the solution.
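The last point is easy to demonstrate with any local solver; the toy function below is invented for the example and uses SciPy rather than CONOPT. A smooth objective with two minima yields two different KKT points depending on the starting point.

```python
from scipy.optimize import minimize

# f has local minima near x = -1 and x = +1; the tilt term 0.3*x
# makes the left minimum the global one.
f = lambda x: (x[0] ** 2 - 1.0) ** 2 + 0.3 * x[0]

left = minimize(f, x0=[-2.0])   # converges to the minimum near x = -1
right = minimize(f, x0=[2.0])   # converges to the minimum near x = +1

print(left.x, left.fun)
print(right.x, right.fun)       # a valid KKT point, but not the global optimum
```

Both answers satisfy the KKT conditions, so neither is "wrong" from the solver's perspective; choosing good starting points, or running a multistart strategy, is the usual remedy when multiple local optima are expected.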