An Algorithmic Framework for MINLP with Separable Non-convexity

Global optimization algorithms, e.g., spatial branch-and-bound approaches like those implemented in codes such as BARON and COUENNE, have had substantial success in tackling complicated, but generally small-scale, non-convex MINLPs (i.e., mixed-integer nonlinear programs having non-convex continuous relaxations). Because they are aimed at a rather general class of problems, the possibility remains that larger instances from a simpler class may be amenable to a simpler approach.

We focus on MINLPs for which the non-convexity in the objective and constraint functions is manifested as the sum of non-convex univariate functions. Many problems are already in such a form, or can be brought into such a form via simple substitutions. In fact, the first step in spatial branch-and-bound is to bring problems into nearly such a form; for our purposes, we shift that burden back to the modeler.

We have developed a simple algorithm, implemented at the level of a modeling language (in our case AMPL), to attack such separable problems. First, we identify subintervals of convexity and concavity for the univariate functions using external calls to MATLAB. With such an identification at hand, we develop a convex MINLP relaxation of the problem (i.e., a mixed-integer nonlinear program having a convex continuous relaxation). Our convex MINLP relaxation differs from those typically employed in spatial branch-and-bound: rather than relaxing the graph of a univariate function on an interval to an enclosing polygon, we work on each subinterval of convexity and concavity separately, using linear relaxation only on the “concave side” of each function on the subintervals. On a subinterval where a function is concave, the secant through its endpoints is the function’s convex envelope there, so replacing the function by that secant yields a valid convex relaxation. The subintervals are glued together using binary variables.

Next, we employ ideas of spatial branch-and-bound, but rather than branching, we repeatedly refine our convex MINLP relaxation by modifying it at the modeling level. We solve our convex MINLP relaxation with the code BONMIN, used as a black-box convex MINLP solver, to obtain lower bounds on the global minimum. Then, fixing the integer variables in the original non-convex MINLP and locally solving the associated non-convex NLP restriction with the code IPOPT, we obtain an upper bound on the global minimum. We use the solutions found by BONMIN and IPOPT to guide our choice of further refinements in a way that guarantees convergence overall. Note that our proposed procedure is an exact algorithm, not just a heuristic.
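To make the construction concrete, here is a minimal Python sketch of the two building blocks just described: detecting the subintervals of convexity and concavity of a univariate function, and relaxing the function only on its concave pieces. This is our illustration, not the paper's implementation, which works at the AMPL modeling level and detects inflection points via external MATLAB calls; here that detection is approximated by densely sampling a user-supplied second derivative, and the function names (convexity_subintervals, convex_underestimator) are hypothetical. The binary "gluing" of the pieces and the refinement loop belong to the MINLP model itself and are only noted in comments.

```python
import numpy as np

def convexity_subintervals(f2, lo, hi, n=1000):
    """Partition [lo, hi] into maximal subintervals on which the sampled
    second derivative f2 keeps one sign (convex where f2 >= 0). This stands
    in, by dense sampling, for the paper's external MATLAB detection."""
    xs = np.linspace(lo, hi, n + 1)
    mids = 0.5 * (xs[:-1] + xs[1:])     # sample away from the endpoints
    convex = f2(mids) >= 0.0            # convexity flag per grid cell
    pieces, start = [], lo
    for i in range(1, n):
        if convex[i] != convex[i - 1]:  # sign change: approximate inflection
            pieces.append((start, xs[i], bool(convex[i - 1])))
            start = xs[i]
    pieces.append((start, hi, bool(convex[-1])))
    return pieces                       # list of (a, b, is_convex)

def convex_underestimator(f, a, b, is_convex):
    """On a convex piece keep f itself; on a concave piece replace f by the
    secant through (a, f(a)) and (b, f(b)), which is the convex envelope of
    a concave function on an interval -- the 'concave side' relaxation."""
    if is_convex:
        return f
    fa, slope = f(a), (f(b) - f(a)) / (b - a)
    return lambda x: fa + slope * (x - a)

if __name__ == "__main__":
    # Toy example: sin is concave on [0, pi] and convex on [pi, 2*pi].
    f, f2 = np.sin, lambda x: -np.sin(x)
    for a, b, is_cvx in convexity_subintervals(f2, 0.0, 2.0 * np.pi):
        g = convex_underestimator(f, a, b, is_cvx)
        print(f"[{a:.3f}, {b:.3f}]: {'kept convex' if is_cvx else 'secant'}"
              f", midpoint gap = {f(0.5 * (a + b)) - g(0.5 * (a + b)):.3f}")
    # In the full relaxation, one binary variable per piece "glues" the pieces
    # together (exactly one active), and refinement adds breakpoints on the
    # concave pieces near incumbent solutions instead of spatial branching.
```

On the concave piece [0, pi], the sketch reports a midpoint gap of 1.0 (the secant through (0, 0) and (pi, 0) is the zero line, while sin reaches 1), which is exactly the kind of gap the refinement step shrinks by inserting further breakpoints.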

We have had substantial success in our preliminary computational experiments. In particular, very few major iterations occur, so most of the time is spent solving a small number of convex MINLPs. An advantage of our approach is that it can be implemented easily from existing software components, and that further advances in technology for convex MINLP will immediately translate into improvements for our approach.

By: Claudia D'Ambrosio; Jon Lee; Andreas Wächter

Published in: IBM Research Report RC24810, 2009

LIMITED DISTRIBUTION NOTICE:

This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).