interalg (interval algorithm) is a free (even for commercial purposes) solver with user-specifiable accuracy (|f - f*| < user_defined_fTol) and the ability to handle general constraints (linear / nonlinear, equality / inequality). It is based on some novel (as yet unpublished) ideas invented by Dmitrey, is written in Python and NumPy, and was initially released in March 2011, for
- searching for a (preferably global) minimum/maximum of nonlinear problems: NLP, NSP, GLP (example)
- searching for the global extremum of nonlinear problems with some discrete variables (MINLP)
- searching for a full cover of the Pareto front of a MOP (multi-objective optimization problem), subject to required tolerances on the objective functions, (optionally) using parallel computations (see the MOP page for examples)
- getting a solution (or ALL solutions) of a nonlinear equation / system of equations (SNLE), see some examples
- handling categorical variables and general logical constraints (example); thus interalg has become a LogMIP competitor for solving GDP
- numerical integration, see the IP page for examples
- the ordinary differential equation dy/dt = f(t) (ODE)
- the maximum stable set of a graph (STAB)
- future plans include the general ODE dy/dt = f(y,t), linear / nonlinear and possibly multiobjective TSP and other nonlinear NP-hard problems, and some speedups and parallel computations for problems other than MOP
A successful completion guarantees the specified accuracy, assuming rounding errors are negligible. Thus uncertainties in the result due to roundoff errors are not accounted for yet (this is in our future plans, as soon as the FuncDesigner interval analysis engine is capable of taking them into account, as is done in some other interval analysis software).
interalg requires finite box bounds on the variables (lb_i <= x_i <= ub_i), but the bounds can be very large (e.g. +/- 10^15).
Since the declared problems are NP-hard, obtaining a solution with the required accuracy can sometimes take too long; on the other hand, some problems with hundreds of variables have been solved within several minutes on a slow notebook (here's an example with 100 variables that takes less than 185 seconds on an Intel Atom 1.6 GHz notebook, with peak memory consumption of 130 MB).
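interalg's actual algorithm is unpublished; as a rough illustration of how an interval method can certify |f - f*| < fTol on a box-bounded problem, here is a toy best-first branch-and-bound in plain Python. This is not interalg's code or API: the objective f(x) = (x**2 - 2)**2 and all names are invented for the example.

```python
import heapq

def isqr(lo, hi):
    # interval extension of t -> t**2
    lo2 = 0.0 if lo <= 0.0 <= hi else min(lo*lo, hi*hi)
    return lo2, max(lo*lo, hi*hi)

def F(lo, hi):
    # interval enclosure of f(x) = (x**2 - 2)**2 on [lo, hi]
    a, b = isqr(lo, hi)
    return isqr(a - 2.0, b - 2.0)

def f(x):
    return (x*x - 2.0)**2

def minimize(lo, hi, fTol):
    # best-first interval branch-and-bound;
    # returns (lb, ub) with lb <= f* <= ub and ub - lb < fTol
    heap = [(F(lo, hi)[0], lo, hi)]
    ub = min(f(lo), f(hi), f(0.5*(lo + hi)))  # incumbent upper bound
    while True:
        lb, a, b = heapq.heappop(heap)  # box with smallest lower bound
        if ub - lb < fTol:
            return lb, ub
        m = 0.5*(a + b)
        ub = min(ub, f(m))              # improve incumbent at the midpoint
        for c, d in ((a, m), (m, b)):
            child_lb = F(c, d)[0]
            if child_lb <= ub:          # prune boxes that cannot contain f*
                heapq.heappush(heap, (child_lb, c, d))

lb, ub = minimize(-3.0, 3.0, 1e-6)  # f* = 0 at x = +/- sqrt(2)
```

On termination the returned pair encloses the global minimum within fTol, which is the kind of guarantee described above (here without accounting for roundoff).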
There are many excellent approximate approaches for seeking the global extremum of nonlinear functions (GLP): differential evolution, annealing, ant colony, particle swarm, multistart local optimization, etc. Unfortunately, almost none of them can guarantee that you haven't stopped too far from the desired solution, and sometimes that matters.
There is much less software for finding an exact global extremum within a required tolerance. First of all I would mention:
Their number is much smaller than that of inexact solvers because their speed and memory requirements make them impractical for moderately large problems, which are generally NP-hard (for example, even for a quadratic objective with a single negative eigenvalue).
interalg has been written as an absolutely free alternative to the solvers mentioned above.
Unlike BARON (which cannot handle even sin and cos), interalg is capable of handling a larger set of functions (see the table below).
Unlike LGO and BARON, interalg doesn't require functions to be Lipschitz-continuous (e.g. sqrt(x) is not Lipschitz-continuous if its domain includes zero). Moreover, even some discontinuous functions can be handled (floor and ceil have been tried).
Numerical tests show that interalg greatly outperforms Direct and intsolver, even though the current Python version has no JIT, unlike MATLAB (though PyPy developers say full NumPy support is several months away). A small number of comparisons with BARON has been performed using the NEOS server (I don't have their license or enough time, they require input in GAMS or AMPL, which I'm not skilled in, and they use other default tolerances), so I can't say for sure, but it seems that on sufficiently difficult tasks (where the Lipschitz constant is huge or absent altogether, e.g. log(x), sqrt(x), x**alpha with alpha < 1, on an interval (epsilon, some_value)) interalg outperforms even BARON. You're welcome to publish your own benchmark results in our forum.
It seems the ideas implemented in interalg can also be useful for solving ODEs and PDEs with guaranteed precision. This is in our FuturePlans.
Some elements mentioned in the documentation chapter are available since OO v. 0.50.
This is a table of the functions the FuncDesigner interval analysis engine is capable of handling, mostly for the interalg solver (which uses them with vectorized computations). Constant approximation evaluates bounds Lb, Ub such that Lb <= f(x) <= Ub for all x from the domain; linear approximation (not available in the FuncDesigner user API yet, currently used in the FD kernel only) yields linear functions lin_l(x), lin_u(x) such that lin_l(x) <= f(x) <= lin_u(x) for all x from the domain; also, some elements of quadratic approximation are intended to be implemented in the future.
If a function with constant-only approximation is invoked on the result of a linear approximation, the latter is first rendered from linear bounds into constant bounds. E.g. sin(b**2 + a + 2*b + 3*(a-b)) will render b**2 + a + 2*b + 3*(a-b) into b**2 + 4*a - b and then invoke sin on the lower and upper bounds of the obtained function (while a linear approximation for sin is unimplemented). If a combination of constant and linear approximation types is involved, e.g. sin(a) + a + 2*b + b**4, then sin(a) is rendered into a pair (lb, ub) and added as a constant to the linear approximation result of a + 2*b + b**4.
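As an illustration of the rendering step above, here is a plain-Python sketch (not FuncDesigner code; the box bounds are chosen arbitrarily so that the inner interval stays where sin is monotone):

```python
import math

# Illustrative box domain: a in [0, 0.1], b in [0, 0.2]
a_lo, a_hi = 0.0, 0.1
b_lo, b_hi = 0.0, 0.2

# Constant bounds of g = b**2 + 4*a - b
# (b**2 + a + 2*b + 3*(a - b) rendered into b**2 + 4*a - b)
sq_lo = 0.0 if b_lo <= 0.0 <= b_hi else min(b_lo**2, b_hi**2)
sq_hi = max(b_lo**2, b_hi**2)
g_lo = sq_lo + 4*a_lo - b_hi     # lower bound: add term-wise minima
g_hi = sq_hi + 4*a_hi - b_lo     # upper bound: add term-wise maxima

# [g_lo, g_hi] = [-0.2, 0.44] lies inside [-pi/2, pi/2],
# where sin is increasing, so sin can be applied to the endpoints:
f_lo, f_hi = math.sin(g_lo), math.sin(g_hi)
```

For an inner interval that contains an extremum of sin, the endpoint rule above would be wrong; a real implementation has to check for contained extrema first.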
Linear approximation is helpful for:
- integration problems (IP) (especially for multidimensional integrals)
- optimization problems where interval analysis is of poor quality because of repeated variables, e.g. a + 1/a + b + b**2
- ODE dy/dt = f(t) (Future plans include general ODE systems dy/dt = f(y,t))
(the speed difference may sometimes reach many orders of magnitude)
Future plans also include: some types of N-dimensional splines, erf, cotan, and maybe others (some monotone or unimodal functions, or functions with all local extrema known, could be connected on demand).
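A toy sketch (plain Python, not FuncDesigner code) of why linear enclosures help integration: for a convex integrand, the tangent at the cell midpoint bounds it from below and the secant through the endpoints from above, giving an enclosure width of O(h^2) per unit length versus O(h) for constant bounds. The integrand x**2 and the cell count are invented for the example.

```python
def f(x):
    return x * x              # convex and increasing on [0, 1]

n = 100
h = 1.0 / n
const_lo = const_hi = 0.0     # integral bounds from constant enclosures
lin_lo = lin_hi = 0.0         # integral bounds from linear enclosures
for i in range(n):
    a, b, m = i*h, (i + 1)*h, (i + 0.5)*h
    # constant enclosure: f(a) <= f(x) <= f(b) on the cell (f increasing)
    const_lo += h * f(a)
    const_hi += h * f(b)
    # linear enclosure for convex f: midpoint tangent below, secant above
    lin_lo += h * f(m)                 # integral of the midpoint tangent
    lin_hi += h * (f(a) + f(b)) / 2.0  # integral of the secant
```

With n = 100 cells the constant enclosure of the integral has width 1/n = 0.01, while the linear enclosure has width h**2/4 = 2.5e-5; both contain the true value 1/3.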
The mark (0) means interval analysis can be improved for the function involved on a domain with zero strictly inside: lb < 0 < ub.
|Function||Constant approximation||Linear approximation||Quadratic approximation|
|+, -||+||+||+ since v 0.51|
|*, / scalar||+||+||+ since v 0.51|
|*, / oofun||+||+ (0)||(since v 0.51) for some simple cases (usually 1 variable)|
|pow (x**const)||+||+ since v 0.51||(since v 0.51) for some simple cases with const=2 or 0 < const < 1 (usually 1 variable)|
|rpow (const ** x)||+||+||(since v 0.51) for some simple cases (usually 1 variable)|
|sin||+||+ partially (since v 0.51: improved)||-|
|cos||+||+ partially (since v 0.51: improved); currently implemented as IA for sin(arg+pi/2), which can introduce roundoff effects for very small or very large numbers||-|
|arcsin||+||+ since v 0.51||-|
|arccos||+||+ since v 0.51||-|
|arctan||+||+ since v 0.51||-|
|sinh||+||+ since v 0.51||-|
|exp||+||+||since v 0.51 for some simple cases (usually 1 variable)|
|sqrt||+||+||since v 0.51 for some simple cases (usually 1 variable)|
|log, log2, log10||+||+||since v 0.51 for some simple cases (usually 1 variable)|
|sum||+||+||+ since v 0.51|
|arcsinh||+||+ since v 0.51||-|
|tanh||+||+ since v 0.51||-|
|arctanh||+||+ since v 0.51||-|
|1st-order splines||+||+ for convex or concave||-|
|tan||+ for range (-pi/2, pi/2)||+ since v 0.51 for range (-pi/2, pi/2)||-|
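The dependency problem behind the (0) mark can be seen with naive interval multiplication (a plain-Python sketch, not FuncDesigner code): when the same variable occurs in both factors and its interval straddles zero, the enclosure is wider than the true range.

```python
def imul(x, y):
    # naive interval multiplication: assumes x and y vary independently
    products = (x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1])
    return min(products), max(products)

x = (-1.0, 2.0)
naive = imul(x, x)   # both factors treated as independent copies of x
exact = (0.0, 4.0)   # true range of x*x on [-1, 2]
```

The naive product gives (-2.0, 4.0), so its lower bound is strictly worse than the exact one; a dedicated rule for x*x (i.e. x**2) removes this overestimation.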
|Name||Default value||Valid for problems||Description|
|maxActiveNodes||150||All||You definitely should try modifying this parameter to speed up calculations. It is too hardware-dependent to be adjusted automatically (it depends on the speed and size of your CPU cache levels 1, 2, 3 and on RAM speed), and may also depend on the size of the problem involved. You can set it from 1 to inf (recommended values to try: 5, 15, 25, 50, 100, 250, 500, 1000, 2500, 5000, ...). I have some ideas on how to set a more exact automatic default, but haven't had time to try them.|
|maxNodes||150000||NLP, NSP, GLP, SNLE, MINLP, future plans: IP||If the number of nodes exceeds this value (which may happen with large tasks), the set of nodes will be truncated and global optimality of the solution will no longer be guaranteed; (unestablished) you can observe r.extras['isRequiredPrecisionReached'] to check whether the global optimum subject to the required fTol has been found. Fortunately, sometimes even after truncation interalg can yield a solution with the required precision, and moreover, sometimes the solver can determine that the required fTol has surely been achieved. If p.iprint >= 0, a relevant message will be shown; you may also be interested in (unestablished) r.extras['extremumBounds'], a pair (extremum_inf, extremum_sup) such that extremum_inf < f* < extremum_sup (the true value may fall outside this region only due to rounding errors for rather noisy functions). Maybe in the future maxNodes will be replaced by a maximum amount of memory the solver is allowed to occupy.|
|maxSolutions||1||SNLE, future plans: NLP, NSP, GLP||Maximum number of stored solutions (currently valid for systems of nonlinear equations only). You can use positive integers (as well as real numbers like "1e5", which will be converted to integers), numpy.inf, or the string value 'all'.|
|(unestablished) dataHandling||'auto'||NLP, NSP, GLP, MINLP, SNLE||Not important for the SNLE "all solutions" mode. Valid (string) values: 'sorted', 'raw'. If the objective function or a constraint can become very large inside the involved box domain lb_i <= x_i <= ub_i (for example, 1/a inside the domain [10^-6, 1]), 'sorted' usually works better; otherwise 'raw' usually works better. This parameter is intended to be removed in future interalg versions, when its engine is essentially reworked. For MOP, only 'raw' mode works.|
|(unestablished) sigma||0.1||MOP||Allowed range [0...1]; increasing this parameter yields fewer puny solutions, but slows down the calculation (possibly considerably)|
|fStart||-||NLP, NSP, GLP, MINLP||A good estimate of an upper bound on the objective value at the minimum, i.e. the lowest known Value such that Value >= f* (for sure). It can speed up the algorithm and reduce peak memory. If you know a point with objective value equal to fStart, don't use this parameter; pass that point as the start point instead. Also, this parameter hasn't been properly tested with maximization problems. Note that fStart is a solver (not prob) parameter: use oosolver('interalg', fStart = some_val) or p.minimize(..., fStart = some_val), but not p.fStart = some_val or p = NLP(..., fStart = some_val).|
Parallel computations and other speedups
- Using a good start point can substantially speed up interalg and decrease peak memory usage
- For MOP you can change the parameter p.nProc (default: 1) to perform some computations in parallel
- The algorithm itself allows very effective parallelization, but this hasn't been done yet (it's difficult to implement in the current state of the Python language). Also, you may speed up your computations by linking NumPy against AMD ACML, Intel MKL, or possibly other BLAS/LAPACK implementations with parallel calculations
- You could somehow estimate the objective function value at the optimum and run several interalg instances with fStart, fStart - fTol, fStart - 2*fTol, etc.
- interalg works with PyPy since v 0.5304, but it hasn't been tuned with jitviewer yet, so with PyPy it usually runs 1.5-2 times slower than with CPython.
- Sometimes, for problems with constraints other than box bounds, other solvers may yield a value F* that differs from the interalg f* by more than the required fTol. Pay attention to the obtained residual r.rf for that solver: if it is non-zero, the gap is probably due to a small constraint violation within the allowed tolerance for a constraint. interalg probably cut the piece of space containing that point because it was detected as an infeasible region.
- The interalg description and algorithm will probably be published in an upcoming issue of a Ukrainian journal, after which it will become available on the webpage
- You may encounter difficulties with functions like f1/f2 where the interval of f2 contains zero, or, even worse, where the intervals of both f1 and f2 contain zero
- You should try to avoid repeated occurrences of variables (to make interval calculations more precise):
- using a*(b+c) is better than a*b + a*c
- using 1 + 1/a is better than (a+1)/a, etc.
- Presolving the problem with an inexact solver to get a good start point can be very helpful. Depending on the problem type, you could pay attention to the following GLP solvers: mlsl for smooth, differentiable multiextremal problems; pswarm, de, asa (or interalg with a small maxNodes, like 10-100) for any others.
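The effect of the variable-rewriting advice above can be checked with naive interval arithmetic (a plain-Python sketch, not FuncDesigner code): over a = [1, 2], evaluating (a+1)/a counts a twice and overestimates the range, while the equivalent 1 + 1/a mentions a once and gives the exact range.

```python
def idiv(x, y):
    # naive interval division; assumes independence and 0 not in y
    quotients = (x[0]/y[0], x[0]/y[1], x[1]/y[0], x[1]/y[1])
    return min(quotients), max(quotients)

a = (1.0, 2.0)
# (a+1)/a with a counted twice: the enclosure is too wide
naive = idiv((a[0] + 1.0, a[1] + 1.0), a)
# 1 + 1/a mentions a once: the enclosure is exact
rewritten = (1.0 + 1.0/a[1], 1.0 + 1.0/a[0])
```

The naive form yields (1.0, 3.0), while the rewritten form yields the true range (1.5, 2.0).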
interalg benchmark with intsolver, Direct, commercial BARON
- See the benchmark
- interalg is relatively new software (initially released in March 2011) with ~1500 lines of Python source code already (which would be several times more in C or Fortran), plus more relatively new code from the FuncDesigner interval analysis engine; thus you may encounter a bug (especially for problems with general constraints). Work on revealing and fixing bugs is ongoing, along with writing new functionality and speedups.
- If you've obtained another solution from an alternative solver and f_alt is better than f_interalg by more than fTol, it may be due to a small constraint residual (within the required contol), while interalg may have cut the region containing the point x_alt with value f_alt. Also, if your problem has discrete variables, some solvers may sometimes yield a better value due to a small discrtol violation (the difference between the solution and the required discrete domain value), while interalg yields only exact values from the domain. If the problem is only box-bounded and x_alt is strictly inside the domain (lb <= x <= ub), then it's an interalg bug (aside from issues with roundoff effects), and you should submit a bug report to the openopt forum.