# GLP

### From OpenOpt

**Global Problems (GLP):** search for the global optimum of

f(x) → min *(global)*

*subjected to*

lb ≤ x ≤ ub, Ax ≤ b, Aeq x = beq, c(x) ≤ 0, h(x) = 0

**Note!** GLP solvers are slower than NLP/NSP ones and cannot handle problems with a large number of variables (typical sizes are 1..10..100). If you know that your problem has only one local (hence global) optimum, or you are searching for a local one, you'd better use NLP/NSP solvers: they will solve it much faster.
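The local-vs-global distinction above can be illustrated with a toy example (not OpenOpt code; the function and starting point are invented for the illustration). A plain gradient descent gets trapped in the nearest local minimum of a multi-extremum function, while even a crude scan over the finite box finds the better one:

```python
# Toy illustration (not OpenOpt code): local descent vs. a crude global scan
# on a function with two minima, f(x) = (x^2 - 1)^2 + 0.3*x.

def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x

def df(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def gradient_descent(x0, step=0.01, iters=5000):
    x = x0
    for _ in range(iters):
        x -= step * df(x)
    return x

# A local method started at x0 = 0.8 converges to the local minimum near x ~ 0.96
x_local = gradient_descent(0.8)

# A crude global scan over the box [-2, 2] (GLP solvers need such finite bounds)
xs = [-2.0 + 4.0 * i / 100000 for i in range(100001)]
x_global = min(xs, key=f)

# f(x_local) ~ 0.29 while f(x_global) ~ -0.31: the local result is much worse
```

This is exactly the case the note warns about: a local NLP solver is fast but only trustworthy when local = global.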

- OpenOpt **GLP example**
- FuncDesigner GLP examples:
  - basic example
  - example with guaranteed precision (using interalg)
  - (since v. 0.38) another interalg example with discrete variables
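The guaranteed-precision idea behind interalg (abs(f - f*) < fTol, see the table below) can be sketched as a 1-D interval branch-and-bound. This is only a toy illustration of the principle, not interalg's actual algorithm; the function f(x) = (x² - 1)² is invented for the example:

```python
# Toy 1-D interval branch-and-bound (illustrative only, not interalg itself):
# guarantees abs(found - f*) < fTol via rigorous interval lower bounds.
import heapq

def f(x):
    return (x * x - 1.0) ** 2  # global minimum f* = 0, attained at x = +/-1

def f_lower(a, b):
    """Rigorous lower bound of f over [a, b] via simple interval arithmetic."""
    # bound x^2 over [a, b]
    lo2 = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    hi2 = max(a * a, b * b)
    # bound (x^2 - 1)^2
    lo, hi = lo2 - 1.0, hi2 - 1.0
    return 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)

def minimize(a, b, fTol):
    best = f((a + b) / 2)                 # incumbent upper bound on f*
    heap = [(f_lower(a, b), a, b)]        # boxes ordered by lower bound
    while heap and best - heap[0][0] > fTol:
        _, a, b = heapq.heappop(heap)     # split the most promising box
        m = (a + b) / 2
        best = min(best, f(m))
        for lo_, hi_ in ((a, m), (m, b)):
            child_lb = f_lower(lo_, hi_)
            if child_lb < best:           # prune boxes that cannot beat the incumbent
                heapq.heappush(heap, (child_lb, lo_, hi_))
    return best

val = minimize(-2.0, 3.0, fTol=1e-6)      # on exit: abs(val - f*) <= 1e-6
```

The loop terminates when the incumbent is within fTol of the smallest remaining lower bound, which is what makes the accuracy certificate possible; this is also why interalg requires finite (if possibly very large) box bounds.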

**GLP solvers connected to OpenOpt:**

Solver | License | Made by | Are finite box bounds required | Constraints that can be handled | Info | Parameters
---|---|---|---|---|---|---
interalg | BSD | Dmitrey | Yes (but they can be very large) | (since v. 0.36) all | Yields the exact optimum subjected to the required accuracy fTol: abs(f - f*) < fTol. See also the interalg benchmark vs DIRECT, intsolver and commercial BARON. | maxNodes = 15000; maxActiveNodes = 1500; fStart = None; (unestablished) dataHandling = 'auto' / 'raw' / 'sorted'
de | BSD | Stepan Hlushak (stepanko - at - gmail - dot - com), connected to OO by Dmitrey | Yes | (since v. 0.37) all | Two-array differential evolution algorithm (Feoktistov V. Differential Evolution: In Search of Solutions; Springer, 2006, ISBN 0387368965). Code is included into OO and Stepan has subversion commit rights for it. | baseVectorStrategy = {'random'}/'best'; searchDirectionStrategy = {'random'}/'best'; differenceFactorStrategy = {'random'}/'constant'; population (default: 10*nVars); differenceFactor = 0.8; crossoverRate = 0.5; hndvi = 1 (hndvi will probably be renamed to something more informative by the next OO release)
galileo | GPL | Donald Goodman | Yes | box bounds | GA-based solver; cannot handle constraints other than box bounds. Code is included into OO. This solver doesn't work with Python 3.x yet. | population = 15; crossoverRate = 1.0; mutationRate = 0.05; useInteger = False (if useInteger = True or 1, search for a solution with all-integer variables)
pswarm | LGPL | A. I. F. Vaz | Seems like no; maybe constraints Ax <= b that confine the search to a finite volume are enough | box bounds, linear inequalities | Can handle a user-provided x0. Download and install pswarm from the URL mentioned, ensure the author-provided RunPSwarm.py works OK and pswarm_py.so is inside PYTHONPATH. Pay attention: for installation from sources you should use "make py_linear" to enable general linear constraints (Ax <= b). The documentation says pswarm is capable of parallel calculations (via MPI), but I don't know whether that applies to the Python API. The algorithm combines pattern search and particle swarm: it applies a directional direct search in the poll step (coordinate search in the pure simple-bounds case) and particle swarm in the search step. See also: a recent paper on PSwarm published at optimization-online.org. | social = 0.5; cognitial = 0.5; fweight = 0.4; iweight = 0.9; size = 42; tol = 1e-5; ddelta = 0.5; idelta = 2.0
stogo | LGPL | Kaj Madsen | Yes | lb, ub | Can use derivatives. Requires nlopt installed. | useRand = True (use the GD_STOGO or GD_STOGO_RAND routine, see here for details)
isres | LGPL | S. G. Johnson | Yes | all | isres = "Improved Stochastic Ranking Evolution Strategy", by S. G. Johnson. Requires nlopt ver >= 2.2 installed. | population (default 20*(nVars+1))
mlsl | LGPL | S. G. Johnson | Yes | lb, ub | mlsl = "Multi-Level Single-Linkage". This one is for smooth multi-extremum functions (derivatives are passed to the local optimizer). Requires nlopt ver >= 2.2 installed. G_MLSL_LDS is used with LD_TNEWTON_PRECOND_RESTART as the local optimizer. | population (number of local solver runs, default 4)
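The scheme behind the de solver (and its differenceFactor/crossoverRate/population defaults) can be sketched as a classic differential evolution loop. The following is a minimal NumPy illustration of the generic DE/rand/1/bin scheme with box bounds, not Stepan Hlushak's two-array implementation; the Rastrigin test function is invented for the example:

```python
# Minimal differential evolution (DE/rand/1/bin) sketch with box bounds.
# Not the OO 'de' solver's two-array code; the F/CR/population values mirror
# its defaults: differenceFactor = 0.8, crossoverRate = 0.5, 10*nVars members.
import numpy as np

def rastrigin(x):
    # Many local minima; global minimum 0 at the origin.
    return 10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x))

def de_solve(func, lb, ub, F=0.8, CR=0.5, generations=300, seed=0):
    rng = np.random.default_rng(seed)
    n = lb.size
    pop_size = 10 * n                          # population = 10*nVars
    pop = rng.uniform(lb, ub, size=(pop_size, n))
    fitness = np.array([func(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct members, all different from i
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            mutant = np.clip(mutant, lb, ub)   # respect the required box bounds
            # binomial crossover; forcing one gene guarantees trial != parent
            cross = rng.random(n) < CR
            cross[rng.integers(n)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = func(trial)
            if f_trial <= fitness[i]:          # greedy one-to-one selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]

lb, ub = np.full(2, -5.12), np.full(2, 5.12)
x_best, f_best = de_solve(rastrigin, lb, ub)
```

The finite lb/ub box is essential here, as the table notes: the initial population is sampled from it and mutants are clipped back into it.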

**See also:**