OOFrameworkDoc

From OpenOpt

OpenOpt framework documentation


Made by Dmitrey



Some "hello world" examples

Unconstrained one-dimensional nonlinear problem

Let's start with the unconstrained one-dimensional nonlinear problem (NLP)
(x-1)^2 -> min

from openopt import NLP
p = NLP(lambda x: (x-1)**2, 4)
r = p.solve('ralg')
print('optim point: coordinate=%f  objective function value=%f' % (r.xf, r.ff)) # x=1.0, f=0.0

In this example, the objective function is (x-1)**2, the start point is x0 = 4, and 'ralg' is the name of the solver to use.

3-dimensional constrained problem

(x-1)^2 + (y-2)^2 + (z-3)^4 -> min
subject to
y > 5
4x-5z < -1
(x-10)^2 + (y+1)^2 < 50

from openopt import NLP
from numpy import *
 
x0 = [0,0,0] # start point estimation
 
# define objective function as a Python language function
# of course, you can use "def f(x):" for multi-line functions instead of "f = lambda x:"
f = lambda x: (x[0]-1)**2 + (x[1]-2)**2 + (x[2]-3)**4
 
# form box-bound constraints lb <= x <= ub
lb = [-inf, 5, -inf] # lower bound
 
# form general linear constraints Ax <= b
A = [4, 0, -5]
b = -1
 
# form general nonlinear constraints c(x) <= 0
c = lambda x: (x[0] - 10)**2 + (x[1]+1) ** 2 - 50
 
# optionally you can provide derivatives (user-supplied or from automatic differentiation)
# for objective function and/or nonlinear constraints, see further doc below
 
p = NLP(f, x0, lb=lb, A=A, b=b, c=c)
r = p.solve('ralg')
print(r.xf) # [ 6.25834211  4.99999931  5.20667372]
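As a quick sanity check, the reported point can be verified against the three constraints with plain NumPy (no OpenOpt needed); the tolerance below is illustrative:

```python
import numpy as np

# solution reported in the output line above
x = np.array([6.25834211, 4.99999931, 5.20667372])

f = (x[0]-1)**2 + (x[1]-2)**2 + (x[2]-3)**4

tol = 1e-6  # illustrative feasibility tolerance
box_ok    = x[1] >= 5 - tol                          # lb:  y >= 5
linear_ok = 4*x[0] - 5*x[2] <= -1 + tol              # Ax <= b
nonlin_ok = (x[0]-10)**2 + (x[1]+1)**2 <= 50 + tol   # c(x) <= 0

print(f, bool(box_ok), bool(linear_ok), bool(nonlin_ok))
```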

A simple problem coded in FuncDesigner

Let's define and solve the same problem in an alternative way, using FuncDesigner (with automatic differentiation). By the way, FuncDesigner is capable of modeling large-scale problems and automatically uses SciPy sparse matrices when required.

from FuncDesigner import *
from openopt import NLP
x,y,z = oovars('x', 'y', 'z')
f = (x-1)**2 + (y-2)**2 + (z-3)**4
startPoint = {x:0, y:0, z:0}
constraints = [y>5, 4*x-5*z<-1, (x-10)**2 + (y+1)**2 < 50] 
p = NLP(f, startPoint, constraints = constraints)
r = p.solve('ralg')
x_opt, y_opt, z_opt = r(x,y,z)
print(x_opt, y_opt, z_opt) # x=6.25834212, y=4.99999936, z=5.2066737


For planning series of experiments in physics, chemistry etc. you may be interested in the OpenOpt Factor analysis tool.

OpenOpt problems

Assignment

In OpenOpt, a problem is assigned in the following way:

from openopt import NLP

or with another constructor name: LP, MILP, QP, etc (full list here).
Then use

p = NLP(*args, **kwargs)

You should read help(NLP) for more details; reading nlp_1.py as well as the other files from the examples directory is highly recommended.
See also: FuncDesigner - alternative syntax with ability to perform Automatic differentiation

Each class has some expected arguments; for NLP these are f and x0, the objective function and the start point.
Thus NLP(myFunc, myStartPoint) assigns myFunc to the prob field f and myStartPoint to x0.

Alternatively, you can pass them as kwargs, possibly along with some other kwargs:

p = NLP(x0=15, f = lambda x: x**2-0.4, df = lambda x: 2*x, iprint = 0, plot = 1)

After the problem has been assigned, you can tune these parameters,
along with some others that have been set to default values:

p.x0 = 0.15
p.plot = 0
 
p.f = lambda x: x if x>0 else x**2

(of course, you can use
def f(x):
...
p.f = f)

Finally, you can modify any prob parameter in the solve/manage functions:

r = p.solve('ralg', x0 = -1.5,  iprint = -1, plot = 1, color = 'r')
# or
r = p.manage('ralg', start = False, iprint = 0, x0 = -1.5)

Note that any kwarg passed to the constructor will be assigned to the prob instance, e.g.

p = NLP(f, x0, myName='JohnSmith')

is equivalent to

p.myName='JohnSmith'


Solving

After you have assigned a problem, you can solve it via

r = p.solve(nameOfSolver, otherParameters)
or
r = p.manage(nameOfSolver, otherParameters)
New! Since OpenOpt 0.27 you can also use p.minimize() and p.maximize() (BTW one of their parameters can be manage = True)

Using manage yields a GUI window like this:
(screenshot: OO_GUI.jpg)
Tkinter was chosen over its competitors because it's lightweight and easy to install
(on Linux try [sudo] apt-get install python-tk)

manage() accepts the named argument start = {False}/True/0/1,
which means: start without waiting for the user to press "Run".

Currently there are only 3 buttons: "Run/Pause", "Exit" and "Enough".

As you probably know, many solvers have trouble with stop criteria. This is especially relevant to NSP solvers, where calculating derivatives just to check a stop criterion is not a good idea. Pressing the "Enough" button triggers the stop criterion, like this:
solver: ralg problem: GUI_example goal: minimum
iter objFunVal log10(maxResidual)
...
102 5.542e+01 -6.10
istop: 88 (button Enough has been pressed)
Solver: Time Elapsed = 1.19 CPU Time Elapsed = 1.12
Plotting: Time Elapsed = 6.86 CPU Time Elapsed = 5.97
objFunValue: 55.423444 (feasible, max constraint = 7.98936e-07)

Let me also note that

  • pressing "Exit" before "Enough" and before the solver finishes will return None, so there will be no fields r.ff, r.xf etc.
  • in some IDEs pressing "Exit" doesn't close the matplotlib window (if you are using p.plot=1). You should either wait for a newer matplotlib version (they intend to fix it) or try to fix it yourself by choosing the correct Agg backend, see here for details


Result structure

As mentioned above, solving of every problem in OpenOpt is performed via

r = p.solve(nameOfSolver) # nameOfSolver is string like 'ralg', 'nssolve', 'scipy_fsolve' etc

Let's check typical r fields:

>>> dir(r)
['__doc__', '__module__', 'advanced', 'elapsed', 'evals', 'ff', 'isFeasible', 
'istop', 'iterValues', 'msg', 'rf', 'solverInfo', 'stopcase', 'xf']

xf and ff are the final point and objective function value (i.e. the optimal ones, if the solver has really obtained the required solution).

rf is the maximal residual at the point xf. If r.rf > p.contol, or the objective function or any constraint evaluates to NaN, OpenOpt treats the solution as infeasible: r.isFeasible = False; otherwise it is True.

istop is the stop case (an integer or float number), msg is the stop case message, like

 || gradient F(X[k]) || <= gradtol

r.elapsed and r.evals are Python dictionaries of elapsed [cpu]time and function-evaluation counts respectively, for example

{'plot_time': 4.8099999999999996, 'solver_cputime': 0.44999999999999929, 
'solver_time': 1.0900000000000007, 'plot_cputime': 3.9600000000000009} 
{'c': 285, 'dh': 215, 'f': 285, 'df': 215, 'h': 285, 'dc': 208, 'iter': 248}

If some of the fields (df, dc, dh) are negative, it means they were not supplied by the user and have been obtained via finite-difference approximation. In that case r.evals['f'] counts all f calls - both from the objective function and from the finite-difference derivative approximation.

r.iterValues currently has fields 'f', 'x', 'r', 'rt', 'ri'. r.iterValues.f is a Python list of objective values (iter 0, iter 1, ...); r.iterValues.x is a similar Python list of points, provided p.storeIterPoints = True (since OO rev. 122 the default is False - otherwise it consumes too much memory for large-scale problems); r.iterValues.r is a list of residuals; r.iterValues.rt and r.iterValues.ri are the residual type ('c', 'h', 'lb' etc) and index (a nonnegative integer).

stopcase can be -1 (the solver failed to solve the problem), +1 (the solver reports that it has solved the problem, and the solution, checked by the OO kernel, is feasible), or 0 (maxIter, maxFuncEvals, maxTime or maxCPUTime has been exceeded, or the situation is otherwise unclear; currently it is 0 for both feasible and infeasible xf, but this may change in future, so you'd better check it yourself).

solverInfo is a Python dictionary with alg, authors, license, homepage (and maybe other) fields:

>>> r.solverInfo

{'alg': 'Augmented Lagrangian Multipliers', 'homepage': 'http://www.ime.usp.br/~egbirgin/tango/', 'license': 'GPL', 'authors': 'J. M. Martinez martinezimecc-at-gmail.com, Ernesto G. Birgin egbirgin-at-ime.usp.br, connected to Python by Jan Marcel Paiva Gentil jgmarcel-at-ime.usp.br'}


Text output

You can manage text output in OpenOpt via the following prob parameters:

Prob field name         Default value
iprint                  10
iterObjFunTextFormat    '%0.3e'
finalObjFunTextFormat   '%0.8g'


iprint: produce text output every iprint-th iteration. You can use iprint = 0 for final output only, or iprint < 0 to omit all output. In future, warnings are intended to be shown if iprint >= -1. However, some solvers like ALGENCAN have their own text output system that is hard to suppress; it requires a different approach (example)

iterObjFunTextFormat: how objective values are represented in iteration output. For example, '%0.3e' yields lines like

iter    objFunVal
   0    1.947e+03               
  10    1.320e+03            
  ...

or, for constrained problems,

iter    objFunVal    log10(maxResidual)   
   0    8.596e+03         5.73 
  10    2.841e+02        -0.14 
  ...

finalObjFunTextFormat: how the final objective function value is represented. For example
finalObjFunTextFormat='%0.1f' yields
objFunValue: 7.9

See the Python language documentation for the text format specification.
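Both format strings are plain Python %-formatting; a minimal sketch of how the lines above are produced (the values are made up for illustration):

```python
# defaults from the table above
iterObjFunTextFormat = '%0.3e'
finalObjFunTextFormat = '%0.8g'

# an iteration line and the final line
iter_line = '%4d    %s' % (10, iterObjFunTextFormat % 1320.4567)
final_line = 'objFunValue: %s' % (finalObjFunTextFormat % 7.9)

print(iter_line)    # "  10    1.320e+03"
print(final_line)   # "objFunValue: 7.9"
```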


Graphical output

NB! You need matplotlib installed (BTW it is included in Python scientific distributions such as PythonXY, SAGE, EPD; on Linux you can try "sudo apt-get install python-matplotlib") and working correctly (try evaluating "import pylab; pylab.plot([1,2],[3,4]); pylab.show()" in Python).

Graphical output parameters:

Prob field name   Default value   Allowed values
plot              False           0, 1, True, False
color             'b' (blue)      'r' (red), 'y' (yellow), 'k' (black), 'm' (magenta), 'g' (green), 'c' (cyan), 'w' (white)
show              True            True/False - invoke or don't invoke pylab.show() after solving finishes (False is required for sequential runs of some solvers)
xlabel            'time'          'time', 'cputime', 'iter' (case-insensitive) - which values will be used for the x axis


Output specifiers:

Type          Explanation
Pentagram     the solver reports it finished OK, and the solution, checked by the OpenOpt kernel, is feasible
Cross         either the solver reports it failed to solve the problem, or the OpenOpt kernel finds the obtained solution infeasible
Circle        the situation is unimplemented, unclear or undefined
Square        an error has been encountered
Down-arrow    the solution is feasible, but maxTime, maxCPUTime, maxIter, maxFunEvals etc has been exceeded
Right-arrow   maxTime, maxCPUTime, maxIter, maxFunEvals etc has been exceeded, and the solution is infeasible


This is an example generated by nlp_1.py; it merely requires setting p.plot=1 (or p=NLP(..., plot=1), or r = p.solve(..., plot=1)).


This is an example generated by nlp_bench_1.py.
Maybe in future a special function p.bench(some_solvers) will be created.


Additional parameters for user-provided non-linear objFunc and constraints

Note! In FuncDesigner, handling user parameters is performed the same way:
my_oofun.args = (...)
they will be passed to the derivative function as well (if you have supplied one)

from openopt import NLP
from numpy import *
 
f = lambda x, a: (x**2).sum() + a * x[0]**4
x0 = [8, 15, 80]
p = NLP(f, x0)
 
# using c(x)<=0 constraints
p.c = lambda x, b, c: (x[0]-4)**2 - 1 + b*x[1]**4 + c*x[2]**4
 
# using h(x)=0 constraints
p.h = lambda x, d: (x[2]-4)**2 + d*x[2]**4 - 15
 
# here we use a=4,
# i.e. equivalent to "a = 4; p.args.f = a" or just "p.args.f = a = 4"
p.args.f = 4
p.args.c = (1,2)
p.args.h = 15
 
# Note 1: using tuple p.args.h = (15,) is valid as well
# Note 2: if all your funcs use same args, you can just use
# p.args = (your args)
# Note 3: you could use f = lambda x, a: (...); c = lambda x, a, b: (...); h = lambda x, a: (...)
# Note 4: if you use df or d2f, they should handle same additional arguments;
# same to c - dc - d2c, h - dh - d2h
# Note 5: instead of myfun = lambda x, a, b: ...
# you can use def myfun(x, a, b): ...
 
r = p.solve('ralg')

If you encounter any problems with the additional-args machinery, you can use a simple Python trick:

p.f = lambda x: other_f(x, <your_args>)

and the same for c, h, df, etc.
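This is ordinary Python closure behavior, so it can be checked without OpenOpt; other_f and its arguments below are hypothetical:

```python
# a function expecting extra arguments besides x
def other_f(x, a, b):
    return a * x**2 + b

# freeze the extra args so the solver only ever calls f(x)
a, b = 2.0, 3.0
f = lambda x: other_f(x, a, b)   # this is what you would assign to p.f

print(f(4.0))  # 2*4**2 + 3 = 35.0
```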


Automatic derivatives check

For OpenOpt problems this is performed by the functions checkdf, checkdc, checkdh; see the example below.
You may also be interested in our new stand-alone package DerApproximator for obtaining/checking 1st derivatives via finite-difference approximation.
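The idea behind these checks - comparing a user-supplied gradient against a finite-difference approximation - can be sketched in plain NumPy; this is an illustration, not OpenOpt's actual implementation:

```python
import numpy as np

def fd_gradient(f, x, diffInt=1e-7):
    """Forward finite-difference gradient, one coordinate at a time."""
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    f0 = f(x)
    for i in range(x.size):
        xi = x.copy()
        xi[i] += diffInt
        g[i] = (f(xi) - f0) / diffInt
    return g

f = lambda x: ((x - 5)**2).sum()
x0 = np.array([1.0, 2.0, 3.0])

exact  = 2*(x0 - 5)            # analytic gradient
approx = fd_gradient(f, x0)    # numerical gradient

# forward differences are accurate to O(diffInt)
print(np.max(np.abs(exact - approx)))
```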

from openopt import NLP
from numpy import *
N = 30
M = 5
ff = lambda x: ((x-M)**2).sum()
p = NLP(ff, cos(arange(N)))
 
def df(x):
    r = 2*(x-M)
    r[0] += 15 #incorrect derivative
    r[8] += 8 #incorrect derivative
    return r
p.df =  df
 
p.c = lambda x: [2* x[0] **4-32, x[1]**2+x[2]**2 - 8]
 
def dc(x):
    r = zeros((2, p.n))
    r[0,0] = 2 * 4 * x[0]**3
    r[1,1] = 2 * x[1]
    r[1,2] = 2 * x[2] + 15 #incorrect derivative
    return r
p.dc = dc
 
h1 = lambda x: 1e1*(x[-1]-1)**4
h2 = lambda x: (x[-2]-1.5)**4
p.h = (h1, h2)
 
def dh(x):
    r = zeros((2, p.n))
    r[0,-1] = 1e1*4*(x[-1]-1)**3
    r[1,-2] = 4*(x[-2]-1.5)**3 + 15 #incorrect derivative
    return r
p.dh = dh
 
p.checkdf()
p.checkdc()
p.checkdh()
"""
you can use p.checkdF(x) for other point than x0 (F is f, c or h)
p.checkdc(myX)
or
p.checkdc(x=myX)
values with difference greater than
maxViolation (default 1e-5)
will be shown
p.checkdh(maxViolation=1e-4)
p.checkdh(myX, maxViolation=1e-4)
p.checkdh(x=myX, maxViolation=1e-4)
"""

Typical output (unfortunately, in a terminal or other IDE the blank space used to separate the strings can have different widths):

OpenOpt checks user-supplied gradient df (shape: (30,) )
according to:
    prob.diffInt = [  1.00000000e-07]
    |1 - info_user/info_numerical| <= prob.maxViolation = 0.01
df num         user-supplied     numerical               RD
    0             +7.000e+00     -8.000e+00              3
    8             -2.291e+00     -1.029e+01              2
max(abs(df_user - df_numerical)) = 14.9999995251
(is registered in df number 0)
========================
OpenOpt checks user-supplied gradient dc (shape: (2, 30) )
according to:
    prob.diffInt = [  1.00000000e-07]
    |1 - info_user/info_numerical| <= prob.maxViolation = 0.01
dc num   i,j:dc[i]/dx[j]   user-supplied     numerical               RD
    32        1 / 2          +1.417e+01     -8.323e-01               4
max(abs(dc_user - dc_numerical)) = 14.9999999032
(is registered in dc number 32)
========================
OpenOpt checks user-supplied gradient dh (shape: (2, 30) )
according to:
    prob.diffInt = [  1.00000000e-07]
    |1 - info_user/info_numerical| <= prob.maxViolation = 0.01
dh num   i,j:dh[i]/dx[j]   user-supplied     numerical               RD
    58       1 / 28          -4.474e+01     -5.974e+01               2
max(abs(dh_user - dh_numerical)) = 14.9999962441
(is registered in dh number 58)
========================

Note that RD (relative difference) is defined as
int(ceil(log10(abs(Diff) / maxViolation + 1e-150)))
where
Diff = 1 - (info_user+1e-8)/(info_numerical + 1e-8)
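Plugging the values from the df table above into this formula reproduces its RD column (a direct transcription of the definition, using the maxViolation = 0.01 shown in that output):

```python
from math import ceil, log10

def relative_difference(info_user, info_numerical, maxViolation=0.01):
    # RD exactly as defined above
    Diff = 1 - (info_user + 1e-8) / (info_numerical + 1e-8)
    return int(ceil(log10(abs(Diff) / maxViolation + 1e-150)))

# df num 0: user-supplied +7.000e+00, numerical -8.000e+00
print(relative_difference(7.0, -8.0))      # -> 3
# df num 8: user-supplied -2.291e+00, numerical -1.029e+01
print(relative_difference(-2.291, -10.29)) # -> 2
```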


User-defined callback functions

Usage:
p = someOOclass(..., callback=MyIterFcn, ...)
or
p = ...
p.callback = MyIterFcn
or p.callback = (MyIterFcn1, MyIterFcn2, MyIterFcn3, ..., MyIterFcnN)
or p.callback = [MyIterFcn1, MyIterFcn2, MyIterFcn3, ..., MyIterFcnN]

Each user-defined function MyIterFcn should return one of the following:

  • a flag value - 0, 1, True, False; flag = True or 1 means the user wants to stop calculations (hence r.istop will be 80 and r.msg will be 'user-defined')
  • some real value, like 15, 80.15 or 1.5e4 (hence r.istop will be that value and r.msg will be 'user-defined')
  • a Python list (or tuple) [istop, msg] (hence r.istop will be istop and r.msg will be msg)


def MyIterFcn(p):
    # observing non-feasible ralg iter points
 
    if p.rk > p.contol: # p.rk is current iter max residual
        print('--= non-feasible ralg iter =--')
        print('itn: %d' % p.iter)
 
        print('current f: %f' % p.fk)
        # print('curr x (5 first coords): %s' % p.xk[:5])
        print('max constraint value: %f' %  p.rk)
    """
    BTW you can store data in any unique field of p
    for example
    if some_cond:  p.JohnSmith = 15
    else: p.JohnSmith = 0
    """
 
    if p.fk < 1.5 and p.rk < p.contol:
        #NB! you could use p.fEnough = 15, p.contol=1e-5 in prob assignment instead
        return (15, 'value obtained is enough' )
        # or
        # return 15 (hence r.istop=15, r.msg='user-defined')
        # or return True (hence r.istop=80, r.msg='user-defined')
        # or return 1 (hence r.istop = 80, r.msg='user-defined')
    else:
        return False
        # or
        # return 0
 
from openopt import NSP
from numpy import *
N = 75
f = lambda x: sum(1.2 ** arange(len(x)) * abs(x))
df = lambda x: 1.2 ** arange(len(x)) * sign(x)
x0 = cos(1+asfarray(range(N)))
 
#non-linear constraint c(x) <= 0:
c = lambda x: abs(x[4]-0.8) + abs(x[5]-1.5) - 0.015 
 
p = NSP(f,  x0,  df=df,  c=c, callback=MyIterFcn,  contol = 1e-5,  maxIter = 1e4,  iprint = 100, xtol = 1e-8, ftol = 1e-8)
 
#optional:
#p.plot = 1
r = p.solve('ralg')
print(r.xf[:8])
----
solver: ralg   problem: unnamed   goal: minimum
 iter    objFunVal    log10(maxResidual)
    0  2.825e+06               0.02
--= non-feasible ralg iter =--
itn: 0
curr f: [ 2824966.83813157]
max constraint value 1.04116752789
--= non-feasible ralg iter =--
itn: 1
curr f: [ 2824973.2896607]
max constraint value 1.75725959686
--= non-feasible ralg iter =--
itn: 2
curr f: [ 2824966.83813157]
max constraint value 1.04116752789
--= non-feasible ralg iter =--
itn: 3
curr f: [ 2824970.22518437]
max constraint value 0.413756712605
--= non-feasible ralg iter =--
itn: 4
curr f: [ 2824969.02632034]
max constraint value 0.0818395397163
--= non-feasible ralg iter =--
itn: 5
curr f: [ 2824969.37414607]
max constraint value 0.0406513995891
--= non-feasible ralg iter =--
itn: 6
curr f: [ 2824969.20023321]
max constraint value 0.00849187556755
--= non-feasible ralg iter =--
itn: 7
curr f: [ 2824969.20119103]
max constraint value 0.00560799704173
--= non-feasible ralg iter =--
itn: 8
curr f: [ 2824969.2065267]
max constraint value 0.00416641026253
--= non-feasible ralg iter =--
itn: 9
curr f: [ 2824969.22185181]
max constraint value 0.0421905566026
--= non-feasible ralg iter =--
itn: 10
curr f: [ 2824969.2065267]
max constraint value 0.00416641026253
--= non-feasible ralg iter =--
itn: 11
curr f: [ 2824969.20952515]
max constraint value 0.00327175155207
  100  2.665e+04            -100.00
  200  4.845e+03            -100.00
  300  1.947e+02            -100.00
  400  9.298e+01            -100.00
  500  5.160e+01            -100.00
  600  2.600e+01            -100.00
  700  1.070e+01            -100.00
  800  6.994e+00            -100.00
  900  5.375e+00            -100.00
 1000  5.375e+00            -100.00
 1094  5.375e+00            -100.00
istop:  4 (|| F[k] - F[k-1] || < ftol)
Solver:   Time Elapsed = 4.62   CPU Time Elapsed = 4.48
objFunValue: 5.3748608 (feasible, max constraint =  0)
[ -1.06086135e-07   5.65437885e-08  -1.29682567e-07   6.12571176e-09
   7.95256506e-01   1.49731951e+00  -1.42518171e-09   4.15961658e-08]


Modifying solver parameters

This is done either via kwargs for p.solve()/p.manage() (they can be solver or prob attributes) or via oosolver.

from numpy import *
from openopt import *
f = lambda x: (x[0]-1.5)**2 + sin(0.8 * x[1] ** 2 + 15)**4 + cos(0.8 * x[2] ** 2 + 15)**4 + (x[3]-7.5)**4
lb, ub = -ones(4), ones(4)
 
# example 1
p = GLP(f, lb=lb, ub=ub,  maxIter = 1e3, maxCPUTime = 3,  maxFunEvals=1e5,  fEnough = 80)
r = p.solve('galileo', crossoverRate = 0.80, maxTime = 3,  population = 15,  mutationRate = 0.15)
 
# example 2, via oosolver
solvers = [oosolver('ralg', h0 = 0.80, alp = 2.15, show = False), oosolver('ralg', h0 = 0.15, alp = 2.80, color = 'k')]
for i, solver in enumerate(solvers):
    p = NSP(f, [0]*4, lb=lb, ub=ub, legend='ralg'+str(i+1))
    r = p.solve(solver, plot=True)


Function oosolver

oosolver(solverName, <possibly_some_args>) returns a solver instance.
Sometimes this is more convenient than using a plain text name in p.solve() or p.manage().

You should pay special attention to the "isInstalled" field.

Note - oosolver is untested with converters (lp2nlp, qp2nlp etc), and for some solvers connected via CVXOPT (glpk, dsdp) the "isInstalled" parameter is not set yet.

from openopt import oosolver, NLP
 
ipopt = oosolver('ipopt', color='r') # oosolver can handle prob parameters
ralg = oosolver('ralg', color='k', alp = 4.0) # as well as solver parameters
asdf = oosolver('asdf') # no valid solver name
 
solvers = [ralg, asdf, ipopt]
# or just
# solvers = [oosolver('ipopt', color='r'), oosolver('asdf'), oosolver('ralg', color='k', alp = 4.0)]
 
for solver in solvers:
    if not solver.isInstalled:
        print('solver ' + solver.__name__ + ' is not installed')
        continue
    p = NLP(x0 = 15, f = lambda x: x**4, df = lambda x: 4 * x**3, iprint = 0)
    r = p.solve(solver, plot=True, show = solver == solvers[-1])


Non-linear functions defined over restricted domain

Some non-linear functions have a domain much more restricted than R^nVars.
For example F(x) = log(x); dom F = R+ = {x: x > 0}

Optimization solvers usually expect the user-provided F(x) to be NaN when x is outside the domain.

How successfully OO-connected solvers handle a problem with a restricted domain cannot be stated in general - it is too problem-specific.

We can note, however, that ralg handles such problems rather well, provided that at every point x from R^nVars at least one inequality constraint is defined and active, i.e. c_i(x) ∈ R+.

Note also that some solvers require x0 to be inside dom objFunc (for ralg it doesn't matter).
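With NumPy this convention comes almost for free: operations outside the domain yield NaN instead of raising. A small sketch (the function is made up for illustration):

```python
import numpy as np

f = lambda x: np.sqrt(x**3 - 8.0)   # dom f = {x: x >= 2}

with np.errstate(invalid='ignore'):  # suppress the RuntimeWarning
    inside  = f(3.0)                 # sqrt(19), well-defined
    outside = f(1.0)                 # outside dom -> nan

print(inside)
print(np.isnan(outside))  # True
```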

from numpy import *
from openopt import NLP
 
n = 10
an = arange(n) # array [0, 1, 2, ..., n-1]
x0 = n+15*(1+cos(an))
 
# from all OO-connected NLP solvers
# only ralg can handle x0 out of dom objFunc:
# x0 = n+15*(cos(an))
 
f = lambda x: (x**2).sum() + sqrt(x**3-arange(n)**3).sum()
df = lambda x: 2*x + 0.5*3*x**2/sqrt(x**3-arange(n)**3)
c = lambda x: an**3 - x**3
dc = lambda x: diag(-3 * x**2)
 
# you can use splitting of constraints, for some solvers it yields speedup:
#c, dc = [], []
#for i in xrange(n):
#    c += [lambda x, i=i: i**3-x[i]**3]
#    dc += [lambda x, i=i: hstack((zeros(i), -3*x[i]**2, zeros(n-i-1)))]
 
lb = arange(n)
solvers = ['ralg', 'scipy_slsqp', 'scipy_cobyla', 'ipopt',  'algencan']
for solver in solvers:
    p = NLP(f, x0, df=df, lb=lb, c=c, dc=dc, iprint = 100, maxIter = 10000, maxFunEvals = 1e8)
    #p.checkdf()
    #p.checkdc()
    r = p.solve(solver)
# expected r.xf = [0, 1, 2, ..., n-1]


Handling of badly-scaled problems

This is done via the vectors p.scale or p.diffInt.
If you use FuncDesigner without custom user-defined oofuns, you hardly need to read this chapter.
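Why a single finite-difference step fails here can be seen without OpenOpt: with a uniform default-sized step, perturbing x[1] changes f by less than double-precision rounding noise, so the numerical derivative is garbage, while a per-coordinate step (the role p.diffInt plays) recovers it. A plain-NumPy sketch, not OpenOpt internals:

```python
import numpy as np

coeff = 1e-8
f = lambda x: (x[0]-20)**2 + (coeff*x[1] - 80)**2

def fd_grad(f, x, steps):
    # forward differences with a per-coordinate step
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    g = np.empty_like(x)
    for i, h in enumerate(steps):
        xi = x.copy()
        xi[i] += h
        g[i] = (f(xi) - f0) / h
    return g

x0 = np.array([-4.0, 4.0])
exact = np.array([2*(x0[0]-20), 2*coeff*(coeff*x0[1]-80)])

uniform = fd_grad(f, x0, [1e-7, 1e-7])  # one step for both coordinates
adapted = fd_grad(f, x0, [1e-7, 1.0])   # diffInt-style per-coordinate steps

# df/dx[1] is ~ -1.6e-6; the uniform step loses it in rounding noise
print(exact[1], uniform[1], adapted[1])
```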

from numpy import *
from openopt import *
 
coeff = 1e-8
 
f = lambda x: (x[0]-20)**2+(coeff * x[1] - 80)**2 # objFun
c = lambda x: (x[0]-14)**2-1 # non-lin ineq constraint(s) c(x) <= 0
# for the problem involved: f_opt =25, x_opt = [15.0, 8.0e9]
 
x0 = [-4,4]
# even modification of stop criteria can't help to achieve the desired solution:
someModifiedStopCriteria = {'gradtol': 1e-15,  'ftol': 1e-15,  'xtol': 1e-15}
 
# using default diffInt = 1e-7 is inappropriate:
p = NLP(f, x0, c=c, **someModifiedStopCriteria)
r = p.solve('ralg')
print(r.ff,  r.xf) #  will print something like "6424.9999886000014 [ 15.0000005   4.       ]"
 
# to improve the solution we will change
# either p.diffInt from the default 1e-7 to [1e-7,  1]
# or p.scale from the default None to [1,  1e-7]
 
# the latter (using p.scale) is preferable
# because it also affects xtol for those solvers
# that use OO stop criteria
# (ralg, lincher, nsmm, nssolve and maybe some others):
# xtol is compared to the scaled x shift:
# || (x[k] - x[k-1]) * scale || < xtol
 
# You can define scale and diffInt as
# numpy arrays, matrices, Python lists, tuples
 
p = NLP(f, x0, c=c, scale = [1,  coeff],  **someModifiedStopCriteria)
r = p.solve('ralg')
print(r.ff,  r.xf) # "24.999996490694787 [  1.50000004e+01   8.00004473e+09]" - much better
 
Full Output:
starting solver ralg (license: BSD)  with problem  unnamed
itn 0 : Fk= 6975.9999935999995 MaxResidual= 323.0
itn 10  Fk: 6424.9985147662055 MaxResidual: 2.96e-04 ls: 5
itn 20  Fk: 6424.9999835226936 MaxResidual: 2.02e-06 ls: 4
itn 30  Fk: 6424.9999885998468 MaxResidual: 1.00e-06 ls: 5
itn 40  Fk: 6424.999988599995 MaxResidual: 1.00e-06 ls: 5
itn 50  Fk: 6424.9999886000005 MaxResidual: 1.00e-06 ls: 78
itn 51  Fk: 6424.9999886000014 MaxResidual: 1.00e-06 ls: 0
ralg has finished solving the problem unnamed
istop:  2 (|| gradient F(X[k]) || < gradtol)
Solver:   Time Elapsed = 0.54   CPU Time Elapsed = 0.39
objFunValue: 6424.9999886000014 (feasible, max constraint =  1e-06)
6424.9999886000014 [ 15.0000005   4.       ]
starting solver ralg (license: BSD)  with problem  unnamed
itn 0 : Fk= 6975.9999935999995 MaxResidual= 323.0
itn 10  Fk: 6424.9985147649186 MaxResidual: 2.96e-04 ls: 5
itn 20  Fk: 6424.9999824449724 MaxResidual: 1.80e-06 ls: 4
itn 30  Fk: 6424.9959805950612 MaxResidual: 1.00e-06 ls: 99
itn 40  Fk: 25.121367939538644 MaxResidual: 0.00e+00 ls: 1
itn 50  Fk: 25.000287679235381 MaxResidual: 0.00e+00 ls: -1
itn 60  Fk: 24.999999424995089 MaxResidual: 1.47e-07 ls: 1
itn 62  Fk: 24.999996226675954 MaxResidual: 7.95e-07 ls: -1
ralg has finished solving the problem unnamed
istop:  2 (|| gradient F(X[k]) || < gradtol)
Solver:   Time Elapsed = 1.33   CPU Time Elapsed = 1.07
objFunValue: 24.999996489689014 (feasible, max constraint =  7.42082e-07)
24.999996489689014 [  1.50000004e+01   8.00004473e+09]


Latest OOSuite: 0.54, from 2014-06-15