
FuncDesigner Documentation

# Creating oovars

oovars are 'Object-Oriented Variables'; the terminology is specific to FuncDesigner. (Previously, when FuncDesigner was a part of the OpenOpt module, they were called 'OpenOpt variables'.) oovars are created by invoking oovar() or oovars(). Note that each oovar and oofun has a field "name", which is used in error messages and in some parts of the FuncDesigner kernel; names should be unique.

from FuncDesigner import *
a = oovar() # create oovar a
a = oovar(size=15000) # create oovar of size 15000
b = oovar('b') # create oovar b named "b" instead of something like "unnamed_34" (useful for debug and results output)
a = oovar('a', size=15000) # create oovar named 'a' of size 15000
a, b, c, d = oovars(4) # create 4 oovars, each one can have different size
a, b, c = oovars('a', 'b', 'speed of sound in material') # create 3 oovars named "a", "b", "speed of sound in material", each one can have different size
a, b, c, d = oovars('a b c d') # since v. 0.33

# create ooarray of 80 oovars, each one can have different size
# in versions prior to 0.33 returns Python list instead of ooarray
myOOVars = oovars(80)
# for FuncDesigner ver prior to 0.33 in this case creating your start points for OpenOpt points should look like Point = dict([(myOOVars[i], val_of_i_th_variable) for i in range(80)])

You can also define general n-dimensional numpy arrays of oovars (this requires good knowledge of numpy masked arrays), each one of different size. However, for the sake of calculation speed it is recommended to create not lots of oovars of size 1, but rather a small number of oovars with big sizes, involving slicing/indexing operations where needed (like x[i], x[i:j], see Slicing and indexing below).

If you want an error to be raised when the size of an oovar in a point differs from the expected one, you can use the parameter "size" in the oovar constructor, e.g.

a = oovar(size=some_expected_size)
# or with user-provided name
a = oovar('a', size=4)

Unfortunately, many users misunderstand the difference between defining FuncDesigner variables via oovar() and via oovars(), which is extremely important:

• Using a = oovars(N) means your code is NOT vectorized by variable a; a will be an ooarray (a type derived from numpy.ndarray) with N separate objects of type oovar. On the other hand, you can assign a different size to each oovar from the ooarray.
• Using a = oovar(size=N) means the problem is vectorized by variable a, which can bring an essential speedup (provided you don't involve lots of indexing a[i] and slicing a[i:j]), especially with the slow Python interpreter. Unfortunately, interalg cannot work with oovars of size other than 1 yet, thus you should use oovars() for that solver if you deal with arrays there.
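The performance difference between the two styles can be sketched with plain numpy (used here as a stand-in for FuncDesigner itself, not actual FuncDesigner code): a single size-N array is processed by one vectorized numpy call, while N separate size-1 objects require a Python-level loop.

```python
import numpy as np

# Vectorized: one array of size N -- each expression is a single numpy call
N = 5
a_vec = np.arange(N, dtype=float)            # plays the role of a = oovar(size=N)
f_vec = np.sin(a_vec) + 2 * a_vec            # one vectorized evaluation

# Non-vectorized: N separate scalars -- one Python-level operation per element
a_sep = [float(i) for i in range(N)]         # plays the role of a = oovars(N)
f_sep = [np.sin(ai) + 2 * ai for ai in a_sep]

# Both give the same numbers; the vectorized form avoids the Python-level loop
assert np.allclose(f_vec, f_sep)
```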

# Creating oofuns

Functions applied to oovars produce oofuns, e.g.

from FuncDesigner import *
a, b, c, d = oovars(4)
f1, f2 = a + 2*b + c, sin(a)+cos(b)+arctan(c)
f3 = f1 + 4*f2 + cos(f1*f2)
# you can involve in-place operations:
f3 += sin(f2)+cos(f3)
f3 /= 1024
f3 **= 2

All the functions and operators overloaded by FuncDesigner (+, *, /, **, sin, cos etc.) are vectorized, so the user can produce new expressions like b = 2*sin(a) + cos(a) + a**2, and b will be a vector of the same size as a. Also, one can use c = FuncDesigner.dot(D, a), where D is an M x N matrix (a dense numpy array or a matrix from scipy.sparse), and the result will be a vector of size M. Using sum() (u = v.sum(), u = FuncDesigner.sum(v)) or indexing (u = v[i], u = v[4:100]) is also available; see further for details.
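A quick shape check of the same rules with plain numpy (a stand-in for illustration, not FuncDesigner code): elementwise functions preserve the size of their argument, while dot maps R^N to R^M.

```python
import numpy as np

N, M = 4, 3
a = np.linspace(0.0, 1.0, N)              # stands in for an oovar of size N
b = 2 * np.sin(a) + np.cos(a) + a**2      # elementwise: result has the same size as a
D = np.ones((M, N))                       # an M x N matrix
c = np.dot(D, a)                          # R^N -> R^M

assert b.shape == a.shape                 # elementwise funcs preserve size
assert c.shape == (M,)                    # dot maps to size M
```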

To obtain the value of an oofun we should define a point, that is, a Python dictionary with oovars as keys and their values as values, e.g.

from numpy import *
point1 = {a:1, b:[2,3,4], c:array([10,20,30])}
r = f3(point1)
print(r) # It should print [ 21.60063885  33.28742147  45.64660797]

Some notes:

• Python lists (like the one above with value [2,3,4]) are simpler to type but are not recommended - one can forget to cast them into arrays and, after [1,2] + [5,10], obtain [1, 2, 5, 10] instead of [6, 12]. In openopt, scipy and lots of other Python scientific packages lists are cast to numpy arrays, yet one cannot be sure there are no bugs lurking. For the same reason, creating Python lists of oovars (e.g. a = oovars(N)) is not recommended unless you know exactly what you are doing.
• You can use summation of Python lists made of oovars/oofuns/numbers/arrays, e.g.
datalist = [a, b, some_oofun_1, 2.0, N, 4, some_oofun_2] # (*)
# or
datalist = [a, [1,2,3], some_oofun_3] # (**)
c = sum(datalist)
provided all elements under sum() have same size (1 for (*) and 3 for (**)).
• You can use a "for" loop, e.g.
M = 15
for i in xrange(M): f3 = 0.1*f3 + 0.2*f2 + 1
However, currently "for elem in an_Iterable: result += func(elem)" works more slowly than sum([func(elem) for elem in an_Iterable]). This is intended to be fixed in future FD versions.
• We can use a "while" loop as well, provided the arguments in "for"/"while" loops are independent of any oovar/oofun.

To name an oofun (for some error messages and/or debug purposes) you can use one of the two ways below:

K = 0.5*m*V**2; K.name = 'kinetic energy'; P = g*h*m; P.name = 'potential energy'; TotalEnergy = K + P
TotalEnergy = (0.5*m*V**2)('kinetic energy') + (m*g*h)('potential energy')

Each oofun should return a single number or an array with ndim <= 1; oovars should likewise hold numbers or arrays with ndim <= 1.
Currently only basic numpy operators and functions have been overloaded; if you need something exotic like the gamma function or arctan2, you should create your own oofun (and provide an analytical derivative for it, if possible/meaningful), see Creating special oofuns.

# Automatic differentiation

We can get the derivative of an oofun in a point via automatic differentiation (not to be confused with numerical differentiation via finite-difference derivative approximation, or with symbolic differentiation as provided by Maxima, SymPy etc.).
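For intuition, here is a plain-numpy comparison of an analytic derivative (which automatic differentiation reproduces up to rounding) with a finite-difference approximation of the kind DerApproximator performs; this is an illustrative sketch, not FuncDesigner internals.

```python
import numpy as np

# f(x) = sin(x) + x**2 has the analytic derivative f'(x) = cos(x) + 2*x;
# automatic differentiation returns this value up to machine precision
f = lambda x: np.sin(x) + x**2
df = lambda x: np.cos(x) + 2 * x

x0, h = 1.3, 1e-5
fd = (f(x0 + h) - f(x0 - h)) / (2 * h)    # central finite-difference estimate

# the finite-difference value only approximates the exact derivative
assert abs(fd - df(x0)) < 1e-8
```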

from FuncDesigner import *
a, b, c = oovars('a', 'b', 'c')
f1, f2 = sin(a) + cos(b) - log2(c) + sqrt(b), sum(c) + c * cosh(b) / arctan(a) + c * c + c[-1] / (a * c.size)
f3 = f1*f2 + 2*a + sin(b) * (1+2*c.size + 3*f2.size)
f = (2*a*b*c + f1*f2 + f3)

point = {a:1, b:2, c:[3, 4, 5]} # however, you'd better use numpy arrays instead of Python lists

print(f(point))
print(f.D(point))
print(f.D(point, a))
print(f.D(point, [b]))
print(f.D(point, fixedVars = [a, c]))

""" Expected output:
[ 48.9337138   18.16255336 -11.32129756]
{a: array([ 51.75779959,  70.89020412,  91.93551537]),
b: array([-38.10565554, -54.41138045, -74.08378522]),
c: array([[-29.52297003,   2.03660168,   0.67886723],
[ -1.60462289, -31.15709726,  -0.42789944],
[ -4.82390384,  -3.85912307, -30.54104071]])}
[ 51.75779959,  70.89020412,  91.93551537] # <- prior to v 0.34: {a: array([ 51.75779959,  70.89020412,  91.93551537])
{b: array([-38.10565554, -54.41138045, -74.08378522])}
{b: array([-38.10565554, -54.41138045, -74.08378522])}
"""

You may be interested in the parameter "exactShape" (default: False). Set it to True to prevent flattening/squeezing operations (DerApproximator has this parameter as well).
Pay attention: since FD v. 0.21 the function .D() has an optional parameter useSparse, default 'auto'. To allow/deny involving SciPy sparse matrices in the AD engine and the result output you can set it to True/False (sometimes the automatic selection here is a damned hard task), e.g.

r = f.D(point, fixedVars = [a, c], useSparse=False, exactShape=True)

You can also use the useSparse parameter for OpenOpt problems coded in FuncDesigner (it also has some autoselection capabilities, but they are premature as well).

# Some oofuns to pay attention to

## Operator ifThenElse

You can use the operator "ifThenElse" where the ordinary Python if-then-else cannot deal with the condition, because it has type oofun (it hasn't been invoked on a point yet and is thus not known in advance) instead of boolean, e.g.
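The idea behind ifThenElse is deferred evaluation: the branch is chosen only when the expression is finally invoked on a point. A minimal pure-Python sketch of that idea (the names below are illustrative, not FuncDesigner internals):

```python
# Deferred branching: nothing is chosen until the function is evaluated at a
# concrete point (a dict, like a FuncDesigner point).
def if_then_else(cond, val_true, val_false):
    def evaluate(point):
        branch = val_true if cond(point) else val_false
        return branch(point) if callable(branch) else branch
    return evaluate

f = if_then_else(lambda p: p['a'] > 0, lambda p: 3 * p['a'], -1.0)
assert f({'a': 10}) == 30      # condition resolved only here, at evaluation time
assert f({'a': -2}) == -1.0
```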

from FuncDesigner import *
a, b = oovars('a', 'b')
point = {a:10, b:[2, -5, -4]}
f1 = ifThenElse(a+sum(b)<1, 3*a, 15*(a+5/b))
f2 = 4*sin(a) + 5 * ifThenElse(a+sum(abs(b))>1, 3, 4*a)
f3 = 15 + 25 * ifThenElse(2>1, -100, 3*sum(log2(abs(b))))
f4 = 15 + 25 * ifThenElse(1>2, -100, 3*sum(log2(abs(b))))
for f in [f1, f2, f3, f4]:
    print(f(point))
    print(f.D(point))

#[ 187.5]
#{a: array([ 15.]), b: array([-18.75,  -0.  ,  -0.  ])}
#[ 12.82391556]
#{a: array([-3.35628612])}
#[-2485]
#{}
#[ 414.14460712]
#{b: array([ 54.10106403, -21.64042561, -27.05053202])}

If your condition already has boolean type when Python evaluates it (for example, "if myOOFun(startPoint)"), then you can use the ordinary Python if-elif-else regardless of the type of the other data involved.

## min and max

min and max can work in 2 ways:

• on a single (vectorized) oofun, e.g.
from FuncDesigner import *
a = oovar()
M, m = max(a), min(a)
func = M*m + 1 + sum(sin(a))
point = {a: [100,2,3,1,-0.4,5,-1.6]}
print('min: %0.2f     max: %0.2f    func: %0.2f' % (m(point),M(point),func(point)))
# prints min: -1.60     max: 100.00    func: -159.96
• (since v. 0.33) you can use min(list_of_oofuns) and max(list_of_oofuns), e.g.
from FuncDesigner import *
a, b, c = oovars('a', 'b', 'c')
f1, f2, f3 = a+b, b+c, a+c
M = max((a, f1, b, f2))
m = min([c**i for i in xrange(2, 5)])
point = {a:0, b:1, c:2}
print(M(point), m(point)) # (array(3.0), array(4.0))
print(M.D(point)) # {b: 1.0, c: 1.0}
print(m.D(point)) # {c: 4.0}

Pay attention that FuncDesigner min/max can override (or be overridden by) Python or numpy min/max; an appropriate error message is expected to be raised in that case (avoid using "from ... import *" then).

## Slicing and indexing

You can use the following slicing and indexing for oofuns/oovars: a[ind], a[ind1:ind2]. ind1 and ind2 should be integers: positive, negative or zero. Unlike numpy.ndarray, you can't use a[ind:] or a[:ind] for now if a is an oofun/oovar (you can with a created as a = oovars(), but that has its own pros and cons). Currently ind cannot be an oovar/oofun like a.size (unless you have assigned a.size to a known fixed number); the latter may be implemented in future releases.

from FuncDesigner import *
N = 100
a = oovar()
b = a[1:N]
c = a[0:N-1]
f1 = abs(b-c)
d = f1 + b + a*c[N-5]
f2 = sum(d)
point = {a: cos(range(N))}
print('f2: ' + str(f2(point))) # prints 0.150446291231

## Angle

(available since v. 0.33)
Returns the angle between 2 vectors. Currently the oofun is implemented as arccos(sum(inp1*inp2)/sqrt(sum(inp1**2)*sum(inp2**2))). Thus you should take care of the zero-vector case yourself, e.g. via ifThenElse or attached constraints. The possibility of getting out of the range [-1, 1] due to small numerical errors is handled via an attached constraint with a negative tolerance 1e-7, see below for details (you can remove it and/or replace it with another, possibly attached, constraint). Pay attention that no check for len(inp1) == len(inp2) is performed, so you should take care of that yourself if required (e.g. to prevent inp1 and inp2 being a scalar and a vector). This is intended to be improved in future versions.

from FuncDesigner import *
x,y = oovars(2)
a=angle(x,y)
print(a({x:[0,1],y:[1,0]})) # should be pi/2 = 1.57079632679
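The quoted formula is easy to reproduce with plain numpy; the clip below plays the role of the attached constraint mentioned above (an illustrative sketch, not the FuncDesigner implementation):

```python
import numpy as np

def angle(inp1, inp2):
    # arccos(sum(inp1*inp2)/sqrt(sum(inp1**2)*sum(inp2**2))); clipping the
    # arccos argument to [-1, 1] absorbs small numerical errors
    inp1, inp2 = np.asarray(inp1, float), np.asarray(inp2, float)
    r = np.sum(inp1 * inp2) / np.sqrt(np.sum(inp1**2) * np.sum(inp2**2))
    return np.arccos(np.clip(r, -1.0, 1.0))

assert abs(angle([0, 1], [1, 0]) - np.pi / 2) < 1e-12   # orthogonal vectors
assert abs(angle([1, 0], [2, 0])) < 1e-12               # parallel vectors
```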

# Passing oovars through ordinary Python functions

oovars can pass through ordinary Python functions, provided FuncDesigner has appropriate overloads for all functions/operators used inside them.

from FuncDesigner import *
a, b, c = oovars('a', 'b', 'c')

def func1(x, y):
    return x + 4*y

func2 = lambda x, y, z: sin(x) + 4*y + 3*cos(z) + func1(x, z)

def func3(x, y, z):
    return x + 2*y + 5*sin(z) + func2(x, z, y) + func1(z, y)

print(func3(2, 3, 4))

# Note that func3, func2, func1 could be imported from another file instead of being defined here
myFunc = func3(a, b, c) # now myFunc is instance of FuncDesigner class oofun
point = {a:2, b:3, c:4}

print(myFunc(point))
print('difference: %f'% (func3(2, 3, 4) - myFunc(point)))
print('Derivative obtained via Automatic differentiation:')
print(myFunc.D(point))

# Expected output:
# 48.1553074605
# [ 48.15530746]
# difference: 0.000000
# Derivative obtained via Automatic differentiation:
# {a: array([ 1.58385316]), b: array([ 9.57663998]), c: array([ 1.7317819])}

# Solving systems of linear equations

Currently

• for dense SLE numpy.linalg.solve (LAPACK routine dgesv) is used
• for sparse SLEs scipy.sparse.linalg.spsolve is used. If you have SciPy installed without UMFPACK (license: GPL), then SuperLU will be used (license: BSD, included in recent SciPy versions). However, UMFPACK is usually many times faster. Installing SciPy from Linux software channels pulls in UMFPACK as an optional dependency (thus it gets installed as well); as for its presence in scientific Python distributions - check for yourself (probably via a Google search), since any information placed here may be obsolete by the time you read it
• solving SLEs with FuncDesigner requires OpenOpt installed; maybe in the future this dependency will be removed
• a FuncDesigner ooSystem can automatically determine whether your system of equations is linear or nonlinear, subject to the given set of free/fixed variables. For more details see the doc entry.
• you can use your own solver (see example with rendering below).

## Simple example of solving a SLE by FuncDesigner

from FuncDesigner import *

# create some variables
a, b, c = oovars('a', 'b', 'c')
# or just a, b, c = oovars(3)

# Python list of 3 linear equations with 3 variables
f = [2*a+3*b-2*c+5, 2*a+13*b+15, a+4*b+2*c-45]
# alternatively, you could pass equations (the example below is valid for FD 0.18+):
# f = [2*a+3*b-2*c == -5, 2*a+15 == -13*b, a+45==4*b+2*c]
# assign SLE
linSys = sle(f)

r = linSys.solve()
A, B, C =  r(a, b, c)
maxRes = r.ff # max residual
# Expected result:
# {b: array([-5.]), a: array([ 25.]), c: array([ 20.])}
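The same 3 x 3 system can be written as Ax = b and solved with numpy directly, which confirms the result above:

```python
import numpy as np

# 2a +  3b - 2c = -5
# 2a + 13b      = -15
#  a +  4b + 2c =  45
A = np.array([[2.,  3., -2.],
              [2., 13.,  0.],
              [1.,  4.,  2.]])
rhs = np.array([-5., -15., 45.])
sol = np.linalg.solve(A, rhs)             # [a, b, c]
assert np.allclose(sol, [25., -5., 20.])  # same values as in the expected result
```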

from FuncDesigner import *
n = 100
# create some variables
a, b, c = oovar(), oovar(size=n), oovar(size=2*n)

# let's construct some linear functions R^i -> R^j
# in Python range(m, k) is [m, m+1, ..., k-1]
f1 = a + sum(b*range(5, n+5)) # R^(n+1) -> R
f2 = a + 2*b + c.sum() # R^(2n+1) -> R^n

# you could use size of oovars
f3 = a + a.size + 2*c.size # R^(2n+1) -> R; a.size and c.size will be resolved into 1 and 2*n

f4 = c + range(4, 2*n+4) + f1 + 0.5*f2.sum() + 4*f3

# We can use "for" cycle:
for i in xrange(4):
    f4 = 0.5*f4 + a + f1 + 1

# Also, we could use matrix multiplication, eg f5 = dot(someMatrix, f4):
rng = 1.5 + cos(range(2*n)).reshape(-1, 1) # define 2n x 1 vector
R = dot(rng, rng.T) # create a matrix of shape 2n x 2n
f4 = dot(R, f4) # involve matrix multiplication

# Create Python list of linear equations
f = [a+f4+5, 2*a+b*range(10, n+10)+15, a+4*b.sum()+2*c.sum()-45]
# alternatively, you could pass equations:
#f = [a+f4==-5, 2*a+b==-15, a==-4*b.sum()-2*c.sum()+45]

linSys = sle(f)
r = linSys.solve()

# get result
A, B, C =  r(a,b,c)
# A, B, C will be numpy arrays of sizes 1,  n,  2*n
# if failed to solve - A, B, C will contain numpy.nan(s)

print('A=%f B[0]=%f C[-1]=%f' % (A, B[0], C[-1]))
# for n=100 it prints A=-5.000000 B[0]=-0.500000 C[-1]=-1883724.947909

# you could get residuals
# here it will be Python list of length 3
# that is number of (vectorized) equations
# with elements of type numpy.array
maxResidual = r.ff
residuals = [F(r) for F in f]

## Example of solving sparse SLE

(requires SciPy installed)

from FuncDesigner import *
from numpy import arange
from time import time
t = time()
N = 15000

a, b, c = oovar(), oovar(size=N), oovar(size=2*N)

f1 = a + sum(b*arange(5, N+5))
f2 = a + 2*b + c.sum()
f3 = a + a.size + 2*c.size
f4 = c + range(4, 2*N+4) + 4*f3

f = [a+f4+5, 2*a+b*range(10, N+10)+15, a+4*b.sum()+2*c.sum()-45]
# alternatively, you could pass equations:
#f = [a+f4==-5, 2*a+b==-15, a==-4*b.sum()-2*c.sum()+45]
linSys = sle(f)
print('Number of equations: ' + str(linSys.n)) # should print 3*N+1

r = linSys.solve() # i.e. using autoselect: numpy.linalg.solve for dense SLEs (LAPACK dgesv in current numpy 1.4)
# and scipy.sparse.linalg.spsolve for sparse SLEs (SuperLU or, if installed, UMFPACK in current scipy 0.8)
A, B, C =  a(r), b(r), c(r) # or A, B, C = r[a], r[b], r[c]
print('A=%f B[last]=%f B[first]=%f C[last]=%f' % (A, B[-1], B[0], C[-1]))
maxResidual = r.ff

# Note - timing may differ due to different matrices obtained from SLE rendering,
# because Python 2.6 doesn't have ordered sets (they are present in Python 3.x);
# maybe I'll implement fixed rendering for 3.x in the future, I don't want to deal
# with a rather difficult workaround for 2.x
print('time elapsed: %f' % (time()-t))

#A=-50992.657068 B[last]=7283.593867 B[first]=10197.031414 C[last]=-15048.714662
#time elapsed: 2.829374

## Rendering FuncDesigner SLE into ordinary SLE Ax=b

from FuncDesigner import *

# Create some variables
a, b = oovars('a', 'b')
c = oovar('c', size=2)

# Python list of linear equations
f = [2*a + 3*b - 2*(c+[1, 2]) * [3, 4] + 5, # R^nVars - > R^2
2*a+13*b+15, # R^nVars - > R
a+4*b+2*c.sum()-45# R^nVars - > R
]
# Alternatively, you could pass equations:
# f = [2*a + 3*b - 2*(c+[1, 2]) * [3, 4]==-5,
#            2*a+15==-13*b, a==-4*b-2*c.sum()+45]

# Assign SLE
linSys = sle(f)

# Rendered matrices for Ax=b are linSys.A (numpy.array of shape 4 x 4) and linSys.b (numpy.array of length 4)
A, B = linSys.A, linSys.b # B instead of b, to avoid overriding the oovar b defined above
# However, for large sparse problems they can have a type other than numpy.ndarray,
# e.g. scipy.sparse lil_matrix or coo_matrix (provided you have scipy installed);
# otherwise they sometimes simply would not fit into your computer's RAM.
# You can cast them to ndarray, for example via myArr = myMatrix.toarray()

"""
Now let's solve it, for this example I use numpy.linalg.solve
(that is default dense solver for FD SLEs)
but you can use any other linear solver
e.g. cvxopt, scipy.lapack or scipy.sparse solvers, PyAMG etc.
Also you can code it (via scipy.sparse or somehow else) into
one of common sparse matrices formats,
save it into a file and solve it via C, Fortran etc
(BTW f2py or Cython could be useful here)
Current drawback: for huge sparse SLEs you must have enough memory
to handle the dense A before the cast to sparse is performed.
Maybe I'll fix it in the future; currently I don't deal with such situations myself
"""
from numpy import linalg
xf = linalg.solve(A, B)

# Decode solution
sol = linSys.decode(xf)
print(c(sol))
print(sol)
# Expected output:
# [ 10.2   6.4]
# {b: array([-7.72]), c: array([ 10.2,   6.4]), a: array([ 42.68])}

## Eigenvalue analysis

(since v. 0.37)
See EIG for more details, including eigenvalue problems for matrices obtained via automatic differentiation of general functions.

from FuncDesigner import *
from numpy import arange
n = 100
# create some variables
a, b, c = oovar('a'), oovar('b', size=n), oovar('c', size=2*n)

# let's construct some linear functions R^i -> R^j
# in Python range(m, k) is [m, m+1, ..., k-1]
f1 = a + sum(b*arange(5, n+5)) # R^(n+1) -> R
f2 = a + 2*b + c.sum() # R^(2n+1) -> R^n

# you could use size of oovars
f3 = a + a.size + 2*c.size # R^(2n+1) -> R; a.size and c.size will be resolved into 1 and 2*n

f4 = c + arange(4, 2*n+4) + f1 + 0.5*f2.sum() + 4*f3

# We can use "for" cycle:
for i in range(4):
    f4 = 0.5*f4 + a + f1 + 1

# Also, we could use matrix multiplication, eg f5 = dot(someMatrix, f4):
rng = 1.5 + cos(range(2*n)).reshape(-1, 1) # define 2n x 1 vector
R = dot(rng, rng.T) # create a matrix of shape 2n x 2n
f4 = dot(R, f4) # involve matrix multiplication

# Create Python list of linear equations
f = [a+f4+5, 2*a+b*arange(10, n+10)+15, a+4*b.sum()+2*c.sum()-45]
# alternatively, you could pass equations:
f = [a+f4==-5, 2*a+b==-15, a==-4*b.sum()-2*c.sum()+45]

linSys = sle(f)
print('dimension of the SLE: %d' % linSys.n) # dimension of the SLE: 301
# let's search for 4 largest magnitude eigenvalues
r = linSys.eig(goal={'lm':4}, solver='arpack')
# or goal={'largest magnitude':4}, with or without space inside, case-insensitive
# for whole list of available goals see http://openopt.org/EIG

print(r.eigenvalues)
#[ -1.35516602e-05 -1.71948079e-05j  -6.93570858e-01 +0.00000000e+00j
#   1.73033511e+00 +0.00000000e+00j   4.88614250e+06 +0.00000000e+00j]

# let's print eigenvector for last of the obtained eigenvalues:
print(r.eigenvectors[-1])
#{a: (-0.10673471576669166+0j), b: array([  -0.08710844+0.j, -0.04627392+0.j, -0.02177422+0.j, -0.03613426+0.j,...-0.10085657+0.j],
# c: array([ -1.02123985e-01+0.j,  -6.83779258e-02+0.j, ..., -7.82334725e-06+0.j]}

print(r.eigenvectors[-1][a])
#(-0.10673471576669166+0j)
print(type(r.eigenvectors[-1][a]))
#<type 'numpy.complex128'>
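For intuition about the 'largest magnitude' goal, here is a small dense stand-in using numpy (ARPACK obtains such eigenpairs iteratively, without computing the full spectrum as done here):

```python
import numpy as np

# Dense stand-in for goal={'lm': k}: compute the full spectrum with numpy and
# keep the k eigenvalues of largest modulus.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
w, v = np.linalg.eig(A)
k = 2
idx = np.argsort(np.abs(w))[-k:]          # indices of the k largest |lambda|
largest = w[idx]

assert len(largest) == k
for i in idx:                             # eigenpair check: A v = lambda v
    assert np.allclose(A @ v[:, i], w[i] * v[:, i])
```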

# Solving systems of nonlinear equations

## Basic example

"""
Solving system of equations:
x**3 + y**3 - 9 = 0
x - 0.5*y = 0
cos(z) + x - 1.5 = 0
"""
from FuncDesigner import *
from openopt import SNLE

x, y, z = oovars(3)

equations = (x**3 + y**3 - 9, x - 0.5*y, cos(z) + x - 1.5)
# alternatively, since FD 0.32: you can use equations and require custom tolerances
equations = (x**3 + y**3 - 9==0, (x - 0.5*y==0)(tol=1e-9), (cos(z) + x == 1.5)(tol=1e-9))
# if no tol is assigned for an equation, p.ftol (default 10^-6) will be used for that one

startPoint = {x:8, y:15, z:80}

p = SNLE(equations, startPoint)

# optional: we could set some constraints
p.constraints = [z<70,  z>50,  z + sin(x) < 60]

r = p.solve('nssolve') # nssolve is name of solver involved, see OOF doc for more arguments
xs, ys, zs = r(x, y, z)
print('Solution: x = %f   y = %f  z = %f' % (xs, ys, zs))
#Solution: x = 0.999999   y = 2.000000  z = 57.595865
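The reported solution is easy to verify by hand with the stdlib: note that 57.595865 rad is pi/3 plus 9 full turns, so cos(z) = 0.5.

```python
from math import cos, pi

x, y, z = 0.999999, 2.0, 57.595865   # the solution printed above
assert abs(x**3 + y**3 - 9) < 1e-4
assert abs(x - 0.5 * y) < 1e-4
assert abs(cos(z) + x - 1.5) < 1e-4
assert abs(z - (pi / 3 + 9 * 2 * pi)) < 1e-4   # z is pi/3 plus 9 full turns
```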

"""                 More complex example                 """
# Solving non-linear system of size 152
from FuncDesigner import *
from openopt import SNLE
from numpy import arange, ones
N = 150

x = oovar(size=N)
y,z = oovars(2)

f = (x.sum()**2 + y - 100*N, 1e-3*sum(x**2) - 5*y - 100*N - (z-1)**2, (x-15)*arange(N)/(100*N) - 4*z + 5)

startPoint = {x: ones(N), y:15, z:80}

p = SNLE(f, startPoint, maxIter=1e7)

r = p.solve('scipy_fsolve') # scipy_fsolve is the name of the solver involved, see OOF doc for more arguments
print('x:%f  y:%f  z:%f' % (x(r),  y(r),  z(r)))
#x:15.000000  y:-2107.773183  z:1.250000

## Getting ALL solutions

(since release version 0.34)
OpenOpt has an excellent solver interalg that is capable of obtaining ALL solutions of nonlinear equation(s).

from FuncDesigner import *
from openopt import *

x, y, z, u, v, t = oovars('x y z u v t')
F =  [
(log(x+5)+cos(y) == 1.0)(tol=1e-10),
(x**3 + z == -1.5)(tol=1e-5),
u**3 + sqrt(abs(v)) == 0.5,#unassigned tol will be taken from p.ftol, default 10^-6
abs(t)**1.5 + abs(y)**0.1 == 10,
sinh(v)+arctan(z) == 4,
z**15 == u + v
]

startPoint = {x:0.51, y:0.52, z:0.53, u:0.54, v:0.55, t:0.56} # doesn't matter for interalg, matters for other solvers

solver='interalg' # set it to scipy_fsolve to ensure that the solver cannot find any solution of the system
p = SNLE(F, startPoint, ftol = 1e-10)
# interalg requires finite box bounds on variables while scipy_fsolve cannot handle any constraints.
# To set box bounds you can do either
#p.constraints = (x>-10, x<20, y>-20, y<10, z>-30, z<30, u>-32, u<32, v>-21, v<20, t>-10, t<10)
# or
p.implicitBounds=[-10, 10] # to affect all variables without assigned bounds

# you can add some constraints, e.g. p.constraints = t**2+y>10 or p.constraints = [sin(t)>0, y<0]
# or [sin(t+i) + y < 2*i for i in range(5)], but they are not tested properly with interalg yet

# without positive tolerances assigned to ALL variables the number of solutions is infinite
x.tol = y.tol = z.tol = u.tol = v.tol = t.tol = 1e-5
# alternatively, you could use x, y, z, u, v, t = oovars('x y z u v t', tol = 1e-5)
# solutions s1 and s2 are equivalent if and only if |s1_variable[i]-s2_variable[i]| <= variable[i].tol for all variables
r = p.solve(solver, maxSolutions = 1000)# also you can use 'all' or 0, but sometimes it can be out of memory
'''
solver: interalg_0.21   problem: unnamed    type: SNLE
iter   objFunVal
0  8.644e+00
10  3.943e+00
20  2.248e+00
30  8.349e-05
40  9.681e-08
50  4.327e-10
OpenOpt info: Solution with required tolerance 1.0e-10
is guarantied (obtained precision: 3.9e-11)
56  3.904e-11
istop: 1001 (optimal solutions obtained)
Solver:   Time Elapsed = 0.42   CPU Time Elapsed = 0.42
12 solutions have been obtained
'''
# r.solutions is Python dict with the obtained  solutions
# Let's perform some analysis on them:
from numpy import amin, amax
SolutionsCoords = [(var, [var(s) for s in r.solutions]) for var in (x, y, z, u, v, t)]
SolutionsCoords.sort(key=lambda elem: amax(elem[1]) - amin(elem[1]))
for var, coords in SolutionsCoords:
    print('variable %s is bounded in range of length %0.1e' % (var.name, amax(coords) - amin(coords)))
'''
variable v is bounded in range of length 0.0e+00
variable z is bounded in range of length 4.5e-12
variable u is bounded in range of length 9.1e-12
variable x is bounded in range of length 2.0e-07
variable t is bounded in range of length 8.6e+00
variable y is bounded in range of length 1.6e+01
'''
# So only t and y differ essentially.
# Let's plot them:
S = dict(SolutionsCoords)
from pylab import *
scatter(S[t], S[y], marker = (5, 1, 0), s=75)
xlabel('t'); ylabel('y')
grid(1)
show()

## Solving a nonlinear equation

To solve a single equation you can merely use the same syntax as for nonlinear equation systems, introduced above. Let's illustrate it by searching for all roots of the equation (1 + 10^(-10) + sin(150*x)) * cos(10*x)**2 = 0 with the interalg solver on the line segment [0, 1.5]. As you can see, the function has lots of local extrema that are very close to zero (at the points where sin(150*x) is close to -1). Using ordinary function-evaluation-based solvers, like those from scipy.optimize, is very ineffective even for finding a single root of the equation (especially if you replace 150 with a much higher value).
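In this particular example the roots can even be located by hand: the factor 1 + 10^(-10) + sin(150*x) never vanishes, so all roots come from cos(10*x) = 0, i.e. x = (2k+1)*pi/20; counting those in [0, 1.5] explains why exactly 5 solutions are obtained below.

```python
from math import pi, sin, cos

# sin(150*x) >= -1, so 1 + 1e-10 + sin(150*x) > 0 everywhere and all roots
# satisfy cos(10*x) = 0, i.e. x = (2k+1)*pi/20 for integer k >= 0
roots = []
k = 0
while (2 * k + 1) * pi / 20 <= 1.5:
    roots.append((2 * k + 1) * pi / 20)
    k += 1

assert len(roots) == 5                   # matches '5 solutions have been obtained'
for x in roots:
    y = (1 + 1e-10 + sin(150 * x)) * cos(10 * x) ** 2
    assert abs(y) < 1e-15
```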

from FuncDesigner import *
from openopt import *

x = oovar(tol=1e-3) # solutions x1 and x2 with |x1-x2| < tol will be considered as equivalent
y = (1 + 1e-10 + sin(150*x)) * cos(10*x) ** 2
equation = [y == 0]

# we will search for equation roots in the line segment [0, 1.5]
constraints = (x>0,  x<1.5)

startPoint = {x:0} # doesn't matter essentially for the solver "interalg"
p = SNLE(equation, startPoint, constraints = constraints, ftol = 1e-10) # required tolerance for solutions
solver = oosolver('interalg', maxSolutions = 10000)
r = p.solve(solver)
'''
------------------------- OpenOpt 0.34 -------------------------
solver: interalg_0.21   problem: unnamed    type: NLSP
iter   objFunVal   nSolutions
0  1.000e+00          0
10  7.367e-12          4
OpenOpt info: Solution with required tolerance 1.0e-10
is guarantied (obtained precision: 5.1e-11)
12  5.118e-11          5
istop: 1001 (solutions are obtained)
Solver:   Time Elapsed = 0.07   CPU Time Elapsed = 0.05
5 solutions have been obtained
'''

# Let's plot a related picture
from pylab import plot, scatter, show, ylim, grid
from numpy import arange
xx = arange(0, 1.5, 0.001)
plot(xx, y({x:xx}))
X = [x(s) for s in r.solutions]
scatter(X, [0]*len(X), marker = (5, 1, 0), s=75)
ylim(-0.1, 2.1)
grid(); show()

# Creating special oofuns

If you have functions beyond the current FuncDesigner overload capabilities, and/or want to provide oofun wrappers for functions written in C, Fortran or other languages, you can create your own oofun(s).
Automatic differentiation will be available provided you have supplied its derivative yourself or have DerApproximator installed (which will involve a finite-difference stencil approximation). Generally, creating special oofuns is performed via

from FuncDesigner import oofun
my_oofun = oofun(function, input=input, d=d, args=args) # d and args are optional

where function is a Python function (or one connected to Python via f2py, Cython, ctypes etc.),
input is the oovar(s)/oofun(s) (list or single element) - the input arguments for the function,
d is the derivative (if known),
args are additional arguments for the function (if required).
Example:

from FuncDesigner import *
from scipy import fft
from numpy import arange
a, b, c = oovars('a', 'b', 'c')
ff = lambda x, y, z: abs(fft((x+2*y+4*z)*arange(16))) # R^3 -> R^16
FFT  = oofun(ff, input=[a, b, c])
f = a**2+b**2+c**2 + FFT
point = {a:1, b:2, c:3}
print(f(point))
print(f.D(point))

#  Another way, via creating oofun constructor:

from FuncDesigner import *
from scipy import fft
from numpy import arange, hstack
a, b, c = oovars('a', 'b', 'c')
myFFT = lambda INPUT: oofun(lambda *args: abs(fft(hstack(args))), input=INPUT) # Fourier transform returns complex numbers; abs() is taken to obtain real values
f = a**2+b**2+c**2 + myFFT([a+2*b, a+3*c, 2*b+4*c])
point = {a:1, b:2, c:3}
print(f(point))
print(f.D(point))

# Translator

The feature is available since FD v. 0.21.

You can use FuncDesigner code built from lots of oovars with routines that take a single input array x (a numpy array of size n), possibly along with a small fixed number of other input arrays: a function fun: R^n -> R^m (possibly along with several other similar functions), optionally with function(s) Dfun: R^n -> R^(m x n) providing user-supplied derivatives (we will involve FuncDesigner automatic differentiation instead).

Examples: routines from scipy.optimize, especially gradient-based ones; scipy.odr; scipy.integrate odeint; some user-defined routines; etc.

Of course, some of the routines mentioned above are already connected to FuncDesigner, but an awful lot of others are not yet, and many of the thousands of them (especially user-defined ones) never will be.
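The core of a translator is just packing a dict of named arrays into one flat vector and back. A minimal numpy sketch (the class and method names mirror ootranslator but are hypothetical, not the FuncDesigner API):

```python
import numpy as np

# Hypothetical stand-in for ootranslator: fix a variable order, remember sizes,
# then concatenate (point2vector) or split (vector2point) accordingly.
class Translator:
    def __init__(self, sizes):                  # sizes: dict name -> size
        self.names = list(sizes)
        self.sizes = sizes
        self.n = sum(sizes.values())
    def point2vector(self, point):
        return np.concatenate([np.atleast_1d(np.asarray(point[k], float))
                               for k in self.names])
    def vector2point(self, x):
        point, ofs = {}, 0
        for k in self.names:
            point[k] = x[ofs:ofs + self.sizes[k]]
            ofs += self.sizes[k]
        return point

T = Translator({'a': 3, 'b': 2})
p = {'a': [0., 1., 2.], 'b': [5., 6.]}
x = T.point2vector(p)
assert T.n == 5 and x.shape == (5,)
back = T.vector2point(x)                        # round trip restores the point
assert np.allclose(back['a'], p['a']) and np.allclose(back['b'], p['b'])
```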

from FuncDesigner import *

#                                                         Creating translator:

# Way 1. You can define translator from a set (list, tuple) of oovars
a, b, c = oovar(size=3), oovar(size=10), oovar(size=100)
d = oovar() # for an unsized oovar the translator considers its size to be 1

T = ootranslator([a, b, c, d])

# Way 2. You can define translator from a point (Python dict of oovars and their values in the point)
a, b, c, d = oovars(4) # create 4 oovars
point = {a:[0, 1, 2], b:[1]*10, c:[0]*100, d:1} # size of a is 3, size of b is 10, size of c is 100, size of d is 1
# alternatively:
# from numpy import *
# point = {a:arange(3), b:ones(10), c:zeros(100), d:array(1)}
T = ootranslator(point)

#                                                               Using translator:

# First of all, let's consider using the routines point2vector and vector2point
x = T.point2vector(point) # x is a numpy array of length T.n = 114
# we can perform some operations on x, e.g. pass to a routine
x += 1 + sin(x)
# and decode it backward:
newPoint = T.vector2point(x)

# Now let's involve automatic differentiation
# create a func
func1 = (sum(a)+2*sum(b)+sum(sin(c)*cosh(d)))**2 # R^114 -> R
pointDerivative1 = func1.D(point) # Python dict {a: df/da, b:df/db, c:df/dc, d:df/dd}
func1_d = T.pointDerivative2array(pointDerivative1) # numpy array of size 114

func2 = a+2*sum(b)+sum(sin(c)*cosh(d)) # R^114 -> R^3, because size(a) is 3 and the other summands are of size 1
pointDerivative2 = func2.D(point) # Python dict {a: df/da, b:df/db, c:df/dc, d:df/dd}
func2_d = T.pointDerivative2array(pointDerivative2) # numpy 2d array of shape (3, 114)

# Now you could use translator in a routine that consumes function, it derivative and start point as numpy array
# e.g. for scipy optimize fmin_bfgs you could write
from scipy.optimize import fmin_bfgs
objective = lambda x: func1(T.vector2point(x))
x0 = T.point2vector(point)
derivative = lambda x: T.pointDerivative2array(func1.D(T.vector2point(x))).flatten() # without flatten it may be 2d array, e.g. 1 x 114
x_opt = fmin_bfgs(objective, x0, derivative)
optPoint = T.vector2point(x_opt) # Python dict {a:a_opt, b:b_opt, c:c_opt, d:d_opt}
a_opt, b_opt, c_opt, d_opt = optPoint[a], optPoint[b], optPoint[c], optPoint[d] # numpy arrays of sizes 3, 10, 100, 1

Involving sparsity in ootranslator is possible but is not documented yet.

# Interpolation

This example uses scipy.interpolate.UnivariateSpline. This function has been wrapped by a FuncDesigner routine, yielding the oofun FuncDesigner.interpolate.scipy_UnivariateSpline with exactly the same args/kwargs. See the scipy.interpolate.UnivariateSpline doc for the whole list of possible parameters. The spline implementation in SciPy is rather premature; you'd better check your spline each time via graphical visualization, e.g. with matplotlib, to prevent possible errors.

You can easily get derivatives via automatic differentiation and use the function in numerical optimization problems.
FuturePlans: 2-D and N-dimensional splines

from FuncDesigner import *
a, b, c = oovars('a', 'b', 'c')
point1 = {a:1, b: 0, c:[1, 2, 3]}

mySpline = interpolator([1, 2, 3, 4], [1.001, 4, 9, 16.01])

(see the scipy UnivariateSpline constructor doc for other available arguments; pay attention to k (order) and s (smoothing factor))
The methods residual and plot described below are available since v. 0.37. plot plots your spline and the points it is based on (requires matplotlib installed), and residual calculates the maximal difference between the initial and interpolated values at the spline definition points.

# Let's check our spline - spline support in scipy is still quite immature
print('max residual in spline definition points: %e' % mySpline.residual())
mySpline.plot()

f = mySpline(a)

print(f(point1))
print(f.D(point1))

f2 = a + sin(b) + c + arctan(1.5*f)
F = mySpline(a + f2 + 2*b + (c**2).sum())

print(F(point1))
print(F.D(point1))

"""
Expected output:
max residual in spline definition points: 2.220446e-16 (it is not always that small)
[ 1.001]
{a: 2.0015}
[ 329.61795391]
{a: 108.51234473582626, b: 111.39024933951316, c: array([ 111.39024934,  148.52033245,  222.78049868])}
"""

# Integration

OpenOpt has the interalg solver, which (since r. 0.36) is capable of numerical integration with guaranteed user-requested precision. For more single- and multi-dimensional examples see the IP page.

Usage:

myOOFun = integrator(integration_oofun, domain)

where domain is a tuple (integration_oovar, lower_bound, upper_bound)

Examples:

from FuncDesigner import *
a, b, c = oovars('a', 'b', 'c')
f1,  f2 = sin(a)+cosh(b), 2*b+3*c.sum()
f3 = 2*a*b*prod(c) + f1*cos(f2)

point1 = {a:1, b:2, c:[3, 4, 5]}
point2 = {a: -10.4, b: 2.5,  c:[3.2, 4.8]}

domain = (b, -1, 1)
f4 = integrator(f1+2*f2+3*f3, domain)
print(f4(point1), f4(point2)) # Expected output: 147.383792876, 102.143425528

#                               integral bounds can be oofuns as well:

f5 = integrator(f1+2*f2+3*f3, (a, 10.4, f2+5*sin(f1)))
print(f5(point1), f5(point2)) # Expected output: 404683.969794 107576.397664

from numpy import inf # you can use infinite bounds as well
f6 = integrator(1/(1+a**2+b**2+sum(c**2)), (a, 10+cos(f2+2*f3), inf))
print(f6(point1), f6(point2)) # Expected output: 0.0847234400308 0.0905041349188

f7 = integrator(f1+2*f2+3*sqrt(abs(f3)), (a, cos(f2+2*f3), f2+5*sin(f1)) )
print(f7(point1), f7(point2)) # Expected output: 9336.70442146 5259.53130904
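Setting interalg's guaranteed-precision machinery aside, the plain idea of evaluating an integrator oofun can be sketched with a composite trapezoid rule (illustration only, with no precision guarantee):

```python
from math import sin, pi

def integrate(f, a, b, n=10000):
    # composite trapezoid rule over n subintervals
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n))
    return s * h

# the integral of sin over [0, pi] is exactly 2
print(integrate(sin, 0, pi))  # approximately 2.0
```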

# Uncertainty analysis

(available since v. 0.33)
Uncertainty is calculated by the first-order error propagation formula delta_f = sqrt( sum_i (df/dx_i * delta_x_i)^2 ).

Usage:

  result = oofun.uncertainty(point, deviations, actionOnAbsentDeviations)

point and deviations should be Python dicts of pairs (oovar, value_for_oovar)
actionOnAbsentDeviations =

'error' (raise a FuncDesigner exception) |
'skip' (treat as a fixed number with zero deviation) |
'warning' (default; print a warning, treat as a fixed number)

Large-scale (maybe sparse) examples haven't been tested; we could implement and test this properly on demand (some speedup could be implemented).

from FuncDesigner import *
from numpy import mat
a, b = oovars('a', 'b')
f1 = a + 2 * b
f2 = sin(a**2)
point1 = {a:1, b:1}
point2 = {a:0.5, b:0.15}
deviations1 = {a:0.1, b:0.1}
deviations2 = {a:0.01, b:0.015}
for f in [f1, f2]:
    for point in (point1, point2):
        for deviations in [deviations1, deviations2]:
            print('%f+/-%f   ' % (f(point), f.uncertainty(point, deviations))), # the trailing comma behaves differently in Python 3
    print('\n')
# Vectorized example
point = {a:[1, 2, 3], b:0.1}
deviations1 = {a:0.01, b:0.015}
deviations2 = {a:[0.01, 0.2, 0.3], b:0.015}
M = mat('2 3 4; 3 4 5')
f = dot(M, a) + b
for deviations in [deviations1, deviations2]:
    print('%s +/- %s     ' % (f(point), f.uncertainty(point, deviations))), # the trailing comma behaves differently in Python 3

Output:

3.000000+/-0.223607    3.000000+/-0.031623    0.800000+/-0.223607    0.800000+/-0.031623
0.841471+/-0.108060    0.841471+/-0.010806    0.247404+/-0.096891    0.247404+/-0.009689
[ 20.1  26.1] +/- [ 0.0559017   0.07228416] [ 20.1  26.1] +/- [ 1.82006181  2.33004828]
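The numbers above agree with first-order error propagation, delta_f = sqrt( sum_i (df/dx_i * delta_x_i)^2 ). A plain-Python check for f1 = a + 2*b and f2 = sin(a**2), as a sketch rather than the FuncDesigner implementation:

```python
from math import sqrt, cos

# f1 = a + 2*b: df/da = 1, df/db = 2
def uncertainty_f1(dev_a, dev_b):
    return sqrt((1 * dev_a)**2 + (2 * dev_b)**2)

# f2 = sin(a**2): df/da = 2*a*cos(a**2)
def uncertainty_f2(a, dev_a):
    return sqrt((2 * a * cos(a**2) * dev_a)**2)

print('%f' % uncertainty_f1(0.1, 0.1))  # 0.223607, the first value in the output above
print('%f' % uncertainty_f2(1, 0.1))    # 0.108060, the first value of the second row
```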


# Interval analysis

Some interval analysis features are available since v. 0.33.
See the wikipedia.org entry on interval arithmetic for details.
The following overloaded functions are currently available for interval analysis: +, -, *, /, pow (**), sin, cos, arcsin, arccos, arctan, sinh, cosh, exp, sqrt, abs, log, log2, log10, floor, ceil, sum, min, max, sorted monotone 1-d splines (R->R) of order 1 or 3; since v. 0.38: arcsinh, arccosh, tanh, arctanh. Future plans: general 1-d splines, N-dimensional splines, erf, tan, cotan, maybe others (monotone or unimodal functions, or functions with all local extrema known, could be connected on demand).

from FuncDesigner import *
from numpy import ones
a, b = oovars('a', 'b')
a_infsup = (-ones(3), [1, 2, 3]) # Python list or tuple with 2 elements LowerBound, UpperBound
b_infsup = [2, 50.5] # or as tuple: (2, 50.5)
f1 = 2 * a
f2 = b + 15
f = f1 + f2

f_interval = f.interval({a: a_infsup, b: b_infsup})
print(f_interval.lb, f_interval.ub) # (array([ 15.,  15.,  15.]), array([ 67.5,  69.5,  71.5]))

# for some fixed coords (b) and some interval coords (a):
f_interval = f.interval({a: a_infsup, b: -15})
print(f_interval.lb, f_interval.ub) # (array([-2., -2., -2.]), array([ 2.,  4.,  6.]))
print(f_interval) # "FuncDesigner interval with lower bound [-2. -2. -2.] and upper bound [ 2.  4.  6.]"

f = sin(b) + cos(b+0.15)
f_interval = f.interval({a: 1000, b: [-0.1, 3.14/2]})
print(f_interval.lb, f_interval.ub) # (-0.24848411691819181, 1.9987499433268008)
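For affine expressions like f = 2*a + b + 15 above, the same bounds follow from elementary interval arithmetic, sketched here in plain Python:

```python
def iadd(x, y):
    # [a, b] + [c, d] = [a + c, b + d]
    return (x[0] + y[0], x[1] + y[1])

def iscale(k, x):
    # k * [a, b] for k >= 0
    return (k * x[0], k * x[1])

a = (-1.0, 1.0)          # one coordinate of a_infsup
b = (2.0, 50.5)          # b_infsup
f1 = iscale(2, a)        # 2*a -> (-2, 2)
f2 = iadd(b, (15, 15))   # b + 15 -> (17, 65.5)
f = iadd(f1, f2)
print(f)  # (15.0, 67.5), matching the first coordinate of f.interval(...) above
```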

# Boolean oofuns

Boolean oofuns are created by applying the operations '>', '>=', '==' (equality, ".eq" before FD v 0.18), '<', '<=' to ordinary smooth oofuns.
Note: since FuncDesigner is intended primarily for continuous functions, the operations '>' and '>=' are currently identical, as are '<' and '<='; all of them yield True in the equality case.
The resulting oofun has boolean output (True/False); for vectorized comparisons it is False if any vector coordinate equals numpy.nan ("not a number", e.g. the result of taking a logarithm or square root of a negative number, or of a user-defined function) or the constraint is violated:

from FuncDesigner import *
a, b = oovars(2)
point = {a:[2,4],b:[5,10]}
c = a+2*b; print(c(point)) # prints [12. 24.]
d = c > 15; print(d(point)) # prints False because c has coordinate 12, which is less than 15
z = c > -10; print(z(point)) # prints True because all coordinates of c are greater than -10 in the involved point
z2 = log(b-7); print(z2(point)) # prints False because log([-2,3]) is [NaN,  1.09861229]
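The NaN behaviour falls out of ordinary comparison semantics: any comparison with NaN yields False, so a vectorized all(...) check rejects both violated and undefined coordinates. A plain-Python sketch (not FuncDesigner code):

```python
from math import nan

def vec_greater(values, bound):
    # False if any coordinate violates the constraint or is NaN
    return all(v > bound for v in values)

c = [12.0, 24.0]
print(vec_greater(c, 15))   # False: the coordinate 12 is less than 15
print(vec_greater(c, -10))  # True: all coordinates exceed -10
z2 = [nan, 1.0986]          # like log([-2, 3]), where log(-2) is undefined
print(vec_greater(z2, 0))   # False: nan > 0 is False
```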

# Numerical optimization: using OpenOpt

Currently you can use only the classes mentioned below. Future plans: provide connections and examples for some other OO classes (maybe (MI)(QC)QP, DFP, NLLSP).

Recall that the required constraints tolerance can be changed during prob assignment, e.g. p = NLP(..., contol = 1e-7). See also the doc entry Personal tolerances for constraints.
Using FuncDesigner you don't have to provide 1st derivatives - they are calculated via Automatic differentiation.
If possible, constraints are rendered into box-bound and linear ones.
Since OOSuite v 0.31 you can use the useSparse={True/False} parameter (it has some autoselect capability, but it is still immature).
Suppose you have optimization variables a, b, c = oovars('a', 'b', 'c'). To extract optimal point coordinates, any of the following works:

a_opt,  b_opt, c_opt = r(a, b, c)
a_opt,  b_opt, c_opt = r(a), r(b), r(c)
r('a'), r('b'), r('c')
r('a', 'b', 'c')
a(r), b(r), c(r)

For the whole list of problems and solvers available in the OpenOpt framework see the OpenOpt problems tree.

## LP

# Example of creating LP in FuncDesigner
# and solving it via OpenOpt

from FuncDesigner import *
from openopt import LP

# Define some oovars
x, y, z = oovars(3)

# Let's define some linear functions
f1 = 4*x+5*y + 3*z + 5
f2 = f1.sum() + 2*x + 4*y + 15
f3 = 5*f1 + 4*f2 + 20

# Define objective; sum(a) and a.sum() are the same, just as for numpy arrays
obj = x.sum() + y - 50*z + sum(f3) + 2*f2.sum() + 4064.6

# Define some constraints via a Python list, tuple or set of any length, possibly built via while/for loops
constraints = [x+5*y<15, x<4, f1<[25, 35], f1>-100, 2*f1+4*z<[80, 800], 5*f2+4*z<100, -5<x,  x<1, -20<y,  y<20, -4000<z, z<4]

# Start point - currently only the sizes of the variables matter; glpk, lpSolve and cvxopt_lp use start value = all-zeros
startPoint = {x:[8, 15], y:25, z:80} # numpy arrays are recommended over Python lists

# Create prob
p = LP(obj, startPoint, constraints = constraints)

# Solve
r = p.solve('glpk') # glpk is name of solver involved, see OOF doc for more arguments

# Decode solution
print('Solution: x = %s   y = %f  z = %f' % (str(x(r)), y(r), z(r)))
# Solution: x = [-4.25 -4.25]   y = -20.000000  z = 4.000000

## NLP

from FuncDesigner import *
from openopt import NLP

a, b, c = oovars('a', 'b', 'c')
f = sum(a*[1, 2])**2+b**2+c**2
startPoint = {a:[100, 12], b:2, c:40} # however, you'd better use numpy arrays instead of Python lists
p = NLP(f, startPoint)
p.constraints = [(2*c+a-10)**2 < 1.5 + 0.1*b, (a-10)**2<1.5, a>8.9, a+b > [ 7.97999836, 7.8552538 ], \
a < 9, (c-2)**2 < 1, b < -1.02, c > 1.01, (b + c * log10(a).sum() - 1) ** 2==0]
r = p.solve('ralg')
a_opt, b_opt, c_opt = r(a, b, c)
print(a_opt, b_opt, c_opt)
# you can use any of the following as well:
# a_opt,  b_opt, c_opt = r(a), r(b), r(c)
# r('a'), r('b'), r('c') (provided you have assigned the names to oovars as above)
# a(r), b(r), c(r)

"""
Expected output:
...
objFunValue: 717.75631 (feasible, max constraint =  7.44605e-07)
array([ 8.99999881,  8.87525339]), -1.0199994691949312, 1.0614185705226642
"""

## NSP

# Example for NSP
# sum over i of i^1.2 * |x_i| + |y-15| + |y+15| + y^2 -> min
# with some constraints
# coded in FuncDesigner
# all 1st derivatives are obtained via Automatic Differentiation,
# not to be confused with finite-difference derivatives approximation

from numpy import cos, arange
from FuncDesigner import *
from openopt import NSP

x, y = oovars('x y')

N = 75
koeffs = arange(1, N+1) ** 1.2 # 1, 2**1.2, 3**1.2, ..., N**1.2

objective = sum(abs(x) * koeffs) + abs(y-15) + abs(y+15) + y**2
# Python list of some constraints
constraints = [(y-1)**2<1, abs(y) < 0.5, abs(x) < 1e-5, abs(x[N-1]) < 1e-5]
# we could use vectorized code (recommended, to speedup calculations)
constraints.append((x - 0.01*arange(N))**2 < 0.1*arange(1, N+1)) # (x_0-0)**2 < 0.1, (x_1-0.01)**2 < 0.2, (x_2-0.02)**2 < 0.3,...
startPoint = {x: cos(1+arange(N)), y:80}

p = NSP(objective, startPoint, maxIter = 1e5, constraints = constraints)

r = p.solve('ralg')
x_opt, y_opt = x(r), y(r)
print(max(abs(x_opt)), y_opt)

'''
expected output:
[...]
876  3.004e+01            -100.00
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver:   Time Elapsed = 7.98  CPU Time Elapsed = 7.97
objFunValue: 30.042539 (feasible, MaxResidual = 0)
(6.6277698279489041e-06, 0.20306221768582972)
'''

## MILP

# Example of creating MILP in FuncDesigner
# and solving it via OpenOpt

from FuncDesigner import *
from openopt import MILP

# Define some oovars
# old-style:
#x, y, z = oovars('x y z')
# new (OpenOpt v 0.37+):
x = oovar('x')
y, z = oovars('y z', domain = int)# also you may use domain = bool

# Let's define some linear functions
f1 = 4*x+5*y + 3*z + 5
f2 = f1.sum() + 2*x + 4*y + 15
f3 = 5*f1 + 4*f2 + 20

# Define objective; sum(a) and a.sum() are the same, just as for numpy arrays
obj = sum(x) + y + 50*z + sum(f3) + 2*f2.sum() + 4064.6

# Start point - currently only the sizes of the variables matter
startPoint = {x:[8, 15], y:25, z:80} # numpy arrays are recommended over Python lists

# Define some constraints
cons = [x+5*y<15, x<-5, f1<[25, 35], f1>-100, 2*f1+4*z<[80, 800], 5*f2+4*z<100, [-5.5, -4.5]<x,  x<1, -17<y,  y<20, -4000<z, z<4]

# Create prob
# old-style:
#p = MILP(obj, startPoint, intVars = [y, z], constraints=cons)
# new (OpenOpt v 0.37+):
p = MILP(obj, startPoint, constraints=cons)

# Solve
r = p.minimize('lpSolve', iprint=-1) # lpSolve is the name of the solver involved, see OOF doc for more arguments

# Decode solution
s = r.xf
print('Solution: x = %s   y = %f  z = %f' % (str(s[x]), s[y], s[z]))
# Solution: x = [-5.25 -4.5 ]   y = 3.000000  z = -33.000000

## MINLP

(since v. 0.33)

# Example of creating MINLP in FuncDesigner
# and solving it via OpenOpt

from FuncDesigner import *
from openopt import MINLP
a, b, c = oovars('a', 'b', 'c')
d = oovar('d', domain = [1, 2, 3, 3.5, -0.5,  -4]) # domain should be Python list/set/tuple of allowed values
startPoint = {a:[100, 12], b:2, c:40, d:0} # however, you'd better use numpy arrays instead of Python lists

f = sum(a*[1, 2])**2 + b**2 + c**2 + d**2
constraints = [(2*c+a-10)**2 < 1.5 + 0.1*b, (a-10)**2<1.5, a>8.9, (a+b > [ 7.97999836, 7.8552538 ])('sum_a_b', tol=1.00000e-12), \
a < 9, b < -1.02, c > 1.01, ((b + c * log10(a).sum() - 1) ** 2==0)(tol=1e-6), b+d**2 < 100]

p = MINLP(f, startPoint, constraints = constraints)
r = p.minimize('branb', nlpSolver='ralg', plot=0, discrtol = 1e-6, xtol=1e-7)
a_opt,  b_opt, c_opt, d_opt = r(a, b, c, d)
# or any of the following:
# a_opt,  b_opt, c_opt,d_opt = r(a), r(b), r(c),r(d)
# r('a'), r('b'), r('c'), r('d') (provided you have assigned the names to oovars as above)
# r('a', 'b', 'c', 'd')
# a(r), b(r), c(r), d(r)
print(a_opt, b_opt, c_opt, d_opt)
"""
Expected output:
...
objFunValue: 718.00734 (feasible, max(residuals/requiredTolerances) = 0.135819)
(array([ 8.99999879,  8.87525369]), array([-1.01999986]), array([ 1.06177278]), array([-0.5]))
"""

## LLSP

from FuncDesigner import *
from openopt import LLSP

# create some variables
a, b, c = oovars('a', 'b', 'c')
# or just a, b, c = oovars(3)

# the start point is unused by lapack solvers such as lapack_dgelss
# but it is required to set dimensions of variables
# also, it is used for some converters e.g. r = p.solve('nlp:ralg')
startPoint = {a:0, b:0, c:0}
# in general case variables can be arrays,
# e.g. startPoint = {a:zeros(100), b:[0, 2], c:ones(20000)}

# overdetermined system of 4 linear equations with 3 variables
# you can use "for" cycle, operations of sum() eg
# startPoint = {a:[0,10], b:0, c:0}
# f = [sum(a) + i * b + 4 * sqrt(i) * c - 2 * i**2 for i in xrange(40)]
f = [2*a+3*b-4*c+5, 2*a+13*b+15, a+4*b+4*c-25, 20*a+30*b-4*c+50]

# alternatively, you can use the following vectorized form
measurements_a_koeff = [2, 2, 1, 20]
measurements_b_koeff = [3, 13, 4, 30]
measurements_c_koeff = [-4, 0, 4, -4]
d = [5, 15, -25, 50]
f = a * measurements_a_koeff + b * measurements_b_koeff + c * measurements_c_koeff + d
# you can set several oofuns with different output sizes,
# e.g. f = [myOOFun1, myOOFun2, ..., myOOfunN]

# assign prob
p = LLSP(f, startPoint)

# solve
r = p.solve('lsqr')
a_sol,  b_sol,  c_sol = r(a, b, c)
# print result
print(r.xf)

# Expected output:
# {a: array([-0.3091145]), b: array([-0.86376906]), c: array([ 4.03827441])}
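The least-squares idea behind LLSP can be seen in the simplest one-parameter case: minimizing sum((m*x_i - y_i)**2) has the closed-form solution m = sum(x_i*y_i) / sum(x_i**2). A plain-Python sketch with hypothetical data, not the lsqr solver:

```python
def fit_scale(xs, ys):
    # minimizer of sum((m*x - y)**2) over the single unknown m
    return sum(x*y for x, y in zip(xs, ys)) / sum(x*x for x in xs)

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.0]   # roughly y = 2*x with noise
m = fit_scale(xs, ys)
residual = sum((m*x - y)**2 for x, y in zip(xs, ys))
print(m, residual)  # m is near 2, residual is small but nonzero (overdetermined system)
```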

## GLP

Only some of the GLP solvers connected to OpenOpt are capable of using derivatives, hence FuncDesigner AD features will hardly benefit you. Still, for small problems, where calculation speed isn't of much importance, you can use FuncDesigner to make your code clearer and shorter (no encoding/decoding of all variables into/from a single optimization vector); also, handling of fixed/free optimization variables is more convenient.

"""
GLP (GLobal Problem from OpenOpt set) example for FuncDesigner model:
searching for global minimum of the func
(x-1.5)**2 + sin(0.8 * y ** 2 + 15)**4 + cos(0.8 * z ** 2 + 15)**4 + (t-7.5)**4
subjected to some constraints
"""
from openopt import GLP
from FuncDesigner import *

x, y, z, t = oovars(4)

# define objective
f = (x-1.5)**2 + sin(0.8 * y ** 2 + 15)**4 + cos(0.8 * z ** 2 + 15)**4 + (t-7.5)**4

# define some constraints
constraints = [x<1, x>-1, y<1, y>-1, z<1, z>-1, t<1, t>-1, x+2*y>-1.5,  sinh(x)+cosh(z)+sinh(t) <2.0]

# add some more constraints via Python "for" cycle
M = 10
for i in xrange(M):
    func = i*x+(M-i)*y+sinh(z)+cosh(t)
    constraints.append(func < i+1)

# define start point. You can use variables with length > 1 as well
startPoint = {x:0, y:0, z:0, t:0}

# assign prob
p = GLP(f, startPoint, constraints=constraints,  maxIter = 1e3,  maxFunEvals = 1e5,  maxTime = 5,  maxCPUTime = 5)

#optional: graphic output
#p.plot = 1 or p.solve(..., plot=1) or p = GLP(..., plot=1)

# solve
r = p.solve('de', plot=1) # try other solvers: galileo, pswarm

x_opt, y_opt, z_opt, t_opt = r(x, y, z, t)
# or
# x_opt, y_opt, z_opt, t_opt = x(r), y(r), z(r), t(r)

## MOP

See MOP page for multi-objective optimization examples

## Optimization with some fixed variables

It is performed via using arguments fixedVars or freeVars.

If a constraint becomes linear after fixing some variables, it is rendered as linear. E.g. with fixed x

z = 5 + 4*cos(x)
sin(z) + 2*y < 10

will be rendered as linear (z, and hence sin(z), is just a constant). Moreover, sometimes the whole problem becomes linear and can be rendered as such; you'd better use oosystem to get automatic determination.

from FuncDesigner import *
from openopt import NLP

a, b, c = oovars('a', 'b', 'c')
startPoint = {a:[100, 12], b:-0.9, c:40} # however, you'd better use numpy arrays instead of Python lists

objective = ((a*[1, 2])**2).sum() + b**2 + c**2

# Optional: set some constraints
constraints = [a>[1, 1], # it means a[0]>=1, a[1]>=1
(a.sum()+b)**2<10,
(a+b+c)**2 < 15] # it means (a[i]+b+c)**2 < 15 for  i in {0,1}

fixedVars = b # or fixedVars = [b] / c / [b,c] / [a,c] etc
p = NLP(objective, startPoint, fixedVars = fixedVars, constraints = constraints)

r = p.solve('ralg')
print(r.xf)

# Alternatively, you can set freeVars ("optVars" for OOSuite versions <= 0.31) instead of fixedVars:
freeVars = [a, c] # or optVars = [a] / c / [b,c] / [a,c] etc
p = NLP(objective, startPoint, freeVars = freeVars, constraints = constraints)
r = p.solve('ralg')
print(r.xf)

"""
Expected output:
...
objFunValue: 5.809995 (feasible, max constraint =  7.62729e-07)
{b: -0.90000000000000002, a: array([ 1.00000032,  0.99999924]), c: array([ 0.00068568])}
"""

## Personal tolerances for constraints

Each boolean oofun created by comparison (less, greater, equal) of smooth oofuns has the field tol (tolerance, the threshold for triggering False), 0 by default (if such an oofun is encountered among constraints passed to OpenOpt, the value p.contol is used for it). If you change tol to another value, FuncDesigner and OpenOpt will use that one. You can assign it directly (c.tol = myValue), but the other ways shown below are more convenient and recommended:

from FuncDesigner import *
a = oovar()
point = {a: 1.5001}
# you can use any of the ways below
c = a<1.5; c.tol = 1e-3 # direct way, inconvenient - you have to type 'c' twice, and sometimes the name is much longer
c = (a<1.5)(tol=0.001) # to change tolerance you can call it with keyword arg '''tol'''
# in the __call__ routine you can assign constraint name as well:
c = (a<1.5)('asdf', tol=0.001) #   (*)
c = (a<1.5)('asdf')(tol=0.001) #   (**)
c = (a<1.5)(name='my constraint', tol=0.001)
c = (a<1.5)(tol=0.001, name='asdf')
c = (a<1.5)(name='asdf')(tol=1e-3)
# Most recommended ways are (*) and (**)
d = (a<1.5)(tol=0.000001)
print('c: ' + str(c(point)) + '   d: ' + str(d(point))) # c: True   d: False
print(((a+2==3.50005)(tol=1e-10))(point)) # False

Providing adequate constraint tolerances is highly important for most numerical optimization solvers. Many of them will have difficulties with constraints of huge typical values (like 10^3, 10^4 etc) while the required tolerances are small (like 10^-6, the OpenOpt default), or when some of your constraints have to be satisfied very exactly (like 10^-20) while others are huge as above. Thus, when you assign your own tolerances, OpenOpt scales them automatically to the working tolerance p.contol (via multiplying by a known constant).
If you have a strict constraint that cannot be violated for any tol (e.g. x>=0 to keep x^0.25 inside the definition domain) but you still want to provide a non-default scaling factor for the constraint, you can use a negative tol, e.g.

from FuncDesigner import *
from openopt import NLP
a = oovar()
objective = log(a-10000000000) + 100*exp(1/(a-10000000000)) # numbers around 10^10 dollars are involved
c = sqrt(a) > 1e5 + 0.1
c2 = (a>10000000000)(tol = -1000) # tolerance is 1000 dollars; strict inequality to keep the objective defined
startPoint = {a:10000000001}
p = NLP(objective, startPoint, constraints = [c, c2], maxIter = 1e4, ftol = 1e-4)
r = p.minimize('ipopt')
print(r.ff)
# My computer with IPOPT 3.8.0 prints 109.90849 for tol = -1000
# that is better than 109.9183 for tol = 1e-6 (default)

Known issues: currently assigning a non-zero tol forces box-bound constraints (lb <= x <= ub) to be rendered as general linear constraints (Ax <= b); to be fixed in the future.
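The scaling idea can be sketched as follows: a personal tol effectively rescales a constraint's residual into p.contol units, so all constraints are checked against the single working tolerance. This is an illustration of the idea only, not OpenOpt's actual code:

```python
def satisfied(residual, tol, contol=1e-6):
    # residual is max(0, violation); a personal tol rescales it to contol units,
    # which is equivalent to checking residual <= tol
    return residual * (contol / tol) <= contol

print(satisfied(1e-4, tol=1e-3))  # True: violation is below the personal tolerance
print(satisfied(1e-4, tol=1e-6))  # False: violation exceeds the personal tolerance
```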

## Attached constraints

(unestablished)
You can attach constraints to an oovar, oofun or another constraint; then every time this element is involved in numerical optimization by OpenOpt, the attached constraints are connected and involved as well. It is done automatically for some FD functions (log, sqrt, pow, arcsin, arccos), with constraints like (x>0)(tol = -1e-7), (x>1e-300)(tol = -1e-7) etc. In FD versions prior to 0.33 this happened unconditionally; since 0.33 you should pass the OpenOpt prob parameter p.useAttachedConstraints = True. In the future useAttachedConstraints will probably accept other values (along with True/False) such as "userOnly" or "kernelOnly". A negative tolerance means a strict requirement; see the related doc entry for details.

Using attached constraints is illustrated by the openopt NLP example below, but it can be used with other classes as well - LP, MILP, SNLE, GLP etc.

from FuncDesigner import *
from openopt import NLP

a, b, c = oovars('a', 'b', 'c')

b.attach(b < -1.01)
a.attach(a<9, a>8.9)
# having sqrt(c) we don't need to mention c>0 - it will  be connected automatically
c.attach((c-2)**2 < 1, c*sqrt(c) <2)

sumAB=(a+b > [ 7.97999836, 7.8552538 ])('sum_a_b', tol=1.00000e-12)
# having log(a) we don't need to mention a>0 - it will  be connected automatically (a>10^-300)
sumAB.attach(((b + c * log10(a).sum() - 1) ** 2==0)(tol=1e-7))

f = (sum(a*[1, 2])**2+b**2+c**2).attach((2*c+a-10)**2 < 1.5 + 0.1*b)
# we can use operation "attach" several times
f.attach((a-10)**2<1.5, (a-10.5)**2<2.5)

startPoint = {a:[100, 12], b:2, c:-40} # however, you'd better use numpy arrays instead of Python lists
p = NLP(f, startPoint, constraints = sumAB)
r = p.minimize('ralg')

print(r.xf)
a_opt,  b_opt, c_opt = r(a, b, c)

"""
Expected output:
...
objFunValue: 721.86104 (feasible, max(residuals/requiredTolerances) = 0.257482)
{a: array([ 8.98999844,  8.91886164]), b: array([-1.00999974]), c: array([ 1.0555967])}
"""

To remove attached constraints you can use removeAttachedConstraints (available since FD 0.32):
f.removeAttachedConstraints()

Future plans:

• possibility to change tol / scale factor (currently mostly tol ~= -1e-7 is used for log/log2/log10/sqrt/arcsin/arccos)
• provide attached constraints for those splines where the user wants extrapolation disabled
• provide a special operator for attaching, e.g. &=
• currently an element with attached constraints is evaluated regardless of their values; maybe attached constraints should be separated into 2 groups, e.g. "required" and "related": if any "required" constraint isn't satisfied in a point, the element it is attached to is not calculated there and numpy.nan is returned instead

# Solving ODE

• FuncDesigner automatic differentiation makes the software very suitable for solving ODE systems $\mathbf{dy/dt=f(y,t), y(t_0)=y_0\in R^n}$; also, you may be interested in eigenvalue analysis of derivatives of several functions obtained via automatic differentiation, see the EIG entry for details.
• Another advantage is having several named variables instead of only y and t, as in most ODE software tools
• Currently the following solvers are available:
• scipy.integrate.odeint (lsoda from the FORTRAN library odepack)
• interalg - a solver with guaranteed user-defined accuracy; currently it can handle only equations dy/dt = f(t)
• any other could be connected
• If you have parts of code written in C, Fortran or otherwise undifferentiable, you can wrap them into an oofun, and finite-difference derivative approximation will be involved, see Creating special oofuns
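Before the FuncDesigner examples, the underlying initial-value problem dy/dt = f(y, t), y(t0) = y0 can be fixed in mind with a minimal forward-Euler stepper (illustration only; the solvers listed above are far more accurate):

```python
from math import exp

def euler(f, y0, t0, t1, n=100000):
    # forward Euler for dy/dt = f(t, y): y_{k+1} = y_k + h * f(t_k, y_k)
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# dy/dt = -y, y(0) = 1  =>  y(1) = exp(-1)
y1 = euler(lambda t, y: -y, 1.0, 0.0, 1.0)
print(y1)  # approximately 0.3679
```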

## Basic example

from FuncDesigner import *
from numpy import arange

# create some variables
x, y, z, t = oovars('x', 'y', 'z', 't')
# or just x, y, z, t = oovars(4)

# Python dict of ODEs
equations = {
x: 2*x + cos(3*y-2*z) + exp(5-2*t), # dx/dt
y: arcsin(t/5) + 2*x + sinh(2**(-4*t)) + (2+t+sin(z))**(1e-1*(t-sin(x)+cos(y))), # dy/dt
z: x + 4*y - 45 - sin(100*t) # dz/dt
}

startPoint = {x: 3, y: 4, z: 5}

timeArray = arange(0, 1, 0.01) # 0, 0.01, 0.02, 0.03, ..., 0.99

# assign ODE. 3rd argument (here "t") is time variable that is involved in differentiation.
myODE = ode(equations, startPoint, t, timeArray)

r = myODE.solve()
X,  Y,  Z = r(x, y, z)
print(X[50:55], Y[50:55], Z[50:55])
print(r.msg) # r.extras.infodict contains whole scipy.integrate.odeint infodict
"""
(array([  95.32215541,   97.80251715,  100.32319065,  102.88116657, 105.47545552]),
array([ 50.26075725,  52.20594513,  54.2008745 ,  56.24637088,  58.34441539]),
array([ 44.40064889,  46.96317981,  49.62269611,  52.38990533,  55.2741921 ]))
Integration successful.
"""

from FuncDesigner import *
from numpy import arange, zeros,  ones

N = 50

# create some variables
x, t = oovars('x', 't')
y = oovar('y', size = N)
z = oovar('z', size = 2*N)

# Python dict of ODEs
equations = {
x: 2*x + exp(5-2*t), # 1 equation dx/dt
y: arcsin(t/5) + z + 2*cos(y) + cos(sum(z)), # N equations dy/dt
z: cos(z/10) + y + sin(x) + 4*cos(y.sum()) - 0.001*sinh(2*t) # 2N equations dz/dt
}

startPoint = {x: 3, y: 4*sin(arange(N)), z: 5*cos(arange(2*N))}

timeArray = arange(0, 1, 0.01) # 0, 0.01, 0.02, 0.03, ..., 0.99

# assign ODE. 3rd argument (here "t") is time variable that is involved in differentiation.
myODE = ode(equations, startPoint, t, timeArray)

r = myODE.solve()
X,  Y,  Z = r(x, y, z) # X.size = 100, Y.shape = (50, 100), Z.shape = (100, 100)
print(r.msg) # r.extras.infodict contains whole scipy.integrate.odeint infodict

## Example with specifiable guaranteed accuracy

(since v. 0.37). The text and graphic output format is not established yet.
OpenOpt has the solver interalg, based on interval analysis, which is capable of solving the ODE dy/dt = f(t) with specifiable accuracy in cases where scipy.integrate.odeint fails to solve the problem yet falsely reports "Integration successful" (see the next chapter "interalg vs scipy.integrate.odeint comparison"). Of course, that is expected behavior of any solver based on function evaluations only, without any other information about the involved function: it can miss some essential regions.

from time import time
from numpy import linspace, pi, hstack
from FuncDesigner import *

sigma = 1e-15
StartTime, EndTime = 0, 10
times = linspace(StartTime, EndTime, 1000) # 1000 points between StartTime, EndTime

# required accuracy
# this value gives a good graphical visualization below; otherwise the 2 lines are almost the same and difficult to distinguish
ftol = 0.00005 # this value is used by interalg only, not by scipy_lsoda

t = oovar()
f = exp(-(t-4.321)**2/(2*sigma)) / sqrt(2*pi*sigma) + 0.1*sin(t)
# optional, for graphic visualisation and exact residual calculation:
from scipy.special import erf
exact_sol = lambda t: 0.5*erf((t-4.321)/sigma) - 0.1*cos(t) # + const, which is a function of y0

y = oovar()
equations = {y: f} # i.e. dy/dt = f
startPoint = {y: 0} # y(t=0) = 0

# assign ODE. 3rd argument (here "t") is time variable that is involved in differentiation
myODE = ode(equations, startPoint, t, times, ftol = ftol)
T = time()
r = myODE.solve('interalg', iprint = -1)
print('Time elapsed with user-defined solution time intervals: %0.1f' % (time()-T))
Y = r(y)
print('result in final time point: %f' % Y[-1])

# now let interalg choose time points by itself
# we provide the 4th argument as only 2 time points (startTime, endTime)
myODE = ode(equations, startPoint, t, (times, times[-1]), ftol = ftol)
T = time()
r = myODE.solve('interalg', iprint = -1)
print('Time elapsed with automatic solution time intervals: %0.1f' % (time()-T))
Y = r(y)
print('result in final time point: %f' % Y[-1])

'''
Time elapsed with user-defined solution time intervals: 11.2
result in final time point: 1.183941
Time elapsed with automatic solution time intervals: 3.1
result in final time point: 1.183908

In time autoselect mode r(t) and r(y) will be arrays of times and values:
>>> r(t)
array([  1.90734863e-05,   5.72204590e-05,   9.53674316e-05, ...,
9.99990463e+00,   9.99994278e+00,   9.99998093e+00])
>>> r(y)
array([  7.27595761e-11,   2.91038304e-10,   6.54836184e-10, ...,
1.18391205e+00,   1.18390998e+00,   1.18390790e+00])
>>> r(t).size, r(y).size
(951549, 951549)
The time difference (11.2 - 3.1 = 8.1 sec) is due to evaluating a spline built over these arrays on the "times" array

unestablished yet (http://openopt.org/unestablished):
r.extras is Python dict with fields startTimes, endTimes, infinums, supremums (arrays of same size to r(t).size),
such that in interval (startTimes[i], endTimes[i])  infinums[i] <= y(t) <= supremums[i]
and supremums[i] - infinums[i] <= ftol
'''

# Now let's see a graphical visualization of the split time intervals
# as you'll see, fewer time intervals are created where the function is close to constant
# and vice versa, more time intervals (of smaller size) are created
# near the most problematic regions, where function values change very intensively
# the peak near 4.321 (the peak of our erf()) is about 750000 high and doesn't fit into the resized picture
from pylab import hist, show, grid
hist(r(t), 5000)
grid('on')
show()

## interalg vs scipy.integrate.odeint comparison

(since v. 0.37)

from time import time
from numpy import linspace, pi, hstack
from FuncDesigner import *

sigma = 1e-7 # interalg works even with 1e-26, i.e. 10^-26
StartTime, EndTime = 0, 10
times = linspace(StartTime, EndTime, 100) # 100 points between StartTime and EndTime

# required accuracy
# I use such a big value for good graphical visualization below; otherwise the 2 lines are almost the same and difficult to distinguish
ftol = 0.05 # this value is used by interalg only, not by scipy_lsoda

t = oovar()
f = exp(-(t-4.321)**2/(2*sigma)) / sqrt(2*pi*sigma) + 0.1*sin(t)
# optional, for graphic visualisation and exact residual calculation:
from scipy.special import erf
exact_sol = lambda t: 0.5*erf((t-4.321)/sigma) - 0.1*cos(t) # + const, which is a function of y0

results = {}
for solver in ('scipy_lsoda', 'interalg'):
    y = oovar()
    equations = {y: f} # i.e. dy/dt = f
    startPoint = {y: 0} # y(t=0) = 0
    # assign ODE. 3rd argument (here "t") is the time variable involved in differentiation
    myODE = ode(equations, startPoint, t, times, ftol = ftol)
    T = time()
    r = myODE.solve(solver, iprint = -1)
    print('%s ODE time elapsed: % f' % (solver,  time()-T))
    Y = r(y)
    results[solver] = Y
    print('%s result in final time point: %f' % (solver, Y[-1]))
    if solver == 'scipy_lsoda':
        print('scipy.integrate.odeint reports: %s' % r.extras['infodict']['message'])

'''
scipy_lsoda ODE time elapsed:  0.044204
scipy_lsoda result in final time point: 0.183907
scipy.integrate.odeint reports: Integration successful.
interalg ODE time elapsed:  0.110030 (here most of the time was spent on spline interpolation of the obtained results)
interalg result in final time point: 1.184044
'''

realSolution = exact_sol(times) - exact_sol(times[0]) + startPoint[y]
print('max scipy.integrate.odeint difference from real solution: %0.9f' \
% max(abs(realSolution - results['scipy_lsoda'])))
print('max interalg difference from real solution: %0.9f (required: %0.9f)' \
% (max(abs(realSolution - results['interalg'])), ftol))
'''
max scipy.integrate.odeint difference from real solution: 1.000000020
max interalg difference from real solution: 0.025937095 (required: 0.050000000)
'''
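As a quick sanity check independent of FuncDesigner, the closed-form antiderivative can be verified against numerical quadrature with plain numpy/scipy. This is only a sketch: a larger sigma than in the example above is used so that scipy's default quadrature resolves the peak reliably, and F here is just a local name for the antiderivative.

```python
# Verify that the antiderivative formula matches the numerical integral of f
# over [0, 10], i.e. that integral(f) == F(10) - F(0) up to quadrature error.
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

sigma = 1e-3  # wider peak than in the example above, so quad resolves it easily
f = lambda t: np.exp(-(t - 4.321)**2 / (2 * sigma)) / np.sqrt(2 * np.pi * sigma) + 0.1 * np.sin(t)
F = lambda t: 0.5 * erf((t - 4.321) / np.sqrt(2 * sigma)) - 0.1 * np.cos(t)

# 'points' tells quad where the sharp peak is, so it is not stepped over
I, err = quad(f, 0, 10, points=[4.321])
print(I, F(10) - F(0))  # both approximately 1.183907
assert abs(I - (F(10) - F(0))) < 1e-6
```

The value 1.183907 is the Gaussian mass (≈ 1) plus the contribution of the 0.1*sin(t) term, which is consistent with the interalg final-time result printed above.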
# Now let's see a graphical visualization of results
from pylab import hist, show, plot, grid, legend, title
#hist(Times, 500)

p1,  = plot(times, results['interalg'], 'b')
p2,  = plot(times, results['scipy_lsoda'], 'r')
p3,  = plot(times, realSolution,'k')
legend([p1, p2, p3], ['interalg', 'scipy.integrate.odeint', 'exact solution'], 'best')
grid('on')
show()

# oosystem

An oosystem is a set of oofuns together with a set of constraints; each oofun can have attached constraints (and some of those may, in turn, have attached constraints of their own, recursively). The possibility of creating oosystems recursively, merging, uniting etc. may be implemented in the future.

To create an oosystem just pass some oofuns into constructor:

S = oosystem(f, f2, f3)
# or any iterable (list, tuple, set) of them:
dataToObserve = [f, f2, f3]
S = oosystem(dataToObserve)

Later you could attach some more oofuns to the oosystem via

S += elem
# or any iterable:
S += [elem1,elem2,...]

To attach constraints use &=, e.g.

S &= a<10
S &= (a<15)('constraint A1')
cons_B,  cons_C = b>1, (c < 39.9999995)('cons_C', tol=1e-4)
S &= (cons_B, cons_C)

Let's investigate the system's state in a point:

ss = S(point)

To see the state of the system in a point use print(ss); it will show something like

func3 = nan
func2 = 16993.0
func1 = 16980.0

You can extract values of an oofun from the system in a given point by calling ss on the oofun(s):

print(ss(f))
print(ss(f,f2))

To check whether all constraints are satisfied and there are no NaN or +/-inf values in the involved data, use the field isFeasible:

print(ss.isFeasible) # True or False

To see the names of active constraints you can use (API not settled yet) print(ss.activeConstraints)

# Numerical optimization of oosystems

Subject to the given set of free/fixed variables, an oosystem can automatically determine whether your problem is:

• an LP or NLP (since OOSuite v 0.31)
• a system of linear or nonlinear equations (available since v. 0.33)
• Future plans: add other classes (MILP, MINLP, QP, QCQP, MIQCQP, MIQP, SDP, maybe more)

Let's consider appropriate examples.

## Nonlinear optimization example

from FuncDesigner import *

a, b, c = oovars('a', 'b', 'c')
f = (sum(a*[1, 2])**2+b**2+c**2)('f')
another_func = 2*f + 100 + log(c-1.075) # this func has attached constraint c > 1.075 with a negative tol ~ -1e-7

startPoint = {a:[100, 12], b:2, c:40} # however, you'd better use numpy arrays instead of Python lists
S = oosystem(f, another_func)
S &= [(2*c+a-10)**2 < 1.5 + 0.1*b, (a-10)**2<1.5, c**c<3, a>8.9,
      (a+b > [ 7.97999836, 7.8552538 ])('sum_a_b', tol=1.00000e-12), a < 9, b < -1.02]

r = S.minimize(f, startPoint)
# you could use S.maximize as well

# default NLP solver is ralg;
# to change solver you can use kwarg "solver", e.g.
# r = S.minimize(f, startPoint, solver = 'ipopt')
# also you can provide any openopt kwargs:
# r = S.minimize(f, startPoint, xtol=1e-7, ftol = 1e-7, maxTime = 1e3, ...)

print(r.xf)
a_opt,  b_opt, c_opt = r(a, b, c)

# Expected output:
# ...
# objFunValue: 717.78622 (feasible, max(residuals/requiredTolerances) = 0.786102)
# {a: array([ 8.99999758,  8.87525302]), b: array([-1.01999921]), c: array([ 1.07534812])}

## Linear optimization example

from FuncDesigner import *

a, b, c = oovars('a', 'b', 'c')
f = sum(a*[1, 2])+2*b+4*c
another_func = 2*f + 100 # doesn't matter in the example

startPoint = {a:[100, 12], b:2, c:40} # used to adjust the sizes of the variables
S = oosystem(f, another_func)
S &= [2*c+a-10 < 1500+0.1*b, a-10<150, c<300, a>8.9, c>-100,
      (a+2*b > [ 7.9, 7.8 ])('sum_a_b', tol=1.00000e-12), a > -10, b > -10, b<10]

for i in xrange(1000):
    S &= b+i*c > 10*i + sum(a)

r = S.minimize(f, startPoint, solver='lpSolve')
# you could use S.maximize as well

# default LP solvers are (sequentially, if installed): glpk, lpSolve, cvxopt_lp, lp:ipopt, lp:algencan, lp:scipy_slsqp, lp:ralg
# to change solver you can use kwarg "solver", e.g.
# r = S.minimize(f, startPoint, solver = 'cvxopt_lp')
# also you can provide any openopt kwargs:
# r = S.minimize(f, startPoint, iprint=-1, ...)

print(r.xf)
a_opt,  b_opt, c_opt = r(a, b, c)
# or any of the following:
# a_opt,  b_opt, c_opt = r(a), r(b), r(c)
# r('a'), r('b'), r('c') (provided you have assigned the names to oovars as above)
# r('a', 'b', 'c')
# a(r), b(r), c(r)

"""
Expected output:
...
objFunValue: 45.883673 (feasible, max(residuals/requiredTolerances) = 0)
{a: array([  8.9, -10. ]), b: array([ 8.9]), c: array([ 9.79591837])}
"""

## System of equations

"""
Example: solving system of equations:
x + y - 9 = 0
x - 0.5*y + (z + 4)*sinh(t) + t = 0
z + x = 1.5
with a fixed variable, using oosystem
"""
from FuncDesigner import *
x, y, z, t = oovars(4)
S = oosystem()

# if no tol is assigned for an equation, p.ftol (default 10^-6) will be used for that one
S &= (x + y - 9==0, (x - 0.5*y + (z+4)*sinh(t)+t==0)(tol=1e-9), (z + x == 1.5)(tol=1e-9))
startPoint = {x:8, y:15, z:80, t:-0.5}
# for general linear/nonlinear systems the start point can include arrays, e.g. {x:[1, 10, 100], y:15, z:[80,800], t:ones(1000)}
# and equations can have vector form (R^{n_i} -> R^{m_i})
# Underdetermined and overdetermined linear/nonlinear systems are not implemented in FD oosystem solve() yet, i.e.
# (at least currently) the total number of unknown variables has to match the total number of equations m_1 + ... + m_k

# fixedVars and freeVars are optional parameters and can be omitted (which means all variables are unknown)
# if we set fixedVars=t then FuncDesigner will solve the system as linear
# (sparse matrices may be selected automatically; you can override this via useSparse = True/False),
# if we set fixedVars=(any other variable) then FuncDesigner will solve the system as nonlinear
fixedVars=x

r = S.solve(startPoint, fixedVars=fixedVars)
# or
# r = S.solve(startPoint, fixedVars=fixedVars, nlpSolver = 'nssolve', iprint = 0)
# nlpSolver will be used if and only if the problem (wrt given set of fixed/free variables) is nonlinear

xs, ys, zs, ts = r(x, y, z, t)
print('Solution: x = %f   y = %f  z = %f t = %f' % (xs, ys, zs, ts))
"""
for fixedVars=t:
--------------------------------------------------
solver: defaultSLEsolver   problem: unnamed    type: SLE
iter    objFunVal    log10(MaxResidual/ConTol)
0  2.584e+01                      9.51
1  1.066e-14                     -5.85
istop: 10 (solved)
Solver:   Time Elapsed = 0.01   CPU Time Elapsed = 0.0
objFunValue: 1.0658141e-14 (feasible, max(residuals/requiredTolerances) = 1.42109e-06)
Solution: x = 3.891961   y = 5.108039  z = -2.391961 t = -0.500000

for fixedVars=x:
--------------------------------------------------
solver: nssolve   problem: unnamed    type: SNLE
iter    objFunVal
0  8.650e+04
813  7.911e-07
istop: 10
Solver:   Time Elapsed = 3.8    CPU Time Elapsed = 3.77
objFunValue: 7.9110148e-07
Solution: x = 8.000000   y = 0.999999  z = -6.500000 t = 2.050118
"""

# Some issues to be aware of

• You can use f.size for oovars/oofuns but not len(f)
• some_oovar[some_ind] < some_val is rendered into a general linear constraint, not a box bound; this is intended to be addressed in future FuncDesigner/OpenOpt releases, along with some other features/enhancements to be done.
• FuncDesigner creates lots of auxiliary functions and class objects. Since the CPython interpreter is slow in comparison with C/C++/Fortran etc, if you have problems with calculation speed you could be interested in the Unladen Swallow project; they say it already works with NumPy/SciPy and promise even more speed in the 2009Q3 release.

# Common FuncDesigner programmer mistakes

From time to time we get FuncDesigner code for review or bug search. Typical user mistakes:

• Forgetting about the constraints automatically attached by sqrt, log, pow, arcsin, arccos etc
• Using slow Python loops where vectorization is available - this can slow calculations down by tens, hundreds or even more times. Using a single vectorized oovar is preferable to many oovars; the latter is recommended only for IPOPT, when your problem runs out of RAM because of some issues in dependency matrix generation
• Not checking the difference between passing useSparse = True / False / 'auto' (to optimization and other AD or AD-based funcs); the latter is the default, but sometimes it performs poorly
• When connecting C/Fortran/general Python code to FD via custom oofuns, forgetting to adjust the parameter stencil (=1/2/3)
• Poor code overall
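The vectorization point above is not FuncDesigner-specific; a minimal plain-numpy illustration of the loop-vs-vectorized difference (the function names here are illustrative, not FD API):

```python
import numpy as np
from timeit import timeit

x = np.arange(100000, dtype=float)

def loop_sum_of_squares():
    # slow: one Python-level iteration per element
    total = 0.0
    for v in x:
        total += v * v
    return total

def vectorized_sum_of_squares():
    # fast: the whole loop runs in compiled numpy code
    return float(np.dot(x, x))

# both compute the same value...
assert np.isclose(loop_sum_of_squares(), vectorized_sum_of_squares())
# ...but the vectorized version is typically orders of magnitude faster
t_loop = timeit(loop_sum_of_squares, number=3)
t_vec = timeit(vectorized_sum_of_squares, number=3)
print(t_loop / t_vec)
```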
