# Parallel

**Parallel calculations**

## General info for Python programmers

- You may be interested in the Python multiprocessing module; see my example here
- Some more parallel processing and multiprocessing approaches are mentioned here
- Recent NumPy versions make some use of parallelism, provided they have been built with the correct flags for a parallel BLAS and LAPACK (via ATLAS or otherwise); in particular, numpy.dot (which is used intensively in the OpenOpt kernel and some solvers, e.g. ralg and some SciPy NLP solvers) will then run on several CPUs, and some other matrix operations (multiplication, division etc.) can also gain a substantial speedup.
- It is **highly recommended to link NumPy and SciPy with the Intel MKL / AMD ACML libraries** during installation.
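As a minimal sketch of the multiprocessing approach mentioned above (the function `costly` is a hypothetical stand-in for an expensive objective or constraint evaluation, not part of OpenOpt):

```python
from multiprocessing import Pool

def costly(x):
    # stand-in for an expensive computation, e.g. an objective function evaluation
    return x * x

if __name__ == '__main__':
    # distribute the evaluations over 4 worker processes
    with Pool(4) as pool:
        results = pool.map(costly, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

`Pool.map` keeps the result order matching the input order, so it is a drop-in replacement for a serial loop over evaluation points.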

## Our soft

- You may create large (probably sparse) systems of linear equations in FuncDesigner and then solve them via SuperLU (included in the SciPy source code; license: BSD), UMFPACK (license: GPL; you need to build/install SciPy linked against the library, and IIRC some builds in Linux software repositories are already done that way) or any other Python-connected parallel SLE solver. See some FD SLE examples here
- For solving MOP (multi-objective problems) with interalg you can use the parameter nProc
- You can use the cplex solver (free for educational use; commercial licenses are also available)
- You may use parallel calculations (via any of the approaches mentioned above) inside your own code for computing a non-linear objective function or non-linear constraints of Non-Linear Problems
- You can export LP/MILP problems coded in OpenOpt or FuncDesigner to MPS format files and then solve them via commercial parallel LP/MILP solvers (e.g. cplex, Gurobi, Mosek)
- The NLP solver IPOPT uses SLE solvers (such as MUMPS) that can be configured (during installation) to use several CPUs
- The GLP (global) solver pswarm can use several CPUs (via MPI), but I'm not sure this is available through its Python API
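The FuncDesigner layer is not shown here, but the underlying SuperLU path mentioned above is directly reachable through SciPy: `scipy.sparse.linalg.spsolve` calls SuperLU by default. A minimal sketch with a small hand-built sparse system (the 3x3 matrix is an arbitrary illustrative example):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# a small sparse tridiagonal system A x = b; spsolve dispatches to SuperLU
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 4.0, 1.0],
                         [0.0, 1.0, 4.0]]))
b = np.array([1.0, 2.0, 3.0])

x = spsolve(A, b)
# verify the solution by checking the residual
assert np.allclose(A.dot(x), b)
```

For genuinely large systems, the same call scales as long as the matrix is supplied in a sparse format (CSC is preferred by SuperLU) rather than as a dense array.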