Why
From OpenOpt
Many years of practice in solving optimization problems have led specialists to a conclusion:
there can be no universal algorithm for nonlinear optimization
that solves ALL problems successfully enough.

Practical experience shows that there are sets of problems where an algorithm
that seems to be the least effective from the theoretical point of view
turns out to yield good results, due to the specifics of the problem.
Hence, a number of different algorithms are required.
"Linearization method" by Boris Pshenichny, Ukrainian academician, 1983
 Take a look at the epigraph or here. OpenOpt offers lots of solvers, while many other optimization toolboxes, free as well as commercial ones, offer a single solver for each problem type: linear, nonlinear, quadratic, etc.
 Commercial optimization software is very costly (see for example here). Don't forget: you also need to pay ~10% yearly to keep your software up to date. Part of the money that users spend on commercial closed-source software goes to systems that protect it from those very users: all those license files tied to a network card number, restrictions on the number of CPUs, and similar stuff.
 For nonlinear problems you can use the Automatic Differentiation provided by FuncDesigner and avoid wasting lots of time writing derivative code; otherwise, you have to modify user-supplied derivatives every time you change something in the objective function and/or constraints. AD can also provide a more exact solution than one obtained with finite-difference derivative approximations, and in most cases it will save CPU time spent on solving the problem involved.
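To see why AD gives exact derivatives with no step-size tuning, here is a tiny forward-mode sketch using dual numbers in plain Python. This is only an illustration of the idea behind FuncDesigner's Automatic Differentiation, NOT FuncDesigner's actual API; the `Dual` class and `derivative` helper are invented for this example.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Illustration only -- NOT FuncDesigner's actual API.

class Dual:
    """Number that carries a value and its derivative simultaneously."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (u*v)' = u'*v + u*v'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Exact derivative of f at x -- no finite-difference step, no truncation error."""
    return f(Dual(x, 1.0)).der

f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
print(derivative(f, 5.0))              # 32.0, exact
```

The key point: you never write or update derivative code by hand; change `f` and the derivative follows automatically, which is exactly the convenience the FuncDesigner approach gives you for objectives and constraints.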
 More and more free solvers have appeared in recent years that are as powerful as commercial ones. First of all I would mention IPOPT, DSDP, PSwarm and ALGENCAN (BTW, they are already connected to OpenOpt).
 Time and CPU time spent inside the solver are usually much less than the time spent evaluating the objective function and/or constraints (plus perhaps some matrix operations; since these are performed by numpy's C code, they are almost as fast as native C/Fortran). If you want to speed up calculations, the most efficient way is to rewrite the slowest Python functions (the objective function or some nonlinear constraints) in C or Fortran. Connecting C or Fortran code to Python (via f2py, cython, ctypes, SWIG, etc.) is much simpler than, for example, using MATLAB MEX functions.
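As a taste of how little ceremony ctypes needs, here is a minimal sketch that calls a C function from the system math library directly from Python, with no compilation step and no wrapper file. It assumes a Unix-like system where `find_library("m")` locates libm; on other platforms the library name would differ.

```python
# Calling native C code from Python via ctypes -- no MEX-style boilerplate.
# Assumes a Unix-like OS where the C math library can be located.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double cos(double)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0, computed by native C code
```

In the same way, a slow objective function or constraint rewritten in C can be compiled to a shared library and loaded with `ctypes.CDLL`, leaving the rest of the optimization script untouched.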
 OpenOpt bugfixes require much less time: sometimes several minutes are enough to find and fix a bug and commit the changes to the svn repository (and/or send the changes to the user via email).
 The BSD license allows connecting OpenOpt to any code, both free and closed-source.
 OpenOpt is developed on KUBUNTU Linux using Python 2.5, but AFAIK it works on many other OSes (including Windows and MacOS) as well.
 Some commercial frameworks provide access to all the dozens of solver parameters (like pcgtol, etc.). OpenOpt can't expose all of them, but you can find the most appropriate solver (using the framework) and then connect your code to that solver directly (without OpenOpt).
 There are lots of problems where evaluating the objective function and/or constraints takes hours. Taking into account that solving sometimes requires 1000, 10000, 100000, or more function evaluations, we should not be surprised that optimization problems often become a project's bottleneck.

 Also worth mentioning are problems that require a real-time solution within a short time limit, and problems where optimization subproblems are deeply integrated into long, possibly nested, loops.

 Do you know who Elisha Gray was? I'm not surprised if you don't. He was the one who applied for a telephone patent two hours later than Alexander Graham Bell did. Are you sure you won't be just another Elisha Gray?

 Using the RAD (rapid application development) abilities of the Python language, plus choosing the most efficient solver(s) for your problem(s), can greatly help you to outpace your rival(s).