Unfortunately, sparse matrices remain one of the weakest areas of Python for scientific computing, as I had mentioned here.
Still, if you really need them, here is some useful information.
Currently, the following OpenOpt wrappers for CVXOPT and CVXOPT-connected solvers (glpk, dsdp) can use sparse matrices of type cvxopt.spmatrix:
If the sparsity of a matrix ((nElems - nNonZeros)/nElems) exceeds a certain threshold, the cvxopt solvers mentioned above convert it to sparse type automatically. So using cvxopt.spmatrix yields essential benefits if and only if you have a problem storing huge arrays during prob creation (i.e. not enough memory to hold the prob instance with the matrices in RAM).
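To make the threshold idea concrete, here is a minimal sketch of such an automatic conversion. It uses scipy.sparse rather than cvxopt.spmatrix for portability, and the function name `maybe_sparsify` and the threshold value 0.7 are illustrative, not OpenOpt's actual internals:

```python
import numpy as np
from scipy import sparse

def maybe_sparsify(A, threshold=0.7):
    """Return A in CSR format if its sparsity,
    (nElems - nNonZeros) / nElems, exceeds `threshold`;
    otherwise return the dense array unchanged.
    (Illustrative helper; the real threshold used by the
    OpenOpt wrappers is not documented here.)"""
    A = np.asarray(A)
    sparsity = 1.0 - np.count_nonzero(A) / A.size
    return sparse.csr_matrix(A) if sparsity > threshold else A

A = np.eye(10)           # 90% zeros -> converted
B = np.ones((4, 4))      # fully dense -> left alone
print(sparse.issparse(maybe_sparsify(A)))  # True
print(sparse.issparse(maybe_sparsify(B)))  # False
```

Note that the conversion itself only pays off when the dense original never has to be materialized; if you already hold the dense array in RAM, passing cvxopt.spmatrix from the start is what saves memory.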
As for the SDP solvers (dsdp and cvxopt_sdp), using spmatrix hasn't been tested properly yet.
The current wrapper for lpSolve (LP) cannot handle sparse matrices: it converts general linear constraints (A, Aeq) into a Python list and passes it to the solver, where lpSolve casts it into its own sparse matrix format (provided the sparsity exceeds a certain threshold).
Some other cases:
- scipy.sparse is sometimes used in FuncDesigner, for solving SLEs (systems of linear equations) and in some functions used in automatic differentiation.
- For solvers that benefit from splitting (ralg, algencan), you can split nonlinear constraints, i.e. provide several functions ci, hj, along with derivatives dci, dhj whose dimensions match those of ci, hj. BTW, if the constraint dimensions m_i, k_j are not too large, you don't have to deal with scipy.sparse; use ordinary dense types (numpy.array or numpy.matrix) instead.
- Incoming scipy.sparse matrices are used in the OpenOpt solver ralg for handling general linear constraints (A x <= b, Aeq x = beq); you can either pass them as scipy.sparse from the very beginning, or they will be cast to that format automatically once a certain sparsity threshold is exceeded. However, this benefits only very large matrices.
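The SLE use mentioned above can be sketched with plain scipy.sparse; this is a generic example of solving a sparse linear system, not FuncDesigner's internal code:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# A tridiagonal system A x = b, a typical sparse SLE.
# Scalar diagonal values are broadcast along the given offsets.
n = 5
A = sparse.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
                 shape=(n, n), format='csr')
b = np.ones(n)

x = spsolve(A, b)               # direct sparse solve
print(np.allclose(A @ x, b))    # True
```

For large, very sparse systems this avoids ever forming the n-by-n dense matrix, which is exactly the memory benefit the prose above is concerned with.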
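The constraint-splitting point can be illustrated with dense numpy functions whose derivative dimensions match the constraint dimensions. The names c1, c2, dc1, dc2 are hypothetical, chosen only to show the shape agreement; this is not an OpenOpt API call:

```python
import numpy as np

# Two inequality-constraint blocks on x in R^3:
#   c1: R^3 -> R^2, with Jacobian dc1 of shape (2, 3)
#   c2: R^3 -> R^1, with Jacobian dc2 of shape (1, 3)
# (Illustrative names; in OpenOpt each dci must match its ci this way.)

def c1(x):
    return np.array([x[0]**2 + x[1] - 1.0,
                     x[1] * x[2]])

def dc1(x):
    return np.array([[2.0 * x[0], 1.0, 0.0],
                     [0.0,        x[2], x[1]]])

def c2(x):
    return np.array([x[0] + x[1] + x[2] - 3.0])

def dc2(x):
    return np.array([[1.0, 1.0, 1.0]])

x = np.array([1.0, 2.0, 3.0])
# Each Jacobian has one row per constraint component, one column per variable
print(dc1(x).shape)  # (2, 3)
print(dc2(x).shape)  # (1, 3)
```

Since each block here has only a handful of rows, dense numpy arrays are entirely adequate, matching the remark above that scipy.sparse is unnecessary when the m_i, k_j are small.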