Best C++ Matrix Library for Sparse Unitary Matrices

Many people doing "serious" matrix work rely on BLAS, adding LAPACK / ATLAS (dense matrices) or UMFPACK (sparse matrices) for more advanced math. The reason is that this code is well-tested, stable, reliable, and quite fast. Furthermore, you can buy implementations tuned towards your architecture directly from a vendor (e.g. Intel MKL), but you can also get them for free. uBLAS, mentioned in Manuel's answer, is probably the standard C++ BLAS implementation, and if you need something like LAPACK later on, there are bindings for that.

However, none of these standard libraries (BLAS / LAPACK / ATLAS, or uBLAS + bindings + LAPACK / ATLAS) ticks your boxes of being templated and easy to use (unless uBLAS is all you'll ever need). Actually, I must admit that I tend to call the C / Fortran interface directly when I use a BLAS / LAPACK implementation, since I often don't see much additional advantage in the uBLAS + bindings combination.
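For reference, this is roughly what the uBLAS interface looks like; a minimal sketch with placeholder sizes and values:

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <iostream>

int main() {
    namespace ublas = boost::numeric::ublas;
    ublas::matrix<double> A(3, 3), B(3, 3);       // placeholder 3x3 matrices
    for (std::size_t i = 0; i < A.size1(); ++i)
        for (std::size_t j = 0; j < A.size2(); ++j) {
            A(i, j) = 1.0 + i + j;                // arbitrary values
            B(i, j) = (i == j) ? 1.0 : 0.0;       // identity
        }
    ublas::matrix<double> C = ublas::prod(A, B);  // matrix-matrix product
    std::cout << C << std::endl;                  // stream output comes from io.hpp
    return 0;
}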

If I need a simple-to-use, general-purpose C++ matrix library, I tend to use Eigen (I used NewMat in the past). Advantages:

  • quite fast on Intel architecture, probably the fastest for smaller matrices
  • nice interface
  • almost everything you expect from a matrix library
  • you can easily add new types

Disadvantages (IMO):

  • single-processor [Edit: partly fixed in Eigen 3.0]
  • slower for larger matrices and some advanced math than ATLAS or Intel MKL (e.g. LU decomposition) [Edit: also improved in Eigen 3.0]
  • only experimental support for sparse matrices [Edit: improved in upcoming version 3.1].

Edit: The upcoming Eigen 3.1 allows some functions to use the Intel MKL (or any other BLAS / LAPACK implementation).
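To give a flavour of the Eigen interface, here is a minimal sketch of building and multiplying a sparse complex matrix with the triplet-based API of the 3.1 sparse module (size and entries are arbitrary placeholders; the example happens to be i times a cyclic permutation, a simple sparse unitary matrix):

#include <Eigen/Sparse>
#include <complex>
#include <vector>
#include <iostream>

int main() {
    typedef std::complex<double> cd;
    typedef Eigen::SparseMatrix<cd> SpMat;
    typedef Eigen::Triplet<cd> T;

    const int n = 4;
    std::vector<T> entries;                                  // (row, col, value) triplets
    for (int k = 0; k < n; ++k)
        entries.push_back(T(k, (k + 1) % n, cd(0.0, 1.0)));  // i on a cyclic superdiagonal

    SpMat U(n, n);
    U.setFromTriplets(entries.begin(), entries.end());

    SpMat U2 = U * U;                                        // sparse-sparse product
    std::cout << "non-zeros: " << U2.nonZeros() << std::endl;
    return 0;
}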

Complex matrix exponential with Armadillo library

You need to use the expmat() function for a matrix exponential; exp() computes an element-wise exponential.

For example, some code I'm currently using for a physics simulation:

#include <armadillo>
#include <complex>
#include <vector>

arma::cx_mat f;                         // a Hermitian matrix (filled in elsewhere)
double delta_t;                         // a time step
std::complex<double> i_imag(0.0, 1.0);  // i, the imaginary unit

std::vector<arma::cx_mat> U;            // a vector of complex unitary matrices

U.push_back(arma::expmat(-i_imag * delta_t * f));

I have tested this code by taking the matrix exponential of anti-Hermitian matrices to obtain unitary transformations, and it works fine.
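As a quick sanity check, one way to confirm that the result is unitary (up to numerical precision) is to compare U†U against the identity; a small sketch, assuming U is the vector from the snippet above:

// U.t() is the conjugate transpose for a cx_mat, so err should be ~machine precision
const arma::cx_mat& Uk = U.back();
double err = arma::norm(Uk.t() * Uk - arma::eye<arma::cx_mat>(Uk.n_rows, Uk.n_cols), "fro");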

uBLAS vs. Matrix Template Library (MTL4)

With your requirements, I would probably go for Boost uBLAS. Indeed, a well-deployed uBLAS setup should be roughly on par with MTL4 in terms of speed.

The reason is that there exist bindings for ATLAS (hence shared-memory parallelization that you can efficiently optimize for your computer), and also vendor-tuned implementations like the Intel Math Kernel Library or HP MLIB.

With these bindings, uBLAS with a well-tuned ATLAS / BLAS library doing the math should be fast enough. If you link against a given BLAS / ATLAS, you should be roughly on par with MTL4 linked against the same BLAS / ATLAS via the compiler flag -DMTL_HAS_BLAS, and most likely faster than MTL4 without BLAS, according to their own observations (for example, their benchmarks show GotoBLAS outperforming MTL4).

To sum up, speed should not be your decisive factor as long as you are willing to use some BLAS library; usability and support are more important. You have to decide whether MTL4 or uBLAS is better suited for you. I tend towards uBLAS, given that it is part of Boost, while MTL4 currently only supports BLAS selectively. You might also find this slightly dated comparison of scientific C++ packages interesting.

One big BUT: for your requirements (extremely big matrices), I would probably skip the "syntactic sugar" of uBLAS or MTL4 and call the "metal" C interface of BLAS / LAPACK directly. But that's just me... Another advantage is that it should then be easier to switch to ScaLAPACK (distributed-memory LAPACK; I have never used it) for bigger problems. Just to be clear: for household-sized problems, I would not suggest calling a BLAS library directly.
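For illustration, calling the LAPACK C interface directly looks roughly like this; a minimal sketch using LAPACKE (assuming it is installed; the raw Fortran routine dsyev_ works similarly), with a placeholder symmetric matrix:

#include <lapacke.h>
#include <iostream>

int main() {
    // eigenvalues and eigenvectors of a small symmetric matrix via dsyev
    const int n = 3;
    double A[9] = { 2, -1,  0,
                   -1,  2, -1,
                    0, -1,  2 };   // placeholder symmetric matrix, row-major
    double w[3];                   // eigenvalues, ascending

    int info = LAPACKE_dsyev(LAPACK_ROW_MAJOR, 'V', 'U', n, A, n, w);
    if (info != 0) {
        std::cerr << "dsyev failed with info = " << info << std::endl;
        return 1;
    }
    // A now holds the orthonormal eigenvectors (one per column)
    std::cout << "eigenvalues: " << w[0] << " " << w[1] << " " << w[2] << std::endl;
    return 0;
}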

cBLAS matrix multiply call not working for 1xN and NxN matrices

I simply needed to transpose matrix B, or set CblasColMajor, since my matrix B was already stored in column-major order.
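Concretely, for y = x * B with x a 1xN row vector and B an NxN matrix whose buffer is in column-major order, the CblasColMajor variant looks like this; a minimal sketch with placeholder values (the alternative is to keep CblasRowMajor and pass CblasTrans for B):

#include <cblas.h>
#include <iostream>

int main() {
    const int n = 3;
    double x[3] = {1.0, 2.0, 3.0};                 // 1xN row vector
    double B[9] = {1, 2, 3,  4, 5, 6,  7, 8, 9};   // NxN, stored column by column
    double y[3] = {0.0, 0.0, 0.0};                 // 1xN result

    // y = 1.0 * x * B + 0.0 * y, telling CBLAS the buffers are column-major
    cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                1, n, n,      // M = 1, N = n, K = n
                1.0, x, 1,    // alpha, A (= x), lda = 1
                B, n,         // B, ldb = n
                0.0, y, 1);   // beta, C (= y), ldc = 1

    std::cout << y[0] << " " << y[1] << " " << y[2] << std::endl;  // 14 32 50
    return 0;
}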
