Matrix in Matrix

Matrix Multiplications: Vector to Matrix Conversion

Using linear algebra, you can achieve the same result with a matrix-by-matrix multiplication: multiplying the data matrix by a diagonal coefficient matrix scales each column by the corresponding diagonal entry.

Using Math.NET Numerics, that looks like this:

static void Main(string[] args)
{
    var fileMatrix = DelimitedReader.Read<double>("data.csv",
        sparse: false,
        delimiter: "\t",
        hasHeaders: true
    );

    // 5.2 1.8
    // 3.2 0.2
    // 1.8 2.8
    // 4.4 3.4
    // 5.2 0.6
    // ...

    var coefMatrix = CreateMatrix.DiagonalOfDiagonalArray(new double[] { 5, 2 });

    var resultMatrix = fileMatrix * coefMatrix;

    // 26 3.6
    // 16 0.4
    //  9 5.6
    // 22 6.8
    // 26 1.2
    // 16 2.8
    // ...
}

Matrix of matrices in python

As others have noted, the matrix class is pretty much deprecated by now. Matrices are more limited than ndarrays, with very little additional functionality. The main reason people preferred numpy matrices is that linear algebra (in particular, matrix multiplication) works more naturally for them.
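For instance, * means matrix multiplication for matrix objects but elementwise multiplication for arrays:

>>> import numpy as np
>>> m = np.matrix([[1, 2], [3, 4]])
>>> m * m                          # matrix product
matrix([[ 7, 10],
        [15, 22]])
>>> a = np.array([[1, 2], [3, 4]])
>>> a * a                          # elementwise product
array([[ 1,  4],
       [ 9, 16]])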

However, as far as I can tell you're using np.dot rather than the overloaded arithmetic operators of the matrix class to begin with, so you would not lose any functionality by using np.array instead. Furthermore, if you switch to Python 3.5 or newer, you can use the @ matrix multiplication operator, which lets you write things such as

Alist[i] = Qbars[i] @ h[k]

In the following I'll use the ndarray class instead of the matrix class for the above reasons.

So, your question has two main parts: creating your block matrix, and multiplying the result with a vector. I suggest using an up-to-date numpy version, since numpy.block was introduced in version 1.13. It conveniently does exactly what you want:

>>> import numpy as np
>>> A,B,C = (np.full((3,3),k) for k in range(3))
>>> A
array([[0, 0, 0],
       [0, 0, 0],
       [0, 0, 0]])
>>> B
array([[1, 1, 1],
       [1, 1, 1],
       [1, 1, 1]])
>>> C
array([[2, 2, 2],
       [2, 2, 2],
       [2, 2, 2]])
>>> np.block([[A,B],[B,C]])
array([[0, 0, 0, 1, 1, 1],
       [0, 0, 0, 1, 1, 1],
       [0, 0, 0, 1, 1, 1],
       [1, 1, 1, 2, 2, 2],
       [1, 1, 1, 2, 2, 2],
       [1, 1, 1, 2, 2, 2]])

Similarly, you can concatenate your two 3-length vectors using np.concatenate or one of the stacking methods (these are available in older versions too).
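For example, with two hypothetical length-3 vectors:

>>> v1, v2 = np.arange(3), np.arange(3, 6)
>>> np.concatenate([v1, v2])
array([0, 1, 2, 3, 4, 5])
>>> np.hstack([v1, v2])  # equivalent for 1d arrays
array([0, 1, 2, 3, 4, 5])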

Now, the problem is that you can't multiply a matrix of shape (6,1) with a matrix of shape (6,6), so the question is what you're really trying to do here. In case you want to multiply each row of your matrix with the corresponding element of your vector, you can just multiply your arrays (of class np.ndarray!) and make use of array broadcasting:

>>> Q = np.block([[A,B],[B,C]])    # (6,6)-shape array
>>> v = np.arange(6).reshape(-1,1) # (6,1)-shape array
>>> v * Q
array([[ 0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  1,  1,  1],
       [ 0,  0,  0,  2,  2,  2],
       [ 3,  3,  3,  6,  6,  6],
       [ 4,  4,  4,  8,  8,  8],
       [ 5,  5,  5, 10, 10, 10]])

The other option is that you want to do matrix-vector multiplication, but then you either have to transpose your vector (so the row vector multiplies the matrix from the left) or swap the order of the matrix and the vector (so the matrix multiplies the column vector from the left). An example of the former:

>>> v.T @ Q  # python 3.5 and up
array([[12, 12, 12, 27, 27, 27]])

>>> v.T.dot(Q)
array([[12, 12, 12, 27, 27, 27]])
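And for the latter, with the matrix multiplying the column vector from the left:

>>> Q @ v
array([[12],
       [12],
       [12],
       [27],
       [27],
       [27]])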

Another benefit of arrays (rather than matrices) is that arrays can be multidimensional. Instead of putting numpy arrays inside a dict and summing them that way, you could define a 3d array (a collection of 2d arrays along a third axis) and sum along that axis. One huge benefit of numpy is its memory efficiency and performance, and these advantages are strongest if you use numpy objects and methods throughout your code. Mixing in native python constructs (such as dicts, zips, loops) typically hinders performance.
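For example, collecting the A, B, C blocks from above along a new axis and summing over it, instead of looping over a dict:

>>> stack = np.stack([A, B, C])  # shape (3, 3, 3)
>>> stack.sum(axis=0)
array([[3, 3, 3],
       [3, 3, 3],
       [3, 3, 3]])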

Is it possible to have a matrix of matrices in R?

1) list/matrix Yes, create a list and give it dimensions using matrix:

m <- matrix(1:4, 2)
M <- matrix(list(m, 2*m, 3*m, 4*m), 2)

so element 1,1 of M is m:

> M[[1,1]]
     [,1] [,2]
[1,]    1    3
[2,]    2    4

2) list/dim<- This also works:

M <- list(m, 2*m, 3*m, 4*m)
dim(M) <- c(2, 2)

3) array This is not quite what you asked for, but depending on your purpose it might satisfy your need:

A <- array(c(m, 2*m, 3*m, 4*m), c(2, 2, 2, 2)) # 2x2x2x2 array

so element 1,1 is:

> A[1,1,,]
     [,1] [,2]
[1,]    1    3
[2,]    2    4

How to handle large matrices and matrix multiplication in Python

If B is diagonal, you don't need sparse matrices to save memory. You can do the calculation with a 1d array holding just the diagonal values.

I'll demonstrate with small dimensions, where making a full B doesn't cause problems. Others can test this with large dimensions.

In [5]: A = np.arange(24).reshape(3,8)
In [7]: B = np.diag(np.arange(8))
In [8]: B.shape
Out[8]: (8, 8)

The double matmul:

In [10]: A@B@A.T
Out[10]:
array([[  784,  1904,  3024],
       [ 1904,  4816,  7728],
       [ 3024,  7728, 12432]])

The equivalent einsum:

In [12]: np.einsum('ij,jk,lk',A,B,A)
Out[12]:
array([[  784,  1904,  3024],
       [ 1904,  4816,  7728],
       [ 3024,  7728, 12432]])

The 1d diagonal of B:

In [15]: b = np.diag(B)

A broadcasted multiplication does the same thing as the matmul:

In [17]: np.allclose(A*b,A@B)
Out[17]: True
In [18]: np.allclose(A*b@A.T,A@B@A.T)
Out[18]: True

Expressing that with einsum:

In [19]: np.einsum('ij,j,lj',A,b,A)
Out[19]:
array([[  784,  1904,  3024],
       [ 1904,  4816,  7728],
       [ 3024,  7728, 12432]])

Some comparative times:

In [20]: timeit np.einsum('ij,j,lj',A,b,A)
10.6 µs ± 22.1 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [21]: timeit np.einsum('ij,jk,lk',A,B,A)
15.2 µs ± 13.3 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [22]: timeit A@B@A.T
7.26 µs ± 20.4 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [23]: timeit A*b@A.T
8.17 µs ± 12.6 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

For einsum the 'condensed' version is faster. With matmul the full diagonal is marginally faster.

But with large arrays, where creating a full B might be a problem, using b might be faster. It's also been observed in other SO questions that iterating over smaller arrays can be faster, thanks to better memory handling. For example:

np.array([A[i,:]*b@A.T for i in range(3)])

Concatenate matrix of matrices in Julia

I think the official command for this is hvcat, which is what's used to digest literal matrix syntax. Whether it's the best solution may depend, e.g., on how big the problem is.

julia> parts = [fill(i + 10j, 2,2) for i in 1:2, j in 1:3]
2×3 Matrix{Matrix{Int64}}:
 [11 11; 11 11]  [21 21; 21 21]  [31 31; 31 31]
 [12 12; 12 12]  [22 22; 22 22]  [32 32; 32 32]

julia> parts[end] = [91 92; 93 94]  # secretly calls hvcat
2×2 Matrix{Int64}:
 91  92
 93  94

julia> hvcat(3, permutedims(parts)...)  # note that it works along the rows!
4×6 Matrix{Int64}:
 11  11  21  21  31  31
 11  11  21  21  31  31
 12  12  22  22  91  92
 12  12  22  22  93  94

julia> using SparseArrays

julia> parts = [sprand(2,2,0.7) for i in 1:2, j in 1:3];

julia> hvcat(3, permutedims(parts)...) |> summary # preserves sparseness
"4×6 SparseMatrixCSC{Float64, Int64} with 17 stored entries"

Why not (linearly) interpolate a matrix by matrix multiplication?

"I figured out how a much simpler…"

Well, you just described convolution in terms of matrix-matrix multiplication. Yes, this works. But it's expensive and numerically unstable, because it accumulates a lot of floating-point rounding errors.
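As a tiny, made-up illustration of that idea (the 2x-upsampling matrix W below is hypothetical, not from your code): each output sample is a fixed weighted sum of input samples, so the whole interpolation becomes one matrix product.

import numpy as np

# Hypothetical (5,3) linear-interpolation matrix: upsamples a length-3
# signal by 2x; each row holds the kernel weights for one output sample.
W = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
x = np.array([0.0, 2.0, 4.0])
print(W @ x)  # [0. 1. 2. 3. 4.]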

So why don't programmers use matrix multiplication to interpolate matrices?

Because it's inefficient. Matrix multiplication has complexity O(N³).

A far more efficient method, and at the same time a "perfect" one (in the sense of signal processing), is to perform a forward Fast Fourier Transform, complexity O(N·log N), zero-pad the spectrum to the desired output resolution, and perform the inverse FFT on that, again complexity O(N·log N), for a total complexity of O(2·N·log N). Constant factors are omitted in Big-O notation, so the complexity is O(N·log N).

The underlying math of this is the Convolution theorem.

This is far, far, FAR better than the O(N³) of matrix multiplication. So that's your reason why we don't use matrix multiplication right there.

If you do a simple zero-pad, you're doing interpolation with a sin(x)/x kernel. But by multiplying the spectrum with the spectrum of a different interpolation kernel, you can implement any kind of interpolation you desire this way.
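A minimal numpy sketch of that recipe for a 1-d signal (the helper name and the amplitude rescaling convention are my own choices; a 2-d matrix would use fft2/ifft2 the same way):

import numpy as np

def fft_interpolate(x, m):
    # Upsample a length-n signal to m samples (m >= n) by zero-padding its
    # spectrum, per the convolution theorem. Assumes a real, periodic signal.
    n = len(x)
    X = np.fft.rfft(x)                        # forward FFT: O(n log n)
    Xp = np.zeros(m // 2 + 1, dtype=complex)  # spectrum at output resolution
    Xp[:len(X)] = X                           # zero-pad the high frequencies
    return np.fft.irfft(Xp, m) * (m / n)      # inverse FFT: O(m log m)

x = np.sin(2 * np.pi * np.arange(8) / 8)  # 8 coarse samples of one sine cycle
y = fft_interpolate(x, 32)                # 32 sinc-interpolated samples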

"Shouldn't graphics cards make this a blazing fast calculation when compared to nested for loops?"

GPUs have linear interpolators built into their texture sampler unit hardware, which will outperform any software implementation.

How to optimize my C++ OpenMp Matrix Multiplication code

The following things can be done for a speedup:

  1. Use OpenMP to parallelize the outer loop, like you did (and like I also do in my code below). Alternatively, std::async can be used for multi-threading, as in another answer.

  2. Transpose the B matrix. This increases L1 cache hits, because each column of B (a row in the transposed variant) is then read from sequential memory.

  3. Use vectorized SIMD instructions, which perform several multiplications (and additions) per CPU instruction. Compilers often auto-vectorize loops well enough without your help, but I did it explicitly in my code.

  4. Run several independent SIMD instructions within the loop. This helps keep the whole SIMD pipeline of the CPU occupied. In my code I do this with four SIMD registers, r0 to r3. Compilers usually call this loop unrolling.

  5. Align the starting address of your matrices on a 64-byte boundary. This helps SIMD instructions do fast aligned reads/writes.

  6. Align the starting address of each matrix row on a 64-byte boundary. In my code I do this by padding each row with zeros up to a multiple of 64 bytes. This also helps SIMD instructions do fast aligned reads/writes.

My code below implements all of steps 1-6 above. The 64-byte memory alignment is achieved with an AlignmentAllocator plugged into std::vector. I also added time measurements for float/double/int.

On my old 4-core laptop I got the following timings for multiplying a 1000x1000 matrix by a 1000x1000 matrix:

 float: time 0.1569 sec
double: time 0.3168 sec
   int: time 0.1565 sec

To compare against my hardware's capabilities, I measured the other answer by @doug for the int case:

Threads w transpose   0.2164 secs.

As one can see, my solution is about 1.4x faster than that answer, I guess thanks to the 64-byte memory alignment and maybe thanks to using explicit SIMD (instead of relying on the compiler's auto-vectorization of a loop).

To compile my program under GCC/Clang, don't forget to add the -fopenmp -lgomp options (for OpenMP support) and the -march=native -O3 -std=c++20 options (for SIMD support, optimizations and the language standard). Under MSVC, enable OpenMP with /openmp and use /O2 /GL /std:c++latest for optimizations and the standard.

In my code I only implemented SSE2/SSE4/AVX/AVX2 instructions for SIMD. If you have a more powerful machine, tell me and I can also implement FMA/AVX-512; they could give up to another 2x speed boost.

My Mul() function is quite generic: it is templated, and you just pass pointers to the matrices plus row/column counts, so on the calling side your matrices may be stored in any way (in a std::vector, a std::array, or a plain 2D array).

At the start of the Run() function you may change the number of rows and columns if you need a bigger test. Note that all my functions support arbitrary rows and columns; you may even multiply a matrix of size 1234x2345 by one of size 2345x3456.

Try it online!

#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <cmath>
#include <stdexcept>
#include <iostream>
#include <iomanip>
#include <vector>
#include <memory>
#include <string>

#include <immintrin.h>

#define USE_OPENMP 1
#define ASSERT_MSG(cond, msg) { if (!(cond)) throw std::runtime_error("Assertion (" #cond ") failed at line " + std::to_string(__LINE__) + "! Msg '" + std::string(msg) + "'."); }
#define ASSERT(cond) ASSERT_MSG(cond, "")
#if defined(_MSC_VER)
#define IS_MSVC 1
#else
#define IS_MSVC 0
#endif

#if USE_OPENMP
#include <omp.h>
#endif

// STL-compatible allocator that aligns every allocation on an N-byte boundary.
template <typename T, std::size_t N>
class AlignmentAllocator {
public:
    typedef T value_type;
    typedef std::size_t size_type;
    typedef std::ptrdiff_t difference_type;
    typedef T * pointer;
    typedef const T * const_pointer;
    typedef T & reference;
    typedef const T & const_reference;

public:
    inline AlignmentAllocator() noexcept {}
    template <typename T2> inline AlignmentAllocator(const AlignmentAllocator<T2, N> &) noexcept {}
    inline ~AlignmentAllocator() noexcept {}
    inline pointer address(reference r) { return &r; }
    inline const_pointer address(const_reference r) const { return &r; }
    inline pointer allocate(size_type n);
    inline void deallocate(pointer p, size_type);
    inline void construct(pointer p, const value_type & value);
    inline void destroy(pointer p) { p->~value_type(); }
    inline size_type max_size() const noexcept { return size_type(-1) / sizeof(value_type); }
    template <typename T2> struct rebind { typedef AlignmentAllocator<T2, N> other; };
    bool operator!=(const AlignmentAllocator<T, N> & other) const { return !(*this == other); }
    bool operator==(const AlignmentAllocator<T, N> & other) const { return true; }
};

template <typename T, std::size_t N>
inline typename AlignmentAllocator<T, N>::pointer AlignmentAllocator<T, N>::allocate(size_type n) {
#if IS_MSVC
    auto p = (pointer)_aligned_malloc(n * sizeof(value_type), N);
#else
    auto p = (pointer)std::aligned_alloc(N, n * sizeof(value_type));
#endif
    ASSERT(p);
    return p;
}
template <typename T, std::size_t N>
inline void AlignmentAllocator<T, N>::deallocate(pointer p, size_type) {
#if IS_MSVC
    _aligned_free(p);
#else
    std::free(p);
#endif
}
template <typename T, std::size_t N>
inline void AlignmentAllocator<T, N>::construct(pointer p, const value_type & value) {
    new (p) value_type(value);
}

template <typename T>
using AlignedVector = std::vector<T, AlignmentAllocator<T, 64>>;

// Maps element type T to the widest SIMD register type available at compile time.
template <typename T>
struct RegT;

#ifdef __AVX__
template <> struct RegT<float> { static size_t constexpr bisize = 256; using type = __m256; static type zero() { return _mm256_setzero_ps(); } };
template <> struct RegT<double> { static size_t constexpr bisize = 256; using type = __m256d; static type zero() { return _mm256_setzero_pd(); } };

inline void MulAddReg(float const * a, float const * b, __m256 & c) {
    c = _mm256_add_ps(c, _mm256_mul_ps(_mm256_load_ps(a), _mm256_load_ps(b)));
}
inline void MulAddReg(double const * a, double const * b, __m256d & c) {
    c = _mm256_add_pd(c, _mm256_mul_pd(_mm256_load_pd(a), _mm256_load_pd(b)));
}

inline void StoreReg(float * dst, __m256 const & src) { _mm256_store_ps(dst, src); }
inline void StoreReg(double * dst, __m256d const & src) { _mm256_store_pd(dst, src); }
#else // SSE2
template <> struct RegT<float> { static size_t constexpr bisize = 128; using type = __m128; static type zero() { return _mm_setzero_ps(); } };
template <> struct RegT<double> { static size_t constexpr bisize = 128; using type = __m128d; static type zero() { return _mm_setzero_pd(); } };

inline void MulAddReg(float const * a, float const * b, __m128 & c) {
    c = _mm_add_ps(c, _mm_mul_ps(_mm_load_ps(a), _mm_load_ps(b)));
}
inline void MulAddReg(double const * a, double const * b, __m128d & c) {
    c = _mm_add_pd(c, _mm_mul_pd(_mm_load_pd(a), _mm_load_pd(b)));
}

inline void StoreReg(float * dst, __m128 const & src) { _mm_store_ps(dst, src); }
inline void StoreReg(double * dst, __m128d const & src) { _mm_store_pd(dst, src); }
#endif

#ifdef __AVX2__
template <> struct RegT<int32_t> { static size_t constexpr bisize = 256; using type = __m256i; static type zero() { return _mm256_setzero_si256(); } };
//template <> struct RegT<int64_t> { static size_t constexpr bisize = 256; using type = __m256i; static type zero() { return _mm256_setzero_si256(); } };

inline void MulAddReg(int32_t const * a, int32_t const * b, __m256i & c) {
    c = _mm256_add_epi32(c, _mm256_mullo_epi32(_mm256_load_si256((__m256i*)a), _mm256_load_si256((__m256i*)b)));
}
//inline void MulAddReg(int64_t const * a, int64_t const * b, __m256i & c) {
//    c = _mm256_add_epi64(c, _mm256_mullo_epi64(_mm256_load_si256((__m256i*)a), _mm256_load_si256((__m256i*)b)));
//}

inline void StoreReg(int32_t * dst, __m256i const & src) { _mm256_store_si256((__m256i*)dst, src); }
//inline void StoreReg(int64_t * dst, __m256i const & src) { _mm256_store_si256((__m256i*)dst, src); }
#else // SSE4.1 (_mm_mullo_epi32 needs SSE4.1, not plain SSE2)
template <> struct RegT<int32_t> { static size_t constexpr bisize = 128; using type = __m128i; static type zero() { return _mm_setzero_si128(); } };
//template <> struct RegT<int64_t> { static size_t constexpr bisize = 128; using type = __m128i; static type zero() { return _mm_setzero_si128(); } };

inline void MulAddReg(int32_t const * a, int32_t const * b, __m128i & c) {
    c = _mm_add_epi32(c, _mm_mullo_epi32(_mm_load_si128((__m128i*)a), _mm_load_si128((__m128i*)b)));
}
//inline void MulAddReg(int64_t const * a, int64_t const * b, __m128i & c) {
//    c = _mm_add_epi64(c, _mm_mullo_epi64(_mm_load_si128((__m128i*)a), _mm_load_si128((__m128i*)b)));
//}

inline void StoreReg(int32_t * dst, __m128i const & src) { _mm_store_si128((__m128i*)dst, src); }
//inline void StoreReg(int64_t * dst, __m128i const & src) { _mm_store_si128((__m128i*)dst, src); }
#endif

// C = A * B, where A is A_rows x A_cols and B is B_rows x B_cols (row-major).
template <typename T>
void Mul(T const * A0, size_t A_rows, size_t A_cols, T const * B0, size_t B_rows, size_t B_cols, T * C) {
    size_t constexpr reg_cnt = RegT<T>::bisize / 8 / sizeof(T), block = 4 * reg_cnt;
    ASSERT(A_cols == B_rows);
    size_t const A_cols_aligned = (A_cols + block - 1) / block * block, B_rows_aligned = (B_rows + block - 1) / block * block;

    // Copy A into aligned storage, rows zero-padded to a multiple of 64 bytes
    AlignedVector<T> Av(A_rows * A_cols_aligned);
    for (size_t i = 0; i < A_rows; ++i)
        std::memcpy(&Av[i * A_cols_aligned], &A0[i * A_cols], sizeof(Av[0]) * A_cols);
    T const * A = Av.data();
    // Transpose B into aligned storage, also with zero-padded rows
    AlignedVector<T> Bv(B_cols * B_rows_aligned);
    for (size_t j = 0; j < B_cols; ++j)
        for (size_t i = 0; i < B_rows; ++i)
            Bv[j * B_rows_aligned + i] = B0[i * B_cols + j];
    T const * Bt = Bv.data();
    ASSERT(uintptr_t(A) % 64 == 0 && uintptr_t(Bt) % 64 == 0);
    ASSERT(uintptr_t(&A[A_cols_aligned]) % 64 == 0 && uintptr_t(&Bt[B_rows_aligned]) % 64 == 0);

    // Multiply
    #pragma omp parallel for
    for (size_t i = 0; i < A_rows; ++i) {
        // Aligned scratch for flushing the SIMD accumulators
        AlignedVector<T> Regs(block);

        for (size_t j = 0; j < B_cols; ++j) {
            T const * Arow = &A[i * A_cols_aligned + 0], * Btrow = &Bt[j * B_rows_aligned + 0];

            using Reg = typename RegT<T>::type;
            // Four independent accumulators keep the SIMD pipeline occupied
            Reg r0 = RegT<T>::zero(), r1 = RegT<T>::zero(), r2 = RegT<T>::zero(), r3 = RegT<T>::zero();

            size_t const k_hi = A_cols - A_cols % block;

            for (size_t k = 0; k < k_hi; k += block) {
                MulAddReg(&Arow[k + reg_cnt * 0], &Btrow[k + reg_cnt * 0], r0);
                MulAddReg(&Arow[k + reg_cnt * 1], &Btrow[k + reg_cnt * 1], r1);
                MulAddReg(&Arow[k + reg_cnt * 2], &Btrow[k + reg_cnt * 2], r2);
                MulAddReg(&Arow[k + reg_cnt * 3], &Btrow[k + reg_cnt * 3], r3);
            }

            StoreReg(&Regs[reg_cnt * 0], r0);
            StoreReg(&Regs[reg_cnt * 1], r1);
            StoreReg(&Regs[reg_cnt * 2], r2);
            StoreReg(&Regs[reg_cnt * 3], r3);

            // Horizontal sum of the accumulators, plus the scalar tail
            T sum1 = 0, sum2 = 0;
            for (size_t k = 0; k < Regs.size(); ++k)
                sum1 += Regs[k];

            for (size_t k = k_hi; k < A_cols; ++k)
                sum2 += Arow[k] * Btrow[k];

            C[i * B_cols + j] = sum2 + sum1; // C's row stride is B_cols
        }
    }
}

#include <random>
#include <thread>
#include <chrono>
#include <type_traits>

// Spot-checks a subset of C against a naive triple-loop reference
template <typename T>
void Test(T const * A, size_t A_rows, size_t A_cols, T const * B, size_t B_rows, size_t B_cols, T const * C, T eps) {
    for (size_t i = 0; i < A_rows / 16; ++i)
        for (size_t j = 0; j < B_cols / 16; ++j) {
            T sum = 0;
            for (size_t k = 0; k < A_cols; ++k)
                sum += A[i * A_cols + k] * B[k * B_cols + j];
            ASSERT_MSG(std::abs(C[i * B_cols + j] - sum) <= eps * A_cols, "i " + std::to_string(i) + " j " + std::to_string(j) +
                " C " + std::to_string(C[i * B_cols + j]) + " ref " + std::to_string(sum));
        }
}

// Seconds elapsed since the first call
double Time() {
    static auto const gtb = std::chrono::high_resolution_clock::now();
    return std::chrono::duration_cast<std::chrono::duration<double>>(
        std::chrono::high_resolution_clock::now() - gtb).count();
}

template <typename T>
void Run() {
    size_t constexpr A_rows = 1000, A_cols = 1000, B_rows = 1000, B_cols = 1000;

    std::string const tname = std::is_same_v<T, float> ? "float" : std::is_same_v<T, double> ?
        "double" : std::is_same_v<T, int32_t> ? "int" : "<unknown>";
    bool const is_int = tname == "int";
    std::mt19937_64 rng{123};
    std::vector<T> A(A_rows * A_cols), B(B_rows * B_cols), C(A_rows * B_cols);
    for (size_t i = 0; i < A.size(); ++i)
        A[i] = is_int ? (int64_t(rng() % (1 << 11)) - (1 << 10)) : (T(int64_t(rng() % (1 << 28)) - (1 << 27)) / T(1 << 27));
    for (size_t i = 0; i < B.size(); ++i)
        B[i] = is_int ? (int64_t(rng() % (1 << 11)) - (1 << 10)) : (T(int64_t(rng() % (1 << 28)) - (1 << 27)) / T(1 << 27));
    auto tim = Time();
    Mul(&A[0], A_rows, A_cols, &B[0], B_rows, B_cols, &C[0]);
    tim = Time() - tim;
    std::cout << std::setw(6) << tname << ": time " << std::fixed << std::setprecision(4) << tim << " sec" << std::endl;
    Test(&A[0], A_rows, A_cols, &B[0], B_rows, B_cols, &C[0], tname == "float" ? T(1e-7) : tname == "double" ? T(1e-15) : T(0));
}

int main() {
    try {
#if USE_OPENMP
        omp_set_num_threads(std::thread::hardware_concurrency());
#endif
        Run<float>();
        Run<double>();
        Run<int32_t>();
        return 0;
    } catch (std::exception const & ex) {
        std::cout << "Exception: " << ex.what() << std::endl;
        return -1;
    }
}

Output:

 float: time 0.1569 sec
double: time 0.3168 sec
   int: time 0.1565 sec

Dimensions of matrix multiplication

You have two problems. One is that R drops the dimensions of a single-column subset, so K[1:2,1] is returned as a vector, not a matrix. We can fix this with the drop = FALSE argument to [.

There's also an order-of-operations problem: %*% has higher precedence than *, so we need parentheses to make the * happen first:

(K[1:2, 1, drop = FALSE] * FF[1]) %*% K1[1, 1:2]
#       [,1]  [,2]
# [1,] 4e+07 2e+07
# [2,] 2e+07 1e+07

Unrelated to your problem, but it seems strange to make FF an array; a length-100 vector would be simpler.


