Multiplying Across in a Numpy Array


Normal multiplication like you showed:

>>> import numpy as np
>>> m = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> c = np.array([0,1,2])
>>> m * c
array([[ 0,  2,  6],
       [ 0,  5, 12],
       [ 0,  8, 18]])

If you add an axis, it will multiply the way you want:

>>> m * c[:, np.newaxis]
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])

You could also transpose twice:

>>> (m.T * c).T
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])

How to multiply each row in matrix by its scalar in NumPy?

You can use broadcasting: A * B[:, None]:

array([[ 1,  2,  3],
       [ 8, 10, 12],
       [21, 24, 27]])
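The answer shows only the result; here is a self-contained sketch with hypothetical inputs A and B reconstructed to be consistent with the output above:

```python
import numpy as np

# hypothetical inputs chosen to reproduce the output shown above
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
B = np.array([1, 2, 3])  # one scalar per row

# B[:, None] has shape (3, 1), so it broadcasts across each row of A
print(A * B[:, None])
```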

numpy: multiply arrays rowwise

add an axis to b:

>>> np.multiply(a, b[:, np.newaxis])
array([[ 1,  2],
       [ 6,  8],
       [15, 18],
       [28, 32]])
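Again the inputs are not shown; a hypothetical a and b consistent with that result would be:

```python
import numpy as np

# hypothetical inputs chosen to match the result shown above
a = np.array([[1, 2],
              [3, 4],
              [5, 6],
              [7, 8]])
b = np.array([1, 2, 3, 4])

# b[:, np.newaxis] has shape (4, 1) and broadcasts across each row of a
print(np.multiply(a, b[:, np.newaxis]))
```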

multiplying across in two numpy arrays

In [185]: a = np.random.rand(2, 25)
...: b = np.random.rand(2)

The multiplication is possible with broadcasting:

In [186]: a.shape
Out[186]: (2, 25)
In [187]: a.T.shape
Out[187]: (25, 2)
In [189]: (a.T*b).shape
Out[189]: (25, 2)

(25, 2) * (2,) => (25, 2) * (1, 2) => (25, 2). The final transpose is a moveaxis, changing the result back to (2, 25).
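That broadcasting chain can be checked with a minimal sketch (a fixed seed is used here so the example is reproducible):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((2, 25))
b = rng.random(2)

# (25, 2) * (2,) broadcasts to (25, 2); the outer .T moves the axes back
result = (a.T * b).T

# the same row-wise scaling written with an explicit new axis
expected = a * b[:, np.newaxis]
print(np.allclose(result, expected))  # True
```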

In your second case, where b instead has shape (2, 4):

In [191]: c = np.moveaxis([a * bb for bb in b.T], -1, 0)
In [192]: c.shape
Out[192]: (2, 4, 25)
In [193]: np.array([a * bb for bb in b.T]).shape
Out[193]: (4, 25, 2)

b.T is (4, 2), so each bb is (2,); with the (25, 2) a this produces a (25, 2) result as above. Adding in the (4,) iteration axis gives (4, 25, 2).

(25,1,2) * (1,4,2) => (25,4,2), which can be transposed to (2,4,25)

In [195]: (a[:,None]*b.T).shape
Out[195]: (25, 4, 2)
In [196]: np.allclose((a[:,None]*b.T).T,c)
Out[196]: True

(2,4,1) * (2,1,25) => (2,4,25)

In [197]: (b[:,:,None] * a.T[:,None]).shape
Out[197]: (2, 4, 25)
In [198]: np.allclose((b[:,:,None] * a.T[:,None]),c)
Out[198]: True

Multiplying a 2D numpy array with every row of a 2D numpy array (Without using for loop)

OK, so what you require, based on the explanation you added, is to compute Matrix A * Matrix B[0], ..., Matrix A * Matrix B[n] for every row of Matrix B.
Remember that when multiplying matrices, the order of multiplication is important.

Essentially this is equivalent to (Matrix A * (Matrix B)^T)^T (^T meaning the transpose of the matrix). In NumPy, assuming the matrices are numpy arrays, it can be done like this:

matrixC = MatrixA.dot(MatrixB.T).T
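A small self-contained check of that identity, with hypothetical 2x2 matrices (any conformable shapes work):

```python
import numpy as np

# hypothetical matrices for illustration
MatrixA = np.array([[1.0, 0.0],
                    [0.0, 2.0]])
MatrixB = np.array([[1.0, 2.0],
                    [3.0, 4.0]])

# MatrixA applied to every row of MatrixB, without a Python loop
MatrixC = MatrixA.dot(MatrixB.T).T

# explicit-loop version, for comparison
looped = np.array([MatrixA.dot(row) for row in MatrixB])
print(np.allclose(MatrixC, looped))  # True
```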

Numpy, multiply array with scalar

You can multiply numpy arrays by scalars and it just works.

>>> import numpy as np
>>> np.array([1, 2, 3]) * 2
array([2, 4, 6])
>>> np.array([[1, 2, 3], [4, 5, 6]]) * 2
array([[ 2,  4,  6],
       [ 8, 10, 12]])

This is also a very fast and efficient operation. With your example:

>>> a_1 = np.array([1.0, 2.0, 3.0])
>>> a_2 = np.array([[1., 2.], [3., 4.]])
>>> b = 2.0
>>> a_1 * b
array([2., 4., 6.])
>>> a_2 * b
array([[2., 4.],
       [6., 8.]])

Multiplying a matrix with array of vectors in NumPy

There are a few different ways to solve this problem.

Option 1:

The most straightforward is to reshape the array vectors so that it has shape (3, 128 * 128), then call the builtin np.dot function, and reshape the result back to your desired shape.

(Note that the (128, 128) part of the array's shape is not really relevant to the rotation; it's an interpretation that you probably want to make your problem clearer, but makes no difference to the linear transformation you want to apply. Said another way, you are rotating 3-vectors. There are 128 * 128 == 16384 of them, they just happen to be organized into a 3D array like above.)

This approach would look like:

>>> v = vectors.reshape(-1, 3).T
>>> np.dot(rotation_matrix, v).shape
(3, 16384)
>>> rotated = np.dot(rotation_matrix, v).T.reshape(vectors.shape)
>>> rotated.shape == vectors.shape
True
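The snippet above assumes `vectors` and `rotation_matrix` already exist; one possible self-contained setup (the z-axis rotation is just an illustrative choice) would be:

```python
import numpy as np

# hypothetical inputs matching the shapes discussed above
vectors = np.random.rand(128, 128, 3)      # 128 * 128 three-vectors
theta = np.pi / 4
rotation_matrix = np.array([               # rotation about the z-axis
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

v = vectors.reshape(-1, 3).T               # shape (3, 16384)
rotated = rotation_matrix.dot(v).T.reshape(vectors.shape)
print(rotated.shape)  # (128, 128, 3)
```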

Option 2:

Another method that does not involve any reshaping is to use NumPy's Einstein summation. Einstein summation is very flexible, and takes a while to understand, but its power justifies its complexity. In its simplest form, you "label" the axes that you want to multiply together. Labels that appear in the inputs but are omitted from the output are "contracted", meaning a sum across that axis is computed. For your case, it would be:

>>> np.einsum('ij,klj->kli', rotation_matrix, vectors).shape
(128, 128, 3)
>>> np.allclose(rotated, np.einsum('ij,klj->kli', rotation_matrix, vectors))
True

Here's a quick explanation of the indexing. We are labeling the axes of the rotation matrix i and j, and the axes of the vectors k, l, and j. The repeated j means those are the axes multiplied together. This is equivalent to right-multiplying the reshaped array above with the rotation matrix (i.e., it's a rotation).

The output axes are labeled kli. This means we're preserving the k and l axes of the vectors. Since j is not in the output labels, there is a summation across that axis. Instead we have the axis i, hence the final shape of (128, 128, 3). You can see above that the dot-product method and the einsum method agree.

It can take a while to wrap your head around Einstein summation, but it is super awesome and powerful. I highly recommend learning more about it, especially if this sort of linear algebra is a common problem for you.

Elementwise multiplication of several arrays in Python Numpy

Your fault is in not reading the documentation:

numpy.multiply(x1, x2[, out])

multiply takes exactly two input arrays. The optional third argument is an output array which can be used to store the result. (If it isn't provided, a new array is created and returned.) When you passed three arrays, the third array was overwritten with the product of the first two.
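If the goal is an elementwise product of more than two arrays, you can chain calls two at a time, or use the ufunc's reduce method for an arbitrary number; a quick sketch:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
c = np.array([7, 8, 9])

# two arrays at a time
result = np.multiply(np.multiply(a, b), c)

# or reduce over any number of same-shape arrays
result2 = np.multiply.reduce([a, b, c])
print(result)  # [ 28  80 162]
```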


