# Matrix vector multiplication along array axes

In a current project I have a large multidimensional array of shape (I, J, K, N) and a square matrix of dimension N.

I need to perform a matrix-vector multiplication of the last axis of the array with the square matrix.

So the obvious solution would be:

```
for i in range(I):
    for j in range(J):
        for k in range(K):
            arr[i, j, k] = mat.dot(arr[i, j, k])
```

but of course this is rather slow. So I also tried numpy's `tensordot`, but had little success. I would expect that something like:

```
arr = tensordot(mat, arr, axes=((0, 1), (3)))
```

should do the trick but I get a shape mismatch error.
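(Editor's aside: one `axes` spec that does line up here, sketched with small assumed shapes, is to contract the array's last axis against the matrix's *second* axis, which leaves the remaining axes in the original order. The shapes and seed below are illustrative only.)

```python
import numpy as np

# small assumed shapes for illustration
I, J, K, N = 2, 3, 4, 5
rng = np.random.default_rng(0)
arr = rng.standard_normal((I, J, K, N))
mat = rng.standard_normal((N, N))

# contract arr's axis 3 with mat's axis 1:
# out[i, j, k, n] = sum_m mat[n, m] * arr[i, j, k, m]
out = np.tensordot(arr, mat, axes=(3, 1))

# equivalent einsum spelling of the same contraction
out2 = np.einsum('nm,ijkm->ijkn', mat, arr)

assert out.shape == (I, J, K, N)
assert np.allclose(out, out2)
```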

Does someone have a better solution or know how to use `tensordot` correctly?

Thank you!

This should do what your loops do, but with vectorized looping:

```
import numpy as np
from numpy.core.umath_tests import matrix_multiply

arr[..., np.newaxis] = matrix_multiply(mat, arr[..., np.newaxis])
```

`matrix_multiply` and its sister `inner1d` are hidden, undocumented gems of numpy, although a full set of linear algebra gufuncs should see the light with numpy 1.8. `matrix_multiply` does matrix multiplication on the last two dimensions of its inputs, and broadcasting on the rest. The only tricky part is setting an additional dimension, so that it sees column vectors when multiplying, and adding it also on assignment back into the array, so that there is no shape mismatch.
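The same column-vector trick can be sketched with the public `np.matmul` (which broadcasts over leading dimensions just like the gufunc), checked against the original triple loop; the shapes and seed here are illustrative assumptions:

```python
import numpy as np

# small assumed shapes for illustration
I, J, K, N = 2, 3, 4, 5
rng = np.random.default_rng(0)
arr = rng.standard_normal((I, J, K, N))
mat = rng.standard_normal((N, N))

# reference result from the question's explicit loops
expected = np.empty_like(arr)
for i in range(I):
    for j in range(J):
        for k in range(K):
            expected[i, j, k] = mat.dot(arr[i, j, k])

# vectorized: append an axis so each length-N slice is a column
# vector, multiply, then drop the extra axis again
result = np.matmul(mat, arr[..., np.newaxis])[..., 0]

assert np.allclose(result, expected)
```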

I think your `for` loop is wrong, and for this case `dot` seems to be enough:
```
# a is your (I, J, K, N) array
# b is your (N, N) matrix
c = dot(a, b)
```
Here `c` will be an `IJKN` array. If you want to sum over the last dimension to get an `IJK` array:
```
arr = dot(a, b).sum(axis=3)
```