Is there an API (or accepted standard way) for an outer product in MXNet? Something similar to NumPy’s `numpy.outer()`, PyTorch’s `torch.ger()`, or TensorFlow’s `tf.einsum()`?

Apologies if I’ve missed something incredibly obvious.

I see that there is `mxnet.ndarray.khatri_rao()` in release `1.1.0b20180211`, but is there an easy way to get the outer product from that?

@dmadeka, given that it seems you developed this work-around do you have any advice here?
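For reference, under the usual column-wise Kronecker definition of the Khatri-Rao product, two single-column inputs reduce to the flattened outer product. A minimal NumPy sketch of that identity (whether `mxnet.ndarray.khatri_rao()` follows this exact definition is an assumption here):

```python
import numpy as np

# Column-wise Kronecker (Khatri-Rao) of two single-column matrices
# equals the flattened outer product of the underlying vectors.
# Assumption: mxnet.ndarray.khatri_rao uses this standard definition.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])

khatri_rao_col = np.kron(a[:, None], b[:, None])  # shape (6, 1)
outer = np.outer(a, b)                            # shape (3, 2)

print(np.allclose(khatri_rao_col.ravel(), outer.ravel()))  # True
```

So if the MXNet operator matches this definition, the outer product should be recoverable with a single `reshape` of the Khatri-Rao output.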

Do you want to do this for more than 3 dimensions? What would be a use case for that?

FWIW - if I had to do this, I would write it out explicitly; implementing einsum involves a lot of parsing:

```
# equivalent to einsum('ji,jk->ijk', x, x)
mx.sym.swapaxes(mx.sym.broadcast_mul(mx.sym.expand_dims(x, 1), mx.sym.expand_dims(x, 2)), 0, 1)
```
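To check that the broadcast-and-swap expression really matches the einsum signature, here is the same computation in NumPy (hypothetical data; shapes chosen only for illustration):

```python
import numpy as np

# Hypothetical input: result[i, j, k] = x[j, i] * x[j, k],
# i.e. a per-row outer product with the row axis moved to the middle.
x = np.arange(6.0).reshape(3, 2)  # shape (J, N) = (3, 2)

via_einsum = np.einsum('ji,jk->ijk', x, x)

# Broadcasting form mirroring the MXNet expression above:
# (J, N, 1) * (J, 1, N) -> (J, N, N), then swap axes 0 and 1 -> (N, J, N)
via_broadcast = np.swapaxes(x[:, :, None] * x[:, None, :], 0, 1)

print(np.allclose(via_einsum, via_broadcast))  # True
```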

Maybe @mseeger has something better - I’ve never really needed to do it

`mx.nd.linalg.gemm2` or `mx.sym.linalg.gemm2` do the job in a limited way. They take two tensors of arbitrary rank but contract only one of the last two indices. As far as I know, general tensor contraction is not supported.

```
>>> x = mx.nd.ones((3,1))
>>> y = mx.nd.ones((1,3))
>>> mx.nd.linalg.gemm2(x, y)
[[1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]
<NDArray 3x3 @cpu(0)>
>>> mx.nd.linalg.gemm2(y, x)
[[3.]]
<NDArray 1x1 @cpu(0)>
```
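The underlying math is just a column-vector-times-row-vector matrix product. A NumPy sketch of the same two calls (framework-independent, so it shows what `gemm2` is computing rather than MXNet’s API itself):

```python
import numpy as np

# Outer product of two vectors as an (n, 1) x (1, m) matrix product,
# matching the gemm2 calls above.
x = np.ones((3, 1))
y = np.ones((1, 3))

outer = x @ y   # shape (3, 3), all ones: the outer product
inner = y @ x   # shape (1, 1), value 3.0: the inner product
```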

You can choose which of the last two indices to contract using the `transpose_a` and `transpose_b` options. For general contractions, you need to combine `gemm2` with the `swapaxes` and `reshape` functions; the former requires memory allocation and a copy while the latter does not, so it is better to use `reshape` where possible. If you need an addition after the contraction, `gemm` is the right operation instead of `gemm2`.
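As an illustration of the reshape-then-contract pattern, here is a NumPy sketch (hypothetical shapes) of contracting the last index of a rank-3 tensor against a matrix with a single matrix product; the same structure would apply with `mx.nd.reshape` and `gemm2`:

```python
import numpy as np

# Contract a (2, 3, 4) tensor with a (4, 5) matrix over the last index
# by flattening the leading axes, doing one matmul, and reshaping back.
a = np.arange(24.0).reshape(2, 3, 4)
b = np.arange(20.0).reshape(4, 5)

flat = a.reshape(-1, 4)            # (6, 4); a reshape, no axis swap needed
out = (flat @ b).reshape(2, 3, 5)  # restore the leading axes

# Reference result computed with tensordot
ref = np.tensordot(a, b, axes=([2], [0]))
print(np.allclose(out, ref))  # True
```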