Embedding with sparse_grad gets "input contains data out of bound"

Hi,
I’m trying to use sparse_grad=True in a Gluon Embedding model. With sparse_grad=False everything works, but when I switch to sparse_grad=True the job fails with:

MXNetError: [22:06:29] src/operator/tensor/indexing_op.cu:284: Check failed: is_valid Embedding input contains data out of bound

I have checked that the inputs to the embedding are indices with values < input_dim, and the data is exactly the same in both cases. I suspect the error actually originates somewhere else and is being reported incorrectly.
Any idea how to approach this?
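For reference, this is roughly the check I ran on the inputs (a sketch; tokens is a placeholder for the actual index batch fed to the Embedding layer):

import mxnet as mx

def check_indices(tokens, input_dim):
    # tokens: NDArray of row indices that will be passed to the Embedding layer
    vals = tokens.asnumpy()
    assert (vals >= 0).all() and (vals < input_dim).all(), "index out of bound"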

Are there any other parameters that should be different when calling Embedding with sparse_grad=True?

Also, this happens when I use an SCE loss function:

loss = gluon.loss.SoftmaxCrossEntropyLoss(axis=1, sparse_label=True, batch_axis=0)
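For context, my setup looks roughly like the following (a minimal sketch with made-up sizes and a placeholder Dense head, not the real model):

import mxnet as mx
from mxnet import gluon, autograd

vocab_size, embed_dim, num_classes, batch_size = 1000, 32, 10, 4   # toy sizes

net = gluon.nn.Sequential()
net.add(gluon.nn.Embedding(vocab_size, embed_dim, sparse_grad=True),
        gluon.nn.Dense(num_classes))
net.initialize()

loss = gluon.loss.SoftmaxCrossEntropyLoss(axis=1, sparse_label=True, batch_axis=0)
trainer = gluon.Trainer(net.collect_params(), 'adam')

tokens = mx.nd.array([1, 5, 42, 7])    # placeholder row indices, all < vocab_size
labels = mx.nd.array([0, 2, 1, 9])     # placeholder class ids

with autograd.record():
    out = net(tokens)                  # (batch_size, num_classes)
    l = loss(out, labels)
l.backward()
trainer.step(batch_size)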

I think there might be an inconsistency in error handling depending on the value of sparse_grad. When it is set to False, the op handles invalid indices as documented:
By default, if any index mentioned is too large, it is replaced by the index that addresses the last vector in an embedding matrix.

But when sparse_grad=True, it raises an exception, and that behaviour is not documented at all.
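The difference is easy to see with the imperative op directly (a small probe, assuming the documented clamping behaviour quoted above; the exact behaviour may depend on MXNet version and device):

import mxnet as mx
from mxnet.base import MXNetError

weight = mx.nd.arange(20).reshape((5, 4))   # input_dim=5, output_dim=4
idx = mx.nd.array([2, 7])                   # 7 is out of bound for input_dim=5

# Dense gradient path: the out-of-bound index is silently clamped per the docs
print(mx.nd.Embedding(idx, weight, input_dim=5, output_dim=4))

# Sparse gradient path: the same call fails the is_valid check instead
try:
    print(mx.nd.Embedding(idx, weight, input_dim=5, output_dim=4, sparse_grad=True))
except MXNetError as err:
    print(err)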

Issue solved. The problem was that the row indices were generated with mx.nd.random_uniform, which produces floating-point values instead of integers. randint was not included in previous MXNet versions, but it is now. Using randint solved the problem. Thanks Haibin and the MXNet team for helping offline.
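For anyone hitting the same thing, the change was essentially this (a sketch; vocab_size and the shape are placeholders):

import mxnet as mx

vocab_size = 1000

# Before: uniform sampling returns non-integral floats, which tripped the
# sparse_grad=True bound check in my case
bad_idx = mx.nd.random.uniform(0, vocab_size, shape=(8,))

# After: sample integer indices directly (randint is available in recent MXNet)
good_idx = mx.nd.random.randint(0, vocab_size, shape=(8,))

# If randint is not available in your version, flooring the floats is one workaround
floored = mx.nd.floor(bad_idx)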