Error when porting keras-tf LSTM to keras-mxnet

Hi, I’d like to use MXNet for text classification. My tokens are not exactly text, but product IDs, and my use case is product recommendation: given a sequence of product IDs (browse or purchase), I want to predict the next item in the sequence.
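For context, here is a minimal sketch (with made-up product IDs and a hypothetical `encode_and_pad` helper) of how such sessions could be turned into fixed-length integer sequences, analogous to what `pad_sequences` does for the IMDB reviews below:

```python
# Hypothetical example: encode product-ID sessions as integer indices
# and left-pad them to a fixed length, as pad_sequences does for text.
sessions = [
    ["P17", "P42", "P17", "P99"],  # one user's browse/purchase history
    ["P42", "P08"],
]

# Build a vocabulary; index 0 is reserved for padding.
vocab = {pid: i + 1
         for i, pid in enumerate(sorted({p for s in sessions for p in s}))}

def encode_and_pad(session, maxlen):
    ids = [vocab[p] for p in session]
    ids = ids[-maxlen:]                      # truncate, keeping the most recent items
    return [0] * (maxlen - len(ids)) + ids   # left-pad with zeros

X = [encode_and_pad(s, maxlen=5) for s in sessions]
# For next-item prediction, the target would be the product that follows each sequence.
```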
As a start, I’m trying to run the simple snippet from this blog post:

# LSTM for sequence classification in the IMDB dataset
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence

# fix random seed for reproducibility
numpy.random.seed(7)

# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)

# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)

# create the model
embedding_vecor_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary()), y_train, epochs=3, batch_size=64)

# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

I’m testing in an Amazon SageMaker notebook instance (p2.xlarge). It works like a charm in the tensorflow kernel, but in the mxnet kernel it errors:

ValueError                                Traceback (most recent call last)
<ipython-input-45-e268cc4eb5b2> in <module>()
      4 model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
      5 #model.add(LSTM(100, return_sequences=True))
----> 6 model.add(Dense(32, activation='sigmoid'))
      7 model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

~/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/keras/ in add(self, layer)
    330                  output_shapes=[self.outputs[0]._keras_shape])
    331         else:
--> 332             output_tensor = layer(self.outputs[0])
    333             if isinstance(output_tensor, list):
    334                 raise TypeError('All layers in a Sequential model '

~/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/keras/engine/ in __call__(self, x, mask)
    527             # Raise exceptions in case the input is not compatible
    528             # with the input_spec specified in the layer constructor.
--> 529             self.assert_input_compatibility(x)
    531             # Collect input shapes to build layer.

~/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/keras/engine/ in assert_input_compatibility(self, input)
    460                                 + ': expected ndim >= ' +
    461                                          str(ndim) + ', found ndim=' +
--> 462                                          str(K.ndim(x)))
    463                 else:
    464                     if K.ndim(x) != spec.ndim:

ValueError: Input 0 is incompatible with layer dense_4: expected ndim >= 2, found ndim=0

Why is that?

Are you using the latest keras-mxnet, which was released 12 days ago? The release notes mention: "Added support for RNN with unrolling set to False by default #168, requires latest mxnet (1.3.1 or newer), see rnn examples under examples folder (imdb_lstm, addition_rnn, mnist_irnn)"

Also check out this tutorial on RNNs with Keras-MXNet.