Error while running prediction with the FastText text classification model from the gluon-nlp model zoo, trained on the Yelp review polarity dataset

Description

I am trying to run a prediction with the FastText word n-gram model trained on the Yelp review polarity dataset, and I am getting the error message below. The entire training process otherwise works fine. I ran only one epoch on a truncated version of the dataset for quicker reproduction. I can't figure out how to solve this.

Error Message

INFO:root:Ngrams range for the training run : 1
INFO:root:Loading Training data
INFO:root:Opening file yelp_review_polarity/train_m.csv for reading input
INFO:root:Loading Test data
INFO:root:Opening file yelp_review_polarity/test_m.csv for reading input
INFO:root:Vocabulary size: 298608
INFO:root:Training data converting to sequences…
INFO:root:Done! Sequence conversion Time=4.39s, #Sentences=56294
INFO:root:Done! Sequence conversion Time=2.81s, #Sentences=3749
INFO:root:Encoding labels
INFO:root:Label mapping:{'1': 0, '2': 1}
INFO:root:Done! Preprocessing Time=0.39s, #Sentences=56294
INFO:root:Done! Preprocessing Time=0.16s, #Sentences=3749
INFO:root:Number of labels: 2
INFO:root:Initializing network
INFO:root:Running Training on ctx:gpu(0)
INFO:root:Embedding Matrix Length:298608
INFO:root:Number of output units in the last layer :1
INFO:root:Network initialized
INFO:root:Changing the loss function to sigmoid since its Binary Classification
INFO:root:Loss function for training:SigmoidBinaryCrossEntropyLoss(batch_axis=0, w=None)
INFO:root:Starting Training!
INFO:root:Training on 56294 samples and testing on 3749 samples
INFO:root:Number of batches for each epoch : 3518.375, Display cadence: 352
INFO:root:Epoch : 0, Batches complete :0
INFO:root:Epoch : 0, Batches complete :352
INFO:root:Epoch : 0, Batches complete :704
INFO:root:Epoch : 0, Batches complete :1056
INFO:root:Epoch : 0, Batches complete :1408
INFO:root:Epoch : 0, Batches complete :1760
INFO:root:Epoch : 0, Batches complete :2112
INFO:root:Epoch complete :0, Computing Accuracy
INFO:root:Epochs completed : 0 Test Accuracy: 0.9007735396105628, Test Loss: 0.3058077375582877
Traceback (most recent call last):
  File "fasttext_word_ngram.py", line 422, in <module>
    train(arguments)
  File "fasttext_word_ngram.py", line 416, in train
    logging.info(net(mx.nd.reshape(mx.nd.array(train_vocab[['This', 'movie', 'is', 'awful']], ctx=ctx), shape=(-1, 1)), mx.nd.array([4], ctx=ctx)).sigmoid())
  File "/home/ubuntu/.local/lib/python3.6/site-packages/mxnet/gluon/block.py", line 693, in __call__
    out = self.forward(*args)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/mxnet/gluon/block.py", line 1148, in forward
    return self._call_cached_op(x, *args)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/mxnet/gluon/block.py", line 1020, in _call_cached_op
    out = self._cached_op(*cargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py", line 170, in __call__
    ctypes.byref(out_stypes)))
  File "/home/ubuntu/.local/lib/python3.6/site-packages/mxnet/base.py", line 255, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: Error in operator fasttextclassificationmodel0_meanpoolinglayer0_sequencemask0: Shape inconsistent, Provided = [1], inferred shape=(4,)
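
If I read the error right, SequenceMask wants one valid-length entry per batch element: it inferred a batch of 4 from my (4, 1) input but got a length vector of shape (1,). Below is a minimal standalone sketch of my understanding (this is not code from the script; I am assuming the pooling layer masks with axis=1, i.e. batch-major data, which the inferred shape (4,) suggests) that reproduces the same mismatch:

import mxnet as mx

# Embedded batch as the model apparently sees my input: 4 sequences of length 1
emb = mx.nd.random.uniform(shape=(4, 1, 8))  # (batch=4, length=1, channels=8)

try:
    # With axis=1 as the sequence axis, axis 0 is the batch axis, so
    # sequence_length must have shape (4,); passing shape (1,) fails
    mx.nd.SequenceMask(emb, sequence_length=mx.nd.array([4]),
                       use_sequence_length=True, axis=1)
except mx.base.MXNetError as err:
    print(err)  # shape-inconsistency error like the one above

# A (1, 4) batch-major input accepts a length vector of shape (1,)
emb = mx.nd.random.uniform(shape=(1, 4, 8))  # (batch=1, length=4, channels=8)
masked = mx.nd.SequenceMask(emb, sequence_length=mx.nd.array([4]),
                            use_sequence_length=True, axis=1)
print(masked.shape)  # (1, 4, 8)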

To Reproduce

I just added the line below at the end of the train function. It is inspired by the sentiment analysis tutorial, where prediction is done in a similar way.

logging.info(net(mx.nd.reshape(mx.nd.array(train_vocab[['This', 'movie', 'is', 'awful']], ctx=ctx), shape=(-1, 1)), mx.nd.array([4], ctx=ctx)).sigmoid())
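
My guess, based on the shapes above and not verified against the script internals, is that this model expects batch-major (batch_size, length) input, unlike the sentiment analysis tutorial's (length, batch_size) layout, so reshaping to (1, -1) should give one sentence of length 4 with a matching valid-length vector of shape (1,):

# Hedged sketch: the same prediction call, but batch-major (1, 4) instead of (4, 1);
# net, train_vocab, ctx and logging are the script's own variables in train()
token_ids = mx.nd.array(train_vocab[['This', 'movie', 'is', 'awful']], ctx=ctx)
logging.info(net(mx.nd.reshape(token_ids, shape=(1, -1)),  # one sentence of length 4
                 mx.nd.array([4], ctx=ctx)).sigmoid())     # one valid length per sentence

If that is right, the sigmoid output should come back as a single probability for the one sentence, but I would appreciate confirmation of the expected input layout.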