Hello,

I have previously used MXNet to beat Keras+TensorFlow accuracy on CNN regression models.

Now I am trying to implement a stacked LSTM, which runs fine in Keras:

```
from keras.layers import LSTM, Flatten, Input
from keras import Model
import numpy as np

def make_keras_stacked_lstm():
    inp = Input(shape=(100, 1))
    lstm1 = LSTM(16, return_sequences=True)(inp)
    lstm2 = LSTM(1, return_sequences=True)(lstm1)
    outp = Flatten()(lstm2)
    return Model([inp], outp)

def keras_main():
    ins = np.random.uniform(size=(1000, 100, 1))
    outs = np.random.uniform(size=(1000, 100))
    model = make_keras_stacked_lstm()
    model.compile(optimizer='sgd', loss='mean_squared_error')
    model.fit(ins, outs, epochs=1, validation_split=.1)

if __name__ == '__main__':
    keras_main()
```

How can I translate this to MXNet, in either the Symbol or Gluon API? I found no analogue of `return_sequences=True`.