mxnet.ndarray.LeakyReLU in custom Gluon block

I tried to create a Gluon building block using mxnet.ndarray’s functions in the manner described in:
https://mxnet.incubator.apache.org/tutorials/gluon/gluon.html

Every act_type works for mxnet.ndarray.LeakyReLU except ‘prelu’. Why?
https://mxnet.incubator.apache.org/versions/0.12.0/api/python/ndarray/ndarray.html#mxnet.ndarray.LeakyReLU

It supports the following act_type:
• elu: Exponential Linear Unit. y = x > 0 ? x : slope * (exp(x)-1)
• leaky: Leaky ReLU. y = x > 0 ? x : slope * x
• prelu: Parametric ReLU. This is same as leaky except that slope is learnt during training.
• rrelu: Randomized ReLU. same as leaky but the slope is uniformly and randomly chosen from [lower_bound, upper_bound) for training, while fixed to be (lower_bound+upper_bound)/2 for inference.
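For example, the single-input modes run fine when called directly on an NDArray (a quick sanity check; slope given explicitly):

from mxnet import nd

x = nd.array([[-2.0, 0.0, 2.0]])
print(nd.LeakyReLU(x, act_type='leaky', slope=0.25))  # [[-0.5  0.  2.]]
print(nd.LeakyReLU(x, act_type='elu', slope=0.25))    # roughly [[-0.216  0.  2.]], i.e. 0.25*(exp(-2)-1) for x < 0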

I can get it to work fine for every act_type except ‘prelu’, where I get the following error:

Operator LeakyReLU expects 2 inputs, but got 1 instead.

The documentation does not make it clear what the second input should be. Ideas?

# Trying to use the ndarray LeakyReLU family of activations in Gluon
# https://github.com/apache/incubator-mxnet/blob/master/src/operator/leaky_relu-inl.h
from mxnet import gluon, ndarray as F
from mxnet.gluon import nn

# K1..K4, num_fc, and num_outputs are hyperparameters defined earlier in my script
class Net(gluon.Block):
    def __init__(self, **kwargs):
        super(Net, self).__init__(**kwargs)
        with self.name_scope():
            # layers created in name_scope will inherit the name space
            # of the parent layer
            self.conv1 = nn.Conv2D(channels=K1, kernel_size=3)
            self.conv2 = nn.Conv2D(channels=K2, kernel_size=3)
            self.conv3 = nn.Conv2D(channels=K3, kernel_size=3)
            self.conv4 = nn.Conv2D(channels=K4, kernel_size=3)
            self.fc1 = nn.Dense(num_fc)
            self.fc2 = nn.Dense(num_outputs)

    def forward(self, x):
        x = F.LeakyReLU(self.conv1(x), act_type='elu')
        x = F.LeakyReLU(self.conv2(x), act_type='leaky')
        x = F.LeakyReLU(self.conv3(x), act_type='rrelu')
        # changing 'prelu' on the next line to 'elu', 'leaky', or 'rrelu' makes it work
        x = F.LeakyReLU(self.conv4(x), act_type='prelu')
        x = x.reshape((0, -1))
        x = F.LeakyReLU(self.fc1(x))  # default act_type is 'leaky'
        x = self.fc2(x)
        return x

net = Net()

Here is the full stack trace that I get back:

[08:07:54] /Users/travis/build/dmlc/mxnet-distro/mxnet-build/dmlc-core/include/dmlc/logging.h:308: [08:07:54] src/c_api/c_api_ndarray.cc:76: Check failed: num_inputs == infered_num_inputs (1 vs. 2) Operator LeakyReLU expects 2 inputs, but got 1 instead.

Stack trace returned 5 entries:
[bt] (0) 0   libmxnet.so                         0x0000000109c958d8 _ZN4dmlc15LogMessageFatalD2Ev + 40
[bt] (1) 1   libmxnet.so                         0x000000010aae7d2a _Z13SetNumOutputsPKN4nnvm2OpERKNS_9NodeAttrsERKiPiS8_ + 730
[bt] (2) 2   libmxnet.so                         0x000000010aae8658 _Z22MXImperativeInvokeImplPviPS_PiPS0_iPPKcS5_ + 232
[bt] (3) 3   libmxnet.so                         0x000000010aae8ba4 MXImperativeInvokeEx + 164
[bt] (4) 4   _ctypes.cpython-36m-darwin.so       0x00000001092e549f ffi_call_unix64 + 79

Traceback (most recent call last):
  File "gluon-mnist-v6.py", line 195, in 
    output = net(data)
  File "/Users/bc/mxnet/lib/python3.6/site-packages/mxnet/gluon/block.py", line 290, in __call__
    return self.forward(*args)
  File "gluon-mnist-v6.py", line 106, in forward
    x = F.LeakyReLU(self.conv4(x), act_type='prelu')
  File "", line 61, in LeakyReLU
  File "/Users/bc/mxnet/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke
    ctypes.byref(out_stypes)))
  File "/Users/bc/mxnet/lib/python3.6/site-packages/mxnet/base.py", line 146, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [08:07:54] src/c_api/c_api_ndarray.cc:76: Check failed: num_inputs == infered_num_inputs (1 vs. 2) Operator LeakyReLU expects 2 inputs, but got 1 instead.

Stack trace returned 5 entries:
[bt] (0) 0   libmxnet.so                         0x0000000109c958d8 _ZN4dmlc15LogMessageFatalD2Ev + 40
[bt] (1) 1   libmxnet.so                         0x000000010aae7d2a _Z13SetNumOutputsPKN4nnvm2OpERKNS_9NodeAttrsERKiPiS8_ + 730
[bt] (2) 2   libmxnet.so                         0x000000010aae8658 _Z22MXImperativeInvokeImplPviPS_PiPS0_iPPKcS5_ + 232
[bt] (3) 3   libmxnet.so                         0x000000010aae8ba4 MXImperativeInvokeEx + 164
[bt] (4) 4   _ctypes.cpython-36m-darwin.so       0x00000001092e549f ffi_call_unix64 + 79

Do you see the same problem with version 1.0.0?

Thanks for responding. I have verified that mx.sym.LeakyReLU works in plain MXNet; print(mx.__version__) returns 0.12.1.

conv1 = mx.sym.LeakyReLU(data=conv1, act_type='prelu', name='conv1_act')
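Listing the symbol’s arguments shows a second input being auto-created at bind time, which I suspect is exactly the input missing from the imperative call:

import mxnet as mx

data = mx.sym.Variable('data')
act = mx.sym.LeakyReLU(data=data, act_type='prelu', name='conv1_act')
# the gamma input shows up as a free variable that gets bound like a weight
print(act.list_arguments())  # ['data', 'conv1_act_gamma']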

What is the best way to install Gluon 1.0.0? I’m using virtualenv.

pip install mxnet should work.

I did the following…
pip install gluon==1.0.0

It failed with exactly the same error AFAIK:

mxnet.base.MXNetError: [10:56:25] src/c_api/c_api_ndarray.cc:76: Check failed: num_inputs == infered_num_inputs (1 vs. 2) Operator LeakyReLU expects 2 inputs, but got 1 instead.

Full error message below, which to me looks like the same error from my original question:

[11:04:52] /Users/travis/build/dmlc/mxnet-distro/mxnet-build/dmlc-core/include/dmlc/logging.h:308: [11:04:52] src/c_api/c_api_ndarray.cc:76: Check failed: num_inputs == infered_num_inputs (1 vs. 2) Operator LeakyReLU expects 2 inputs, but got 1 instead.

Stack trace returned 5 entries:
[bt] (0) 0   libmxnet.so                         0x000000010be968d8 _ZN4dmlc15LogMessageFatalD2Ev + 40
[bt] (1) 1   libmxnet.so                         0x000000010cce8d2a _Z13SetNumOutputsPKN4nnvm2OpERKNS_9NodeAttrsERKiPiS8_ + 730
[bt] (2) 2   libmxnet.so                         0x000000010cce9658 _Z22MXImperativeInvokeImplPviPS_PiPS0_iPPKcS5_ + 232
[bt] (3) 3   libmxnet.so                         0x000000010cce9ba4 MXImperativeInvokeEx + 164
[bt] (4) 4   _ctypes.cpython-36m-darwin.so       0x000000010b4e549f ffi_call_unix64 + 79

Traceback (most recent call last):
  File "gluon-mnist-v6.py", line 192, in <module>
    output = net(data)
  File "/Users/bc/mxnet/lib/python3.6/site-packages/mxnet/gluon/block.py", line 290, in __call__
    return self.forward(*args)
  File "gluon-mnist-v6.py", line 108, in forward
    x = self.pool4(F.LeakyReLU(self.conv4(x), act_type='prelu'))
  File "<string>", line 61, in LeakyReLU
  File "/Users/bc/mxnet/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke
    ctypes.byref(out_stypes)))
  File "/Users/bc/mxnet/lib/python3.6/site-packages/mxnet/base.py", line 146, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [11:04:52] src/c_api/c_api_ndarray.cc:76: Check failed: num_inputs == infered_num_inputs (1 vs. 2) Operator LeakyReLU expects 2 inputs, but got 1 instead.

Stack trace returned 5 entries:
[bt] (0) 0   libmxnet.so                         0x000000010be968d8 _ZN4dmlc15LogMessageFatalD2Ev + 40
[bt] (1) 1   libmxnet.so                         0x000000010cce8d2a _Z13SetNumOutputsPKN4nnvm2OpERKNS_9NodeAttrsERKiPiS8_ + 730
[bt] (2) 2   libmxnet.so                         0x000000010cce9658 _Z22MXImperativeInvokeImplPviPS_PiPS0_iPPKcS5_ + 232
[bt] (3) 3   libmxnet.so                         0x000000010cce9ba4 MXImperativeInvokeEx + 164
[bt] (4) 4   _ctypes.cpython-36m-darwin.so       0x000000010b4e549f ffi_call_unix64 + 79

Sorry, I’m not sure the entry I just posted was correct; I’m deleting it now and investigating further…

OK, perhaps someone else can reproduce my issue using the Jupyter notebook at:
http://gluon.mxnet.io/chapter03_deep-neural-networks/mlp-gluon.html

I run all of the preceding cells as-is, but change cell “In [55]:” to the following; with ‘elu’ it runs with no errors:

class MLP(gluon.Block):
    def __init__(self, **kwargs):
        super(MLP, self).__init__(**kwargs)
        with self.name_scope():
            self.dense0 = gluon.nn.Dense(64)
            self.dense1 = gluon.nn.Dense(64)
            self.dense2 = gluon.nn.Dense(10)

    def forward(self, x):
        x = nd.LeakyReLU(self.dense0(x), act_type='elu')
        x = nd.LeakyReLU(self.dense1(x), act_type='elu')
        x = self.dense2(x)
        return x

Simply changing ‘elu’ to ‘prelu’, which the LeakyReLU documentation suggests should be fine, produces this error:

MXNetError                                Traceback (most recent call last)
<ipython-input> in <module>()
      1 data = nd.ones((1,784))
----> 2 net(data.as_in_context(model_ctx))

~/mxnet/lib/python3.6/site-packages/mxnet/gluon/block.py in __call__(self, *args)
    288     def __call__(self, *args):
    289         """Calls forward. Only accepts positional arguments."""
--> 290         return self.forward(*args)
    291
    292     def forward(self, *args):

<ipython-input> in forward(self, x)
      8
      9     def forward(self, x):
---> 10         x = nd.LeakyReLU(self.dense0(x), act_type='prelu')
     11         x = nd.LeakyReLU(self.dense1(x), act_type='prelu')
     12         x = self.dense2(x)

~/mxnet/lib/python3.6/site-packages/mxnet/ndarray/register.py in LeakyReLU(data, act_type, slope, lower_bound, upper_bound, out, name, **kwargs)

~/mxnet/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py in _imperative_invoke(handle, ndargs, keys, vals, out)
     90             c_array(ctypes.c_char_p, [c_str(key) for key in keys]),
     91             c_array(ctypes.c_char_p, [c_str(str(val)) for val in vals]),
---> 92             ctypes.byref(out_stypes)))
     93
     94         if original_output is not None:

~/mxnet/lib/python3.6/site-packages/mxnet/base.py in check_call(ret)
    144     """
    145     if ret != 0:
--> 146         raise MXNetError(py_str(_LIB.MXGetLastError()))
    147
    148     if sys.version_info[0] < 3:

MXNetError: [13:01:15] src/c_api/c_api_ndarray.cc:76: Check failed: num_inputs == infered_num_inputs (1 vs. 2) Operator LeakyReLU expects 2 inputs, but got 1 instead.

Stack trace returned 5 entries:
[bt] (0) 0   libmxnet.so                         0x00000001092a98d8 _ZN4dmlc15LogMessageFatalD2Ev + 40
[bt] (1) 1   libmxnet.so                         0x000000010a0fbd2a _Z13SetNumOutputsPKN4nnvm2OpERKNS_9NodeAttrsERKiPiS8_ + 730
[bt] (2) 2   libmxnet.so                         0x000000010a0fc658 _Z22MXImperativeInvokeImplPviPS_PiPS0_iPPKcS5_ + 232
[bt] (3) 3   libmxnet.so                         0x000000010a0fcba4 MXImperativeInvokeEx + 164
[bt] (4) 4   _ctypes.cpython-36m-darwin.so       0x00000001084d449f ffi_call_unix64 + 79

Diving into the MXNet code:

I find that LeakyReLU with ‘prelu’ takes a gamma parameter that serves as the slope’s initial weight and is then learned during training:
https://mxnet.incubator.apache.org/versions/0.12.0/api/python/ndarray/ndarray.html#mxnet.ndarray.LeakyReLU

case leakyrelu::kPReLU: {
  weight = in_data[leakyrelu::kGamma].get<xpu, 1, real_t>(s);
  grad_weight = in_grad[leakyrelu::kGamma].get<xpu, 1, real_t>(s);
  grad_weight = sumall_except_dim<1>(F<prelu_grad>(data) * grad);
  gdata = F<mshadow_op::xelu_grad>(data, mshadow::expr::broadcast<1>(weight, data.shape_))
          * grad;
  break;
}

I’ve tried, but I don’t have enough understanding of how to correctly provide gamma in either of my examples above; being new to Gluon/MXNet I could use some help. I also saw in a post that kGamma is the same index used in BatchNorm. Is that right?
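Update: here is my best sketch of how gamma might be supplied from a Gluon Block. I’m assuming the Python frontend exposes the operator’s second input as a gamma keyword (the 1.x API docs list it; the 0.12 signature shown in the traceback above does not), with one learnable slope per channel initialized to 0.25:

import mxnet as mx
from mxnet import gluon, nd

class PReLU(gluon.Block):
    # Sketch: wrap nd.LeakyReLU(act_type='prelu') with a learnable gamma.
    # Assumes the frontend exposes the second operator input as `gamma`
    # (listed in the 1.x API docs; absent from the 0.12 signature).
    def __init__(self, num_channels, **kwargs):
        super(PReLU, self).__init__(**kwargs)
        with self.name_scope():
            # one learnable slope per channel; 0.25 is the usual PReLU init
            self.gamma = self.params.get('gamma', shape=(num_channels,),
                                         init=mx.init.Constant(0.25))

    def forward(self, x):
        return nd.LeakyReLU(x, gamma=self.gamma.data(), act_type='prelu')

In my Net above this would become self.act4 = PReLU(K4) in __init__ and x = self.act4(self.conv4(x)) in forward, after calling net.collect_params().initialize(). If someone can confirm this is the intended use of gamma, I’ll fold it into a documentation example.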

I believe the documentation for mxnet.ndarray.LeakyReLU should be updated: it makes no mention of the gamma parameter or how to initialize it. I’m more than happy to take a stab at updating the documentation once I have a good working example.

I might also suggest writing up a ‘prelu’ example use case for other users, since I’ve seen unanswered questions and confusion around the web about PReLU in MXNet. I’m happy to do this once I have something that works.

Finally, just an observation: in MXNet all of the ReLU-like activations are called LeakyReLU, which is a bit strange. While this may be easier to implement, it isn’t as friendly in user code, it seems to me:
• mxnet.ndarray.LeakyReLU (act_type: “rrelu”, “leaky”, “prelu”, “elu”)
• mxnet.symbol.LeakyReLU (act_type: “rrelu”, “leaky”, “prelu”, “elu”)
• mxnet.gluon.nn.LeakyReLU (only max(x, alpha*x), i.e. “leaky” above)
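So if you only need the fixed-slope case, the Gluon layer is the simplest route (alpha is the fixed negative slope):

from mxnet import nd
from mxnet.gluon import nn

act = nn.LeakyReLU(alpha=0.25)
act.collect_params().initialize()  # no parameters here, but harmless
print(act(nd.array([[-1.0, 0.0, 1.0]])))  # [[-0.25  0.  1.]]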