How to make a custom operator as a middle layer using Symbol, not Gluon

I am based in Beijing, China; there is more background on my question here: https://discuss.gluon.ai/t/topic/7499
I need to implement a function that adds an extra term to the gradient of a ReLU layer's output during back-propagation. In other words, I want to change the network's normal back-propagation by adding something that affects the usual parameter updates.
I already tried a multi-mask idea (writing two loss functions based on the same ReLU layer), but the outcome was not good. So now I just want to write a custom operator as a middle layer using the Module/Symbol API, not the Gluon API. This layer sits on top of the ReLU layer and connects to the FullyConnected layer, like below.

The custom layer does not change anything in the forward pass; it only adds an extra gradient term in the backward pass, roughly like the sketch below.
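Roughly, what I have in mind is something like this (just a sketch; GradAddOp and extra_term are placeholder names, and the zero tensor stands in for whatever extra term I actually want to inject):

import mxnet as mx

class GradAddOp(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        # pass the ReLU output through unchanged
        self.assign(out_data[0], req[0], in_data[0])

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        # placeholder for the extra term to inject; in my case it would be
        # computed from in_data[0] or some external state, not zeros
        extra_term = mx.nd.zeros_like(out_grad[0])
        self.assign(in_grad[0], req[0], out_grad[0] + extra_term)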
Obviously, I do not know how to implement this completely, for example how to handle the label and so on. Can you give me some advice?
Looking forward to keeping in touch!

Though I don't know this well, I have seen an example in someone else's code. The code below defines an identity operation; maybe you can check out the names involved.

import ast
import logging
import numpy as np
import mxnet as mx

def safe_eval(expr):
    # mx.sym.Custom passes keyword arguments to CustomOpProp as strings,
    # so parse them back; defaults may already be real Python values
    return ast.literal_eval(expr) if isinstance(expr, str) else expr

class IdentityOp(mx.operator.CustomOp):
    def __init__(self, logging_prefix="identity", input_debug=False, grad_debug=False):
        super(IdentityOp, self).__init__()
        self.logging_prefix = logging_prefix
        self.input_debug = input_debug
        self.grad_debug = grad_debug

    def forward(self, is_train, req, in_data, out_data, aux):
        if self.input_debug:
            logging.info("%s: in_norm=%f, in_max=%f, in_mean=%f, in_min=%f, in_shape=%s"
                         % (self.logging_prefix, np.linalg.norm(in_data[0].asnumpy()),
                            in_data[0].asnumpy().max(), np.abs(in_data[0].asnumpy()).mean(),
                            in_data[0].asnumpy().min(), str(in_data[0].shape)))
        self.assign(out_data[0], req[0], in_data[0])

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        self.assign(in_grad[0], req[0], out_grad[0])
        if self.grad_debug:
            logging.info("%s: grad_norm=%f, grad_shape=%s"
                         % (self.logging_prefix, np.linalg.norm(in_grad[0].asnumpy()),
                            str(in_grad[0].shape)))

@mx.operator.register("identity")
class IdentityOpProp(mx.operator.CustomOpProp):
    def __init__(self, logging_prefix="identity", input_debug=False, grad_debug=False):
        super(IdentityOpProp, self).__init__(need_top_grad=True)
        self.input_debug = safe_eval(input_debug)
        self.grad_debug = safe_eval(grad_debug)
        self.logging_prefix = str(logging_prefix)

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        data_shape = in_shape[0]
        output_shape = in_shape[0]
        return [data_shape], [output_shape], []

    def create_operator(self, ctx, shapes, dtypes):
        return IdentityOp(input_debug=self.input_debug,
                          grad_debug=self.grad_debug,
                          logging_prefix=self.logging_prefix)

Thank you so much for your help; I was stuck on this problem before.


Learning from your code, I rewrote the custom op registration.
Yes, it works, and I will learn more to understand why!

But the final accuracy is not good, much worse than in the paper, so it still needs more work!
Looking forward to keeping in touch!