Define a custom function with mxnet model

I am trying to use a custom function with an mxnet neural network model. This custom function is supposed to create a fuzzy representation of the final layer activation vector.

I am confused about how to make this work, as regular Python functions operate imperatively, while mxnet works declaratively (i.e. with Symbols). When I try to use my function with the defined model, it raises an exception because the parameter is a Symbol, not a real array, during model declaration.

Any ideas on how to make my custom function work in a declarative manner (i.e. like mxnet.sym.concat, for example)?

Here is my custom function definition:

def getFuzzyRep(arr):
    fuzzRep = ""
    x_qual = np.arange(0, 11, 0.1)
    qual_lo = fuzz.trimf(x_qual, [0, 0, 0.5])
    qual_md = fuzz.trimf(x_qual, [0, 0.5, 1.0])
    qual_hi = fuzz.trimf(x_qual, [0.5, 1.0, 1.0])
    FuzzVals = ["Low", "Medium", "High"]
    for i, val in enumerate(arr):
        memberships = [fuzz.interp_membership(x_qual, qual_lo, val),
                       fuzz.interp_membership(x_qual, qual_md, val),
                       fuzz.interp_membership(x_qual, qual_hi, val)]
        label = FuzzVals[np.argmax(memberships)]
        fuzzRep = label if i == 0 else fuzzRep + "," + label
    return fuzzRep

Hi,
you can define your model using gluon, which uses real NDArrays during model declaration. Here are a couple of resources to get started with gluon:


Hi, if you go down the gluon API (highly recommended), you can create a custom gluon.nn.HybridBlock, encapsulating your function in the hybrid_forward method. Something like

class FuzzyRepClass(gluon.nn.HybridBlock):
    def __init__(self, some_arguments, **kwargs):
        gluon.nn.HybridBlock.__init__(self, **kwargs)
        # define stuff

    # Here F plays the role of NDArray or Symbol operation
    # Change np with F if this is supported in mxnet
    def hybrid_forward(self, F, arr):
        fuzzRep = ""
        x_qual = F.arange(0, 11, 0.1)
        qual_lo = fuzz.trimf(x_qual, [0, 0, 0.5])
        qual_md = fuzz.trimf(x_qual, [0, 0.5, 1.0])
        qual_hi = fuzz.trimf(x_qual, [0.5, 1.0, 1.0])
        FuzzVals = ["Low", "Medium", "High"]
        for i, val in enumerate(arr):
            memberships = [fuzz.interp_membership(x_qual, qual_lo, val),
                           fuzz.interp_membership(x_qual, qual_md, val),
                           fuzz.interp_membership(x_qual, qual_hi, val)]
            label = FuzzVals[np.argmax(memberships)]
            fuzzRep = label if i == 0 else fuzzRep + "," + label
        return fuzzRep

etc.

From your code it is not clear what the fuzz object does. If it can be written in terms of primitive operations supported by both Symbol and NDArray, then your function can be translated to a static Symbol. If not, you’ll need to go with the imperative model (in the gluon API, Symbol and NDArray are unified: one passes from NDArray to Symbol using hybridize, but that is not always easy or possible, depending on the definitions of the various objects). Also, make sure all operations are differentiable (I don’t think you can differentiate argmax). Hope this helps.

Thanks for your reply. The fuzz object refers to the scikit-fuzzy package (import skfuzzy as fuzz); I am sorry for not including this import in the post. I used a custom operator to implement this function for the Symbol API as follows:

class Fuzzify(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        x = in_data[0].asnumpy()
        y = getFuzzyRep(x)
        self.assign(out_data[0], req[0], y)

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        y = out_data[0].asnumpy()
        self.assign(in_grad[0], req[0], y)

@mx.operator.register("fuzzify")
class FuzzifyProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(FuzzifyProp, self).__init__(need_top_grad=False)

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        data_shape = in_shape[0]
        output_shape = (1,)
        return [data_shape], [output_shape], []

    def infer_type(self, in_type):
        return in_type, [np.int32], []

    def create_operator(self, ctx, shapes, dtypes):
        return Fuzzify()

def getFuzzyRep(arr):
    fuzzRep = ""
    fuzztot = 0
    x_qual = np.arange(0, 11, 0.1)
    qual_lo = fuzz.trimf(x_qual, [0, 0, 0.5])
    qual_md = fuzz.trimf(x_qual, [0, 0.5, 1.0])
    qual_hi = fuzz.trimf(x_qual, [0.5, 1.0, 1.0])
    FuzzVals = ["Low", "Medium", "High"]
    for i, val in enumerate(arr):
        memberships = [fuzz.interp_membership(x_qual, qual_lo, val),
                       fuzz.interp_membership(x_qual, qual_md, val),
                       fuzz.interp_membership(x_qual, qual_hi, val)]
        tmp = FuzzVals[np.argmax(memberships)]
        fuzzRep = tmp if i == 0 else fuzzRep + "," + tmp
        if tmp == "Low":
            fuzztot += 1
        elif tmp == "Medium":
            fuzztot += 2
        else:
            fuzztot += 3
    return fuzztot

I would be grateful if you can validate this implementation for me. Thanks.
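In case it helps, here is a NumPy-only approximation of the two skfuzzy helpers plus the labeling step, so the logic can be checked without installing skfuzzy (this approximation may differ from skfuzzy in edge cases):

```python
import numpy as np

def trimf(x, abc):
    """Triangular membership function (approximation of skfuzzy.trimf)."""
    a, b, c = abc
    # rising and falling ramps; a zero-width edge means that side is flat at 1
    left = np.ones_like(x) if a == b else (x - a) / (b - a)
    right = np.ones_like(x) if b == c else (c - x) / (c - b)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def interp_membership(x, mf, val):
    """Membership degree of val, linearly interpolated on the grid."""
    return np.interp(val, x, mf)

x_qual = np.arange(0, 11, 0.1)
qual_lo = trimf(x_qual, [0, 0, 0.5])
qual_md = trimf(x_qual, [0, 0.5, 1.0])
qual_hi = trimf(x_qual, [0.5, 1.0, 1.0])

def label(val):
    memberships = [interp_membership(x_qual, qual_lo, val),
                   interp_membership(x_qual, qual_md, val),
                   interp_membership(x_qual, qual_hi, val)]
    return ["Low", "Medium", "High"][int(np.argmax(memberships))]

print([label(v) for v in [0.1, 0.5, 0.9]])  # ['Low', 'Medium', 'High']
```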
