Hello, my environment is: linux64 + Python 2.7 + MXNet 1.3 (4-core i5 CPU, 16 GB memory).
I ran into strange behaviour when using nd.topk and nd.asscalar() after a ResNet model prediction.
The base code is like this; I measure the time taken by topk and asscalar and print it:
import time
import mxnet as mx
from mxnet import nd
import gluoncv

image_net = gluoncv.model_zoo.get_model("ResNet50_v1", pretrained=True, ctx=mx.cpu())
fake_x = nd.zeros(shape=(1, 3, 500, 500))
y = image_net(fake_x)

t1 = time.time()
idx = nd.topk(y, k=1)
idx = idx[0][0].asscalar()
t2 = time.time()
print("{}ms".format(1000 * (t2 - t1)))
The elapsed time is about 900 ms!! Obviously something has gone wrong somewhere, I guess.
But when I add a time.sleep(1) after the model prediction, like this:
image_net = gluoncv.model_zoo.get_model("ResNet50_v1", root="./param", pretrained=True, ctx=mx.cpu())
fake_x = nd.zeros(shape=(1, 3, 500, 500))
y = image_net(fake_x)
time.sleep(1)

t1 = time.time()
idx = nd.topk(y, k=1)
idx = idx[0][0].asscalar()
t2 = time.time()
print("{}ms".format(1000 * (t2 - t1)))
the elapsed time is about 0.8 ms!!! Can you believe it?
I tried disabling the Python GC (with gc.set_threshold(0) and gc.disable()), but it made no difference.
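One hypothesis I considered: maybe the ~900 ms is not spent in topk/asscalar at all, but waiting for the forward pass itself, if prediction runs in the background and asscalar() blocks until the result is ready. Then sleep(1) would simply let the forward pass finish first. Here is a minimal plain-Python sketch of that effect (a thread stands in for a background compute engine; no MXNet involved, the ~0.9 s delay is made up):

```python
import threading
import time

def measure(pre_sleep):
    """Time the 'fetch result' step with or without a sleep after launch."""
    result = {}

    def slow_forward():
        # stand-in for a ~0.9 s forward pass running in the background
        time.sleep(0.9)
        result["y"] = 42

    worker = threading.Thread(target=slow_forward)
    worker.start()          # returns immediately, like an async engine
    if pre_sleep:
        time.sleep(1)       # the "forward pass" finishes during this sleep
    t1 = time.time()
    worker.join()           # like asscalar(): blocks until the value exists
    val = result["y"]
    t2 = time.time()
    return 1000 * (t2 - t1)

print("no sleep:   {:.1f} ms".format(measure(False)))   # roughly 900 ms
print("with sleep: {:.1f} ms".format(measure(True)))    # nearly 0 ms
```

If that is what happens in MXNet too, then the 900 ms was never topk's fault, but I don't know enough about the engine internals to say for sure.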
Any help would be appreciated.