I am quite new here, but I think I have managed to train my own DeepLab model on my custom dataset in COCO format:
Epoch 3, training loss 0.165: 100%|█████████████████████████████| 70/70 [12:10<00:00, 10.44s/it]
Epoch 3, validation pixAcc: 0.872, mIoU: 0.436: 100%|█████████████████████████████| 5/5 [01:04<00:00, 12.89s/it]
However, I cannot run it successfully.
I have not been able to find a tutorial on training and using a custom DeepLab model, so I adapted some logic I found to get it processing my images on the CPU without exceptions.
But now I am stuck:
every time I get the same empty result.
Maybe the problem is the same as here:
That one is about running on the CPU and related issues, but I have only managed to use GPUs for the training process, not for inference, and I don't know how to do that.
Maybe I have done everything wrong.
Could you give me some advice?
Thank you
Running script:
import mxnet as mx
from mxnet import image
from mxnet.gluon.data.vision import transforms
import gluoncv
# using cpu
ctx = mx.cpu(0)
#%%
filepath = r"F:\mxnet-cu100\gebrei(2)-1-2_06.jpg"
#%%
img = image.imread(filepath)
from matplotlib import pyplot as plt
plt.imshow(img.asnumpy())
plt.show()
#%%
from gluoncv.data.transforms.presets.segmentation import test_transform
# normalize the image, add a batch dimension, and place the tensor on ctx
img = test_transform(img, ctx)
#%%
CAT_LIST = [0, 1]
NUM_CLASS = 2
CLASSES = ("background", "building")
model = gluoncv.model_zoo.get_model(
    'deeplab_resnet50_coco', pretrained=False, num_class=NUM_CLASS, classes=CLASSES)
# NOTE: allow_missing=True silently keeps the random initialization for every
# parameter that is not found in the .params file, so a name mismatch between
# the saved checkpoint and this network can leave parts of the model untrained
# and make it predict only background. Loading once with allow_missing=False
# raises an error listing exactly which parameters are missing.
model.load_parameters(
    r"F:\mxnet-cu100\runs\coco\deeplab\resnet50\epoch_0001_mIoU_0.4396.params",
    ctx=ctx, allow_missing=True)
#%%
output = model.predict(img)
# per-pixel class index: argmax over the channel (class) dimension
predict = mx.nd.squeeze(mx.nd.argmax(output, 1)).asnumpy()
#%%
from gluoncv.utils.viz import get_color_pallete
import matplotlib.image as mpimg
mask = get_color_pallete(predict, 'ade20k')  # colorize the per-pixel label map
mask.save('output.png')
#%%
mmask = mpimg.imread('output.png')
plt.imshow(mmask)
plt.show()
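One quick way to see whether the result is genuinely empty is to look at which class indices actually occur in `predict` before colorizing it; if only 0 (background) shows up, the network itself is predicting nothing, rather than the palette or saving step being at fault. A small self-contained check (using a dummy array standing in for the real `predict`):

```python
import numpy as np

# hypothetical stand-in for the real `predict` array produced by the script
predict = np.zeros((4, 4))
predict[1:3, 1:3] = 1  # pretend a small "building" region was detected

# count how many pixels were assigned to each class index
classes, counts = np.unique(predict, return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))  # {0.0: 12, 1.0: 4}
```

If the same check on the real prediction reports only class 0, the problem is in the model or the loaded weights, not in the visualization.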