Prepare custom data set for object detection

Is there Python code available to convert a set of image annotation .txt files to the LST format preferred by GluonCV?
The existing files are in the format used by the original YOLOv3 (Redmon's) code, where each line contains one object ID and its bbox:
object_id, xmin, xmax, ymin, ymax \n

Simply add an index and a header, and copy the existing lines as the label; don't forget to append the image path. The full tutorial:
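A minimal sketch of the layout described above, building one LST line per image (the function name and argument order are mine, not from the tutorial; coordinates are normalized to [0, 1], which is what GluonCV recommends):

```python
def make_lst_line(idx, img_path, img_w, img_h, objects):
    """Build one GluonCV-style .lst line: index, header, labels, image path.

    objects: list of (class_id, xmin, ymin, xmax, ymax) in pixels.
    The header is [A, B, width, height], where A=4 is the header
    length and B=5 is the per-object label length.
    """
    header = [4, 5, img_w, img_h]
    labels = []
    for cls, xmin, ymin, xmax, ymax in objects:
        # normalize pixel coordinates to [0, 1]
        labels += [cls, xmin / img_w, ymin / img_h,
                   xmax / img_w, ymax / img_h]
    fields = [idx] + header + labels + [img_path]
    # all fields are tab-separated on a single line
    return '\t'.join(str(f) for f in fields)
```

So an image with one box would produce a line with 11 tab-separated fields: index, 4 header fields, 5 label fields, and the path.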

Hi, did you find a converter script? I'm doing the same thing: moving a dataset from the YOLO/darknet format to MXNet. Any pointers appreciated before I write my own converter…

Hi there!
No, unfortunately I could not find any ready-to-use script for converting annotation files from YOLO format to LST format. I have started writing a script but haven't tested it yet.

I've almost got it working - just having an issue getting the .rec file to load:
ImageDetRecordIOParser: label_pad_width: 5 smaller than estimated width: 34

I must be misreading part of this.

Once I get it figured out, I'll put it in git for others to use.
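For what it's worth, the "estimated width: 34" in that error is consistent with the detection label layout (a 4-field header plus 5 numbers per object), which would mean one of the images has six objects; the usual fix (my assumption, not confirmed in this thread) is to build the .rec with im2rec.py's `--pack-label` flag so variable-length labels are allowed. A quick sanity check:

```python
def estimated_label_width(num_objects, header_len=4, label_len=5):
    """Width of one LST detection label row: header + per-object fields."""
    return header_len + label_len * num_objects

# six objects in one image gives the width from the error message
print(estimated_label_width(6))  # 34
```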

@jlwebuser Can you share your files so I can have a look? Thanks

Got it working tonight. The Python script is called

It works on .jpg files with labels in a .txt file of the same basename - the kind generated by the yolo_mark program used by darknet.

Let me know if you find any issues - it's a simple script. I just had to understand the details of the format, deal with the corner case of an image that has no labels, and get the AWS SageMaker hyperparameters to work with it.
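For anyone who lands here before finding that script, here is a rough sketch of the same idea. This is my own code, not the poster's script: it assumes yolo_mark's normalized `class x_center y_center width height` lines, and it simply skips images without labels (the script above may handle that corner case differently):

```python
import glob
import os

def yolo_txt_to_lst(img_dir, lst_path, get_size):
    """Convert yolo_mark-style .txt annotations into one GluonCV .lst file.

    Each .jpg is expected to have a .txt of the same basename with lines
    of: class x_center y_center width height (all normalized to [0, 1]).
    get_size is a caller-supplied callable returning (width, height) in
    pixels for an image path.
    """
    lines = []
    idx = 0
    for img_path in sorted(glob.glob(os.path.join(img_dir, '*.jpg'))):
        txt_path = os.path.splitext(img_path)[0] + '.txt'
        if not os.path.exists(txt_path):
            continue  # corner case: image with no label file - skipped here
        labels = []
        with open(txt_path) as f:
            for line in f:
                parts = line.split()
                if len(parts) != 5:
                    continue
                cls = int(parts[0])
                cx, cy, w, h = map(float, parts[1:])
                # convert center/size to corner coordinates, still normalized
                labels += [cls, cx - w / 2, cy - h / 2,
                           cx + w / 2, cy + h / 2]
        if not labels:
            continue  # no objects in the file - skipped here
        w_px, h_px = get_size(img_path)
        # header: A=4 (header length), B=5 (per-object label length),
        # then image width and height in pixels
        fields = [idx, 4, 5, w_px, h_px] + labels + [img_path]
        lines.append('\t'.join(str(x) for x in fields))
        idx += 1
    with open(lst_path, 'w') as out:
        out.write('\n'.join(lines) + '\n')
```

With Pillow installed, `get_size` can be `lambda p: Image.open(p).size`. The resulting .lst can then be packed into a .rec, presumably with im2rec.py and `--pack-label` since detection labels are variable-length.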