VideoClsCustom's new_length parameter when the number of frames in a video is fewer than new_length

Hi everyone, my apologies if this has been asked/answered somewhere else, I couldn't find it.

I followed the GluonCV tutorial on action recognition, "fine tuning with your custom dataset". I am using the model slowfast_4x16_resnet50_custom, and following advice from this post I was able to make it work. As shown there, I am using the VideoClsCustom class to load the dataset:

train_dataset = VideoClsCustom(root=YOUR_ROOT_PATH, setting=YOUR_SETTING_FILE, train=True, new_length=64, slowfast=True, slow_temporal_stride=16, fast_temporal_stride=2, transform=transform_train)

In that post there is a clarification that said: “Basically this means, we randomly select 64 consecutive frames. For fast branch, we use a temporal stride of 2 to sample the 64 frames into 32 frames. For slow branch, we use a temporal stride of 16 to sample the 64 frames into 4 frames. Then we concatenate them together and feed it to the network as input.”
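To make that concrete, here is a minimal sketch in plain Python/NumPy of how the indices would be picked under that description (this is only an illustration of the sampling scheme, not the loader's actual code):

import numpy as np

# Illustration of the sampling described above (not GluonCV's actual code).
new_length = 64
num_frames = 300                               # example: a clip with 300 frames

start = np.random.randint(0, num_frames - new_length + 1)
window = np.arange(start, start + new_length)  # 64 consecutive frame indices

fast_idx = window[::2]                         # fast branch, temporal stride 2  -> 32 frames
slow_idx = window[::16]                        # slow branch, temporal stride 16 -> 4 frames

print(len(fast_idx), len(slow_idx))            # 32 4
# The fast and slow frames are then concatenated and fed to the network as one input.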

The thing is that most of my videos are short clips of about 1 second, with 30 frames (30 FPS). What happens when the dataset has videos with fewer than 64 frames (and sometimes even fewer than 32 frames)?

Is the data loader duplicating some frames (upsampling), or is it padding with noise, or something else?

Thank you in advance,

Hi, the data loader just duplicates the last frame, e.g., 1, 2, 3, 4, 5, 6, 6, 6, 6, 6, 6, …
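In case it helps, a rough sketch of that behaviour (not GluonCV's exact code) when a clip has fewer frames than new_length:

import numpy as np

# Rough sketch: indices past the end of a short clip are clamped to the last frame.
new_length = 64
num_frames = 30                                # e.g. a 1-second clip at 30 FPS

indices = np.minimum(np.arange(new_length), num_frames - 1)
print(indices)                                 # 0, 1, ..., 29, 29, 29, ..., 29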

If you have lots of videos that are shorter than 64 frames, I suggest using the I3D models with a 32-frame input. Or you can preprocess your videos to have more frames, e.g., by interpolating frames.
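For reference, a 32-frame setup would look roughly like the following, assuming the same root path, setting file and transform as above (i3d_resnet50_v1_custom is the fine-tunable I3D model in the GluonCV model zoo):

from gluoncv.data import VideoClsCustom

# Sketch only: same dataset, but with a 32-frame input and no SlowFast-specific arguments.
train_dataset = VideoClsCustom(root=YOUR_ROOT_PATH, setting=YOUR_SETTING_FILE, train=True, new_length=32, transform=transform_train)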

I see, thank you for your response and all the help, zhuyi490!