GPU memory garbage collection

I train different networks sequentially, and I'd like to rely on the GPU free-memory value to dynamically compute heuristics such as the batch size. However, the GPU memory is not released (even after deleting the arrays, calling gc.collect(), checking nvidia-smi, …) but is instead cached and re-used by MXNet for efficiency, which prevents measuring the real free memory.

Is there a known way to explicitly reclaim unused GPU memory?
Or any alternate idea?
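For reference, here is a minimal sketch of the kind of free-memory probe I mean, querying nvidia-smi from Python (assuming nvidia-smi is on the PATH; `parse_free_mib` and `query_free_mib` are just illustrative names, not MXNet APIs):

```python
import subprocess

def parse_free_mib(csv_output):
    """Parse the output of
    `nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits`
    into a list of free-memory values in MiB, one entry per GPU."""
    return [int(line.strip()) for line in csv_output.splitlines() if line.strip()]

def query_free_mib():
    """Ask nvidia-smi for the free memory of every visible GPU."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"],
        encoding="utf-8")
    return parse_free_mib(out)
```

The problem is that the value this returns stays low even after the model is deleted, because MXNet's memory pool keeps the freed blocks.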

This need has come up a few times in the past, but I haven't found a solution since.

Many thanks,

Hi AL, I’m not really sure how you would go about doing this because, like you said, MXNet GPU memory deallocation is asynchronous. It looks like this merged PR attempted to address some of those issues. You can also try playing around with some of the environment variables documented here, particularly MXNET_GPU_MEM_POOL_RESERVE.
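A quick sketch of how you might set that variable, assuming (as the docs describe) it is the percentage of GPU memory kept out of MXNet's memory pool; note it must be set before mxnet is imported, and the value 50 here is just an example:

```python
import os

# MXNET_GPU_MEM_POOL_RESERVE controls what fraction of GPU memory
# MXNet's pooled allocator leaves alone. It is read at import time,
# so set it BEFORE importing mxnet.
os.environ["MXNET_GPU_MEM_POOL_RESERVE"] = "50"  # e.g. keep 50% out of the pool

# import mxnet as mx  # import only after the variable is set
```

A larger reserve means the pool hoards less freed memory, so nvidia-smi readings should track real usage more closely, at the cost of some allocation efficiency.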