Fine-tuning and accessing net layers

Could we please have a chapter on how to fine-tune a pretrained network? It would also be nice to know how to drop layers, add layers, cut the network off at a certain point and add our own layers from that point on, change the number of classes, etc. We can no longer access the old tutorials that showed how to perform some of this. I know GluonCV has some information, but a good chapter on how to perform net “surgery” would be really nice and very helpful.
Thank you for all the work you have put into Dive into Deep Learning.

There are example tutorials from GluonCV:

https://gluon-cv.mxnet.io/build/examples_detection/finetune_detection.html

Also from the deep learning book here:

https://d2l.ai/chapter_computer-vision/fine-tuning.html

I’m aware of those. I was more interested in taking a net apart at an arbitrary point and adding my own layers from that point on down, or even grabbing two networks and splicing them back together at some point, one that makes sense of course. It would also be interesting to know whether a net used for images and one used for NLP can be combined in a way that makes sense. So it’s not necessarily fine-tuning, but taking what you want and discarding the rest, or combining nets in interesting ways. I think of it as a kind of experimenting with networks to see what can be accomplished. In my post heading I shouldn’t have put fine-tuning; I was really more interested in the latter part. I was just reading the GluonCV tutorial you pointed out, so that was the frame of mind I was in when I wrote the post. I’m taking a short break from NLP, which I find very interesting but challenging.