In transfer learning, we take a pre-trained model (like VGG or ResNet) and update only the top-most one or two layers.

I started on an example project to classify dogs and cats, using ResNet for transfer learning. While training, my PyTorch script reports the number of frozen and trainable parameters separately. The frozen parameters number in the millions, whereas the trainable ones are only a few thousand.

This made me think about the frozen params of my life. We come into this world with many frozen parameters: anything of major importance, like where you were born or who your family is, is frozen. You can't do anything about it. There is a layer of trainable ones, though, and if we find a niche, a good functionality we can fit into, we can update ourselves to succeed alongside the frozen ones.

Fighting to change the frozen parameters will probably be futile. But losing hope, believing you cannot change anything, will drain your energy and make you irrelevant. You have to identify which parameters can be updated, experiment with them, use backpropagation to update your beliefs, and move on.