I feel that the neural code / neural coding (how neurons and biases encode symbolic concepts, or chains of concepts, e.g. each feature being a chain of symbolic functions and their parameters) is the key to understanding neural networks and hence the path towards selecting the optimal architecture (and its parameters: shape, width, depth) for a given task. The neural code could also be used directly in knowledge distillation from a NN, and in fine-tuning and improving that distillation process. It could also be used for injecting biases (e.g. pre-existing knowledge, expert knowledge, ethical restrictions) by changing weights/biases in a precomputed manner, rather than by generating a large corpus of training data and then training the model on it.
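To make the last point concrete, here is a minimal sketch (my own toy example, not from any existing library) of "injecting" a known symbolic rule into a network by setting the weights directly, instead of training on generated examples. The rule is logical AND, encoded by hand into a single step-activation neuron; if the neural code were understood, one could in principle do this kind of precomputed weight-setting for much richer knowledge:

```python
import numpy as np

def step(x):
    """Heaviside-style activation: 1 if the pre-activation is positive."""
    return (x > 0).astype(float)

# Hand-chosen weights and bias that encode AND:
# w1*x1 + w2*x2 + b > 0 holds only when x1 = x2 = 1.
w = np.array([1.0, 1.0])
b = -1.5

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
outputs = step(inputs @ w + b)
print(outputs)  # [0. 0. 0. 1.]
```

No training data and no gradient steps are involved: the "knowledge" (the AND rule) is written straight into the parameters.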
I feel that the neural code is the key question of the theory of artificial neural networks.
That is why it is surprising that there is almost no work in this area. There is a fairly large literature in neuroscience, and there is some work on coding in spiking artificial neural networks. And there is this nice preprint https://arxiv.org/abs/2106.14587 that can be considered an effort to define the entire category of neural codes (from the category theory point of view: one considers the entire universe of objects and then does some classification or finds individual/optimal objects, i.e. optimal codes in this case). But even this preprint does not consider the coding question explicitly.
So, my question is: why is there so little research (almost none) on the neural code/coding of artificial neural networks? And additionally: are there other approaches that tackle the questions I raised at the start, for which neural coding was, in my opinion, the answer (or a big part of it), and which could therefore explain the scarcity of neural coding research?