
I feel that the neural code / neural coding (how neurons and biases encode symbolic concepts, e.g. each feature as a chain of symbolic functions and their parameters) is the key to understanding neural networks, and hence the path towards selecting the optimal architecture (and its parameters: shape, width, depth) for a given task. The neural code could also be used directly for knowledge distillation from a neural network, and for fine-tuning and improving that distillation process. It could also be used to inject biases (e.g. preexisting knowledge, expert knowledge, ethical restrictions) by changing weights/biases in a precomputed manner, rather than by generating a large corpus of training data and retraining the model. A minimal sketch of what I mean by this is given below.
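To make the bias-injection idea concrete, here is a small, purely illustrative sketch (my own example, not from any paper): a known symbolic rule ("fire only when both features are present", i.e. logical AND) is written directly into the weights and bias of a single neuron, with no training data involved. The scale factor and weight values are just hand-picked for the illustration.

```python
# Illustrative sketch of "knowledge injection by precomputed weights":
# encode the known rule "output = A AND B" directly into one neuron
# instead of generating training data and fitting it.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-chosen weights/bias: with a steep scale, the neuron fires only
# when both inputs are close to 1, i.e. it implements logical AND.
scale = 10.0
W = scale * np.array([1.0, 1.0])   # one weight per input feature
b = scale * (-1.5)                 # threshold between "one input" and "both"

def injected_and(x):
    """x: array of shape (2,) with values in [0, 1]."""
    return sigmoid(W @ x + b)

for a in (0.0, 1.0):
    for c in (0.0, 1.0):
        print(a, c, round(float(injected_and(np.array([a, c]))), 3))
# Prints ~0 for (0,0), (0,1), (1,0) and ~1 for (1,1): the rule lives in
# the weights themselves, no training corpus needed.
```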

I feel that the neural code is the key question of the theory of artificial neural networks.

That is why it is surprising that there is almost no work in this area. There is a fairly large literature in neuroscience, and there is some work on coding in spiking artificial neural networks. There is also this nice preprint, https://arxiv.org/abs/2106.14587, which can be seen as an effort to define the entire category of neural codes (from the category-theory point of view: one considers the whole universe of objects and then does some classification, or looks for individual/optimal objects, i.e. optimal codes in this case). But even this preprint does not explicitly address the coding question.

So my question is: why is there so little (almost non-existent) research on the neural code/coding of artificial neural networks? And additionally: are there other approaches that tackle the questions I mentioned at the start, for which neural coding is, in my opinion, the answer (or a big part of it), and which could therefore explain the scarcity of neural coding research?

TomR

1 Answer


I'm not sure there can be an objective answer to this question.

Be careful: the fact that these are called "neural networks" in ML is just an analogy; artificial neural networks are not the same as biological neural networks. It's tempting to imagine that one can straightforwardly apply what we know about biological neurons to ML neurons, but there's no evidence that this works. To take a ridiculous comparison, it's as if one tried to study how trees work in nature in order to improve decision trees.

In general, various apparently intuitive ideas about imitating how nature works to improve technology have been proved wrong in the past. Also, there is still a lot that we don't know about how the brain works, afaik. Finally, there is a crucial reason why artificial neural networks can't work like the brain: they don't have any way to experience the world; they can only mimic the training data.

Erwan