
Language-to-code transformation/generation requires multiple skills: language and reasoning skills to distill the core problem from the natural-language specification, and programming-language knowledge to produce the code. There are separate pre-trained models for natural language and for code, and there are also some multimodal language+code models (trained, e.g., on Stack Overflow posts, GitHub issues, etc.). My question is: is there any work on training multimodal models that relies solely on supervision by unimodal models?
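
To make the question concrete, here is a rough sketch of what I mean by "supervision by unimodal models solely": a cross-modal distillation setup where a multimodal student's only training signal is agreement with two frozen unimodal teachers. The tiny stand-in encoders, dimensions, and MSE objective here are placeholders of my own choosing; a real setup would use actual pre-trained language and code models as the teachers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB = 64  # hypothetical embedding size shared by teachers and student


class TinyEncoder(nn.Module):
    """Stand-in for a pre-trained unimodal encoder (language OR code)."""

    def __init__(self, vocab=1000):
        super().__init__()
        self.emb = nn.Embedding(vocab, EMB)

    def forward(self, tokens):
        # Mean-pool token embeddings into one vector per example.
        return self.emb(tokens).mean(dim=1)


# Frozen unimodal teachers: no human labels involved, only their outputs.
lang_teacher = TinyEncoder().eval()
code_teacher = TinyEncoder().eval()
for teacher in (lang_teacher, code_teacher):
    for p in teacher.parameters():
        p.requires_grad_(False)

# Multimodal student consuming concatenated text+code token sequences,
# with one projection head per teacher space.
student = TinyEncoder(vocab=2000)
proj_lang = nn.Linear(EMB, EMB)
proj_code = nn.Linear(EMB, EMB)

opt = torch.optim.Adam(
    list(student.parameters())
    + list(proj_lang.parameters())
    + list(proj_code.parameters()),
    lr=1e-3,
)

# One training step on a fake paired (text, code) batch; the only
# supervision is matching the two unimodal teachers (MSE distillation).
text = torch.randint(0, 1000, (8, 16))        # fake text token ids
code = torch.randint(0, 1000, (8, 32))        # fake code token ids
pair = torch.cat([text, code + 1000], dim=1)  # shared student vocab

with torch.no_grad():
    t_lang = lang_teacher(text)
    t_code = code_teacher(code)

s = student(pair)
loss = F.mse_loss(proj_lang(s), t_lang) + F.mse_loss(proj_code(s), t_code)
opt.zero_grad()
loss.backward()
opt.step()
print(f"distillation loss: {loss.item():.4f}")
```

Whether this kind of teacher-only signal (matching teacher representations, or alternatively teacher logits) is sufficient in practice, and whether anyone has published such training, is exactly what I am asking.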

TomR
