
I was wondering about the differences between "multi-task learning" and "domain generalization". It seems to me that both are forms of inductive transfer learning, but I'm not sure how they differ.

1 Answer

  • Domain generalization: Aims to train a model on multi-domain source data so that it can generalize directly to new domains without retraining. In short: multiple domains, same task.

  • Multi-task learning (MTL): MTL is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help the other tasks be learned better. In other words: same domain, multiple tasks.
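The "shared representation" idea above can be sketched in a few lines of plain Python (no ML framework; the feature map and head weights below are purely illustrative assumptions, not a trained model):

```python
# Minimal sketch of hard parameter sharing in multi-task learning:
# one shared representation feeds two task-specific heads, so a single
# forward pass produces outputs for both tasks at once.

def shared_representation(x):
    # Shared "encoder": both tasks reuse these features.
    return [x, x * x]  # toy feature map

def head_task_a(features):
    # Task A head, e.g. a regression-style output (weights are made up).
    weights = [0.5, 0.1]
    return sum(w * f for w, f in zip(weights, features))

def head_task_b(features):
    # Task B head, e.g. a binary decision (weights are made up).
    weights = [1.0, -0.2]
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0

def multi_task_forward(x):
    # The shared features are computed once; each head interprets them
    # for its own task. Training both heads jointly is what lets one
    # task's signal act as an inductive bias for the other.
    features = shared_representation(x)
    return head_task_a(features), head_task_b(features)

out_a, out_b = multi_task_forward(2.0)
```

In a real framework the shared encoder and the heads would be trainable layers, and the losses of all tasks would be combined into one objective.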

Main Difference:

| Domain generalization | Multi-task learning |
| --- | --- |
| Multiple-domain dataset, same task | Same-domain dataset, multiple tasks |
| Single task, so no parallel execution needed | Multiple tasks are executed in parallel |
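The "multiple domains, same task" column can be illustrated with a toy stdlib-only sketch (the domain names, data, and threshold rule are all invented for illustration, and this is not a real domain-generalization algorithm, just the train-on-sources/test-on-unseen-domain setup):

```python
import statistics

# Same binary task observed in several source domains: label 1 iff the
# value belongs to the "high" cluster. We fit one shared threshold on the
# pooled source data and apply it, unchanged, to an unseen target domain.

source_domains = {
    "domain_photos":   [(0.2, 0), (0.4, 0), (1.6, 1), (1.8, 1)],
    "domain_sketches": [(0.1, 0), (0.3, 0), (1.7, 1), (1.9, 1)],
}

# Pool all source domains (same task everywhere) and fit a shared threshold
# halfway between the two class means.
pooled = [pair for samples in source_domains.values() for pair in samples]
neg_mean = statistics.mean(x for x, y in pooled if y == 0)
pos_mean = statistics.mean(x for x, y in pooled if y == 1)
threshold = (neg_mean + pos_mean) / 2

def predict(x):
    return 1 if x > threshold else 0

# Unseen target domain: the model is applied directly, with no retraining.
target_domain = [(0.25, 0), (1.75, 1)]
accuracy = sum(predict(x) == y for x, y in target_domain) / len(target_domain)
```

The point of the sketch is the evaluation protocol: the target domain never contributes to fitting, which is exactly what "generalize to new domains without retraining" means.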
Archana David
  • Thanks for your answer. What do you mean by executing in parallel? Training is done in parallel? – Milad Sikaroudi Jul 27 '21 at 05:47
  • Suppose there are 3 tasks A, B, and C, and each task takes around 5 minutes. Multi-task learning executes tasks A, B, and C in parallel, so all of them complete in the same 5 minutes. With sequential execution, task B starts only after task A finishes, and task C only after task B, so the three tasks ideally take 15 minutes (5+5+5). – Archana David Jul 27 '21 at 07:08