
What do you think the future of deep learning will be? A lot of people talk about deep learning, and I can see that it opens up various possibilities. For instance, this question gives a nice explanation.

What's the future of deep learning? Will it become the main branch of machine learning or will the other branches of machine learning remain relevant as well? And what about different fields of research? Will it be more suitable for certain applications (like natural language processing) and less suitable for some other types of applications?

I'd love to hear your views on this, both from a practical and from a scientific point of view. Is there any practical experience or research that can help resolve this question? More specifically: should we introduce deep learning in our company?

Guido
  • The future will be hard to predict, but I summed up a few points below that, to me, clearly show it's dying a slow death. It will be interesting to see whether it will survive or fade away. – Abhishek Oct 17 '16 at 15:01
  • Thanks for the response. I'm curious to hear more views on it from people with different backgrounds / fields of work :) – Guido Oct 18 '16 at 10:04
  • No worries, man. I personally raised that topic on Facebook and got many mixed views on it. Some say it will survive, while according to others it may fade away. Cheers! :) – Abhishek Oct 18 '16 at 11:15
  • It's a pity this discussion is put on hold. I think it can be relevant for a lot of people, both from an academic and from a business point of view. Is there any way to continue this discussion (here or elsewhere)? – Guido Oct 19 '16 at 13:34

1 Answer


That's a very interesting topic, so thumbs up from my side. Now, coming to the point: deep learning may be hot now, but some variants of it, or something new altogether, may emerge later. Let me point out the reasons why I feel deep learning is getting old, or soon will be.

  • Slow Learner

Slow learner in the sense that it converges to an optimal solution slowly, although GPU acceleration can improve the training speed dramatically. The slowness is also affected by the learning rate.

Adjusting the learning rate affects the reliability of the resulting deep neural net: too small and training crawls, too large and it may never converge at all.
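
As a rough, self-contained illustration (the quadratic loss and the learning-rate values are my own made-up assumptions, not anything prescribed above), here is how plain gradient descent trades off speed against stability:

```python
# Toy sketch: gradient descent on the 1-D quadratic loss f(w) = (w - 3)^2.
# A small learning rate converges slowly, a moderate one converges quickly,
# and one that is too large diverges entirely.
def gradient_descent(lr, steps=50, w0=0.0):
    w = w0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)  # derivative of (w - 3)^2
        w -= lr * grad          # gradient descent update
    return w

for lr in (0.01, 0.1, 1.1):     # small, moderate, too large
    w_final = gradient_descent(lr)
    print(f"lr={lr:>4}: final w = {w_final:.4f}, loss = {(w_final - 3.0) ** 2:.4e}")
```

With a deep net the same trade-off appears on every weight at once, which is why tuning the learning rate matters so much for both speed and reliability.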

  • Huge Training Data Requirement

Deep learning requires very large amounts of training data to achieve good performance. The huge number of parameters to adjust demands correspondingly huge example sets. This dependence on so many training examples is a drawback in itself, and even with such huge training sets deep neural nets can still show error rates of around 10% on some tasks.
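
To get a feel for where that parameter count comes from, here is a back-of-the-envelope sketch; the layer sizes are assumptions chosen purely for illustration (roughly an MNIST-sized input and ten output classes):

```python
# Count the trainable parameters of a modest fully connected network.
# Every one of these parameters has to be constrained by training data.
layer_sizes = [784, 1024, 1024, 512, 10]  # illustrative: 784 inputs, 10 classes

total_params = 0
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    total_params += n_in * n_out  # one weight per connection
    total_params += n_out         # one bias per output unit

print(f"Total trainable parameters: {total_params:,}")  # about 2.4 million
```

Even this small network has millions of free parameters, which is why a few thousand labelled examples are rarely enough.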

  • Overfitting

Deep learning tends to overfit easily. The dropout technique can mitigate this problem, but with some consequences of its own, such as an increase in training error rates and slower convergence.
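
For reference, dropout itself is simple; here is a minimal sketch of the (inverted) variant in NumPy, where the keep probability and the fake activations are illustrative assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, keep_prob=0.5, training=True):
    """Zero out random units during training and rescale the survivors."""
    if not training:
        return activations                 # no dropout at test time
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob  # rescaling keeps expected values equal

h = rng.standard_normal((4, 8))            # stand-in for a hidden-layer activation
print(dropout(h, keep_prob=0.5))
```

Dropping units at random forces the network not to rely on any single co-adapted feature, which is exactly what combats the overfitting described above.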

  • Poor Initial State

Deep neural nets have parameters that need to be initialized. The most commonly used method is random initialization, which results in networks with very poor initial states. Compare this to a mammalian brain: it is born with some rigid instincts, such as basic survival behaviour, already wired in.
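
To illustrate why a naive random start can be such a poor initial state, here is a small NumPy sketch comparing it with a variance-scaled (Glorot/Xavier-style) initialization; the layer width of 1024 is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(42)
fan_in, fan_out = 1024, 1024

# Naive initialization: unit-variance weights make pre-activations grow
# in proportion to the square root of the layer width.
w_naive = rng.standard_normal((fan_in, fan_out))

# Scaled (Glorot/Xavier-style) initialization keeps activation magnitudes stable.
w_scaled = rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / (fan_in + fan_out))

x = rng.standard_normal(fan_in)
print("std of pre-activations, naive init :", (x @ w_naive).std())   # ~32
print("std of pre-activations, scaled init:", (x @ w_scaled).std())  # ~1
```

Stack a few such layers and the naive version explodes (or, with small weights, vanishes), which is the kind of poor starting point a brain with built-in priors never has to suffer.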

  • Sensory Data Transformations

In visual object recognition tasks, for example, images undergo various geometric and photometric transformations that a recognition system needs to model in order to rectify new image observations. Deep learning as used today does not take such transformations into account explicitly; this is one of the reasons why deep neural nets still suffer from relatively high error rates compared to a human.
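
As a toy illustration of the kinds of transformations meant here (the fake image and the particular transforms are my own assumptions), consider what a recognition system has to cope with:

```python
import numpy as np

rng = np.random.default_rng(7)
image = rng.random((32, 32, 3))               # fake 32x32 RGB image in [0, 1]

flipped  = image[:, ::-1, :]                  # geometric: horizontal mirror
rotated  = np.rot90(image, k=1, axes=(0, 1))  # geometric: 90-degree rotation
brighter = np.clip(1.3 * image + 0.05, 0, 1)  # photometric: brightness/contrast shift

for name, img in [("original", image), ("flipped", flipped),
                  ("rotated", rotated), ("brighter", brighter)]:
    print(f"{name:>9}: shape={img.shape}, mean={img.mean():.3f}")
```

All of these pictures show the "same" scene, yet a standard deep net only learns such invariances implicitly, from seeing many transformed examples, rather than modelling the transformations explicitly.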

Some other reasons could be the slow pace of change: new algorithms, structures, and approaches may well emerge together in the future. These are some points from my side. You can also go through these posts; they're also on this topic and to the point-

Cheers! :)

Abhishek