
The insurance company I work for has a computationally intensive process for estimating future earnings based on tables of assumptions about price and probability of cancellation. I would like to train a model to approximate this process. I have tried a number of models, including xgboost and various configurations of neural networks. The problem is that even when a model performs well on both the training and test sets, it fails to estimate the effect of a change in the input assumptions. For example, if the probability of cancellation goes up, the value of future earnings should decrease, but not in a simple linear fashion. Understanding the sensitivity of the calculations to the various assumptions is one of the goals of the proxy model. What is the best way forward? Is this even a reasonable problem to tackle with machine learning?
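
To make the setup concrete, here is a minimal sketch of the kind of check I am trying to get right. The data, feature names, and the toy "earnings" formula are all made up for illustration; they are not our actual calculation.

```python
# Minimal sketch (synthetic data, hypothetical feature names; the toy
# "value" formula stands in for the real earnings calculation).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n = 5000

# Assumption inputs: a price level and a cancellation probability.
price = rng.uniform(50.0, 150.0, n)
cancel_prob = rng.uniform(0.0, 0.3, n)

# Stand-in target: decreases non-linearly as cancellation probability rises.
value = price * (1.0 - cancel_prob) ** 3 + rng.normal(0.0, 1.0, n)

X = np.column_stack([price, cancel_prob])
proxy = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
proxy.fit(X, value)

# Sensitivity check: hold price fixed, sweep the cancellation probability,
# and see whether the proxy's predictions fall the way the real process would.
grid = np.linspace(0.0, 0.3, 7)
probe = np.column_stack([np.full_like(grid, 100.0), grid])
print(np.round(proxy.predict(probe), 2))
```

On synthetic data like this the sweep comes out roughly monotone; the problem is that on our real assumption tables the equivalent sweep does not track the underlying calculation, even when the usual test-set metrics look good.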

JJ Levine
  • Hi and welcome to the community! Do you mind giving a bit more context on "estimating the effect of a change in the assumptions that are inputted"? – Kasra Manshaei Jun 23 '21 at 07:08
  • Thank you. I hope my edit clarifies a bit more – JJ Levine Jun 23 '21 at 15:12
  • 1
    The universal approximation theorem says this is possible if certain assumptions about your company's function are met (and I suspect they are). Whether or not it can be done with the amount of data you have remains to be seen. – Dave Jun 23 '21 at 15:44
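For reference, one common single-hidden-layer formulation of the universal approximation theorem mentioned in the last comment (the exact conditions vary between versions of the theorem):

```latex
% For continuous f on a compact K \subset \mathbb{R}^d and a continuous,
% non-polynomial activation \sigma, one-hidden-layer networks are dense:
\forall \varepsilon > 0 \;\; \exists\, N,\ \{v_i, w_i, b_i\}_{i=1}^{N} :\quad
\sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} v_i\, \sigma\bigl(w_i^{\top} x + b_i\bigr) \Bigr| < \varepsilon
```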

0 Answers