I am studying the paper https://arxiv.org/pdf/1505.05424.pdf and there is a formula on page 4 that I don't get:

$$\Delta_\mu = \frac{\partial f(\mathrm w, \theta)}{\partial \mathrm w} + \frac{\partial f(\mathrm w, \theta)}{\partial \mu}$$

I don't understand how they obtain this formula. Moreover, with the chain rule I only get $\frac{\partial f(\mathrm w, \theta)}{\partial\mathrm w} = \frac{\partial f(\mathrm w, \theta)}{\partial \mu}$.

Could someone show me the proof behind this formula? I am not very good with differentiation.

Jack21

1 Answer


```python
import numpy as np
from scipy.stats import norm

# Generate some random data
np.random.seed(42)
data = np.random.normal(0, 1, size=100)

# Define prior distribution
prior_mean = 0
prior_std = 1

# Define variational posterior distribution
posterior_mean = 0
posterior_std = 1

# Perform variational inference
for _ in range(1000):  # number of iterations for optimization
    # Update mean and standard deviation of the variational posterior
    posterior_mean = np.mean(data) / (1 + 1 / prior_std**2)
    posterior_std = np.sqrt(1 / (1 / prior_std**2 + len(data) / posterior_std**2))

# Calculate variational posterior probabilities
x_values = np.linspace(-5, 5, num=100)
posterior_probs = norm.pdf(x_values, loc=posterior_mean, scale=posterior_std)

# Print mean and standard deviation of variational posterior
print("Variational Posterior Mean:", posterior_mean)
print("Variational Posterior Standard Deviation:", posterior_std)
```
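As for the differentiation question itself: in the paper the weights are reparameterised as $\mathrm w = \mu + \log(1+\exp(\rho)) \circ \epsilon$, so $f(\mathrm w, \theta)$ depends on $\mu$ twice — indirectly through $\mathrm w$ (where $\partial \mathrm w/\partial\mu = 1$) and directly through $\theta = (\mu, \rho)$. The total derivative therefore has two terms, which is the formula on page 4. A minimal numeric sketch of this identity (the function `f` below is my own toy choice, not the paper's objective):

```python
import numpy as np

# Toy f(w, theta) with theta = (mu, rho); any smooth function works
def f(w, mu, rho):
    return w**2 + 3 * mu + np.sin(rho)

mu, rho, eps = 0.7, -0.3, 1.5
sigma = np.log1p(np.exp(rho))   # sigma = log(1 + exp(rho))
w = mu + sigma * eps            # reparameterised weight

# Total derivative d f(w(mu), mu, rho) / d mu, by central differences
h = 1e-6
num = (f(mu + h + sigma * eps, mu + h, rho)
       - f(mu - h + sigma * eps, mu - h, rho)) / (2 * h)

# Paper's formula: (df/dw) * (dw/dmu) + df/dmu, with dw/dmu = 1
df_dw = 2 * w    # partial of f w.r.t. w
df_dmu = 3.0     # direct partial of f w.r.t. mu
print(num, df_dw + df_dmu)
```

The two printed numbers agree: the naive chain-rule answer $\partial f/\partial\mathrm w$ misses the direct $\partial f/\partial\mu$ term, which is why the paper's gradient has the extra summand.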

  • I don't understand what this has to do with my question. I was asking a differentiation question, not a code one – Jack21 Jul 16 '23 at 11:48
  • If you post code then please make sure it is formatted as code: https://stackoverflow.com/editing-help#syntax-highlighting – Broele Jul 24 '23 at 13:13
  • As it’s currently written, your answer is unclear. Please [edit] to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers [in the help center](/help/how-to-answer). – Community Jul 25 '23 at 15:12