On Bayesian Networks, Ghahramani (2001) says:

A node is independent of its non-descendants given its parents.

This point is fundamental enough that Ghahramani calls it the “semantics” of a Bayesian network. It is certainly useful, and it is simple enough to prove using d-separation. But his characterization suggests that the property should be more primitive than a mere consequence of d-separation.
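To make the statement concrete, here is a small numerical check of the property (sometimes called the local Markov property) on a toy chain network A → B → C. The conditional probability tables are made-up numbers purely for illustration; the check enumerates the joint distribution and verifies that C is independent of its non-descendant A given its parent B.

```python
# Toy Bayesian network A -> B -> C with binary variables.
# CPT values below are arbitrary, chosen only for illustration.
from itertools import product

p_a = {0: 0.6, 1: 0.4}                             # P(A)
p_b = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # P(B=b | A=a) as p_b[a][b]
p_c = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}   # P(C=c | B=b) as p_c[b][c]

# The joint distribution factorizes as P(A) P(B|A) P(C|B).
joint = {(a, b, c): p_a[a] * p_b[a][b] * p_c[b][c]
         for a, b, c in product([0, 1], repeat=3)}

def cond_c(c, given):
    """P(C = c | events in `given`), where `given` is a list of
    (variable_index, value) pairs over the tuple (a, b, c)."""
    num = sum(p for outcome, p in joint.items()
              if outcome[2] == c and all(outcome[i] == v for i, v in given))
    den = sum(p for outcome, p in joint.items()
              if all(outcome[i] == v for i, v in given))
    return num / den

# Local Markov property at node C: for every (a, b),
# P(C=1 | A=a, B=b) equals P(C=1 | B=b).
for a, b in product([0, 1], repeat=2):
    lhs = cond_c(1, [(0, a), (1, b)])   # condition on non-descendant A too
    rhs = cond_c(1, [(1, b)])           # condition only on parent B
    assert abs(lhs - rhs) < 1e-12
```

The check passes by construction, because the factorization P(A) P(B|A) P(C|B) itself encodes the property; this is essentially the sense in which the local Markov property can be read directly off the factorization, without invoking d-separation.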

Overall, I feel that I am missing something. Is there a more primitive way to verify the statement than to use d-separation? Why does Ghahramani identify this fact specifically with the semantics of a Bayesian network, rather than the full set of conditional independencies in the network (given by d-separation)? And if the statement is a consequence of d-separation, why focus on this fact specifically rather than the (arguably equally useful) fact about Markov blankets?

Reference: Ghahramani, Z. (2001). An introduction to hidden Markov models and Bayesian networks. In Hidden Markov models: applications in computer vision (pp. 9-41).

ebrahimi
ashman
    Maybe https://stats.stackexchange.com/ would be a better place for this question? – Erwan Nov 10 '20 at 14:51
  • Yes, perhaps. I think it's generally unclear where to ask machine learning questions. – ashman Nov 10 '20 at 14:54
  • Is it kosher to re-post the exact same question there, while leaving it up here? Or should I follow some other process? – ashman Nov 11 '20 at 17:39
  • 1
    I think it's fine, otherwise you can flag your own question and ask a moderator to migrate it properly (it might take longer, I don't know). Your question fits on both sites, I recommended CrossValidatedSE mostly because there are more contributors there, so you'd have a better chance to get a good answer. – Erwan Nov 11 '20 at 18:33

0 Answers