I have recently been reading the following paper:
Liu, Sifei, Jinshan Pan, and Ming-Hsuan Yang. “Learning Recursive Filters for Low-Level Vision via a Hybrid Neural Network.” In Computer Vision – ECCV 2016, edited by Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, 9908:560–76. Lecture Notes in Computer Science. Cham: Springer International Publishing, 2016. https://doi.org/10.1007/978-3-319-46493-0_34.
After some simplifications, the hidden state of the recurrent neural network is modeled as (relation (9) in the paper)
$$h[k] = (1-p) \cdot x[k] + p \cdot h[k-1],$$
where $p$ and $x[k]$ are $n \times 1$ vectors, so the products are elementwise. It is then stated that the derivative with respect to $h[k]$, denoted $\theta[k]$, satisfies (relation (10) in the paper)
$$\theta[k] = \delta[k] + p \cdot \theta[k+1].$$
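To make the notation explicit, my reading (an assumption on my part; I may be misinterpreting the paper here) is that $\theta[k]$ is the total derivative of the training loss $L$ with respect to the hidden state over the unrolled sequence, while $\delta[k]$ is the gradient that reaches $h[k]$ directly from the layer above:

$$\theta[k] \;=\; \frac{dL}{dh[k]}.$$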
My question: how is relation (10) derived from relation (9)?
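In case it is useful, below is a small numerical sanity check I wrote. It is only a sketch under my own toy assumptions, none of which are from the paper: a per-step loss $\ell(h) = \frac{1}{2}\lVert h\rVert^2$ (so that $\delta[k] = h[k]$) and a $p$ held fixed across steps. It confirms that the backward recurrence reproduces finite-difference gradients, but it does not tell me *why* the relation holds.

```python
import numpy as np

# Toy check of the backward recurrence theta[k] = delta[k] + p * theta[k+1]
# (all operations elementwise). Assumptions (mine, not the paper's):
# per-step loss l(h) = 0.5 * ||h||^2, so delta[k] = h[k], and p fixed over k.

rng = np.random.default_rng(0)
n, K = 4, 6                      # state size, number of steps
p = rng.uniform(0.1, 0.9, n)     # elementwise recursion weights
x = rng.normal(size=(K, n))      # input sequence

def forward(x, p, h0):
    """Run h[k] = (1 - p) * x[k] + p * h[k-1] and return all states."""
    h, prev = np.empty_like(x), h0
    for k in range(len(x)):
        h[k] = (1 - p) * x[k] + p * prev
        prev = h[k]
    return h

h = forward(x, p, np.zeros(n))
delta = h.copy()                 # dl/dh[k] for l(h) = 0.5 * ||h||^2

# Backward recurrence from relation (10), started at the last step.
theta = np.empty_like(h)
theta[-1] = delta[-1]
for k in range(K - 2, -1, -1):
    theta[k] = delta[k] + p * theta[k + 1]

def loss_from(k, hk):
    """Loss accumulated from step k onward, as a function of h[k]."""
    total, prev = 0.5 * np.sum(hk ** 2), hk
    for j in range(k + 1, K):
        prev = (1 - p) * x[j] + p * prev
        total += 0.5 * np.sum(prev ** 2)
    return total

# Central finite difference on one coordinate of h[k].
eps, k, i = 1e-6, 2, 1
e = np.zeros(n); e[i] = eps
num = (loss_from(k, h[k] + e) - loss_from(k, h[k] - e)) / (2 * eps)
print(num, theta[k, i])          # should match up to finite-difference error
```

Note that the check perturbs $h[k]$ only after it has been computed, which (if my reading above is right) is exactly what the total derivative $dL/dh[k]$ measures; earlier losses do not depend on $h[k]$ and can be ignored.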