My take from the introduction is that the book is going to be mostly about probabilistic graphical models (PGMs).
I look forward to reading this book when it is finished, and I hope the authors find success with this presentation of the core ideas. As a practitioner I see a fair amount of "I have a hammer; now I just need this problem to be a nail" thinking when it comes to off-the-shelf techniques.
In the intro to this book the authors have an example with Kalman filters. A similar example is how Latent Dirichlet Allocation (LDA) is treated by different communities. In a certain chunk of the CS-dominated topic-modeling literature and in the data science blogosphere, LDA is this received atomic technique: a black-box tool for modeling documents. In the Stan manual, it is one fairly boring example of a mixture model, only worth discussing explicitly because so many people ask about it.