Hacker News

It took years after I learned the Kalman filter as a student until I actually intuitively understood the update of the covariances. Most learning sources (including the OP) just mechanically go through the computation of the a-posteriori covariance, but don't offer any intuition beyond "this is the result of multiplying two Gaussians", if anything at all.

I wrote down a note for myself where I work this out, if anyone is interested: https://postbits.de/kalman-measurement-update.html



Figured I can save you a click and put the main point here, as few people will be interested in the rest:

The Kalman filter adds the precision (inverse of the covariance) of the measurement to the precision of the predicted state to obtain the precision of the corrected state. To do so, the respective covariance matrices are first inverted to obtain precision matrices. To bring both into the same space, the measurement precision is projected into the state space using the measurement matrix H (as Hᵀ R⁻¹ H). The resulting sum is then inverted back into a covariance matrix.
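A minimal NumPy sketch of this precision-form view (the matrices P, R, H and their values are made up for illustration), checked against the standard covariance update:

```python
import numpy as np

# Symbols follow the usual Kalman filter convention:
#   P: predicted state covariance, R: measurement noise covariance,
#   H: measurement matrix mapping state space -> measurement space.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])     # predicted state covariance (illustrative)
R = np.array([[0.8]])          # measurement noise covariance (illustrative)
H = np.array([[1.0, 0.0]])     # observe only the first state component

# Precision form: invert both covariances, add them (with H^T R^-1 H
# projecting the measurement precision into state space), invert back.
P_post_info = np.linalg.inv(np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H)

# Standard Kalman covariance update, for comparison:
S = H @ P @ H.T + R              # innovation covariance
K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
P_post_std = (np.eye(2) - K @ H) @ P

# Both routes give the same posterior covariance.
assert np.allclose(P_post_info, P_post_std)
```

Adding precisions also makes it obvious why the posterior covariance can only shrink: you are adding a positive semi-definite term to the inverse.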


That is super helpful, thanks! I'm used to calling the inverse of the covariance the information matrix.


Yeah, that's the correct term! I think "precision" is mainly used in the 1D case. But I like the term, as I feel it carries better intuition.



