
i don't know why the hell people are so obsessed with it. like why aren't there recurring posts about how to solve a separable PDE or how to perform Gram-Schmidt or whatever other ~junior math things.


Kalman filters are useful in data processing and interpretation; I used them heavily in continuous geophysical signal processing four decades past.

My guess is that many computer data engineers encounter them and find their self-taught grasp of linear algebra and undergraduate math challenged by the theory behind K-Fs .. they seem to come across as a bit of a leg up over moving averages, Savitzky–Golay, FFT applications, etc.
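
For anyone wondering where that "leg up" comes from in code terms, here is a minimal sketch of a one-dimensional Kalman filter (random-walk state model; the noise variances q and r are made-up illustration values, not a recommended tuning). Unlike a fixed-window moving average, the gain k adapts as the filter's own uncertainty grows and shrinks:

    import numpy as np

    def kalman_1d(measurements, q=1e-3, r=0.5):
        # q: assumed process noise variance, r: assumed measurement noise variance
        x, p = measurements[0], 1.0      # initial estimate and its variance
        out = []
        for z in measurements:
            p = p + q                    # predict: random walk, uncertainty grows
            k = p / (p + r)              # gain: how much to trust this measurement
            x = x + k * (z - x)          # update the estimate
            p = (1 - k) * p              # uncertainty shrinks after the update
            out.append(x)
        return np.array(out)

    # toy usage: a slow drift buried in noise
    rng = np.random.default_rng(0)
    truth = np.linspace(0.0, 2.0, 200)
    smoothed = kalman_1d(truth + rng.normal(0.0, 0.7, size=truth.shape))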

There are many more people dealing with implementing these things than have had formal undergraduate lectures on them.

My gut feeling is that most are more likely to encounter K-F applications in drone control, dead reckoning positions when underground or with flaky GPS, cleaning real-world data, etc. than to find themselves having to solve PDEs ..
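
And a rough sketch of the flaky-GPS / dead-reckoning case, under my own assumptions (constant-velocity model, illustrative noise matrices, nobody's production tuning): when a fix is missing you simply skip the measurement update, so the filter coasts on its prediction, which is the dead-reckoning part:

    import numpy as np

    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # we only measure position
    Q = np.diag([0.01, 0.01])               # assumed process noise
    R = np.array([[5.0]])                   # assumed GPS position noise

    def step(x, P, z=None):
        # predict -- this alone is dead reckoning
        x = F @ x
        P = F @ P @ F.T + Q
        if z is not None:                    # a GPS fix arrived: correct
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        return x, P

    # toy usage: a fix every second except a dropout from t=20 to t=40
    rng = np.random.default_rng(1)
    x, P = np.array([[0.0], [1.0]]), np.eye(2)
    for t in range(60):
        z = None if 20 <= t < 40 else np.array([[t + rng.normal(0.0, 2.0)]])
        x, P = step(x, P, z)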

I posit the existence of some form of pragmatic Maslow's Hierarchy of Applicable Math.

I do agree though that HN has odd bursts of Kalman filter posts.


> Kalman filters are useful in data processing and interpretation

vaguely - plenty of other imputation approaches that are simpler/better/more accessible.

> K-F applications in drone control, dead reckoning positions when underground or with flaky GPS

these are not things 99% of devs encounter. literally

> dead reckoning positions when underground or with flaky GPS

is the domain of probably like 100-1000 people in the entire world - i know because i have actually brushed up against it and am painfully aware of the lack of resources.

i really do think it's just a programmer l33t meme, not unlike monads, category theory, etc. - something that most devs think will elevate them to godhood if they can get their heads around it (when in fact it's pretty useless in practice and just taught in school as a prereq for actually useful things).


The assertion was not that these examples are common, rather that generic app developers are currently more likely to encounter them than to be manipulating PDEs.

As for K-filters in data processing and interpretation, that depends very much on the data domain; a good number of domains have biases and co-signals that are more easily removed with an adaptive model of some form.

E.g.: magnetic heading effect when recording nine-axis nanotesla-range ground signals. The readings returned over a specific point at a specific time of day are a function of sensor speed and heading. Repeatedly flying over the same point (hypothetically at the same time) from North to South vs East to West returns different data streams on each of the nine channels.

To get a "true ground reading", both the heading bias and the diurnal flux must be estimated and subtracted.
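
To make "estimate and subtract" concrete, here is one way that kind of setup can look, as a sketch only (single channel, made-up noise values, not a claim about any actual survey pipeline): augment the state with a slow diurnal-drift term and a bias per heading, let the filter estimate them jointly, and read the ground signal off the remaining state. In practice the diurnal term would usually be tied to a base-station record rather than left as a free random walk:

    import numpy as np

    # State (one channel): [ground_signal, diurnal_drift, bias_NS, bias_EW]
    F = np.eye(4)                                  # everything modelled as a slow random walk
    Q = np.diag([1e-2, 1e-4, 1e-5, 1e-5])          # assumed process noise
    R = 0.25                                       # assumed sensor noise variance

    def step(x, P, z, heading):
        # the measurement sees ground + diurnal + whichever bias matches the heading
        h = np.array([[1.0, 1.0, 1.0, 0.0]]) if heading == "NS" else np.array([[1.0, 1.0, 0.0, 1.0]])
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        y = z - (h @ x).item()                     # correct
        S = (h @ P @ h.T).item() + R
        K = (P @ h.T) / S
        x = x + K * y
        P = (np.eye(4) - K @ h) @ P
        return x, P

    # toy usage: alternating passes over the same point on two headings
    rng = np.random.default_rng(2)
    x, P = np.zeros((4, 1)), np.eye(4)
    for i in range(200):
        heading = "NS" if (i // 20) % 2 == 0 else "EW"
        z = 3.0 + 0.002 * i + (0.2 if heading == "NS" else -0.3) + rng.normal(0.0, 0.5)
        x, P = step(x, P, z, heading)
    # x[0, 0] is the estimated "true ground reading"; x[2:] holds the per-heading biases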

> plenty of other imputation approaches that are simpler/better/more accessible.

Do tell. What would you use in the above example?


If only we had some way to predict when these bursts would appear. But, I guess it would probably depend on a lot of factors, and it might be hard to guess how they all influence each other…



