I built a thermometer for an automation project with a requirement to measure the rate of temperature change 50 times a second with a resolution of 0.001 K/s (between 50 and 140 °C).
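For a sense of what that spec implies, here is a minimal sketch of one common approach (a least-squares slope over a sliding window of oversampled readings); the sample rate, window length, and noise level are illustrative assumptions, not the values from my build:

```python
import numpy as np

# Hypothetical sketch: estimating dT/dt as a least-squares slope over a
# sliding window of oversampled readings. All numbers below are assumed.
FS = 1000.0      # ADC sample rate, Hz (assumed)
WINDOW_S = 0.5   # slope-fit window, seconds (assumed)

def rate_of_change(samples, fs=FS):
    """Least-squares slope of temperature vs. time, in K/s."""
    t = np.arange(len(samples)) / fs
    slope, _intercept = np.polyfit(t, samples, 1)
    return slope

# Simulated data: a 0.005 K/s ramp near 100 C with 1 mK RMS noise.
rng = np.random.default_rng(0)
n = int(WINDOW_S * FS)
t = np.arange(n) / FS
temps = 100.0 + 0.005 * t + rng.normal(0.0, 1e-3, n)
print(f"estimated dT/dt: {rate_of_change(temps):.4f} K/s")
```

With these numbers the slope estimate scatters by a few times 10⁻⁴ K/s, so a 0.001 K/s resolution target is plausible, and overlapping windows then give the 50 updates per second.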
I can tell you, I spent probably a month just learning how difficult it is to measure temperature and all the different ways you can screw up the measurement. You can't just stick a thermocouple anywhere you want. Everything has thermal mass, everything has thermal impedance, and there is a thermal gradient everywhere. If you have ANY rate of change of temperature, your measurement method will always greatly impact the result, no matter how precise your thermocouple and electronics are.
If they really can get nanokelvin resolution, I would be more interested in understanding their measurement method, because resolution is pretty much worthless if you stick the sensor in the wrong place.
It wasn't anything special. I am an amateur EE and not especially talented. I did a cursory search on the Internet, found a circuit that looked promising, and brute-forced it: I used the best components I could find, ovenized the circuit for stability, and removed every source of noise I could think of (for example, by using a linear power supply). I think just the measurement circuit with the oven was about $500, which is to say I did a poor job of optimising for cost.
But for this project price was not an issue, and the real challenge and bulk of the work was in building the model and control algorithm.
There are a couple of gotchas. For example, if you use a very low current through the RTD (a PT100 in my case) you get a lot of noise, so you need to use a relatively high current. But this causes the sensor to heat itself up. Rather than trying to eliminate this effect, I decided to make sure this self-heating was as constant as possible. This is one of the reasons it had poor absolute accuracy but was good at measuring changes over time.
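To put rough numbers on that trade-off (the currents and the sensor's thermal resistance here are illustrative, not my actual values):

```python
# Self-heating of a PT100 at constant current: P = I^2 * R, and the
# temperature rise is roughly P times the thermal resistance to the
# surroundings. All values here are assumed examples.
R0 = 100.0        # PT100 resistance at 0 C, ohms
ALPHA = 0.00385   # standard PT100 coefficient, 1/K
THETA = 50.0      # sensor-to-surroundings thermal resistance, K/W (assumed)

r = R0 * (1 + ALPHA * 100.0)   # resistance near 100 C, ~138.5 ohms
for i_ma in (0.1, 1.0, 5.0):
    i = i_ma / 1000.0
    p = i * i * r               # dissipated power, W
    print(f"{i_ma:4.1f} mA -> {p*1e3:7.3f} mW -> ~{p*THETA*1e3:8.2f} mK rise")
```

A higher current lifts the signal well above the amplifier noise but adds a self-heating offset of tens to hundreds of millikelvin; if that offset is held constant, it largely drops out of a rate-of-change measurement, which is exactly the approach described above.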
I don't remember the exact PT100 I used, but it wasn't very expensive (still probably one of the more expensive things I have ever bought per unit of mass -- they are probably more expensive than diamonds).
It was definitely 4-wire (you don't want to be measuring changes in wire resistance, and 4-wire is the only way to get rid of them).
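For scale, here is the error a 2-wire hookup would introduce; the lead resistances are just examples:

```python
# In a 2-wire hookup the round-trip lead resistance adds directly to the
# PT100 reading. Lead resistance values below are assumed examples.
SENS = 0.385   # PT100 sensitivity, ohms/K (approximate)

for r_leads in (0.1, 0.5, 2.0):   # total lead resistance, ohms
    print(f"{r_leads:4.1f} ohm of leads -> ~{r_leads / SENS:5.2f} K apparent offset")
```

Worse, lead resistance drifts with ambient temperature, so it isn't even a stable offset. In a 4-wire (Kelvin) connection the sense pair carries essentially no current, so the lead drops never show up in the reading.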
As to the housing and type of RTD, a lot of the cost there goes into making sure the measurements stay stable over a long time (for example, preventing humidity and other gasses from pulling the sensor out of calibration).
In my case I did not care much about that, because I wasn't after absolute accuracy; for me the important part was the rate of change. If you are after absolute accuracy, you might want one of the fancy RTDs.
But I did care that it was small and had a very low thermal impedance interface with the measured object. Smaller means less thermal mass, which means faster response.
For the chip I used a general-purpose ADC. You also need a very good voltage reference; otherwise even the best ADC isn't worth much.
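To show why, here is how reference drift maps into apparent temperature (the ppm figures are just examples):

```python
# A fractional shift in the reference reads out as the same fractional
# shift in apparent resistance. The ppm values are assumed examples.
R_AT_100C = 138.5   # PT100 resistance near 100 C, ohms
SENS = 0.379        # PT100 sensitivity near 100 C, ohms/K

for ppm in (1, 10, 100):
    dr = R_AT_100C * ppm * 1e-6   # apparent resistance shift, ohms
    print(f"{ppm:3d} ppm reference drift -> ~{dr / SENS * 1e3:6.2f} mK error")
```

A common mitigation is a ratiometric hookup, where the excitation and the ADC reference come from the same source so that drift cancels to first order.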
If you're primarily chasing sensitivity, you can generally do much better with a non-platinum thermistor. They're less stable, but they exhibit a much greater change in resistance with temperature.
A good voltage reference (or current source) is important, but even more important is building a reliable bridge topology.
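To put rough numbers on the sensitivity claim, compare fractional sensitivities; the NTC beta below is an assumed typical value, not something from this thread:

```python
# Fractional sensitivity |dR/dT|/R: PT100 vs. a typical NTC thermistor.
# For an NTC under the beta model, |dR/dT|/R = B / T^2.
ALPHA_PT100 = 0.00385   # 1/K, standard platinum coefficient
BETA_NTC = 3950.0       # K, assumed typical NTC beta value

for t_c in (25.0, 100.0):
    t_k = t_c + 273.15
    ntc = BETA_NTC / t_k**2
    print(f"{t_c:5.1f} C: PT100 ~{ALPHA_PT100*100:.2f}%/K, NTC ~{ntc*100:.2f}%/K")
```

Around room temperature the NTC is roughly ten times more sensitive per ohm, which is why it wins when raw sensitivity is the goal.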
You might be right about PTCs, though I can't remember now the exact reason for choosing the PT100. It probably wasn't just linearity. I think I had other problems, like controlling the amount of power dissipated in the sensor (large changes in resistance mean large changes in the amount of power dissipated), and possibly others.
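That dissipation problem is easy to illustrate: at constant current, P = I²R, and an NTC's resistance swings by more than an order of magnitude across a 50-140 °C range. The beta, R25, and current below are assumed typical values:

```python
import math

# With constant-current excitation, dissipation tracks resistance, and an
# NTC's resistance changes ~14x over 50-140 C. Parameters are assumed.
BETA, R25, T25 = 3950.0, 10_000.0, 298.15   # assumed NTC parameters
I = 100e-6                                   # 100 uA excitation (assumed)

def r_ntc(t_c):
    t = t_c + 273.15
    return R25 * math.exp(BETA * (1.0 / t - 1.0 / T25))

for t_c in (50.0, 100.0, 140.0):
    r = r_ntc(t_c)
    print(f"{t_c:5.1f} C: R = {r:7.0f} ohm, P = {I*I*r*1e6:6.2f} uW")
```

Over the same range a PT100 only goes from about 119 to 154 ohms, a ~30% change, so its self-heating is far easier to hold constant.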
Hmmh? Some confusion of terms, perhaps. You can expose an RTD or even an NTC (to bring the point home) to the temperature in question, then measure the resistance in a bridge with nearly arbitrarily high resolution. Tadah! Temperature measurement with nK resolution. Of course, that says nothing about precision, much less about accuracy, nor, as you pointed out, about "The Temperature".
> Specifically, using a suspended asymmetric Fabry–Pérot resonator and a wavelength-stabilized probe laser we demonstrate a thermoreflectance coefficient of >30/K, enabling measurements with a thermometry noise floor of ~60 nK/√Hz and a temperature resolution of <100 nK in a bandwidth of 0.1 Hz.
Is a bandwidth of 0.1 Hz good in this context? Does it imply that the temperatures they're measuring are very stable on a time scale of a second or so? Or are higher-frequency variations ignored?
It is a reasonable bet that the thermalization time of their sensor is in the neighborhood of 100 ms or so (which is very fast).
The thing that is really good/interesting is the 60 nK/rtHz sensitivity at those frequencies. Off the top of my head, I believe that the best thermistor-based systems I've built have been in the 10-100 uK/rtHz realm.
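To connect the two numbers in the quote: for roughly white noise, RMS resolution ≈ noise density × √bandwidth. A quick check (the thermistor figure is just the midpoint of the range I mentioned):

```python
import math

# RMS resolution from a white noise density and a measurement bandwidth.
def resolution(noise_density, bandwidth_hz):
    return noise_density * math.sqrt(bandwidth_hz)

paper = resolution(60e-9, 0.1)        # 60 nK/rtHz in a 0.1 Hz bandwidth
thermistor = resolution(50e-6, 0.1)   # assumed midpoint of 10-100 uK/rtHz
print(f"paper:      ~{paper * 1e9:.0f} nK RMS")   # ~19 nK, consistent with <100 nK
print(f"thermistor: ~{thermistor * 1e6:.1f} uK RMS")
```

The 0.1 Hz bandwidth corresponds to averaging over roughly ten seconds per reading, so higher-frequency variations aren't ignored so much as averaged away.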
It is interesting to ponder whether such extreme sensitivity for a small object might open up some new avenues in fundamental physics. Things related to temperature are generally hard, but every time a new technique emerges, it is worth looking around to see if any new doors have opened.
Sounds like a bunch of very cool gobbledy-gook to me. My (extremely) layman's understanding is that this is effectively a better thermometer that could be useful for (among other things) bio-calorimetry.
My previous understanding of the word calorimetry was "indirect vs. direct calorimetry", which (I think) are two techniques for measuring the energy used by a person either at rest or during exercise. It's useful to understand how much energy someone is burning, for obvious reasons related to health, nutrition, energy balance, etc. It seems like calorimetry has a much more general usage, though; it's pretty much the science of measuring heat transfer. Cool :}
There is so much extremely high-context science out there, and it would be so fun and thought-provoking to keep up with some of it. It feels like my only options are a) spending a ton of time trying to parse extremely high-context articles (and probably misinterpreting them, like I did above), or b) relying on pop-science communication. The latter, to me, seems to miss most of the time: it usually simplifies things down to conclusions like "new X does Y," which misses most of the scientific process while also misunderstanding the scientific spirit of this sort of knowledge...