Currently ~2 of the 30 posts on the front page are about AI.
AI hype is everywhere. On HN there is genuinely much higher quality conversation about AI than many other places.
So, no, the question in the title doesn't resonate with me. I don't think there is an obsession, and for the AI discussion that does happen, I'm happy with the quality of it.
Unfortunately, this is only deterministic on the same hardware, but there is no reason why one couldn't write reasonably efficient LLM kernels. It just has not been a priority.
Nevertheless, I still agree with the main point that it is difficult to get LLMs to produce the same output reliably. A small change in the context might trigger all kinds of changes in the generated code.
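As a small illustration of why the same computation can differ across hardware: floating-point addition is not associative, so a kernel that reduces in a different order can produce a slightly different sum, which in turn can flip an argmax and change the sampled token. A minimal Python sketch of the underlying effect:

```python
# Floating-point addition is not associative: summing the same numbers
# in a different order (as different GPU kernels may do) can yield
# different results, which can change which token wins an argmax.
vals = [0.1, 0.2, 0.3]

left_to_right = (vals[0] + vals[1]) + vals[2]
right_to_left = vals[0] + (vals[1] + vals[2])

print(left_to_right == right_to_left)  # False
print(left_to_right, right_to_left)    # 0.6000000000000001 0.6
```

The values here are arbitrary; the point is that reduction order alone is enough to perturb the result.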
Ollama can pull directly from HF: you just provide the URL and append `:Q8_0` (or whatever) to specify your desired quant. Bonus: use the short-form URL of `hf` instead of `huggingface` to shorten the model name a little in the `ollama list` table.
Edit: so for example, if you want the unsloth "debugged" version of Phi4, you would run:
`$ ollama pull hf.co/unsloth/phi-4-GGUF:Q8_0`
(check on the right side of the hf.co/unsloth/phi-4-GGUF page for the available quants)
You still need to make sure the modelfile works, so this method will not run out of the box on a vision GGUF or anything with special schemas. That's why it's usually a good idea to pull from ollama directly.
I haven't tried this yet myself, but have you tried plugging it into GPT or Claude or Perplexity and asking Qs?
I've made some progress on things this way, much faster than I would have the usual way.
Apply the usual precautions about hallucinations etc (and maybe do the "ask multiple AIs the same thing" thing). We don't yet have a perfect tutor in these machines, but on balance I've gained from talking to them about deep topics.
The economist Emily Oster wrote an excellent series of books about pregnancy and the early kid years, where she dives into details about various studies, whether they are causal, etc. It's one of the best practical explanations of reading research I've encountered targeted at non-academics, really well done. She has a chapter on peanut exposure and allergies, and I think I recall that these early-exposure results are in fact from causal research rather than just correlational research (basically there are at least two types of papers out there, correlational and causal; as you might guess, causal is harder to get for many reasons). Great books; she may also have published some chapters on her Substack (the Substack came after the book, I think).
I realize as I write this that you are probably saying that for your own experience you can't disentangle correlation from causation, to which I would say: correct!
Emily Oster is a national treasure who is still flying under the radar for most people.
As above, she presents excellent science in an approachable manner for non-science-minded folks. She also includes enough technical detail that readers with the background can be confident she has done a strong analysis.
If you are a parent and haven’t checked her stuff out, please do. (Zero affiliation or connection)
I've directly experienced this in my own life. I moved during the pandemic to be near family, and coincidentally that is a much more religious and conservative community. Many people I am around daily are much more unplugged from online discussion and much more directly involved in their face-to-face lives. There is a subset who are more "online", and they tend to be more stressed/anxious.