Hacker News | spagettnet's comments

If something is more efficient 95% of the time, and as efficient as normal for the rest of the time, it's still a good solution.


I think the researchers agree with your premise. The "evidence" is not that chicks have more language understanding than previously thought, but rather that the universality of bouba/kiki stems from something more primitive than built-in human language hardware.


Modern LLMs are certainly fine-tuned on data that includes examples of tool use: mostly the tools built into their respective harnesses, but also external/mock tools so they don't overfit on only the toolset they expect to see in their harnesses.


IDK the current state, but I remember that, last year, the open source coding harnesses needed to provide exactly the tools the LLM expected, or the error rate went through the roof. Some models, like Grok and Gemini, only recently managed to make tool calls somewhat reliable.
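To make the harness/tool coupling concrete, here's a minimal sketch of the common JSON function-calling pattern. The tool name, schema, and the raw call string are all illustrative, and real providers differ in field names; the point is that the harness must advertise a schema the model recognizes and then parse whatever structured text the model emits, which is where the error rate comes from.

```python
import json

# Illustrative tool schema in the common JSON function-calling style.
# Field names vary by provider; treat this as a sketch, not any
# particular API's exact format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# The model emits a tool call as structured text; the harness parses it
# and dispatches to the matching function. A malformed name or invalid
# JSON here is exactly the failure mode described above.
raw_call = '{"name": "get_weather", "arguments": "{\\"city\\": \\"Oslo\\"}"}'

call = json.loads(raw_call)
args = json.loads(call["arguments"])

assert call["name"] == weather_tool["function"]["name"]
print(args["city"])  # → Oslo
```

If the model was fine-tuned on a different tool name or argument shape than the harness exposes, the dispatch step above simply fails, which is why early open source harnesses had to mirror the expected toolset exactly.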



Thanks - the OP’s site was a truly horrible experience


I dunno, I just copied it into Emacs. Another free short story to keep in my digital collection.


That was my exact reaction after opening this post...


I haven't seen any ads on the site - I guess AdNauseam works well :)


For some reason Safari's reader view skips a part of the page.


axolotl is great on consumer hardware.


depends on your goals of course. but worth mentioning there are plenty of narrowish tasks (think text-to-SQL, and other less general language tasks) where Llama 8B or Phi-4 (14B) or even up to 30B with quantization can be trained on 8xA100 with great results. plus these smaller models benefit from being able to be served on a single A100 or even an L4 with post-training quantization, with wicked fast generation thanks to the lighter model.
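The serving win comes from post-training quantization, and the core idea fits in a few lines. This is a toy sketch of symmetric per-tensor int8 quantization in pure Python (not any library's actual implementation): keep one float scale per tensor and round each weight to an integer in [-127, 127], roughly quartering the memory footprint of fp32 weights.

```python
def quantize_int8(weights):
    # Symmetric per-tensor int8 quantization: one float scale plus an
    # integer in [-127, 127] per weight. A toy illustration of the idea
    # behind post-training quantization, not production code.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.02, -0.5, 0.31, 1.27]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Reconstruction error is bounded by half a quantization step (scale / 2).
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

Real toolchains (per-channel scales, GPTQ/AWQ-style calibration, int4 packing) are much more sophisticated, but the memory-for-precision trade is the same one that lets a 14B model fit on a single card.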

on a related note, at what point are people going to get tired of waiting 20s for an llm to answer their questions? i wish it were more common for smaller models to be used when sufficient.



There's something about the yellow plastic version that feels more Lando, like it reflects some kind of distilled essence of the character.


Looks fine to me?


Not disagreeing, but as a bonus they can reuse the head and hair pieces for Tom Selleck in any Magnum, P.I. sets they might make.


I was ready to criticize, but... he looks pretty perfect like that


Of all the great things people say about UV, this is the one that sold me on it when I found this option in the docs. Such a nice feature.


Subtext is that the solution is always the best possible move sequence. OP’s comment is clarifying that sometimes after executing the best move sequence, the puzzle ends with a capture, and sometimes ends with a checkmate (“winning”).


Do you have a writeup or video of the aim bot you made? Would love to see it!

