Hacker News

And yet prompts can be optimized.


You can optimize a prompt for a particular LLM, and this can only be done through experimentation. If you take your heavily optimized prompt and apply it to a different model, there is a good chance you need to start from scratch.

What you need to do every few weeks or months, depending on when the last model was released, is reevaluate your bag of tricks.

At some point it becomes roulette - you try this, you try that, and maybe it works, maybe it doesn't ...


Stumbled upon this in another thread:

https://ai-analytics.wharton.upenn.edu/generative-ai-labs/re...

My point still holds that it is optimizable though (https://github.com/zou-group/textgrad, https://arxiv.org/abs/2501.16673)
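To make the "optimizable" claim concrete: you can treat the prompt as a parameter and the model as a black box, then search over prompt variants against an eval score. The sketch below is hypothetical and self-contained - `score_prompt` and `MUTATIONS` are stand-ins I made up (a real setup would score against an eval set via API calls, and textgrad replaces the crude mutation list with LLM-generated "textual gradients"):

```python
# Hedged sketch: greedy black-box prompt search.
# `score_prompt` is a toy stand-in for running the prompt on an
# eval set with a specific model; it just rewards certain phrases
# so the loop is runnable without any API key.

def score_prompt(prompt: str) -> int:
    bonus_phrases = ["step by step", "be concise", "cite sources"]
    return sum(phrase in prompt.lower() for phrase in bonus_phrases)

# Hypothetical edit moves; a real optimizer would propose these
# with an LLM rather than from a fixed list.
MUTATIONS = [" Think step by step.", " Be concise.", " Cite sources."]

def optimize(base_prompt: str, rounds: int = 3) -> str:
    """Greedily keep any mutation that improves the eval score."""
    best, best_score = base_prompt, score_prompt(base_prompt)
    for _ in range(rounds):
        for mutation in MUTATIONS:
            candidate = best + mutation
            score = score_prompt(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best

print(optimize("Answer the question."))
```

The catch the parent comment points out is exactly this: the score surface is model-specific, so the "optimum" you converge to can evaporate when the underlying model changes.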

>Subjects develop elaborate "rain dances" in the belief that they can influence the outcome. Not unlike sports fans' superstitions.

Anybody tuning neural weights by hand would feel like this.



