
The prompt is decreasingly relevant. The verification environment you have is what actually matters.




I think this all comes down to information.

Most prompts we give are severely information-deficient. The reason LLMs can still produce acceptable results is because they compensate with their prior training and background knowledge.

The same applies to verification: it's fundamentally an information problem.

You see this exact dynamic when delegating work to humans. That's why good teams rely on extremely detailed specs. It's all a game of information.


Having prompts be information-deficient is the whole point of LLMs. The only complete description of a typical programming problem is the final code or an equivalent formal specification.

Exactly the point. But LLMs miss that human intuition part.


