Pretty positive, much better than my experience with OpenAI's models. The killer feature for me is their prompt generator[1], which you can use to create system prompts or improve user prompts. As I said in another thread, the generator is tuned to produce prompts better suited to Claude, which improves responses.

You can see examples of the prompts it generates in the repository linked below, including a customized version of their prompt generator[2]. This feature significantly improved my experience with LLMs in general, both Claude and local ones, and I now use Claude 3.5 Sonnet for pretty much all my coding, mostly in Go and, lately, Rust.
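For anyone wondering how the generated prompts actually get used, here's a rough sketch in Go of passing one as the system prompt to the Messages API. The model ID, system prompt text, and user message are just placeholders, not what I actually run; the rest is the standard Messages API request shape.

    // Sketch: send a prompt-generator-produced system prompt to the Messages API.
    // The system prompt and user message below are placeholders.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    type message struct {
        Role    string `json:"role"`
        Content string `json:"content"`
    }

    type request struct {
        Model     string    `json:"model"`
        MaxTokens int       `json:"max_tokens"`
        System    string    `json:"system"`
        Messages  []message `json:"messages"`
    }

    func main() {
        body, err := json.Marshal(request{
            Model:     "claude-3-5-sonnet-20240620",
            MaxTokens: 1024,
            // Paste the prompt generator's output here.
            System: "You are an expert Go code reviewer...",
            Messages: []message{
                {Role: "user", Content: "Review this function for error handling issues: ..."},
            },
        })
        if err != nil {
            panic(err)
        }

        req, err := http.NewRequest(http.MethodPost, "https://api.anthropic.com/v1/messages", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        req.Header.Set("x-api-key", os.Getenv("ANTHROPIC_API_KEY"))
        req.Header.Set("anthropic-version", "2023-06-01")
        req.Header.Set("content-type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        out, _ := io.ReadAll(resp.Body)
        fmt.Println(string(out))
    }

The only part the generator touches is the System field; everything else stays the same whether the prompt is hand-written or generated.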

I mostly use it to improve existing code or to get started, not to generate entire code bases, so I can't say much about that.

I found it lacking in shell scripting, though. Nine times out of ten I need to fix the shell script it generates, to the point where I just gave up and went back to writing them from scratch.

Can't wait to see how much better Claude 3.5 Opus is, though.

[1]: https://docs.anthropic.com/en/docs/build-with-claude/prompt-...

[2]: https://sr.ht/~jamesponddotco/llm-prompts/
