
> Thinking blocks from previous assistant turns are preserved in model context by default

This seems like a huge change, no? I often use max thinking on the assumption that the only downside is time, but now there’s also the downside of context pollution.
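If the preserved thinking blocks do start crowding out your context, one workaround is to filter them out of earlier assistant turns yourself before resending the conversation. Rough sketch against the Anthropic Python SDK's Messages API with extended thinking; the strip_thinking helper and the model string are my own placeholders, not something the API provides, and I haven't checked whether dropping the blocks interferes with tool-use turns:

    # Sketch: drop thinking blocks from prior assistant turns so they
    # don't accumulate in context. Assumes assistant content is in the
    # list-of-blocks form used by the Messages API.
    from anthropic import Anthropic

    client = Anthropic()

    def strip_thinking(messages):
        # Hypothetical helper: keep every block except "thinking" ones.
        cleaned = []
        for msg in messages:
            if msg["role"] == "assistant" and isinstance(msg["content"], list):
                blocks = [b for b in msg["content"] if b.get("type") != "thinking"]
                cleaned.append({"role": "assistant", "content": blocks})
            else:
                cleaned.append(msg)
        return cleaned

    history = [
        {"role": "user", "content": "First question"},
        {"role": "assistant", "content": [
            {"type": "thinking", "thinking": "...long reasoning..."},
            {"type": "text", "text": "First answer"},
        ]},
    ]

    response = client.messages.create(
        model="claude-opus-4-5",  # placeholder model name
        max_tokens=2048,
        thinking={"type": "enabled", "budget_tokens": 8000},
        messages=strip_thinking(history) + [
            {"role": "user", "content": "Follow-up question"},
        ],
    )

Treat it as a sketch of the idea rather than a drop-in fix: whether stripping is safe depends on whether the API still expects the signed thinking blocks back in some flows.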



Opus 4.5 seems to think a lot less than other models, so it’s probably not as many tokens as you might think. This would be a disaster for models like GPT-5 high, but for Opus they can probably get away with it.



