From building a few variations of data chatbots over the past year, I've found that my favorite / most fun to use ones tend to be more "chain-of-thought" and conversational, rather than "retrieval-augmented" in style.

Less about one-shotting the answer, and more about showing its work and, if it errors, letting it self-correct. Latency goes up, but the quality of the entire conversation also goes up, and it feels like it builds more trust with the user. The key steps are asking it to "check its work" and letting you watch it work through new code, etc. (I open-sourced one version of this: https://github.com/approximatelabs/datadm, which can be run entirely locally / privately.)
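
Roughly, the loop looks like this (a minimal sketch, not datadm's actual code; the OpenAI client, model name, and prompts are placeholder assumptions, and exec'ing model output like this is only reasonable if you're running locally on your own data):

    import traceback
    import pandas as pd
    from openai import OpenAI  # assumed client; any chat-completion API works the same way

    client = OpenAI()

    def answer_with_self_correction(df: pd.DataFrame, question: str, max_rounds: int = 3):
        # The model writes pandas code, we run it, and any traceback is fed back
        # so it can "check its work" and correct itself on the next turn.
        messages = [
            {"role": "system", "content": (
                "Write Python that answers the question about the dataframe `df`. "
                "Store the answer in a variable named `result`. Reply with code only.")},
            {"role": "user", "content": f"Columns: {list(df.columns)}\nQuestion: {question}"},
        ]
        for _ in range(max_rounds):
            reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
            code = reply.choices[0].message.content.strip()
            if code.startswith("```"):
                code = "\n".join(code.splitlines()[1:-1])  # drop a markdown fence if present
            messages.append({"role": "assistant", "content": code})
            try:
                scope = {"df": df, "pd": pd}
                exec(code, scope)            # run the model's code against the data
                return scope.get("result")   # success on this round
            except Exception:
                # show the model its own error and let it retry
                messages.append({
                    "role": "user",
                    "content": "That raised an error; check your work and fix it:\n"
                               + traceback.format_exc(),
                })
        return None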

From their article: I'm surprised they got something working well by going through an intermediate DSL -- that's moving even further away from the source material the LLMs are trained on, so it's an entirely new thing to either teach the model or assume it can pick up through in-context learning.
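
To make that concrete, here's a toy example of what "going through an intermediate DSL" means in a text-to-data pipeline (purely illustrative; the Query shape and to_sql compiler are made up, not supersonic's actual DSL): the LLM emits a small, checkable structure, and deterministic code turns it into SQL -- at the cost of the LLM never having seen that structure in training.

    # Toy "semantic layer" DSL an LLM might be asked to emit instead of raw SQL.
    from dataclasses import dataclass, field

    @dataclass
    class Query:
        metric: str                          # e.g. "sum(play_count)"
        dimension: str                       # e.g. "artist"
        filters: list[str] = field(default_factory=list)
        limit: int = 10

    def to_sql(q: Query, table: str = "plays") -> str:
        # Compile the DSL into SQL deterministically; the LLM only has to get the
        # much smaller DSL right, and it can be validated before anything runs.
        where = f" WHERE {' AND '.join(q.filters)}" if q.filters else ""
        return (
            f"SELECT {q.dimension}, {q.metric} AS value FROM {table}{where} "
            f"GROUP BY {q.dimension} ORDER BY value DESC LIMIT {q.limit}"
        )

    # "top 10 artists by plays in 2023" -> the LLM emits the DSL, not the SQL:
    print(to_sql(Query(metric="sum(play_count)", dimension="artist",
                       filters=["year = 2023"])))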

All that said, interesting: I'll definitely have to try out tencentmusic/supersonic and see how it feels myself.


Just to echo the other comments: really impressed with both Depot and the team. I decided to kick the tyres on it last week and, by the end of the day, found myself replacing all our production Docker builds with it. It felt like my Tailscale experience in terms of onboarding.

Totally seamless integration, and it solves a very real issue I've had with Docker caching across our environments. We originally tried the Docker S3 cache, but it didn't really work in practice. Depot is the answer.

When I ran into an issue last week, the team responded and scheduled a call within minutes.

Depot are a team I’m happy to back with a product I’m very happy to pay for.

