
Not the OP, but we use LLMs to build a restaurant POS system with reservations, loyalty, a webshop, etc. We're almost at feature parity with bigwigs like Lightspeed/Toast.

> I find having a back-and-forth with an agent exhausting, probably because I have to build and discard multiple mental models of the proposed solution, since the approach can vary wildly between prompts

Just now I had it improve QR payments on the POS. This is standard stuff, and I have done it multiple times, but I'm happy I didn't have to spend the mental energy to implement it and just had to review the code and test it.

```

Perfect! I've successfully implemented comprehensive network recovery strategies for the OnlinePaymentModal.tsx file. Here's a summary of what was added:

  Implemented Network Recovery Strategies

  1. Exponential Backoff for Polling (lines 187-191)
  2. Network Status Detection (lines 223-246, 248-251)
  3. Transaction Timeout Handling (lines 110-119)
  4. Retry Logic for Initial Transaction (lines 44-105)
  5. AbortController for Request Cancellation (lines 134-139, 216-220)
  6. Better Error Messaging (lines 85-102, 193-196)
  7. Circuit Breaker Pattern (lines 126-132)
  All strategies work together to provide a robust, user-friendly payment
  experience that gracefully handles network issues and automatically
  recovers when connectivity is restored.
```
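For anyone curious what that summary actually corresponds to, the core of it is roughly the pattern below. This is a simplified sketch written from memory, not the generated code, and `pollPaymentStatus` plus the endpoint path are placeholders: exponential backoff on the status poll, an AbortController so closing the modal cancels the in-flight request, and a cap on total attempts.

```typescript
// Simplified sketch of the polling pattern (placeholder names and endpoints,
// not the actual generated OnlinePaymentModal.tsx code).
async function pollPaymentStatus(
  transactionId: string,
  signal: AbortSignal,
  maxAttempts = 8,
): Promise<"paid" | "failed" | "timeout"> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch(`/api/payments/${transactionId}/status`, { signal });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const { status } = await res.json();
      if (status === "paid" || status === "failed") return status;
    } catch (err) {
      if (signal.aborted) throw err; // modal closed: stop polling entirely
      // transient network error: fall through and retry with backoff
    }
    // exponential backoff: 1s, 2s, 4s, ... capped at 15s between polls
    const delay = Math.min(1000 * 2 ** attempt, 15_000);
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  return "timeout";
}
```

The modal creates one `AbortController`, passes its `signal` in, and calls `controller.abort()` on unmount so a dismissed payment doesn't keep hitting the status endpoint.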

> An agent can easily switch between using Newton-Raphson and bisection when asked to refactor unrelated arguments, which a human colleague wouldn't do after a code review.

Can you share what domain your work is in? Is it deep tech? Maybe coding agents right now work better for transactional/e-commerce systems?





I don't know if that example is real, but if it is, that's exactly the reason I find AI tools irritating. You do not need six different ways to handle the connection being down, and if you do, you should really factor that out into a connection management layer.
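To be concrete about what I mean by a connection management layer: something like the sketch below (the names are mine, this isn't anyone's actual code), where retry, backoff, and cancellation live in one shared `resilientFetch` wrapper that every call site uses, instead of being re-implemented inside each modal.

```typescript
// Hand-wavy sketch of a shared connection layer; resilientFetch is a made-up name.
interface RetryOptions {
  retries?: number;
  baseDelayMs?: number;
  signal?: AbortSignal;
}

async function resilientFetch(
  url: string,
  init: RequestInit = {},
  { retries = 3, baseDelayMs = 500, signal }: RetryOptions = {},
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const res = await fetch(url, { ...init, signal });
      if (res.ok) return res;
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      if (signal?.aborted) throw err; // caller cancelled, don't retry
      lastError = err;
    }
    // one shared backoff policy instead of six per-component variants
    if (attempt < retries) {
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```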

One of my big issues with LLM coding assistants is that they make it easy to write lots & lots of code. Meanwhile, code is a liability, and you should want less of it.


These aren't 6 different ways.

You are talking about something like network layers in GraphQL. That's on our roadmap for other reasons (switching API endpoints to DigitalOcean when our main Cloudflare Worker is having an outage). However, even with that you'll need some custom logic, since this flow makes at least two API calls in succession, and that's not easy to hide behind a transaction abstraction in a network layer (you'd have to handle it durably in the network layer, like Temporal does). There's a concrete sketch of this just below.
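To make the "two calls in succession" part concrete, the shape of the problem is roughly this (illustrative only, the function and endpoint names are invented): the second call depends on the first one's result, so a generic per-request retry layer can't decide what to do when the first succeeds and the second never completes.

```typescript
// Illustrative only -- function names and endpoints are invented for this comment.
async function startQrPayment(orderId: string): Promise<string> {
  // Call 1: create the payment intent
  const intentRes = await fetch("/api/payments/intent", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ orderId }),
  });
  const intent = await intentRes.json();

  // Call 2 depends on call 1's result. If this one fails after the intent
  // exists, a per-request retry layer can't know whether to retry, void the
  // intent, or surface it to the cashier -- tracking that state across both
  // calls is exactly what a durable workflow (Temporal-style) would do.
  const confirmRes = await fetch(`/api/payments/${intent.id}/confirm`, {
    method: "POST",
  });
  const confirmation = await confirmRes.json();

  return confirmation.qrCodeUrl;
}
```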

Despite the obvious downsides, we actually moved it from a durable workflow (Cloudflare's take on Temporal) on the server side to the client, since on Workflows it had horrible and variable latencies (sometimes 9s, versus a consistent <3s with this approach). It's not ideal, but it makes more sense business-wise. I think people often miss that completely.

I think it just boils down to what you are aiming for. AI is great for shipping bugfixes and features fast. At a company level I think it also shows in product velocity. However, I'm sure our competitors will catch up very soon once AI skepticism fades.



