Hacker News | ativzzz's comments

We as engineers are still paid to create working software, so we are responsible for the genAI code we ship to production. Our customers pay us for working software, which means we should understand what the AIs are writing. This makes us slower and turns us into the bottleneck, but it's part of what our business offers.

If I were working at a startup or on a personal project, I wouldn't read the code; instead I'd build a tighter verification loop to ensure the code functions as expected. That's much harder to do in an existing system built pre-AI.


On my team I've been adding additional linters and analyzers (some written with Claude) that run in CI or locally to prevent codified "bad patterns" from entering our systems. This has been a nice backstop: I can't enforce what everyone's Claude prompts and local workflows are, but we can agree on which CI checks run before merging. Not a 100% solution, but it has helped so far.
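As an illustrative sketch of this kind of backstop (the "bad pattern" and function name here are invented, not from the comment), a small AST-based check can flag a codified anti-pattern and fail CI when it shows up:

```python
import ast
from pathlib import Path

# Hypothetical CI backstop: flag bare `except:` clauses, which swallow
# every exception, including KeyboardInterrupt and SystemExit.
def find_bare_excepts(path: Path) -> list[int]:
    """Return line numbers of bare except clauses in a Python file."""
    tree = ast.parse(path.read_text())
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]
```

Wired into CI (exit nonzero when any file matches), this blocks the pattern at merge time regardless of what each person's local agent setup allows through.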

I think the opposite question is more relevant: how much money have you spent?

Not a small amount :)

I spend $140/mo on Anthropic + OpenAI subs and I use all my tokens all the time.

I've started spending about $100/week on API credits, but I'd like to increase that.


Still waiting for these software factories to solve problems that aren't related to building software factories. I'm sure it'll happen sooner or later, but so far all the outputs of these "AI did this whole thing autonomously" projects are just tools for having AI build things autonomously. It's like a self-reinforcing pyramid.

AI agents haven't yet figured out how to do sales, marketing, or customer support in a way people will pay money for.

Maybe that won't be necessary and instead the agent economy will be agents providing services for other agents.


A different perspective: I'm a first-generation immigrant who moved from Russia to the US 30 years ago with my parents.

Some things to consider:

Despite living here for 30 years, my parents don't feel they fit in. Their friends are Russian and the media they consume is in Russian. At the same time, they wouldn't fit in back in Russia either at this point. It's a weird state where you lack a strong cultural identity. If you move, I highly recommend immersing yourself in the local culture, language, and activities.

We moved here because my dad had a good job at an international company (software dev). Our immigrant friends who are doing well are in a similar boat, or have entered higher-paying fields like nursing. If you don't plan to climb the financial ladder via upskilling, or aren't in a transferable career, your material life will be much better in your home country.

Overall, I don't think their quality of life changed much between the two countries. They are educated, white collar workers who would have a similar life anywhere they lived.


I'm fully on board the Neovim train. Lua is a much better language than Vimscript, and there's a lot more interest in Neovim, so there are more interesting packages being created. Regular Vim is probably fine if you aren't going to put as much effort into customizing it and just stick to the tried and true.

I use nvim all the time for code exploration and figuring out what I need to tell the AI. Invest in tools and packages that let you navigate your codebase quickly.


First, don't do what people on the cutting edge are doing. They are AI hobbyists and their methods become obsolete within weeks. Many of the tricks they use become first-class features of frontier models/tooling, or are unnecessary two model versions later.

What you can do is empower your agent to solve more complex problems fully on its own. See if you can write a plan (Claude is great at writing plans) that encapsulates all the work needed for a feature: tests, how the AI can automatically validate that things work, etc. Go back and forth with the AI on the plan and spend some time thinking about it. Then put it on auto-accept and tab away. Once it finishes, review the code, do your normal QA, follow-ups, etc.

While it's working, go work on another feature in the same vein. Git worktrees are a simple way to work on the same codebase in parallel (though your app likely isn't set up for multiple instances running in parallel; have fun with that). Containers are another way to run these side by side. Vibe-code yourself a local tool to manage them. This is somewhat built into the Claude/Codex desktop apps, but you'll likely need to customize it for your environment.
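The worktree flow can be sketched like this (paths and branch names are invented for illustration):

```shell
# From inside your repo: create a second checkout on a new branch.
git worktree add ../myapp-feature-b -b feature-b

# Both checkouts share one .git store but have independent working trees,
# so a second agent can run in ../myapp-feature-b while the first runs here.
git worktree list

# After the parallel feature is merged, clean up.
git worktree remove ../myapp-feature-b
git branch -d feature-b
```

Each worktree gets its own index and HEAD, which is what makes two agents editing "the same repo" safe; the shared object store just means commits land in one place.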

Basically, you do the architecture, code review, and QA, and let the model write as much of the code as possible, in parallel where you can. I still code manually when I need to explore what a solution might look like, but AI is much faster at experimenting with larger solutions, and if you don't like what it did, a clean slate is just a `git checkout .` away.

How much time you spend on validation is a tradeoff between speed and your business's correctness needs.


If our human thoughts can't be translated to food and shelter, then we'll pick up guns and go steal the food and shelter from someone else.


Noted.

Sometimes what needs to be translated into food and shelter isn't your thoughts, however. If you nurture and value thoughts regardless, this point will make sense.


The same way you prevented this previously. Copying successful products is nothing new, AI just makes it easier.

Marketing, lawyers, good customer support, creating relationships with customers.


It seems like your priorities are backwards. CI/CD is meant to keep your product stable, there's no point if there's no product to keep stable. DB scaffolding & proper environment config is meant to help you maintain velocity & allow for the proper dev/staging/prod pipeline for testing & scale up to meet your customer needs. But there's no velocity to maintain and no scale to meet.

Auth flows are important for enterprise customers, but just use an existing off-the-shelf library for OAuth/SSO. Depending on your customers, this is a feature. If you mean auth flows between your own services, you've overengineered it.

Basically none of the things you've listed are as important as having features that attract customers. Those things you build afterwards for stability and velocity.

I'm biased, but in 2026 I would just use a Ruby on Rails monolith with Postgres. You don't even need to containerize it until the stack becomes too complex to run locally.


Sorry, ai;dr (AI didn't read)

> The real question isn’t whether AI helped write this

It is. As soon as I saw the bullet points, my mind went "AI wrote this" and I stopped reading.


That’s fair. If formatting alone is enough to trigger dismissal, there’s not much I can do about that.

Bullet points aren’t an AI signature; they’re just a way to compress structure. If the argument is wrong, I’m happy to debate it. If it’s right, the formatting shouldn’t matter.

The economics of subsidized infrastructure vs. sustainable pricing is the core claim. That’s the part worth engaging with.


AI - in this case, LLMs and their ecosystem - is an incredibly impactful technology. I would put it up with:

- the printing press

- radio

- tv

- personal computers

- internet

in terms of important contributions to human civilization. We live in the information age, and all of these were significant advances in how information is created and spread.

The printing press allowed small organizations to create written information. It decentralized the power of the written text and encouraged the rapid growth of literacy.

Radio allowed humans to communicate quickly across long distances.

TV allowed humans to communicate visually across long distances; what we see is central to how we process information.

PCs digitized information: they made it denser, more efficient, easier to store, and able to grow into larger datasets.

The internet transfers large amounts of this complex digital information even more quickly across large distances.

AI is the ability to process the giant lake of digital information we've made for ourselves. We can no longer handle all the information we create; we need automated ways to do it. LLMs, which translate information into text, are a way for humans to parse giant datasets in our native tongue. It's massive.

