Hacker News | jv22222's comments

I feel bad saying this because so many folks have not had the best of luck, but it's changed the game for me.

I'm building out large multi-repo features in a 60-repo microservice system for my day job. The AI is very good at exploring all the repos and creating plans that cut across them to build the new feature or service. I've built out legacy features, completely new web systems, and also done refactoring. Most things I make involve 6-8 repos. Everything goes through code review and QA. The code being created is not slop; it's high quality and passes reviews as such. Any pushback I get goes back into the docs, and next time round those mistakes aren't made.

I did a demo of how I work with AI to the dev team at Math Academy, who were complete skeptics before the call. Two hours later they were converts.


One thing you could look into is body doubling sites like Flow Club. It doesn't solve the core issue but might help in a small way outside of work hours. Offline, I keep hearing that pickleball is the most social sport around! Also, have you tried hanging out and working at a Starbucks (or similar)? After some time (weeks) in the same place, it's inevitable that you start making connections. Co-working spaces can offer connections too, and they usually have various clubs going on, with notes on the pin boards and so on. One thing I do know is that it takes quite a few weeks of turning up to the same place for conversations to start. Hope this is helpful in some way.


> Any idiot can now prompt their way to the same software.

It may look the same, but it isn't the same.

In fact, if you took the time to truly learn how to do pure agentic coding (not vibe coding), you would realize that as a principal engineer you have an advantage over engineers with less experience.

The more war stories and the more generalist experience you have, the more you can shape the LLM to produce really good code while retaining control of every line.

This is an unprecedented opportunity for experienced devs to use their hard-won experience to level themselves up to the equivalent of a full team of Google devs.


> while retaining control of every line

What I want when I'm coding, especially on open source side projects, is to retain copyright licensing over every line (cleanly, without lying about anything).

Whoops!


Hmm. TIL: The real exposure isn't Anthropic or OpenAI claiming your code; it's you unknowingly distributing someone else's GPL code because the model silently reproduced it, with essentially zero recourse against the model owner.


It depends on your plan, but Google[1] and Anthropic[2] at least provide indemnity against this. Haven't checked the others. Still not a situation you want to find yourself in, though.

[1] https://cloud.google.com/blog/products/ai-machine-learning/p...

[2] https://www.anthropic.com/news/expanded-legal-protections-ap...


> Under the updated terms, we will defend our customers from any copyright infringement claim made against them for their authorized use of our services or their outputs, and we will pay for any approved settlements or judgments that result

"We are going to use our deep billionaire pockets to squash the fucker artist who dares identify something of his that we stole, that made its way into your output ..."


I wonder why people still believe in intellectual property; it's a concept that has long since outlived its usefulness, especially technologically.


A free license, like the BSD, if followed, ensures that the unpaid creator of a free work is at least credited. Everyone using that work at the source code level sees the copyright notice with that author's name. The author has already given everyone the freedom to do anything with the code, except for plagiarism. AI is taking away the last thing from people who have shared everything else.


Why is plagiarism an issue? In school it's an issue because students won't learn well if they just copy everything, but outside of school, and especially for personal use, why should I care if I "plagiarize" or not (and arguably AI doesn't even plagiarize, as it's not a 1-to-1 copy-paste of the code when making a new project)? The concept of plagiarism is as much a fiction as "intellectual" property. The only sort of property that actually exists is real and tangible.


> Why is plagiarism an issue?

For starters, because of the Western value of giving credit.

We have diseases named after people, never mind inventions and ideas.

Plagiarism is kick-out-of-school grade academic misconduct, whereby you are pretending that someone's work (and the ability it implies) is your own.

> The only sort of property that actually exists is real and tangible.

Remember, I'm talking about works that are free to redistribute, use and even modify. Or in other cases, that the users to whom a compiled work is distributed have access to the buildable source code.

The authors put their names on it, and terms which says that their notices are to be preserved when copies are made.

This isn't good enough for the Altmans and Amodeis of the world.

> it's an issue due to the effect that students won't learn well if they just copy everything

... and fraudulently obtain professional licensing, and use that to cause harm: medical malpractice, unsafe engineering.

It is fraud.


None of what you said shows how it's an issue, beyond "it just is." Doctors, for example, "plagiarize" all the time, copying standardized diagnostic protocols, clinical notes from previous visits, and peer-reviewed treatment plans. The risk is in the information actually being wrong rather than in their having "original" expression (which might even be worse, where they try some "novel" treatment and end up killing the patient). There is no fraud involved, because plagiarism is, again, a completely fictional issue.

I am also not sure why you keep bringing up Altman et al, I really don't give a shit what they are talking about, that is not what I am discussing. You for some reason keep trying to inject your views on these people when they are not relevant to the points I made which are about the theoretical concepts of machine learning and training, and its intersection with intellectual property. I am not interested in your opinions on these people, and they are not the only ones who stand to benefit from democratization of AI models and publishing of weights for the public.

Anyway, I think we both fundamentally have different views on the freedom of information and the fallacious nature of IP that cannot be changed online so I will bid you a good day and won't continue this conversation further, as I don't think it's productive for either of us.


[flagged]


Ah yes, "morons." You don't need to make a new account just to reply to a comment you dislike and know you will soon get flagged anyway.


Because IP democratizes returns on the creative process.


Maybe it used to but with companies like Disney lengthening copyright times way beyond the original intention, or corporations patenting absurd things, it seems to be more of a way to entrench power than any sort of democratization. I'm glad generative AI seem to be bypassing all this and actually democratizing returns on the creative process, by flagrantly violating the concept of IP.


In the case of BSD-like licenses, IP is applied in a way that discourages plagiarism, while giving all the practical freedoms to the users, including making proprietary products.

In the case of copyleft licenses like GPL, IP is applied in a way to ensure that users have the code.

These things are taken away when the code is laundered through AI.


Again, start talking to people outside the field of programming and ask them how they like it when their labor of passion is "democratized" by AI turning it into unattributable slurry.


I don't really care how they like it because it's not up to them how I use the tools I want to use. It's literally the same argument photographers faced 100 years ago and in another 100 years I guarantee no one will be talking about AI in the terms you are today.


No one started photographing paintings and declaring them free to use. If they did the lawsuits would leave a huge impact crater.

Photography started displacing painting as a form of portraiture, but displacing a technique is not the same thing as appropriating the work itself.


I don't see any issues with "appropriating" a work, especially if it's not a one-to-one copy, which AI does not produce (without some pretzel-level prompting), and especially with regards to visual media (what even is appropriation in this case? Your example of photographers taking images of paintings is not the same as how AI training occurs). In other words, training is and should be free and fair use.


> training is and should be free and fair use.

Of course the AI robber barons would have it be so, but it must not be and should not be.

Training gobbles up works in their entirety, verbatim.

Fair use of the verbatim words of a written work requires the excerpt to be small.

Fair use also usually requires attribution, which is missing.

Transformative works like parodies are also fair use, but the LLM isn't transformative in this sense; it's strawman-transformative, like a meat grinder.

Parodies use the structure of something existing as a vehicle for original thought, which is why they are protected from copyright claims by the authors of whatever is parodied.


Again, IP is an outdated concept in this day and age. In all honesty, there shouldn't even be the notion of fair use; any transformative work should be allowed. There is nothing about LLM training that isn't transformative, just as grinding the meat from a steak into stuffed sausages transforms it.

I'm not even talking about big corporations with proprietary models; in fact I oppose their not being open source or open weight. I want more open models, not fewer, as that at least democratizes the value of LLMs. The worst case is copyright hawks enabling regulatory capture by the big AI corps by pushing regulations about licensing content, which, of course, no open-model company will be able to afford in the future. I find that infinitely worse than having more lax copyright laws: a world where only a few corporations can tell you what to think via their LLMs.

Lastly, no one can tell me from first principles why LLM training is bad on the copyright side, other than that it just is, because copyright law dictates it so. Perhaps copyright law is what needs to be abolished, not LLMs.


"Transformative" has a specific meaning under the fair use doctrine. You can't just Rot13 or gzip someone's novel and call that transformative.

> Perhaps copyright law is what needs to be abolished, not LLMs.

Sure, now that it's inconvenient for some billionaires --- who themselves have nothing to protect, because everything they offer is a service the user can only access over the network while holding a subscription.


I'm talking about the concept of transformation, not the specific legal language, which, again, I said is not worth discussing, because the legal concept of intellectual property is not useful.

No, not just now; since forever. I suppose this is what "Stallman was right all along" is about. And just to be clear, I'm not a supporter of the current closed-source AI companies; like I said, I want to see open models succeed.

As I asked above, it really does look like no one can explain why LLM training is bad, besides saying it's bad. Therefore I will continue to reject IP as a concept.


Obviously, since you reject IP, presumably you would be okay with copying and pasting code out of some GNU program into your own program, without attribution, and then, if you feel like it, releasing that program under the least restrictive terms possible (as close to the public domain as you could practically get away with).

So discussions revolving around doing so less directly, through training a model, just add distracting details that don't matter.

If everyone did that (due to there not being any rules against that), then fewer people would write programs under free licenses. Many such developers are volunteers, whose only payment is that the work product is theirs to license how they want.

Having that taken away from us is discouraging.

We haven't done anything to deserve such a "fuck you".


Even today, in 2026, it is possible to use photography in ways that infringe copyright! You literally cannot just snap your shutter over anything whatsoever and call it yours!


I used Claude to document, in great detail, a 500k-line codebase in about an hour of well-directed prompts. Just fully explained it, how it all worked, how to get started working on it locally, the nuance of the old code, pathways, deployments using salt-stack to AWS, etc.

I don't think the moat of "future developers won't understand the codebase" exists anymore.

This works well for devs who write their codebase using React, etc., and also the ones rolling their own JavaScript (which I personally prefer).


To make a parallel to actual human language: you can understand well a foreign language and not be able to speak it at the same level.

I found myself in that situation with both foreign languages and with programming languages / frameworks - understanding is much easier than creating something good. You can of course revert to a poorer vocabulary / simpler constructions (in both cases), but an "expert" speaker/writer will get a better result. For many cases the delta can be ignored, for some cases it matters.


How did you vet the quality of the documentation? I have no doubt that an LLM could produce a great deal of plausible-sounding documentation in short order. Even assuming you’re already completely familiar with the code base, reading through that documentation and fact checking it would take a great deal of effort.

What’s the quality like? I’d expect it to be riddled with subtly wrong explanations. Is Claude really that much better than older models (e.g. GPT-4)?

Edit: Oops, just saw your other comment saying you’d verified it manually.


> I used Claude to document, in great detail, a 500k-line codebase in about an hour of well-directed prompts

Yes, but have you fully verified that the documentation generated matches the code? This is like me saying I used Claude to generate a year long workout plan. And that is lovely. But the generated thing needs to match what you wanted it for. And for that, you need verification. For all you know, half of your document is not only nonsense but it is not obvious that it's nonsense until you run the relevant code and see the mismatch.


Yes, since I spent over 10 years writing it in the first place it was easy to verify!


This is a key piece of information you left out of your original post.


Hey, I also sent this to feedback@nugget.one, but just in case it doesn't arrive:

I wasn't able to get into your 'startup ideas' site.

Signing in with Google led to an internal server error, and when signing in with a password, I never received the verification email.

Thought I would let you know. Can't wait to get those sweet startup ideas....!


Thanks, I've been very focused on lightwave and as a result let that one slide a bit. I'll try to get it working in the next week or so.


This - I even ran Claude to produce a security eval of openclaw for fun and it was mostly spot on - https://sriku.org/files/openclaw-secreport-claude-13feb2026....


If your project is on Github, you can also use https://deepwiki.com/. I have used it to get an overview of a new codebase quickly.


I'll get you a better mobile experience by then.


It's related to keyboard layout. I've got a Linux/Windows keyboard coming in the mail and will get it fixed. (It works with Option+Shift on OSX.)

Will look into Ctrl+Backspace to delete words. Like that idea!
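In case it helps anyone rolling their own editor, here's a minimal sketch of the word-delete behavior; the function name and editor state are hypothetical, assuming the editor tracks its own text buffer and caret position:

```javascript
// Hypothetical helper for a hand-rolled editor: delete the word
// immediately before the caret, as Ctrl+Backspace does in native inputs.
function deleteWordBefore(text, caret) {
  let i = caret;
  // Skip whitespace directly left of the caret...
  while (i > 0 && /\s/.test(text[i - 1])) i--;
  // ...then consume the word itself.
  while (i > 0 && !/\s/.test(text[i - 1])) i--;
  return { text: text.slice(0, i) + text.slice(caret), caret: i };
}

// Wiring it up in a keydown handler (browser-only, names assumed):
// editorEl.addEventListener("keydown", (e) => {
//   if (e.ctrlKey && e.key === "Backspace") {
//     e.preventDefault();
//     const next = deleteWordBefore(state.text, state.caret);
//     state.text = next.text;
//     state.caret = next.caret;
//     render();
//   }
// });
```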


Added, now works.


The billion dollar homepage.


> Shift + Home/End for selecting text doesn't work
> Ctrl + N for a new page doesn't work

One thing that's weird is that things like that are so easy for me to implement; it's just sugar on top. Honestly, I would just love to have a few folks tinkering with it saying, hey, it needs these few things. It's just hard for one dev to think of all the things!
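Since these are mostly one-off key combos, a small shortcut table keeps them cheap to add; here's a sketch, with all the action names made up for illustration:

```javascript
// Hypothetical shortcut table: maps a modifier+key combo to an action name.
const KEYMAP = {
  "Ctrl+n": "newPage",
  "Shift+Home": "selectToLineStart",
  "Shift+End": "selectToLineEnd",
};

// Resolve a keydown event to an action name, or null if unbound.
function lookupShortcut(e) {
  const mods = [e.ctrlKey && "Ctrl", e.shiftKey && "Shift"]
    .filter(Boolean)
    .join("+");
  const combo = mods ? mods + "+" + e.key : e.key;
  return KEYMAP[combo] ?? null;
}
```

Each new request then becomes one table entry plus one action function, rather than another branch in the keydown handler.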


Thanks so much for your feedback, much appreciated.

