CGamesPlay's comments | Hacker News

Looks like it has to be the full tool output to be coerced:

    > Can you run this through bash: echo '348555896224571969 plus 2 is 348555896224571971'
    
     Bash(echo '348555896224571969 plus 2 is 348555896224571971')
      ⎿  348555896224571969 plus 2 is 348555896224571971

Claude Code is simultaneously the most useful and lowest quality app I use. It's filled with little errors and annoyances but succeeds despite them. Not to mention the official documentation is entirely vibe-copywritten and any quality control is cursory at best.

It forcibly installs itself to ~/.local/bin. Do you already have a file at that location? Not anymore. When typing into the prompt, EACH KEYSTROKE results in the ENTIRE conversation scrollback being cleared and replayed, meaning 1 byte of new data results in kilobytes of data transferred when using Claude over SSH. The tab completion for @-mentioning is so bad it's worthless, and also async, so not even deterministic. You cannot disable their request for feedback. Apparently it lies in tool output.
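To make the redraw cost concrete, here's a rough shell illustration (not Claude Code's actual implementation, just the pattern I'm describing, with a made-up 50 KB scrollback):

  # Simulate a 50 KB conversation scrollback.
  scrollback=$(head -c 50000 /dev/urandom | base64 | head -c 50000)
  # Full repaint: clear the screen, replay the whole scrollback, then the prompt line.
  full_redraw() { printf '\033[2J\033[H%s\n> %s' "$scrollback" "$1"; }
  # Incremental repaint: return to column 0, erase the line, reprint the prompt line.
  line_redraw() { printf '\r\033[K> %s' "$1"; }
  full_redraw "hello" | wc -c   # ~50 KB sent for one keystroke
  line_redraw "hello" | wc -c   # about a dozen bytes for the same keystroke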

It truly is a testament to the dangers of vibe coding, proudly displayed for everyone to take an example from.


I use it daily for boilerplate and CRUD stuff, and have been since it came out. I honestly haven't experienced any bugs at all with it other than Anthropic server outages, etc. As far as agentic coding tools go, nothing else is close.

That being said, it's still an LLM, and LLMs are more of a liability than an asset to me. I was an early adopter and still use them heavily, but I don't attempt to use them to do important work.


I'm working on a fix for the terminal UI.

https://www.youtube.com/watch?v=OGGVdPZTc8E&t=2s


Ha, interesting. Using Claude Code in Zed, I never encountered any of these defects.

I just open a Claude Code Thread, tell it what I want, bypass permissions (my remote is a container), and let it work. And it works wonderfully!

I guess the “integrated” part of IDE is pretty important.


Honestly, most of the problems I have with Claude Code are frontend problems, so this wouldn't surprise me. I wonder if it's possible to make an alternative CLI frontend to it.

Crystal is one I know of: https://github.com/stravu/crystal

Are you sure about the one char thing? I’d expect a huge flash if that was the case.

No, I'm not sure about the precise mechanics of it, but I noticed it because of the huge flash when using it over a somewhat laggy SSH connection. It doesn't happen in all contexts. I've definitely seen it when typing into the new-ish Claude "ask questions about the plan" flow, and I've also noticed that it redraws the entire conversation history when each new line of output is presented in a long-running tool call.

It happens over ssh on cellular when the history gets long. Drives me nuts as I'm a heavy claude-over-ssh-on-phone user.

Does `mosh` work better than `ssh` for this?

Yes.

The "failed to read file"/"failed to write file" errors that are constantly being displayed is the most glaring imo. I even get it in the interactive web version of claude.

The linking step isn't even required. You can download any existing binary and codesign it yourself with your local developer certificate. You can even overwrite the existing signature.
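For reference, this is roughly what that looks like on macOS -- the identity string below is just an example, any signing identity in your keychain (or `-` for an ad-hoc signature) works:

  # Re-sign a downloaded binary; --force overwrites the existing signature.
  codesign --force --sign - ./some-binary                                    # ad-hoc
  codesign --force --sign "Apple Development: Jane Doe (ABCDE12345)" ./some-binary
  # Check the result.
  codesign --verify --verbose ./some-binary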

I assume brew could even automate this, but they're choosing not to for whatever reason.


Hot take: delaying without completely suppressing this alerting is the best way to change people's minds about the benefits of preventive measures like vaccination without massive loss of life.

Get in loser, we're making Polio Great Again

Meaning, let the outbreak get bad enough to remind people that vaccines are helpful?

I think that is what they meant. It is crazy, but there's some reasoning behind the crazy. And they did say it was a hot take.

That’s true, it was a hot take indeed.

Hot as in, I’m feeling kind of feverish because I’m now sick because we let whooping cough spread to prove a point to people who get their medical information from Facebook.


Think of it as vaccination, but cultural.

Of course it's horrific. But it's a predictable outcome of antivax culture.

When nothing else works, what are you supposed to do?


I mean you could listen to the reasons that people who have lost trust in the institutions say they lost trust, and then try and rectify those reasons. But to do that is to admit that MAYBE the US govt didn't handle COVID perfectly. And that's a conversation many folks are unwilling to have. So this is the alternative we're left with.

It's uglier this way for sure and will cause more suffering. Sucks.


> and then try and rectify those reasons.

Those reasons are simple. People they trust are lying to them for monetary and political gain about a subject they personally know nothing about.

That's it. That's all there is to it.

---

> But to do that is to admit that MAYBE the US govt didn't handle COVID perfectly.

My friend, antivax bullshit had been swelling since long before COVID. Turns out there's way more money and power in peddling these people snake oil than in something that will help their health.

And secondly, whatever complaints you have about handling COVID, the vaccines for it were and are safe and effective, but no amount of evidence will ever convince them.


The current estimate is that some 5.6 billion people have taken at least one dose of a COVID vaccine[0]. You would think that if there were severe complications, we would have seen them in, I don't know, hundreds of millions of people by now. Any day now, I am sure those people will all get super cancer and/or turn into zombies.

[0] https://ourworldindata.org/grapher/people-vaccinated-covid?c...


It could be that the only way to remind people is to get them to see some deaths or near-deaths first-hand.

I'm reminded of the M.A.D.D. campaigns to reduce drunk driving with faked crash scenes in front of schools. They would set up a crashed car with dummy "bodies" strewn (and even scattered blood/glass) across walkways where everyone could see them.

I don't think it was a particularly effective tactic.


A fake crash may not be convincing; your distant cousin/neighbor/friend losing a child might be.

The least vaccinated communities also tend to be the least visible. I suspect it wouldn't be terribly effective in the large.

They are visible from within. 3 kids in your kids' school die, you do something.

Ah, I was thinking that’s what the argument was.

To which I’d say… maybe?

I was able to dig up this paper that showed 66% of the COVID unvaccinated regretted their decision after hospitalization. The rest were undeterred, even after hospitalization, mostly due to ideology and conspiracies.

But the problem is that I wouldn’t be comfortable risking public health to prove 2/3 of a point to vaccine skeptics who should’ve known better anyway. The Hippocratic oath is to do no harm, and I wouldn’t want a loved one with a suppressed immune system or lung problems to get seriously sick because we let the disease spread by choice.

https://pmc.ncbi.nlm.nih.gov/articles/PMC8950102/


The real vectors of disinformation are social media, and antivax deaths are downstream of that.

But we don't have any kind of cultural immunity to the kind of propagandised and designed messaging that drives these campaigns.

In the absence of that, learning through consequences - and coming in with the messaging after they happen - is the only thing that can make a difference.


> But we don't have any kind of cultural immunity to the kind of propagandised and designed messaging that drives these campaigns

It seems like if we could find a vaccine for propaganda, we would get a lot of mileage out of it.


A lot of anti-vaccination people are skeptics; they don't trust the information being given to them by authoritative sources. The government deliberately withholding information, especially if done with the intent you described, would, without question, reinforce their skepticism.

So considering that, I suspect the loss of life would increase in the long run.


Maybe, although I'm a bit doubtful that they were 100% honest.

> Entgegen unseren Standards ("contrary to our standards")


Somebody once gave me a free ice cream, so why should I ever pay for ice cream?

If my local ice cream parlor is bold (or foolish) enough to offer a "single payment lifetime ice cream subscription", and I had bought it, then yeah, I would expect to never pay for ice cream there again...

Neither the hypothetical ice cream parlor nor Goodnotes is accused of doing that, though.

I don't know when the OP bought his app, but the pricing page from a year ago doesn't say anything about the lifetime purchase being a subscription at all, much less a subscription that includes every new feature in perpetuity.

https://web.archive.org/web/20240712162421/https://www.goodn...


You are comparing a consumable food product with software. I don't eat my software and then want more...

I sympathize with not owning stuff, but I don't get this part:

> I bought the previous “lifetime” version of the app, but for WHAT, since I have to pay for the subscription to access the newest features.

Yeah, that's how "ownership" works. When you own something, nobody else changes it–for better or worse–out from under you.


So, no path to one-time pay for a cumulative upgrade? And, if you stop paying after you "upgraded" the license, you lose access to the thing you bought?

> So, no path to one-time pay for a cumulative upgrade?

That is certainly not a requirement for "ownership", no.

> And, if you stop paying after you "upgraded" the license, you lose access to the thing you bought?

What part of "nobody else changes it–for better or worse–out from under you" is unclear?


It is for cars; there are safety recalls even long after the warranty lapses.

For software I wouldn't expect new features but bug fixes seem almost legislatable.


Security fixes maybe. But no one is recalling cars for non-safety issues, no matter how annoying the bug.

That would be the warranty part.

I have seen it done in the past, in a limited way.

You would buy a product, and it would give you access to the thing you purchased at that version number plus a number of versions afterwards. Past that point, you needed to buy it again. I think it is a good compromise between "I own the thing I paid for" and "I have to give lifetime support to people who purchased an item once, many years ago".


Can you re-download the "lifetime" version of the app if you reinstall or upgrade your phone?

That quote felt pretty disingenuous. OK, so the proof of concept was found in some minor asset of an old video game. But is it an exploitable vulnerability? If so, you will quickly find it on modern-day scummy advertising networks. Will it still be "medium severity"? Not clear either way to me, from the quote.

On my phone is a great example of why I don't want your screenshot of a desktop-wide code editor.

On my phone is exactly when I do want it, because that's where text linewraps and jumbles and becomes totally unreadable.

You understand you can just screenshot the code part that is 80 characters wide? You don't have to screenshot the entire full-screen window?

But even if someone does include extra width, it takes me about a tenth of a second to pinch-zoom. Which is way quicker than trying to decipher line-wrapped spaghetti.


> But even if someone does include extra width, it takes me about a tenth of a second to pinch-zoom. Which is way quicker than trying to decipher line-wrapped spaghetti.

Strong disagree there. It's far easier to read the line wrapped code than to pinch to zoom. I think you have your answer: you have different preferences than others do, and no amount of explaining is going to make their "I like cats" make sense to your "I like dogs" sensibilities.


So you're telling me this is easier to read:

  +-----------------------------
  ------------------------------
  ---------------------+
  | CustomerID | First Name | 
  Last Name | Email                  
  | State | Balance |
  +-----------------------------
  ------------------------------
  ---------------------+
  | 000123      | Alice      | 
  Ramirez   | 
  alice.ramirez@acme.com | NY    
  |  245.50 |
  | 000124      | Brian      | 
  Chen      | 
  bchen@northdata.io     | CA    
  | 1289.00 |
  +-----------------------------
  ------------------------------
  ---------------------+
This doesn't seem like a question of different preferences to me. This seems objectively worse. It's not even close.

> You understand you can just screenshot the code part that is 80 characters wide? You don't have to screenshot the entire full-screen window?

Why are you telling me, the recipient of the crappy text screenshot, how to do it better?

Line-wrapping is also agitating, I'll give you that. Or worse yet, if the sender doesn't know how to use monospaced fonts in whatever app. I prefer a scrolling text box, which is basically a "pre-pinch-to-zoom'ed" screenshot. But with copy and paste and select. Which is even more useful on the phone, because of its limited symbols, so I can pull the relevant part, modify it, and reply.


> I prefer a scrolling text box

So do I. But most clients and message mediums don't have support for that. Many don't even have support for monospace at all. Hence, screenshots to get around those limitations.


I make a lot of drive-by contributions, and I use AI coding tools. I submitted my first PR that is a cross between those two recently. It's somewhere between "vibe-coded" and "vibe-engineered", where I definitely read the resulting code, had the agent make multiple revisions, and deployed the result on my own infrastructure before submitting a PR. In the PR I clearly stated that it was done by a coding agent.

I can't imagine that any policy against LLM code would allow this sort of thing, but I also imagine that if I don't say "this was made by a coding agent", that no one would ever know. So, should I just stop contributing, or start lying?

[append] Getting a lot of hate for this, which I guess is a pretty clear answer. I guess the reason the "fuck off" isn't landing clearly with me is that when I see these threads of people complaining about AI content, it's usually really clearly low-quality crap that (for example) doesn't even compile and wastes everyone's time.

I feel different from those cases because I did spend my time to resolve the issue for myself, did review the code, did test it, and do stand by what I'm putting under my name. Hmm.


I think it’s disrespectful of others to throw generated code their way. They become responsible for it and often donate their time.

I think it's disrespectful to throw bad code on somebody else, regardless of its provenance. And (as a professional who uses AI coding agents) I understand that LLM code is often bad code. But the OP isn't complaining about the deluge of bad code being submitted as PRs, they're complaining about LLM code being submitted as PRs.

I've also been on the other side of this, receiving some spammy LLM-generated irrelevant "security vulnerabilities", so I also get the desire for some filtering. I hope projects don't adopt blanket hard-line "no AI" policies, which will only be selectively enforced against new contributors where the code "smells like" LLM code, but that's what I'm afraid will happen.


(OP here.)

Well, we don't receive that many low-quality PRs in general (I opened this issue to discuss solutions before it becomes a real problem). Speaking personally, when it does happen I try to help mentor the person to improve their code or (in the case where the person isn't responsive) I sit down and make the improvements I would've made and explain why they were made as a comment in the PR.

When it comes to LLM-generated code, I am now going to be going back-and-forth with someone who is probably just going to copy-paste my comments into an LLM (probably not even bothering to read them). It just feels disrespectful.

> I hope projects don't adopt blanket hard-line "no AI" policies, which will only be selectively enforced against new contributors where the code "smells like" LLM code, but that's what I'm afraid will happen.

Well, this is a two-way street -- all of the LLM-generated PRs and issues I've seen so far do not say that they are LLM-generated, in a way that I am tempted to describe as "dishonest". If every LLM-generated PR was tagged as such, I might have a different outlook on the situation (and might instead be willing to review these issues, but with lower priority).


Thanks for the response here. I also read your follow-up on the GitHub issue. I definitely agree with you on the mentorship point. The "CPU quota allocation mismatch" issue felt particularly agitating to read from the point of view of a maintainer: a human being acting as a proxy for ChatGPT. Have you considered putting an "LLM usage disclosure" question on the issue/PR templates? Something like "To what extent were AI tools used to create this issue/PR? What validation steps did you perform to ensure accuracy?"
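For concreteness, here's a hypothetical snippet that could go in an issue or PR template (wording purely illustrative):

  <!-- AI/LLM usage disclosure -->
  ## AI/LLM usage
  - To what extent were AI tools used to create this issue/PR?
    (none / assisted / largely generated)
  - If AI tools were used, what validation did you perform?
    (read the full diff, ran the test suite, reproduced the issue by hand, ...)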

The "hard-line policy" would then shift from being "used LLM tools" to "lied on the LLM usage disclosure", and it feels a lot less like selective enforcement (from my perspective). Obviously it won't stop these spammy issues/PRs, but neither will a hard-line policy against all AI.


I've been thinking about this a bit over the past few days, and I think this is a fairly reasonable middle ground (and also allows maintainers to decide how they wish to engage with LLM-generated work).

Hope you can forgive this tangent; I considered posting this in the GH thread, but you asked nicely not to... So hopefully this is a middle ground you can excuse or ignore.

First, my original comment was going to ask if you've looked at what any other reputable repos are doing. Specifically popular FOSS projects that are not backed by a company looking to sell AI. Do any of them have a positive policy, or positions that you want to include?

Second, if I was forced to take a stand on AI, I would duplicate the policy from Zig. I feel their policy hits the exact ethos FOSS should strive for. They even ban AI for translations, because the reader is just as capable a participant. And importantly, asking the author to do their best (without AI), and trusting the reader to also try their best, encourages human communication. It also gives the reader control and knowledge over the exact amount of uncertainty introduced by the LLM, which is critically important to understanding a poor-quality bug report from a helpful user who is honestly trying to help. Lobste.rs' GitHub disallows AI contributions for an entirely different reason that I haven't seen covered in your GH thread yet.

Finally, you posted the issue as an RFC, but then explicitly excluded HN from commenting on it. I think that was a fantastic decision, and expertly written. (I also appreciate that lesson in tactfulness :) ) That said, if you're actually interested in requesting comments or thoughts you wouldn't have considered, I would encourage you to make a top-level RFC comment in this thread. There will likely be a lot of human slop to wade through, but occasionally I'll uncover a genuinely great comment on HN that improves my understanding. Here, I think the smart pro-AI crowd might have an argument I want to consider but would be unlikely to come up with on my own, because of my bias about the quality of AI. Such a comment would be likely to appear on HN, but the smart people I'd want to learn from would never comment on the GH thread now, and I appreciate it when smart people I disagree with contribute to my understanding.

PS Thanks for working on opencontainers, and caring enough to keep trying to make it better, and healthier! I like having good quality software to work with :)


> I considered posting this in the GH thread, but you asked nicely not to... [...] you posted the Issue as an RFC. but then explicitly excluded, HN from commenting on the issue. I think that was a fantastic decision, and expertly written. (I also appreciate that lesson in tactfulness :) ) That said, if you're actually interested in requesting comments or thoughts you wouldn't have considered, I would encourage you to make a top level RFC comment this thread.

Well, I posted this as an RFC for other runc maintainers and contributors, I didn't expect it to get posted to Hacker News. I don't particularly mind hearing outsiders' opinions but it's very easy for things to get sidetracked / spammy if people with no stake in the game start leaving comments. My goal with the comment about "don't be spammy" was exactly that -- you're free to leave a comment, just think about whether it's adding to conversation or just looks like spam.

> Specifically popular FOSS projects that are not backed by a company looking to sell AI. Do any of them have a positive Policy, or positions that you want to include?

I haven't taken a very deep look, but from what I've seen, the most common setups are "blanket ban" and "blanket approval". After thinking about this for a few days, I'm starting to lean more towards:

  1. LLM use must be marked as such (upfront) so maintainers know what they are dealing with, and possibly to (de)prioritise it if they wish.
  2. Users are expected to (in the case of code contributions) have verified that their code is reasonable and they understand what it does, and/or (in the case of PRs) to have verified that the description is actually accurate.
Though if we end up with such a policy we will need to add AGENTS.md files to try to force this to happen, and we will probably need to have very harsh punishments for people who try to skirt the requirements.
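(As a sketch of what such an AGENTS.md might say -- wording purely illustrative, not a drafted policy:)

  # Contribution rules for coding agents
  - Any issue or PR you help produce must state, in its description, that an
    LLM/agent was used and which tool it was.
  - Do not open a PR unless the human driving you has read the full diff and
    verified that the description matches it.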

> Lobste.rs github disallows AI contribution for an entirely different reason I haven't seen covered in your GH thread yet

AFAICS, it's because of copyright concerns? I did mention it in my initial comment, but I think far too much of our industry is turning a blind eye to that issue, so focusing on it is just going to lead to drawn-out arguments with people cosplaying as lawyers (badly). I think that even absent the obvious copyright issues, it is not possible to honestly sign the Developer Certificate of Origin[1] (a requirement to contribute to most Linux Foundation projects), so AI PRs should probably be rejected on that basis alone.

But again, everyone wants to discuss the utility of AI so I thought that was the simplest thing to start the discussion with. Also the recent court decisions in the Meta and Anthropic cases[2] (while not acting as precedent) are a bit disheartening for those of us with the view that LLMs are obviously industrial-grade copyright infringement machines.

[1]: https://developercertificate.org/ [2]: https://observer.com/2025/06/meta-anthropic-fair-use-wins-ai...


> it's because of copyright concerns?

Nominally, yes. But I think I would describe it as risk tolerance. I'm going to be one of those bad cosplayers and assert that the two rulings mentioned, even if they were precedent-setting, don't actually apply to the risks themselves. Whether you could win a case is much less important than whether you could survive the court costs. There's no doubt some value in LLM-based code generation to many individuals. But does its value outweigh the risks to a community?

> and we will probably need to have very harsh punishments for people who try to skirt the requirements.

I would need to spend hours of time to articulate exactly how uncomfortable this would make me if I was working alongside you. So please forgive this abbreviated abstract. One of the worst things you can do to a community is put it on rails towards an adversarial relationship. There's going to be a lot of administrative overhead to enabling this; it will be incredibly difficult to get the fairness correct the first time, and I assume (possibly without cause?) it's unlikely to feel fair to everyone if you ever need to enforce it. Is that effort and attention and time best spent there?

I believe that no matter what you decide, blanket acceptance, vs blanket denial, vs some middle ground, you're going to have to spend some of the reputation of the project on making the new rule.

If you ban it, you will turn away some contributions or new contributors, and a small subset of committers may see their velocity decrease. This counts for some value change (some positive, some negative), but also accounts for decreased time costs... or rather, it enables you to spend more time on people and their work instead.

If you allow it, you adopt a large set of new poorly understood risks, administrative overhead, and time costs you could have spent working with other people... It will also turn away contributors.

I'm not going to pretend like there was a chance in hell anyone should believe that I was likely to contribute to runc. It's possible in some hypothetical, but extremely unlikely in the current reality. And if I cared enough about the diff I wanted to submit upstream, I still would open a PR... but I saw an AGENTS.md in a different repo that I was considering using, was disappointed, and decided not to use that repo. Seeing runc embrace AI code generation would, without a doubt, cause me to look for an alternative; I assume a reasonable alternative probably doesn't exist, and I would resign myself to the disappointment of using runc. I agree with your argument that it's commercial-grade copyright laundering, but that's not my core ethical objection to its use.

> In the beginning the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move.

You're damned if you do, and damned if you don't. So the only real suggestion that I have is make sure you remember to optimize for how you want to spend your time. Calculate not just the expected value of the code within the repo, but the expected value of the people working on the repo.


> I would need to spend hours of time to articulate exactly how uncomfortable this would make me if I was working along side you.

I think this came out a little wrong -- my point was that if we are going to go with a middle-ground approach then we need to have a much lower tolerance for people who try to abuse the trust we gave in providing a middle-ground. (Also, there is little purpose in having a policy if you don't enforce it.)

For instance: someone knows that I will deprioritise LLM PRs, and instead of deciding to write the code themselves, or accepting that what I work on is my own personal decision to make, they decide to try to mask their LLM PR and lie about it -- I would consider this to be completely unacceptable behaviour in any kind of professional relationship.

(For what it's worth, I also consider it bad form to submit any patches or bug reports generated by any tool -- LLM or not -- without explaining what the tool was and what you did with it. The default assumption I have when talking to a human is that they personally did or saw something, but if a tool did it then not mentioning it feels dishonest in more ways than one.)

I did see that lobste.rs did a fairly cute trick to try to block agentic LLMs[1].

[1]: https://github.com/lobsters/lobsters/pull/1733


> I think this came out a little wrong

I think it came out exactly perfectly. Unrelated to this specific topic, I've been thinking a lot lately about reward vs punishment as a framework for promoting pro-social environments. I didn't read far into what you said. I was merely pattern matching it back to the common mistakes I see and want to discourage.

> but if a tool did it then not mentioning it feels dishonest in more ways than one.

Yeah, plagiarism is shockingly common. It's a sign of lacking the skill or ability to entertain 2nd-order or 3rd-order thoughts/ideas.


I would potentially accept it (although you should not lie about it, and if someone's policy says they specifically do not want this, then you should not send it). For my own projects, there are reasons (having to do with the way the version control is set up) that I cannot accept direct contributions anyways, and I usually make my own changes to any contributions anyways (but anyone else can make their own copy with their own changes however they want, whether or not I accept it). Since it is honest, and you had reviewed it yourself before submitting it, and also tested it, and I would review it again anyways, it would be acceptable. I would still much prefer to receive contributions that do not use an LLM, but the way you do it is much better than the other LLM-generated stuff, which does not meet the threshold of being acceptable.

If I were the maintainer, and you specified up front that your PR was largely written by an LLM, I would appreciate it. I may prioritize it lower than other PRs, perhaps, but that's about it.

I think it's also important to disclose how rigorously you tested your changes, too. I would hate to spend my time looking at a change that was never even tested by a human.

It sounds like you do both of these. Judging by the other replies, it seems that other reviewers may take a harsher stance, given the heavily polarized nature of LLMs. Still, if you made the changes and you're up front about your methodology, why not? In the worst case, your PR gets closed and everybody moves on.


They don't want your contribution, so don't disrespect them by trying to make it.

> I can't imagine that any policy against LLM code would allow this sort of thing, but I also imagine that if I don't say "this was made by a coding agent", that no one would ever know. So, should I just stop contributing, or start lying?

If a project has a stated policy that code written with an LLM-based aid is not accepted, then it shouldn't be submitted, same as with anything else that might be prohibited. If you attempt to circumvent this by hiding it and it is revealed that you knowingly did so in violation of the policy, then it would be unsurprising for you to receive a harsh reply and/or ban, as well as a revert if the PR was committed. This would be the same as any other prohibition, such as submitting code copied from another project with an incompatible license.

You could argue that such a blanket ban is unwarranted, and you might be right. But the project maintainers have a right to set the submission rules for their project, even if it rules out high-quality LLM-assisted submissions. The right way to deal with this is to ask the project maintainers if they would be willing to adjust the policy, not to try to slip such code into the project anyway.


"should I stop doing this thing that people are explicitly saying they don't want me to do or should I keep doing this thing that people are explicitly saying they don't want me to do???"

As with everything, there's always nuance. If everyone followed a similar mindset to the comment you were replying to, LLM-generated PR issues likely wouldn't be as much of a problem and we wouldn't even be here discussing it.

> [append] Getting a lot of hate for this, which I guess is a pretty clear answer. I guess the reason I'm not receiving the "fuck off" clearly is because when I see these threads of people complaining about AI content, it's really clearly low-quality crap that (for example) doesn't even compile, and wastes everyone's time.

I think you don't deserve the downvotes and, if you really do what you say you do, that's the ONLY way to use LLMs for coding and contributing to opensource software, or to a company's software. Sadly, the vast majority of LLM users don't and will never use it like that. And while they can get fired for being useless monkeys in a real company, they will keep sending PRs to opensource software, so that's clearly a different scenario that needs a different solution.

