
It all depends on the key exchange mechanism used at the start of the TLS session. Some key exchanges have a property called “perfect forward secrecy” (PFS), which means it’s not possible to decrypt the TLS session after the fact unless one of the nodes logs the session key(s). Ephemeral Diffie-Hellman (DHE) and its elliptic-curve variant (ECDHE) are two key exchanges that provide a PFS guarantee.
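As a rough sketch (using Go’s crypto/tls; the hostname is just a placeholder): pinning a connection to TLS 1.3 forces an ephemeral (EC)DHE key exchange, so you get PFS unless one side deliberately logs the session keys, e.g. via Config.KeyLogWriter, which is exactly what you’d do to decrypt the capture in Wireshark.

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        conf := &tls.Config{
            // TLS 1.3 only defines ephemeral key exchanges, so PFS comes for free.
            MinVersion: tls.VersionTLS13,
            // KeyLogWriter: someFile, // logging session keys like this is what breaks PFS after the fact
        }
        conn, err := tls.Dial("tcp", "example.com:443", conf)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        state := conn.ConnectionState()
        fmt.Printf("TLS version: %#x, cipher suite: %#x\n", state.Version, state.CipherSuite)
    }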

Have you tried Wireshark?

It’s how not to get fired, ostracized, etc. I don’t understand how you read that as ego.

Anyone sufficiently motivated and well funded can just run their own abliterated models. Is your worry that a government has access to such models, or that Anthropic could be complicit?

I don’t think this constitution has any bearing on the former and the former should be significantly more worrying than the latter.

This is just marketing fluff. Even if Anthropic is sincere today, nothing stops the next CEO from choosing to ignore it. It’s meaningless without some enforcement mechanism (except to manufacture goodwill).


It sounds like you speak from experience

Because of findings like this

https://www.anthropic.com/research/small-samples-poison

(To save you a click, the headline: “A small number of samples can poison LLMs of any size.”)

The way I think of it is, coding agents are power tools. They can be incredibly useful, but can also wreak a lot of havoc. Anthropic (et al) is marketing them to beginners and inevitably someone is going to lose their fingers.


I understand the need, but I don't understand why a VM or Docker is not enough. Why are people creating custom wrappers around VMs/containers?

Docker isn't virtualization; it's not that hard to break out into the underlying system if you really want to. But as for VMs: they are enough! They're also a lot of boilerplate to set up, manage, and interact with. yolo-cage is that boilerplate.
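For concreteness, here's roughly the shape of that boilerplate (an illustrative sketch only, not yolo-cage's actual code; the qcow2 image and the SSH port are placeholders):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Boot a throwaway VM for the agent; assumes qemu-system-x86_64 is installed
        // and agent.qcow2 is a prebuilt disk image (both are assumptions here).
        cmd := exec.Command("qemu-system-x86_64",
            "-enable-kvm",
            "-m", "4096",
            "-smp", "2",
            "-nographic",
            "-drive", "file=agent.qcow2,if=virtio",
            // user-mode networking: the guest can reach out, the host sshes in on localhost:2222
            "-nic", "user,hostfwd=tcp::2222-:22",
        )
        cmd.Stdin = os.Stdin
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }

And that's before image building, snapshots, file sync, and cleanup, which is the part people actually want wrapped.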

I don’t think that’s quite fair. What would you infer from the absence of such a commit message?

> Moving slower is usually faster long-term granted you think about the design, but obviously slower short-term, which makes it kind of counter-intuitive.

Like an old mentor of mine used to say:

“Slow is smooth; smooth is fast”


> Being able to learn from the code is a core part of the ideology embedded into the GPL.

I have to imagine this ideology was developed with humans in mind.

> but LLMs learning from code is fair use

If by “fair use” you mean the legal term of art, that question is still very much up in the air. If by “fair use” you mean “I think it is fair” then sure, that’s an opinion you’re entitled to have.


> I have to imagine this ideology was developed with humans in mind.

Actually, you don't have to. You just want to.

N=1 but to me, LLMs are a perfect example of where the "ideology embedded into the GPL" benefits the world.

The point of Free Software isn't for developers to sort-of-but-not-quite give away the code. The point of Free Software is to promote self-sufficient communities.

GPL through its clauses, particularly the viral/forced reciprocity ones, prevents software itself from becoming an asset that can be rented, but it doesn't prevent business around software. RMS/FSF didn't make the common (among fans of OSS and Free Software) but dumb assumption that everyone wants or should be a developer - the license is structured to allow anyone to learn from and modify software, including paying a specialist to do it for them. Small-scale specialization and local markets are key for robust and healthy communities, and this is what Free Software ultimately encourages.

LLMs becoming a cheap tool for modifying or writing software, even by non-specialists (or at least people who aren't domain experts), furthers those same goals, by increasing individual and communal self-sufficiency and self-reliance.

(INB4: The fact that good LLMs are themselves owned by some multinational corps is irrelevant - much in the same way as cars are an important tool for personal and communal self-sufficiency, despite being designed and manufactured by a few large corporations. They're still tools ~anyone can use.)


Something can be illegal, and something can be technically legal but at the same time pretty damn bad. There is the spirit and the letter of the law. They can never be in perfect agreement, because as time goes on, bad guys tend to find new workarounds.

So either the community behaves, or the letter becomes more and more complicated, trying to be more specific about what should be illegal. Now that the GPL is trivially washed by asking a black box trained on GPL'd code to reproduce the same thing, that might be inevitable, I suppose.

> They're still tools ~anyone can use

Of course, technology itself is not evil, just like crypto or nuclear fission. In this case, when we are discussing harm, we are almost always talking about commercial LLM operators. However, when the technology is mostly represented by those operators, it doesn't seem necessary to add a caveat every time LLMs are mentioned.

There's hardly a good, truly fully open LLM that one can actually run on one's own hardware. Part of the reason is that hardly anyone, in the grand scheme of things, even has the hardware required.

(Even if someone is a techie and has the money and knows how to set up a rig, which is almost nobody in the grand scheme of things, now big LLM operators make sure there are no chips left for them.)

So you can buy and own (and sell) a car, but ~nobody can buy and run an independent LLM (and obviously not train one). ~everyone ends up using a commercial LLM powered by some megacorp's infinite compute and scraping resources, paying that megacorp one way or another, and ultimately helping them do more of the stuff that they do, like harming OSS.


LLMs spitting out GPL code seems perfectly in line with the spirit to me. The goal is to make it so that users have the freedom to make software behave in ways that suit them. Things kicked off when some printer could not be made to work correctly because of its proprietary drivers. LLMs are a huge multiplier for that: now even people who don't know how to program can customize their software! We're already approaching (or at?) the point where local agents on commodity hardware (a few thousand dollars' worth of GPUs, which was the nominal cost of a 90s PC) are able to make whatever changes you want given the correct feedback loops. Sounds good to me.

> LLMs spitting out GPL code seems perfectly in line with the spirit

Only if the spat-out code is GPL-licensed, which it isn't.


That car analogy seems really weak. It might make sense, but only if we replace Ford, Chevy, et al with Enterprise or Hertz etc.

> Actually, you don't have to. You just want to.

Fair.

> The point of Free Software isn't for developers to sort-of-but-not-quite give away the code. The point of Free Software is to promote self-sufficient communities.

… that are all reliant on gatekeepers, who also decide the model ethics unilaterally, among other things.

> (INB4: The fact that good LLMs are themselves owned by some multinational corps is irrelevant - much in the same way as cars are important tool for personal and communal self-sufficiently, despite being designed and manufactured by few large corporations. They're still tools ~anyone can use.)

You’re not wrong. But wouldn’t the spirit of Free Software also apply to model weights? Or do the large corps get a pass?

FWIW I don’t have a problem with LLMs per se. Just models that are either proprietary or effectively proprietary. Oligarchy ain’t freedom :)


> > Actually, you don't have to. You just want to.

> Fair.

I don't think it's fair. That ideology was unquestionably developed with humans in mind. It happened in the 80s, and back then I don't think anyone had the crazy idea that software could think for itself, such that terms like "use" and "learn" could apply to it. (I mean, it's a crazy idea still, but unfortunately not to everyone.)

One can suggest that free software ideology should be expanded to include software itself among the beneficiaries of the license, not just human society. That's a big call and needs a lot of proof that software can decide things on its own, and not just do what humans tell it.


> It happened in the 80s, and back then I don't think anyone had the crazy idea that software could think for itself, such that terms like "use" and "learn" could apply to it. (I mean, it's a crazy idea still, but unfortunately not to everyone.)

Sure they did. It was the golden age of Science Fiction, and let's just say that the stereotype of programmers and hackers being nerds with a sci-fi obsession actually had a good basis in reality.

Also those ideas aren't crazy, they're obvious, and have already been obvious back then.


> It was the golden age of Science Fiction, and let's just say that the stereotype of programmers and hackers being nerds with a sci-fi obsession actually had a good basis in reality.

At worst you are trying to disparage the entire idea of open source by painting the people who championed it as idiots who cannot tell fiction from reality. At best you are making a fool of yourself. If you say that the free software philosophy means "also, potential sentient software that may become a reality in 100 years" everywhere it mentions "users" and "people", you'd better quote some sources.

> Also those ideas aren't crazy, they're obvious, and have already been obvious back then.

Fire-breathing dragons. Little green extraterrestrial humanoids. Telepathy. All of these ideas are obvious, and have been obvious for ages. None of these things exist. Sorry to break it to you, but even if an idea is obvious, that doesn't make it real.

(I'll skip over the part where if you really think chatbots are sentient like humans then you might be defending an industry that is built on mass-scale abuse of sentient beings.)


> I have to imagine this ideology was developed with humans in mind.

Given what a big deal RMS made over not discriminating over purpose (https://www.gnu.org/philosophy/free-sw.html#run-the-program), I think that is far from clear.


> question is still very much up in the air

It is not up in the air at all. It's completely transformative.


1. It's decided by courts in the US, and US courts are currently very friendly to big tech. At this point, if they deny this and say something that undermines the industry, it's going to be a big economic blow; the country is way over-invested in this tech and its infrastructure.

2. "Transformative means fair" is the old idea from pre-LLM world. That's a different world. Now those IP laws are obsolete and need to be significantly updated.


Last time I checked, there are still undecided cases wrt fair use. Sure, it’s looking favorable for LLM training, but it’s definitely still up in the air.

> it’s completely transformative

IANAL, but apparently it hinges on how the training material is acquired


> IANAL, but apparently it hinges on how the training material is acquired

That doesn't make sense. You are either transforming something or you are not. There might be other legal considerations based on how you acquired it, but that doesn't affect whether something is transformative.


So there are mixed messages, per my understanding. Kadrey v Meta seems to favor the transformative nature. Bartz v Anthropic went to summary judgement but the court expressed skepticism that the use in that case was “transformative”. We won’t know because of the settlement.

Again, IANAL, so take this with a big grain of salt.


Is that the case?

LLMs give the most likely response to a prompt. So if you prompt one with "find security bugs in this code", it will respond with "This may be a security bug" rather than telling you "you fucking donkey, this curl code has already been eyeballed by hundreds of people; do you think a statistical model will find something new?"
