Hacker News | Wowfunhappy's comments

I mean, if we're talking any pair of headphones... a good television certainly costs more than that, why should good sound be less worthy of investment?

Prior to this announcement, all 1M context use consumed "extra usage", it wasn't included in a normal subscription plan.

So, I’ve been using Opus 4.6 1M daily since it was first available to 20x Max users. What I think has happened is that even in doing so, I have not actually exceeded the plan token limits and therefore haven’t been charged for “extra usage” (just double checked). So, unless there’s a billing mistake or delay, “any usage” != “extra usage”, which is what I was always unclear about. I am careful to iterate with Claude on plans in plan mode, followed by clearing the context and executing. I think I am hovering around the higher end of the smaller window model, where I would otherwise have seen auto-compaction run.

Another reason for less token usage is that 4.6 is much better at delegating agents (its own explorer agents or my custom agents) to avoid cluttering the window.


You and Bigstrat2003 are arguing a technicality, and you're technically correct, but in context I think that's somewhat beside the point. Skrebbel and Layer8 are focused on the cultural associations of "open source" development, and this mismatch is causing everyone to talk past each other.

The original post in this thread was:

> This is because Carmack doesn't really do OSS, he just does code dumps and tacks on a license ("a gift"). That's of course great and awesome and super nice, but he's not been painstakingly and thanklessly maintaining some key linux component for the last 20 years or something like that. It's an entirely different thing; he made a thing, sold it, and then when he couldn't sell more of it, gave it away. That's nice! But it's not what most people who are deep into open source mean by the term.

Skrebbel probably shouldn't have said that Carmack "doesn't really do OSS", but what they clearly meant was, Carmack doesn't participate in the sort of community development as the Linux kernel or Apache or whatever.


More succinctly, Carmack only contributes his code to OSS, not his time, and shouldn't impose his values on the wider community that contributes both.

> technically correct, but in context I think that's somewhat beside the point

Talking past people to argue on semantics and pedantry is a HN pastime. It may even be it's primary function.


Code gifted absolutely includes the time taken to write it.

case in point

As pointed out in the OP comment, it's basically 'money for jam' by the point he releases the source code:

> It's an entirely different thing; he made a thing, sold it, and then when he couldn't sell more of it, gave it away. That's nice!

Carmack has extracted as much profit as he could care for from the source code. Releasing the code earns warm fuzzy feelings at zero cost, while keeping it closed source renders zero benefit to him.


its*

Well done

>“Primary” function

If that was the intent don’t you think it would be stated somewhere, or in the faq?

>“Talking” past

It’s only text, there’s no talking past. You can’t talk past someone when the conversation isn’t spoken. At best, you might ignore what they write and go on and on and on at some length on your own point instead, ever meandering further from the words you didn’t read, widening the scope of the original point to include the closest topic that isn’t completely orthogonal to the one at hand, like the current tendency to look for the newest pattern of LLM output in everyone’s’ comments in an attempt to root out all potential AI generated responses. And eventually exhaust all of their rhetoric and perhaps, just maybe, in the very end, get to the


I lol'd.

This. ^^^

Fwiw, I made something similar, but it targets an ancient version of OS X. It's theoretically possible it works on modern macOS too, but I haven't tested it.

https://github.com/Wowfunhappy/media-subscriptions-prefpane

I've been using a version of this for five years, although until recently the PrefPane was built via janky Applescript. I rewrote it in proper Objective-C last summer.


Why does the homemade butter spoil so quickly?

It usually contains significant amounts of milk protein, which contribute to flavor but spoil about as quickly as milk does. Washing the butter thoroughly until well after the water is clear will improve storage time, as will salting the butter and thoroughly drying it.

Doesn't that remove the very milk protein that makes it taste better?

Yes, it's a balance between flavor and storage time. If you plan to use it immediately, unwashed butter is best.

But milk does not spoil in 3 days. Why would the butter?

Natural, organic milk does. What you are probably used to is pasteurized, treated with short bursts of heat. For at least 20 years now, that has applied to almost anything milky you can find in the refrigerated sections of stores. I'm not talking about the stuff which doesn't need to be cooled until opened; that's heated even longer and pasteurized harder.

What? Raw milk doesn't expire in 3 days either. More like 2 weeks. And butter, unlike cheese, has no problem being made from pasteurized milk, so I'm not sure why you'd bring that up.

I brought that up in response to the claim that milk doesn't go bad that fast, which is against my experience. Maybe I should have defined that more precisely?

Under which storage conditions? Refrigerated? Check. Closed container? Check. Climate? Any time of the year, central Europe. Check. Any time of the year somewhere in the Rockies, on the 'Western Slope', at 2600m altitude. Check. After 3 to 4 days it begins to smell and taste different. After which I won't touch/consume it anymore.

I'd be really interested in the stuff lasting 2 weeks, and the conditions under which that's possible?

edit: Again, not that highly pasteurized, homogenized, otherwise treated stuff, but fresh from the cow's udder (let's call this really raw milk, which isn't on shelves anywhere, AFAIK), or with only the slightest treatment, like 'fully organic/bio', which nowadays has a refrigerated shelf life of something like 2 weeks; there aren't any other options anymore. It's all treated. And that stuff still goes bad a few days after opening.


When proper raw milk starts to go bad, you can keep it at room temperature for a day or so and get something similar to yoghurt. It was done all the time when I was little. I grew up on a farm; the milk came from another farm in the village by the time I was born.

That may be the case, but it isn't what I meant to say, which was just the (refrigerated) shelf life of the stuff for drinking, and for preparation of other stuff, assuming drinkability of it.

Secondary usage of it for other stuff is another matter.


All I know is that I used to buy raw milk from a local farm and it lasted about 2 weeks in the fridge until it tasted bad. Google suggests it lasts 1-2 weeks. I keep my fridge colder than the recommended settings.

It doesn’t. People have been doing this without refrigeration for a long time. The above poster did not wash off the buttermilk at the end which would cause it to spoil.

When unrefrigerated, they used copious amounts of salt to keep butter from spoiling. So much so that you'd have to first wash the butter (wash the salt away) before using it.

I'm not sure it does. It seems to last similarly to me when I make my own as long as I make sure to use sterilized containers to make it and such. It isn't as long as margarine which is maybe the comparison? Not sure.

> If you signed up with your Apple on the iOS Claude app, to access your account on the computer, you have to open the passwords app and copy your random email address and paste it into the Claude website login.

Isn't this basically Apple's fault? When you signed up, Apple provided a fake email address in lieu of your real one. This is great for privacy but means the service has the wrong email.

I'm sure they didn't want to provide an Apple sign in option at all, but it's required by App Store rules.


They could also just implement Sign in with Apple on their website; they already support Sign in with Google, so not supporting Apple is still a weird choice they are making.

Apple should not have had to require developers to have options other than Google for authentication, but clearly some companies have to be dragged kicking and screaming.

So clearly they support it, and there is no reason it should not work on the web also.


A vendor doesn't have to bend for another.

Always best to sign in with your own email address.


There are a lot of websites that only support third party login, so that is not always an option.

They don't have to bend for another, but they made a choice to put an app on iOS. They added support for apple signin, and then for some reason did not put it on their website.

You can criticize Apple for requiring that all you want, but they clearly have support for it and are choosing to not put it on their website which is causing a worse user experience.

If Apple did not support website login then sure, but they do. So the ability to fix this is on Anthropic (and many other websites).

If you are already going to support third party login you should not limit it to only Google accounts and there is no reason to support Apple on iOS and not the web.

Also for the record, Apple only requires sign in with apple if you already support third party authentication. So if you are already going to support that, giving the user more choice (and making it so we are all a bit less dependent on google) is a good thing.


No criticism from me towards apple or Anthropic. Both parties made their choice. Apple was late to the identity business and the other ships had already sailed.

Third party logins are an extension and a massive risk to any website that doesn't include email hosting.

We have seen identity providers disappear, and people may change their minds.

Easiest way is to register your own domain and use it with an identity provider of your choice, and be able to move it anywhere.

Otherwise we are faceless citizens of a corporation that can control access to our identity and everything attached to it, without recourse or access to anyone.


Bruh.

Are you seriously trying to justify offering Sign in with Google but not ALSO offering Sign in with Apple, the method which HELPS users maintain their privacy, because of some contorted principle? What the actual f.

Anthropic's UX is just trash, the worst of all the major AI products.

They have this "I'm special" syndrome where they think they can get away with doing shit weirdly and not offering basic features that everyone else does. That's the reason why I never purchased any of their services again after the first month, and had to replace my payment info with a throwaway card because they wouldn't let me remove it, again unlike everyone else.


I don't think it's hard to understand why a service would want to support Google as an identity provider but not Apple. Google is probably the most commonly used provider out there, at least outside of the enterprise space.

Apple's identity service is not as common, and newer than the ones that were established before.

It's ok that Anthropic wasn't a fit for your prompting preference, it doesn't have to work for everyone, and it doesn't mean it won't work for others. LLMs in general have proven that trying something once a few months ago can be a great way to miss changes. There's something out there for everyone.


> Always best to sign in with your own email address.

Using a randomly generated email per service is a huge improvement over always using the same email.


> Always best to sign in with your own email address.

Oh boy

Saying this in 2026 is just.. oh man. just wow


Not really. It's the user's fault. Apple provides an option to hide your email, it's not required. It's an option that shows up when you're prompted to create an account.

Oh, I agree with this.

My original thinking was that Apple makes it too easy for a general audience to hide their email without considering the implications (the service won't know your email). But of course there's a tension here, since you also want the option to be easy and accessible.

The party I do not consider at fault in this case is Anthropic.


But Anthropic should test their app and login experiences on the phones they ship to.

> I'm sure they didn't want to provide an Apple sign in option at all

But they wanted to provide a Google Sign In? wth?

> This is great for privacy but means the service has the wrong email.

So harm the users to benefit the service? wtf?

I don't want to give my real email or anything to random services, specially not one like Claude where they don't even let you remove your payment info.


> I don't want to give my real email or anything to random services, specially not one like Claude where they don't even let you remove your payment info.

The original complaint was:

>> If you signed up with your Apple on the iOS Claude app, to access your account on the computer, you have to open the passwords app and copy your random email address and paste it into the Claude website login.

Either you use your original email or you use a per-service email. Apple helps you do the latter, but this does come with UX tradeoffs.

Using a per-service email, then complaining that the service does not have your real email, strikes me as misguided.


> but this does come with UX tradeoffs.

Only when a dumb service refuses to support Sign In with one pro-privacy provider but does for another anti-privacy one.

Anyway I've voted by having a ChatGPT/Codex subscription for 1 year and only tried Claude for 1 month. Not missing anything.


> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

> If you flag, please don't also comment that you did.

I don't understand why you cut these, they seem important! (I can understand the others, which feel either implied or too specific.)


Of course they're important, but they're also implicitly encoded into the culture. Cutting something from the guidelines doesn't mean the rule is canceled. HN has countless rules that don't appear explicitly in https://news.ycombinator.com/newsguidelines.html.

I think I'm going to put that one back, though, because it's not a hill I want to die on and I know what arguing with dozens of people simultaneously feels like when you only have 10 minutes.


> Cutting something from the guidelines doesn't mean the rule is canceled.

Understood, but I feel like I see people breaking these ones frequently, so removing the explicit guideline feels to me like a bad idea.


People break them whether they're in the list or not. But don't worry, we'll put that one back.

My experience with posted rules is that it's less about people following them preemptively than having an explicit reference to point to when they don't.

HN's long-standing policy has been toward fewer explicit rules, and looser rather than stricter interpretation. This particular one comes up often enough though that it's helpful to retain IMO, thanks for restoring the cut.

I've long made a practice of linking to moderator comments regarding policies when calling out deviations, as I'm sure the mods are aware, others might find that helpful. I've found it generally reduces the personal-irritation element going both ways, helps avoid derailing threads, and serves as a refresher to me on what standards apply.


I seem to recall a rule about "don't downvote something because you disagree with it", but I can't find anything like that.

Not sure if that's really solvable with rules, though.

My experience with downvotes is that people mostly use it as an "I don't like this" button, which is a proxy for "I couldn't think of a counterargument so I don't want to look at it."

(I noted recently that downvotes and counterarguments appear to be mutually exclusive, which I found somewhat amusing.)

Whereas I will often upvote things I personally disagree with, if they are interesting or well reasoned. (This seems objectively better to me, of course, but maybe it's a personality thing.)


Oh that one is a classic case of people 'remembering' a rule that never existed - there's a name for this illusion but I forget what it is.

See https://news.ycombinator.com/item?id=16131314 and https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... for history...


> 'remembering' a rule that never existed

Probably the Mandela effect!

https://en.wikipedia.org/wiki/False_memory#Mandela_effect


This was (maybe still is) part of "reddiquette." Like the guidelines and case law here, it often found its way into subreddit rules and comments from moderators.

To me it's just like how, growing up in Canada, we all assumed we had Miranda rights because we watched American TV.

But how useful is source code if it takes millions of dollars to compile? At that point, if you do need to make changes, it probably makes more sense to edit the precompiled binary. Even the original developers are doing binary edits in most cases.

I agree that open weight models should not be considered open source, but I also think the entire definition breaks down under the economics of LLMs.


There are lots of reasons to read through source code you never edit or recompile: security audits, interoperability, learning from their techniques, etc. And I think many of those same ideas apply to seeing the training data of a LLM. It will help you understand quickly (without as much experimentation) what it's likely to be good at, where its biases may be, where some kind of supplement (transfer learning? RAG? whatever) might be needed. And the why.

> security audits

If you are unable to run the multimillion-dollar training, then any kind of security audit of the training code is absolutely meaningless, because you have no way to verify that the weights were actually produced by this code.

Also, the analogy with source code/binary code fails really fast, considering that the model training process is non-deterministic. So even if you are able to run the training, you get different weights than those that were released by the model developers, and then... then what?


I probably shouldn't have led with that example because yeah, reproducible (and cheap) builds would be best for security audits. But I wouldn't say it's absolutely meaningless. At least it can guide your experimentation, and if results start differing radically from what you'd expect from the training data, that raises interesting questions.

If you're going through the effort to be open source you can probably set up fixed batch sizes and deterministic combination of batches without too much more effort. At least I hope it's not super hard.

> considering that model training process is non-deterministic

Why would it have to be? Just use PRNG with published seeds and then anyone can reproduce it.
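
To illustrate the seeding idea with a toy example (a stdlib sketch of the principle only; `train_toy_model` is a hypothetical stand-in, not any real framework API, and real training stacks would also need deterministic kernels and a fixed batch order):

```python
import random

def train_toy_model(seed: int, steps: int = 1000) -> list[float]:
    """Toy 'training' loop: nudges two weights with seeded random noise.
    Purely illustrative, but the published-seed principle is the same."""
    rng = random.Random(seed)  # PRNG initialized from a published seed
    weights = [0.0, 0.0]
    for _ in range(steps):
        grad = [rng.gauss(0, 1) for _ in weights]  # stand-in for a gradient
        weights = [w - 0.01 * g for w, g in zip(weights, grad)]
    return weights

# Two independent runs with the same published seed yield identical weights.
assert train_toy_model(seed=42) == train_toy_model(seed=42)
```

Anyone holding the seed and the training code can then rerun the loop and bit-compare the resulting weights.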


I have zero actual experience in training models, but in general, when parallelizing work: there can be fundamental nondeterminism (e.g., some race conditions) that is tolerated, whose recording/reproduction can be prohibitive performance-wise.
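
One concrete source of such nondeterminism is floating-point arithmetic: addition isn't associative, so a parallel reduction whose combination order depends on thread scheduling can produce different sums. A minimal sketch:

```python
# Floating-point addition is not associative, so the order in which
# parallel workers combine partial sums changes the result.
values = [1e16, 1.0, -1e16, 1.0]

left_to_right = ((values[0] + values[1]) + values[2]) + values[3]
reordered = (values[0] + values[2]) + (values[1] + values[3])

print(left_to_right)  # 1.0  (the lone 1.0 is absorbed by 1e16)
print(reordered)      # 2.0
assert left_to_right != reordered
```

Recording and replaying the exact reduction order is possible in principle, but can be prohibitively expensive at scale.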

Agree, this feels like a distinction that needs formalising...

Passive transparency: training data, technical report that tells you what the model learned and why it behaves the way it does. Useful for auditing, AI safety, interoperability.

Active transparency: being able to actually reproduce and augment the model. For that you need the training stack, curriculum, loss weighting decisions, hyperparameter search logs, synthetic data pipeline, RLHF/RLAIF methodology, reward model architecture, what behaviours were targeted and how success was measured, unpublished evals, known failure modes. The list goes on!


I'd also add training checkpoints to the list for active transparency. I think the Olmo models do a decent job, but it would be cool to see it for bigger models and for ones that are closer to state-of-the-art in terms of both architecture and algorithms.

Security audits, etc, are possible because binary code closely implements what the source code says.

In this case, you have no idea what the weights are going to "do", from looking at the source materials --- the training data and algorithm --- without running the training on the data.


Compute costs are falling fast, training is getting cheaper. GPT-2 costs pocket change to train, and now it costs pocket change to tune >1T parameter models. If it was transparent what costs went into the weights, they could be commodified and stripped of bloat. Instead the hidden cost is building the infrastructure that was never tested at scale by anyone other than the original developers, who shipped no documentation of where it fails. Unlike compute, this hidden cost doesn't commodify on its own.

yeah, the costs are definitely a factor and prohibitive in completely replicating an open source model. Still, there are a lot of useful things that can be done cheaply, including fine tuning, interpretability work, and other deeper investigations into the model that can't happen without the infrastructure.

Hi! I’d like to call out that the most interesting piece—the thing that made me go, oh my god—is likely https://github.com/Wowfunhappy/Celeste-64-Patched-For-Maveri...

I didn’t know this, but apparently Apple introduced a new format for shared libraries (dylibs) in macOS 12. Claude just up and wrote a C program to go through the binary of the new format and rewrite the bytes to convert it into the old format.
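
The actual converter in that repo is a C program; purely as an illustration of the kind of byte-level Mach-O inspection involved, here's a hypothetical Python sketch (not from the repo) that detects whether a 64-bit binary carries the newer LC_DYLD_CHAINED_FIXUPS load command, using constants from Apple's mach-o/loader.h:

```python
import struct

MH_MAGIC_64 = 0xFEEDFACF             # little-endian 64-bit Mach-O magic
LC_DYLD_CHAINED_FIXUPS = 0x80000034  # newer chained-fixups format
LC_DYLD_INFO_ONLY = 0x80000022       # classic dyld rebase/bind opcodes

def uses_chained_fixups(data: bytes) -> bool:
    """Walk the load commands of a 64-bit Mach-O and report whether it
    carries the newer LC_DYLD_CHAINED_FIXUPS command."""
    magic, = struct.unpack_from("<I", data, 0)
    if magic != MH_MAGIC_64:
        raise ValueError("not a little-endian 64-bit Mach-O")
    ncmds, = struct.unpack_from("<I", data, 16)  # mach_header_64.ncmds
    offset = 32                                  # sizeof(mach_header_64)
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<II", data, offset)
        if cmd == LC_DYLD_CHAINED_FIXUPS:
            return True
        offset += cmdsize
    return False
```

A converter along these lines would then rewrite the chained-fixup metadata back into the classic LC_DYLD_INFO rebase/bind opcodes; the sketch above only does the detection step.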


I’m on the Claude Max 5x plan. It was within that limit.

I actually can’t figure out how to see even the full chat log, it autocompacted during the night (presumably many times) and I can't seem to see past the most recent compaction.

