Hacker News | HyprMusic's comments

Set whitelist to a file path, then create that file with one genre per line for each genre you want to keep. You can base it on the top-level genres in this file (note that the whitelist isn't YAML, just one genre per line): https://raw.githubusercontent.com/beetbox/beets/master/beets...

Be sure to enable canonical so it converts the specific genres into their parent genre.
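Something like this in your beets config.yaml should do it (the whitelist path here is just an example):

```yaml
lastgenre:
  whitelist: ~/.config/beets/genres.txt   # one genre per line
  canonical: yes                          # collapse specific genres into parents
```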


I've tried many times to find a nice UI for beets and somehow never come across this. It is exactly what I've been searching for all these years... Thanks for sharing!


This feels like a cheap trick to drive users towards Claude Code. It's likely no coincidence that this happened at the same time they announced subscription access to Claude Code.

The Windsurf team repeatedly stated that they're running at a loss, so all this seems to have achieved is giving OpenAI an excuse to cut their third-party costs and drive more Windsurf users towards their own models.


> This feels like a cheap trick to drive users towards Claude Code

How did you come to this conclusion? It’s very much as he remarked: OpenAI acquired Windsurf, and OpenAI is Anthropic’s direct competitor.

It doesn’t make strategic sense to sell Claude to OpenAI. OpenAI could train against Claude weights, or OpenAI can cut out Anthropic at any moment to push their own models.

The partnership isn’t long-lasting, so it doesn’t make sense to continue it.


> OpenAI could train against Claude weights

OpenAI can always buy a Claude API subscription with a credit card if they want to train something. This change only prevents the Windsurf product from offering Claude APIs to their customers.


Other than, you know, terms and contracts.


But that's a completely different story, no? Cutting off Windsurf has nothing to do with enforcing that T&C.


Totally irrelevant, Anthropic isn’t cutting off OpenAI, it is cutting off Windsurf users.


Windsurf users can still plug in their own Anthropic key and continue using the models. It’s Windsurf subscribers (e.g. OpenAI customers) who use the models through the Windsurf service (with Windsurf's servers, now OpenAI's, acting as a proxy) that are getting cut off.

I don’t see how this is irrelevant. Windsurf is a first-party product of their most direct competitor. Imagine a car company integrating the cloud tech of a different manufacturer.


Exactly, it's like car manufacturers cutting off Android because Google owns Waymo. The only people that pay are the consumers.


Nobody is actually using Windsurf. It was an acquihire and a squashing of a competitor that gained ground in the enterprise contract market really early. Anyone doing agentic coding seriously is using open-source tooling with direct token pricing from the major model providers. Windsurf/Cursor/et al. are just expensive middlemen with no added value.


Which open-source agentic tooling are you using? I'm a fan of Aider, but I find it lacking on the agentic side of things. I've looked at Goose, Plandex, Opencode, etc. Which do you like?


Cline all the way: https://cline.bot/.

Haven't found anything else that even comes close.


Dang, was hoping for something terminal based <3 but thank you


If nobody is using it, then why cut off access?


Looks great. Before I get too excited, do you plan to release a per-token paid API, or is your target audience bigger companies who negotiate proper contracts?


I think we have one on the site right now -- it's roughly 4.1-mini pricing. We're not aiming to make money off of individual users, which is why we're trialing a free thing (and trying to partner with open-source frameworks). Our bread and butter is more companies doing this at scale & licensing.


Very useful. I just tried it with my spare racquet, and it seems about right. I have always wondered if my main racquet has lost tension, so it'll be really useful for that.

For making money, I'd suggest reaching out to professional re-stringers and asking if they want to advertise (once you have some nice analytics to brag about). Maybe even localise them based on geolocation data so you can have more re-stringers without cluttering it up. It's a value-add to your users so everyone should be happy.

You could probably even reach out to some YouTubers (I personally like BadmintonInsight). Since it's free, I would imagine some of them would do a video on it just to help their viewers.


Finding stringers nearby is a really cool idea. Yeah, BadmintonInsight is a really good channel. I'll try reaching out.


I'm assuming the low-latency cold starts are from a paused state, considering Chrome itself takes a few seconds to boot? Or have you found some clever way to snapshot a running Chrome and fork that?

Either way thanks for sharing.


It snapshots/pauses the entire unikernel instance after launching Chromium, and then resumes the instance in <20ms with exactly the same state.


Is that safe? I was under the impression that snapshot/resume of, e.g., anything running crypto libraries was a minefield of duplicate keys and reused nonces.


This looks great.

I have a few questions. 1. I'm assuming by the pricing it's "serverless" inference, what's the cold-start time like? 2. Any idea on inference costs?

Also just to reiterate what others say but the option of exporting weights would definitely make it more appealing (although it sounds like that's in the roadmap).


Thanks!

> I'm assuming by the pricing it's "serverless" inference, what's the cold-start time like?

Yeah, you could probably call it serverless inference. However, because all fine-tuned models are trained on the same base model(s), we have some interesting optimizations we can apply over standard "serverless" model deployment. The biggest is that we can keep the base model loaded in VRAM and only swap the trained weight deltas per request. This gives us sub-second cold-start times for inference in the average case.
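Not their actual implementation, but the scheme described (shared base weights kept resident, only a small per-tenant delta applied at request time, LoRA-style) can be sketched roughly like this; all names and shapes here are illustrative:

```python
import numpy as np

# The base model's weights stay loaded once (in a real system, in VRAM).
BASE_W = np.random.randn(512, 512).astype(np.float32)

def load_delta(tenant_id, rank=8):
    """Fetch a tenant's trained low-rank delta (A @ B).

    In practice this would come from fast storage; here we just
    fabricate one deterministically from the tenant id.
    """
    rng = np.random.default_rng(abs(hash(tenant_id)) % (2**32))
    A = rng.standard_normal((512, rank)).astype(np.float32)
    B = rng.standard_normal((rank, 512)).astype(np.float32)
    return A, B

def forward(x, tenant_id):
    # Only the tiny delta is swapped per request; BASE_W is shared,
    # so "cold start" is just loading A and B, not the full model.
    A, B = load_delta(tenant_id)
    return x @ (BASE_W + A @ B)

x = np.ones((1, 512), dtype=np.float32)
y = forward(x, "tenant-42")
print(y.shape)
```

The point of the trick is that the delta is orders of magnitude smaller than the base weights (here 2 × 512 × 8 values vs. 512 × 512), so per-tenant swap time stays far below a full model load.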

> Any idea on inference costs?

Right now, we’re pricing inference at $0.50/M input tokens and $2.50/M output tokens. That’s in a similar price range to, but a bit lower than, gpt-4o/Claude 3.5, which we consider the main models we’re "competing" with. As our goal is to democratize access to models/agents in the long run, we hope we can drop inference prices further, which should be enabled by some other optimizations we’re currently planning.
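For a quick feel of what those rates mean per request, a back-of-the-envelope calculator (the token counts are made-up examples):

```python
# Quoted rates: $0.50 per 1M input tokens, $2.50 per 1M output tokens.
def request_cost(input_tokens, output_tokens, in_rate=0.50, out_rate=2.50):
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

# e.g. a request with 100k input tokens and 20k output tokens:
print(request_cost(100_000, 20_000))  # roughly $0.10
```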


It's not that Europe has never developed the tech... the UK developed and launched an orbital rocket in the 60s/70s, but it had to be shipped to Australia to be launched. I can't remember the specifics, but I recall reading that Europe just isn't a geographically optimal place to launch rockets.

https://en.m.wikipedia.org/wiki/Black_Arrow


> has never developed the tech

Well... a European company controlled >50% of the commercial orbital launch market back in the late 80s and 90s.

That might have been tricky to achieve without developing the tech first.


It's more than suitable if you don't care where the debris lands.


Like China.


Mostly random videos and memes that you never asked for. Updates from friends are legitimately difficult to find amongst the noise.


Those memes and videos should only be coming from pages that you follow and groups that you're in.

I don't understand where your comment comes from, to be honest. I use Facebook a decent amount, and every single post in my feed comes from a page that I follow, a group that I'm in, or a friend.

I think the issue is when your friend Bob shares something from a page that you don't follow. People seem to blame Facebook for showing you irrelevant content, rather than blaming their friend Bob for sharing it. People share things on Facebook with the intent for you to see it. If you don't like it, then block the page Bob shared from. If Bob does a lot of sharing, then either unfriend him entirely, or at least unfollow so his crap stops showing up in your feed.


This is absolutely untrue. My feed literally only displays suggested content from pages I'm not following (with a small "Join" CTA above the page name). Most of it is garbage content I'm not interested in, and it isn't shared by my friends.


Does this mean it is limited to the model's internal memory? Meaning newer shows won't be in the recommendations because they're past the training cut-off?


That is likely true to an extent, though it's hard to say at what point it cuts off.

If a model was trained 6 months ago, for example, it will likely have some info on shows that came out this month, due to various data points talking about those shows as "upcoming" but not yet released. Because of that, it may still recommend "new" shows that have just released.

All that being said, I have to imagine that suggesting shows that have only just been released is likely the weak point of the system.

