Hacker News | colinsane's comments

> Why hasn’t there been a fork of nixos? And the folks who want to do things in a certain politically leaning way gravitate towards that and those that don’t stay.

v.s.

> Why hasn’t there been a fork of nixos? And the folks who want to do things in a certain politically leaning way stay and those that don’t gravitate towards that.

now let's spend the next few years arguing which of these is the correct proposition.

sure, it's more complicated: there are questions about _what_ to fork (Nix is an _ecosystem_, not necessarily a single repository), and certain things can't trivially _be_ forked (e.g. a multi-hundred-TB S3 cache that's actually critical infrastructure; project websites, wikis, countless automation services). how do you coordinate all the details of forking, if forking isn't actually as trivial as pushing the "fork" button? that requires highly capable leaders, and if the ecosystem were good at finding and promoting that type of leader, it wouldn't be in this place to begin with.

more optimistically, various parts of this ecosystem _have_ been forked, or reshaped, by various entities. things happen; sometimes that happening is just a lengthy process.


> I should be able to get a cheap / run my own 600B param model.

if the margins on hosted inference are 80%, then you need > 20% utilization of whatever you build for yourself for this to be less costly to you (on margin).

i self-host open-weight models (please: deepseek et al aren't open _source_) on whatever $300 GPU i bought a few years ago, but if it outputs 2 tokens/sec then i'm waiting 10 minutes for most results. if i want results in 10s instead of 10m, i'll be paying $30000 instead. and if i'm prompting it 100 times during the day, it's still idle 99% of the time.

coordinating a group buy for that $30000 GPU and sharing that across 100 people probably makes more sense than either arrangement in the previous paragraph. for now, that's a big component of what model providers, uh, provide.
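
to make that arithmetic concrete, here's a rough back-of-the-envelope sketch in Python. the 80% margin, $30000 GPU, 100 prompts/day and 10s responses are just the illustrative numbers from above, not real quotes:

    # back-of-the-envelope: self-hosting vs. hosted inference.
    # every number here is illustrative, taken from the comment above.
    HOSTED_MARGIN = 0.80        # assume the hosted price is 80% margin
    FAST_GPU_COST = 30_000      # hypothetical GPU fast enough for ~10s responses
    PROMPTS_PER_DAY = 100
    SECONDS_PER_RESPONSE = 10

    # fraction of the day the self-hosted GPU is actually doing work
    utilization = PROMPTS_PER_DAY * SECONDS_PER_RESPONSE / 86_400
    print(f"utilization: {utilization:.1%}")   # ~1.2%, i.e. idle ~99% of the time

    # if the provider's hardware cost is the (1 - margin) slice of what they
    # charge, and they run their hardware near-full, your own build only beats
    # their price once your utilization clears roughly that slice.
    print(f"break-even utilization: {1 - HOSTED_MARGIN:.0%}")   # ~20%

    # split the same GPU across a group and the picture changes:
    group_size = 100
    print(f"per-person share of the GPU: ${FAST_GPU_COST / group_size:,.0f}")
    # -> $300/person, at roughly 100x the utilization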


> So, there'll be a one-time, 'Oh man, we spent a lot of money and we didn't get anything for it.'


> [RSS] is a standard that websites and podcasts can use to offer a feed of content to their users, one easily understood by lots of different computer programs. Today, though RSS continues to power many applications on the web, it has become, for most people, an obscure technology.

arguing that RSS is dead because the average person doesn't understand it is like saying HTTP is dead for the same reason. neither is dead: we've just abstracted them to the point that they're no longer the front-facing part of any interaction.


wow, there's really _zero_ sense of mutual respect in this industry, is there? it's all just "let's make a buck by being total assholes to everyone around us".


> Are there cons of being more aggressive with these settings?

well, the con is you might unknowingly break some setups. take NetworkManager: after tightening it down, did you check both IPv4 and IPv6 connectivity? did you check that both the `dns=systemd-resolved` and `dns=default` modes of operation (i.e. who manages /etc/resolv.conf) work? did you check its ModemManager integrations, that it can still manage cellular connections? did you check that the openvpn and cisco anyconnect plugins still work? what about the NetworkManager-dispatcher hooks?
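
to make "tightening it down" concrete, here's the kind of thing i mean -- assuming the settings in question are systemd-style service sandboxing flags, and purely as an illustrative sketch, not a drop-in any distro actually ships:

    # hypothetical /etc/systemd/system/NetworkManager.service.d/harden.conf
    [Service]
    NoNewPrivileges=yes
    ProtectHome=yes
    PrivateTmp=yes
    # strict makes nearly the whole filesystem read-only; without the exception
    # below, the dns=default mode (NetworkManager writing /etc/resolv.conf
    # itself) can silently break.
    ProtectSystem=strict
    ReadWritePaths=/etc/resolv.conf /var/lib/NetworkManager
    # trim this list too far and DHCP (AF_PACKET) or IPv6 (AF_INET6) stops working
    RestrictAddressFamilies=AF_UNIX AF_NETLINK AF_INET AF_INET6 AF_PACKET

every line in there is a bet about how the service is actually used; the only way to know you haven't lost one of those bets is to test all the configurations above.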

> Why don't distros flip more of these switches?

besides the question of "how many distro maintainers actually understand the thing they're maintaining well enough to know which switches can be flipped without breaking more than 0.01% of user setups", there's the question of "should these flags be owned by the distro, or by the upstream package?" if the distro manages these, they'll eat more regressions on the next upstream release. if the upstream sets these, they can't be as aggressive without breaking one or two of their downstreams.


on the commerce front, it's really easy to find small-to-medium-sized vendors who accept Bitcoin for just about any category of goods now.

on the legal front, there's been some notable "wins" for cryptocurrency advocates: e.g. the U.S. lifted its sanctions against Tornado Cash (the Ethereum anonymization tool) a few months ago.

on the UX front, a mixed bag. the shape of the ecosystem has stayed remarkably unchanged. it's hard to build something new without bridging it to Bitcoin or Ethereum, because that's where the value is; but that also means Bitcoin and Ethereum aren't under much pressure to improve _themselves_. most of the improvements actually getting deployed optimize the interactions between institutions rather than the end-user experience directly.

on the privacy front, also a mixed bag. people seem content enough with Monero for most sensitive things. the appetite for stronger privacy at the cryptocurrency layer mostly isn't there yet, i think, because the news-worthy de-anonymizations we do see are by now attributed (rightly or wrongly) to parts of the operation _other_ than the actual exchange of cryptocurrency.


so you watch videos, listen to music, read books, and take/share photos on a phone, ipad, or tv. you seek a better experience doing those things, and your solution is to spin up some software _on a totally new device_ (a server).

that's a huge leap! i think most of us gloss over it, but the rest of the article is predicated on that leap.

the tv you're streaming video to probably runs Android by now. it has a stable internet connection, CPU, RAM, and probably a couple USB ports. why not install the Jellyfin server software on it, attach a USB hard drive, and let it be the machine that hosts all your media? why, actually, do you need to go out of your way to buy a completely new machine for this?

similar argument applies to Immich. you want to co-edit an album among several contacts. you're probably all uploading your photos from a phone. why not just have one of your always-on phones host that album? i wouldn't expect the battery drain from serving an album to a few friends to be much more than what it took to take those photos in the first place.

to a certain degree, you're "self-hosting" things on a physical server because that's the only platform on which we all still have the ability to run arbitrary workloads. solve that problem and everything becomes a _lot_ simpler.


what i hated about AI discourse a year ago was how far removed it was from anything concrete. nobody seemed to have a _purpose_ in building AI; no thing they wanted to use it _for_. it was a silly text or image generator that would someday become a "do everything" device, and the progression from here to there was unknowable: it was just a plot device for anyone suddenly interested in writing speculative fiction.

in my vicinity, the sci-fi discourse has died down over the last few months. my coworkers will _show me_ how they use these tools when i ask them, and are building on/with them incrementally. the shift in tone is encouraging: there's space for actual practical discourse around this stuff now. chat about concrete things with your friends/coworkers -- if you're interested in it. ignore the media, CEO interviews, and LinkedIn hype posts: they're playing a different sort of game, one you're probably happier not being a part of.


I'm personally interested in the intersection of AI and physics


Do you follow Steve Brunton's YT channel[1]? His physics stuff is mostly fluid dynamics but still pretty cool.

[1] https://www.youtube.com/@Eigensteve


project website, links to a release video and some writeups on the project: https://byran.ee

not sure why it's not linked from the README: found it in the repo at website/astro.config.mjs


It's in the repo description at the top of the page.

