baby_souffle's comments | Hacker News

> the legacies are still proudly talking up their upcoming "unified platforms", that allow them to build models in a single factory and interchange ICE and EV powertrains in the same model based on demand. Same cars in everything but drivetrain.

I still remember when Ford was _super_ proud of their ability to push OTAs to their Mustang Mach-E and Lightning. This was in 2020, not 2010, when it would actually have been considered innovative and cutting edge.

> So, they will lose. Its their kodak moment.

Agree. It's only a question of how many years the decline is stretched out over. We'll learn a lot about the long-term viability of US auto over the next 36 months as Slate/Teleo/Scout start to ship.


> but I submit that there’s a reason that capital-E-Engineering credentials typically require some kind of education in ethics-in-design.

Or said differently: there’s a reason why software engineering jobs pay so well; no mandatory ethics training required!


No. It's supply and demand.

> or those overrun with malware to protect users

The anti-malware companies won't lobby the government to block malware, as that would cut into sales of their antivirus/anti-malware products.


Not just audio; anybody in the live events / production space needs all their equipment marching in lockstep.

If it's for an event, can they not bring all the devices together in close proximity and sync them somehow? That at least removes network delays.

> That at least removes network delays

But that's often _the very source of delay_ you need to work around; there is no way in hell they're going to get all the lighting/sound/video/graphics etc. people into the same 2 square meters to put on a show like the Super Bowl :).

For smaller events, like a touring act or even a venue with a capacity of a few hundred people, you still need a single master clock, but this time it's not "wall" time so much as "absolute" time. E.g.: a musician at the front of the house chooses when to start, and the video/lighting person in the back needs to be on the same page so the visuals line up [0].

[0]: https://en.wikipedia.org/wiki/SMPTE_timecode
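
For anyone unfamiliar, SMPTE timecode is essentially just a frame counter rendered as HH:MM:SS:FF at a fixed frame rate. A rough Python sketch of the non-drop-frame flavour, with 25 fps as an arbitrary illustrative rate:

    # Rough illustration only: non-drop-frame SMPTE timecode as a formatted frame counter.
    def frames_to_timecode(frame_count: int, fps: int = 25) -> str:
        frames = frame_count % fps
        total_seconds = frame_count // fps
        seconds = total_seconds % 60
        minutes = (total_seconds // 60) % 60
        hours = total_seconds // 3600
        return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

    print(frames_to_timecode(90_125))  # -> 01:00:05:00 at 25 fps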


You can't sync individual oscillators precisely for very long.

Not even if you test hundreds of pairs to find a match?

> Not even if you test hundreds of pairs to find a match?

Assuming you do find _a_ match, you still need everything else to stay in sync across the different temperatures that each component will be operating at.
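
To put a rough number on it, a garden-variety ±20 ppm crystal (an illustrative spec, not something from this thread) can drift by well over a second per day, which is an eternity in frame-accurate terms:

    # Back-of-the-envelope drift for a free-running +/-20 ppm oscillator (illustrative spec).
    ppm = 20
    seconds_per_day = 24 * 60 * 60
    max_drift = ppm * 1e-6 * seconds_per_day
    print(f"{max_drift:.2f} s/day")  # ~1.73 s/day of worst-case drift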


> I keep saying, the first cheap Chinese vendor that ships a SystemReady-compliant SBC is gonna make a killing.

Agree. When ARM announced the initiative, I thought that the Raspberry Pi people would be quick, but they haven't even announced a plan to eventually support it. I don't know what the holdup is! Is it really that difficult to implement?


Apparently Pine64 and Radxa sell SystemReady-compliant SBCs; even a Raspberry Pi 4 can be made compliant (presumably by booting a UEFI firmware from the Raspberry's GPU-based custom-schmustom boot procedure, which then loads your OS).

The Pi boots from its GPU, which is a closed-off Broadcom design. That likely complicates things a bit.

> Some places value upgrading dependencies while others value extreme stability at the potential cost of security.

Both are valid. The latter is often used as an excuse, though. No, your $50 WiFi-connected camera does not need the same level of stability as the WiFi-connected medical device that allows a doctor to remotely monitor medication. Yes, you should still have a moderately robust way to build and distribute a new FW image for that camera.

I can't tell you the number of times I've gotten a shell on some device only to find that the kernel/OS image/app binary has build strings that CLEARLY feature `some-user@their-laptop`, betraying that if there's ever going to be updated firmware, it will come down to that one guy's laptop still working and still being able to build the artifact, not to a PR being merged.


The obvious counterpoint is that a PR-based build system is also likely to break unless it is exercised and maintained often enough to catch little issues as they appear. Without a set of robust tests, a new artifact is also potentially useless to a company that has already sold its last $50 WiFi camera. If the artifact is also used for their upcoming $54.99 camera, then they will often have one good version there too. The artifact might work on the old camera, but the risk/reward ratio of updating the abandonware is poor.

I largely agree but don't want to entirely discount the effect that using a compiled language had.

At least in my limited experience, the selling point with the most traction is that you don't already need a working Python install to get uv. And once you have uv, you can just go!

If I had a dollar for every time I've helped somebody untangle the mess of Python environments and libraries created by an undocumented mix of Python delivered through the distribution's package manager versus native pip versus manually installed...

At least on paper, Poetry and uv have a pretty similar feature set. You do, however, need a working Python environment to install and use Poetry.


> the selling point with the most traction is that you don't already need a working Python install to get uv. And once you have uv, you can just go!

I still genuinely do not understand why this is a serious selling point. Linux systems commonly already provide (and heavily depend upon) a Python distribution which is perfectly suitable for creating virtual environments, and Python on Windows is provided by a traditional installer following the usual idioms for Windows end users. (To install uv on Windows I would be expected to use the PowerShell equivalent of a curl | sh trick; many people trying to learn to use Python on Windows have to be taught what cmd.exe is, never mind PowerShell.) If anything, new Python-on-Windows users are getting tripped up by the moving target of attempts to make it even easier (in part because of things Microsoft messed up when trying to coordinate with the CPython team; see for example https://stackoverflow.com/questions/58754860/cmd-opens-windo... when it originally happened in Python 3.7).

> If I had a dollar for every time I've helped somebody untangle the mess of Python environments and libraries created by an undocumented mix of Python delivered through the distribution's package manager versus native pip versus manually installed...

Sure, but that has everything to do with not understanding (or caring about) virtual environments (which are fundamental, and used by uv under the hood because there is really no viable alternative), and nothing to do with getting Python in the first place. I also don't know what you mean about "native pip" here; it seems like you're conflating the Python installation process with the package installation process.


Linux systems commonly already provide an outdated system Python you don’t want to use, and it can’t be used to create a venv of a version you want to use. A single Python version for the entire system fundamentally doesn’t work for many people thanks to shitty compat story in the vast ecosystem.

Even languages with a great compat story are moving to support multiple toolchains natively. For instance, Go 1.22 on Ubuntu 24.04 LTS is outdated, but it will automatically download the 1.25 toolchain when it sees go 1.25.0 in go.mod.


> Linux systems commonly already provide an outdated system Python you don’t want to use

They can be a bit long in the tooth, yes, but from past experience another Python version I don't want to use is anything ending in .0, so I can cope with them being a little older.

That's in quite a bit of contrast to something like Go, where I will happily update on the day a new version comes out. Some care is still needed - they allow security changes particularly to be breaking, but at least those tend to be deliberate changes.


> Linux systems commonly already provide an outdated system Python you don’t want to use

Even on an LTS Ubuntu that is only upgraded at EOL, the Python it ships will not itself be EOL most of the time.

> A single Python version for the entire system fundamentally doesn’t work for many people thanks to shitty compat story in the vast ecosystem.

My experience has been radically different. Everyone is trying their hardest to provide wheels for a wide range of platforms, and all the most popular projects succeed. Try adding `--only-binary=:all:` to your pip invocations and let me know the next time that actually causes a failure.

Besides which, I was very specifically talking about the user story for people who are just learning to program and will use Python for it. Because otherwise this problem is trivially solved by anyone competent. In particular, building and installing Python from source is just the standard configure / make / make install dance, and it Just Works. I have done it many times and never needed any help to figure it out even though it was the first thing I tried to build from C source after switching to Linux.


For much of the ML/scientific ecosystem, you're lucky to get all your deps working with the latest minor version of Python six months to a year after its release. Random ML projects with hundreds to thousands of stars on GitHub may only work with a specific, rather ancient version of Python.

> Because otherwise this problem is trivially solved by anyone competent. In particular, building and installing Python from source is just the standard configure / make / make install dance, and it Just Works. I have done it many times and never needed any help to figure it out even though it was the first thing I tried to build from C source after switching to Linux.

I compiled the latest GCC many times with the standard configure / make / make install dance when I first started learning the *nix command line. I even compiled gmp, mpfr, etc. many times. It Just Works. Do you compile your GCC every time before you compile your Python? Why not? It Just Works.


> Why not?

Time. CPython compiles in a few minutes on an underpowered laptop. I don't recall the last time I compiled GCC, but I had to compile LLVM and Clang recently, and it took significantly longer than "a few minutes" on a high-end desktop.


> Random ML projects with hundreds to thousands of stars on GitHub may only work with a specific, rather ancient version of Python.

Can you name some?

> Do you compile your GCC every time before you compile your Python? Why not? It Just Works.

If I needed a different version of GCC to make Python work, then probably, yes. But I haven't yet.

Just like I barely ever need a different version of Python. I keep several mainly so that I can test/verify compatibility of my own code.


Sure. You do a source install every time you require a Python version newer than the system Python.

I'll be using uv for that though, as I'll be using it for its superior package management anyway.


Why not just use a Python container rather than rely on having the latest binary installed on the system? Then venv inside the container. That would get you the “venv of a version” that you are referring to.

It's more complex and heavier than using uv. I see docker/vm/vagrant/etc. as something I reach for when the environment I want is too big, too fancy, or too nondeterministic to manually set up locally; but the entire point is that "plain Python with some dependencies" really shouldn't qualify as any of these (just like the build environment for a random Rust library).

Also, what do you do when you want to locally test your codebase across many Python versions? Do you keep track of several different containers? If you start writing some tool to wrap that, you're back at square one.


> what do you do when you want to locally test your codebase across many Python versions?

I haven’t found that there was any breakage across Python 3.x. Python 2.x to 3.x yes.

Anyway, this all could be wrapped in a CI/CD job and automated if you wanted to test across all versions.


Our firm uses Python extensively, and a virtual environment for every script is ... difficult. We have dozens of Python scripts running for team research and in production, from small maintenance tools to rather complex daemons. Add to that the hundreds of Jupyter notebooks used by various people. Some have a handful of dependencies, some dozens of dependencies. While most of those scripts/notebooks are only used by a handful of people, many are used company-wide.

Further, we have a largish set of internal libraries that most of our Python programs rely on. And some of those rely on external third-party APIs (often REST). When we find a bug or something changes, more often than not we want to roll out the changed internal lib so that all programs that use it get the fix. Having to get everyone to rebuild and/or redeploy everything is a non-starter, as many of the people involved are not primarily software developers.

We usually install into the system dirs and have a dependency problem maybe once a year. And it's usually trivially resolved (the biggest problem was with some google libs which had internally inconsistent dependencies at one point).

I can understand encouraging the use of virtual environments, but this movement towards requiring them ignores what, I think, is a very common use case. In short, no one way is suitable for everyone.


But in your case, if you had a vanilla image, even just a standard hardened RHEL image, then you could run as many container variations as you want and not be impacted by host changes. Actually, the host can stay pretty static.

You would have a standard container image.


> Why not just use a Python container rather than rely on having the latest binary installed on the system?

Sometimes this is the right answer. Sometimes docker/podman/runc are not an option, nor would the headache of volumes/mounts/permissions/hw pass-through be worth the additional mess.

It is hard to overstate how delightful putting `uv` in the shebang is:

in `demo.py`:

    #!/usr/bin/env -S uv run
    # /// script
    # requires-python = ">=3.13"
    # ///
    print("hello, world")
Then `chmod +x demo.py; ./demo.py`

At no point did I have to take a detour to figure out why `python` is symlinked to `python3` unless I am in some random directory where there is a half-broken `conda` environment...
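
And the same inline-metadata block can pull in third-party dependencies, which `uv run` resolves into a throwaway environment the first time you run the script. (`requests` below is just an illustrative pick, not something the demo above needed.)

    #!/usr/bin/env -S uv run
    # /// script
    # requires-python = ">=3.13"
    # dependencies = ["requests"]  # illustrative third-party dependency
    # ///
    import requests

    print(requests.get("https://example.com").status_code)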


Yes, PATH-driven interpreter selection is the source of the detours. uv eliminates interpreter ambiguity but requires uv as a prerequisite. This improves portability inside environments that standardize on uv; it’s not “portable to machines with nothing installed.”

Though, this isn’t about avoiding installs; it’s about making the one install (uv) the only thing you have to get right, instead of debugging whatever python means today.

I was advocating for containers as the “hard isolation / full stack” solution, which eliminates host-interpreter ambiguity and OS drift by running everything inside a pinned image. But you do need podman, and you have to have its permissions set up right.


> PATH-driven interpreter selection is the source of the detours. uv eliminates interpreter ambiguity but requires uv as a prerequisite.

Also, to use uv like this you either need to specify its path, or, as shown in the example, invoke /usr/bin/env. The Linux shebang requires a path rather than an executable name, and a relative path only works if you're in the exact right directory.

So in practical terms we have gained nothing, since if we want to avoid "PATH-driven interpreter selection" we could specify an absolute path like /usr/bin/python in the shebang, and uv doesn't let us avoid that.

That said, the PEP 723 interface is really nice (there's a lot more going on in the example than just figuring out which Python to use), and the experience of using uv as the interpreter is nicer in the sense that you only need uv to exist in one place. (This, too, is a problem that can be solved just fine in Python, and there are many approaches to it out there already.)


'we can't ship the Python version you want for your OS so we'll ship the whole OS' is a solution, but the 'we can't' part was embarrassing in 2015 already.

GP is referring to LTS versions, though.

Many Linux distributions ship Python. Alpine and DSL don’t. You can add it to Alpine. If you want the latest, you install it.


> I still genuinely do not understand why this is a serious selling point. Linux systems commonly already provide (and heavily depend upon) a Python distribution

Sounds like you've never actually used Python. You should never, ever be using the system Python for anything you need to run yourself. Don't even touch it. It's a great way to break your entire system. Many distros have stopped providing it at all, for good reason.

The first step every Python dev has to take on every single system they want to run their project on is to install their own sandboxed version of Python, its libraries, and its library manager. Alternatively, you pre-build a Docker container with it all packed inside, which is the same basic thing.

A better option still is to simply ditch Python and switch to compiled languages that don't have these stupid problems.


So basically, it avoids the whole chicken-and-egg problem. With uv you've simply always got "uv -> project Python 1.23 -> project". uv is your dependency manager, and your Python is just another dependency.

With other dependency managers you end up with "system Python 3.45 -> dep manager -> project Python 1.23 -> project". Or worse, "system Python 1.23 -> dep manager -> project Python 1.23 -> project". And of course there will be people who read about the problem and install their own Python manager, so they end up with a "system Python -> virtualenv Python -> poetry Python -> project" stack. Or the other way around, and they'll end up installing their project dependencies globally...


Sorry, but that is simply incorrect, on many levels.

Virtual environments are the fundamental way of setting up a Python project, whether or not you use uv, which creates and manages them for you. And these virtual environments can freely either use or not use the system environment, whether or not you use uv to create them. It's literally a single-line difference in the `pyvenv.cfg` file, which is a standard required part of the environment (see https://peps.python.org/pep-0405/), created whether or not you use uv.
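
For reference, the whole `pyvenv.cfg` is only a few lines; the one that matters here is `include-system-site-packages` (the path and version below are just placeholders):

    home = /usr/bin
    include-system-site-packages = false
    version = 3.12.3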

Most of the time you don't need a different Python version from the system one. When you do, uv can install one for you, but it doesn't change what your dependency chain actually is.

Python-native tools like Poetry, Hatch etc. also work by managing standards-defined virtual environments (which can be created using the standard library, and you don't even have to bootstrap pip into them if you don't want to) in fundamentally the same way that uv does. Some of them can even grab Python builds for you the same way that uv does (of course, uv doesn't need a "system Python" to exist first). "system Python -> virtualenv Python -> poetry Python -> project" is complete nonsense. The "virtualenv Python" is the system Python — either a symlink or a stub executable that launches that Python — and the project will be installed into that virtual environment. A tool like Poetry might use the system Python directly, or it might install into its own separate virtual environment; but either way it doesn't cause any actual complication.

Anyone who "ends up installing their project dependencies globally" has simply not read and understood Contemporary Python Development 101. In fact, anyone doing this on a reasonably new Linux has gone far out of the way to avoid learning that, by forcefully bypassing multiple warnings (such as described in https://peps.python.org/pep-0668/).

No matter what your tooling, the only sensible "stack" to end up with, for almost any project, is: base Python (usually the system Python but may be a separately installed Python) -> virtual environment (into which both the project and its dependencies are installed). The base Python provides the standard library; often there will be no third-party libraries, and even if there are they will usually be cut off intentionally. (If your Linux comes with pre-installed third-party libraries, they exist primarily to service tools that are part of your Linux distribution; you may be able to use them for some useful local hacking, but they are not appropriate for serious, publishable development.)

Your tooling sits parallel to, and isolated from, that as long as it is literally anything other than pip — and even with pip you can have that isolation (it's flawed but it works for common cases; see for example https://zahlman.github.io/posts/2025/02/28/python-packaging-... for how I set it up using a vendored copy of pip provided by Pipx), and have been able to for three years now.


> Most of the time you don't need a different Python version from the system one.

Except for literally anytime you’re collaborating with anyone, ever? I can’t even begin to imagine working on a project where folks just use whatever python version their OS happens to ship with. Do you also just ship the latest version of whatever container because most of the time nothing has changed?


If you're writing Python tools to support OS operations in prod, you need to target the system Python. It's wildly impractical to deploy venvs for more than one or two apps, especially if they're relatively small. Developing in a local venv can help with that targeting, but there's no substitute for doing that directly on the OS you're deploying to.

This is why you DON'T write system tools in Python in the first place. Use a real language that compiles to a native, self-contained binary that doesn't need dependency installation. Or use a container. This has been a solved problem for decades. Python users have been trying to drag the entire computing world backwards this whole time because of their insistence on using a toy language, invented to be the JavaScript of the server, as an actual production-grade bare-metal system language.

This is more or less the thinking that got us into the mess Python packaging is.

I, as a user, do not care whatsoever about any of this. At all. If you're explaining "virtual environments", you've lost the plot.

Compiled languages got this right. The dev creates a binary and I as a user simply run it. That's it. That's the holy grail.

It's good to see at last someone in the Python space got their ducks in a row and we've finally got a sensible tool.


This is why I ditched Python years ago for Go. I cross-compile my program binary to every OS + CPU combination, then just curl the binary to the server and run it. Done. Life is much better. I encourage others to do the same. Python is a waste of time.

> has simply not read and understood Contemporary Python Development 101.

They haven't. At the end of the day, they just want their program to work. You and I can design a utopian packaging system, but the physics PhD with a hand-me-down Windows laptop and access to her university's Linux research cluster doesn't care about Python other than that it has a PITA library situation that uv addresses.


If they are not developers, it's the developer's responsibility to fix that. The developers have many options available for this.

You misunderstand. The physicists are developing their own software to analyze their experimental data. They typically have little software development experience, but there is seldom someone more knowledgeable available to support them. Making matters worse, they often are not at all interested in software development and thus also don't invest the time to learn more than the absolute minimum necessary to solve their current problem, even if it could save them a lot of time in the long run. (Even though I find the situation frustrating, I can't say I don't relate, given that I feel the same way about LaTeX.)

Honestly, they should be using conda (if they're working on their laptops) and the cluster package manager otherwise.

Conda has slowly but surely gone down the drain as well. It used to be bulletproof, but there too you now get absolutely unsolvable circular dependencies.

I'd be curious to see what these circular dependencies you're seeing are (not saying I don't believe you, and I do recall conda doing some dumb stuff in its early days, but that particular issue seems odd)?

As for why conda: wheels do not have post-installation hooks (which, given the issues with npm, I'm certainly a fan of), and while for most packages this isn't an issue, I've encountered enough packages where sadly they are required (for integration purposes), and the PyPI packages are subtly broken on install without them. Additionally, conda (especially Anaconda Inc's commercial repositories) has significantly more optimised builds (not as good as the custom builds well-run clusters provide, but better than the PyPI-provided ones). I personally do not use conda (because I tend to want to test/modify/patch/upstream packages lower down the chain and test with higher-up packages), but for novices (especially novices on Windows), conda for all its faults is the best option for those in the "data science" ecosystem.


I haven't ever experienced this yet; what packages were involved?

Good question. I can't backtrack right now, but it was apmplanner that I had to compile from source, and it contains some Python that gets executed during the build process (I haven't seen it try to run it during normal execution yet).

Probably one of python-serial or python-pexpect, judging by the file dates, and neither of these is so exciting that there should have been any version conflicts at all.

And the only reason I had to rebuild it at all was another version conflict: the apm distribution expects a particular version of pixbuf to be present on the system, all hell breaks loose if it isn't, and you can't install that version on a modern system because that breaks other packages.

It is insane how bad all this package management crap is. The GNU project and the linux kernel are the only ones that have never given me any trouble.


They're not application developers, but they need to write code. That's the whole point. Python is popular within academia because it replaces R/Excel/VB.Net, not Java/C++.

Or they can give them a self-contained binary that dodges 80% of these support issues because, hear me out - and we've known this for 60+ years:

Users do NOT read the manual. Users ignore warnings. Users double-click "AnnaKurnikovaNude.exe".


> If I had a dollar for every time I've helped somebody untangle the mess of Python environments and libraries created by an undocumented mix of Python delivered through the distribution's package manager versus native pip versus manually installed...

macOS and Linux usually come with a Python installation out of the box. Windows should be following suit, but regardless, using uv vs. venv is not that different for most users. In fact, to use uv in a project, `uv venv` seems like a prerequisite.


> macOS and Linux usually come with a Python installation out of the box

Yep. But it's either old or broken or both. Using a tool that is not dependent on the Python ecosystem to manage the Python ecosystem is the trick here; that's what makes it so reliable and invulnerable to the issues that characterize Python dependency hell.


IMHO the dependency hell is a product of the dependencies themselves (a la Node), especially the lack of version pinning in the majority of projects.

conda already had independence from the Python distribution, but it still had its own set of problems and overlap with pip (see mamba).

I personally use uv for projects at work, but for smaller projects, `requirements.txt` feels more readable than the `toml` and `uv.lock`. In the spirit of encouraging best practices, it is probably still simpler to do it with older tools, but larger projects definitely benefit, such as when building container images.


1000% this. uv is trivially installable and is completely unrelated to installations of python.

If I want to install Python on Windows and start using pip, I grab an installer from python.org and follow a wizard. On Linux, I almost certainly already have it anyway.

If I want to bootstrap from uv on Windows, the simplest option offered involves Powershell.

Either way, I can write quite a bit with just the standard library before I have to understand what uv really is (or what pip is). At that point, yes, the pip UX is quite a bit messier. But I already have Python, and pip itself was also trivially installable (e.g. via the standard library `ensurepip`, or from a Linux system package manager — yes, still using the command line, but this hypothetical is conditioned on being a Linux user).
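
Concretely, the standard-library-only bootstrap is something like the following (the package at the end is purely illustrative):

    python3 -m venv .venv            # stdlib venv; bootstraps pip into it by default
    . .venv/bin/activate
    python -m pip install requests   # 'requests' is just an example package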


Not many normal people want to install Python. Instead, the author of the software they are trying to use wants them to install Python. So they follow the README, download the Windows installer as you say, pip this, pipx that, conda this, requirements.txt that, and five minutes later they have a magic error telling them that the TensorFlow version they are installing is not compatible with the PyTorch version they are installing, or some such.

The aftertaste Python leaves is lasting and disgusting.


Nailed it. Python was my first language, but I dread having to install someone else's Python software!

Scenarios like that are simply not realistic. Besides which, multiple solutions exist for bundling Python with an application.

Scenarios like that occur daily. I do quite a bit of software development, and whenever I come across something that really needs Python I mentally prepare for a day of battle with the various (all subtly broken) package managers, dependency hell, and circular nonsense, to the point that I am ready to give up on it after a day of trying.

Just recently: a build of a piece of software that itself wasn't written in Python but that urgently needed a very particular version of it, with a whole bunch of dependencies that refused to play nice with Anaconda for some reason (which, despite it too becoming less reliable, is probably still the better one). The solution? Temporarily move Anaconda to a backup directory, remove the venv activation code from .bashrc, compile the project, then restore everything to the way it was before (which I need it to be, because I have some other stuff on the stove that is built using Python, because there isn't anything else).

And let's not get into Bluetooth device support in Python, anything involving networking that is a little bit off the beaten path, and so on.


> Scenarios like that occur daily. I do quite a bit of software development, and whenever I come across something that really needs Python I mentally prepare for a day of battle with the various (all subtly broken) package managers, dependency hell, and circular nonsense, to the point that I am ready to give up on it after a day of trying.

Please name a set of common packages that causes this problem reliably.


You're getting a bit boring, and are not arguing in good faith. "Reliably"... as per your definition I guess. You have now made 60(!!!) comments in this thread questioning everything and everybody without ever once accepting that other people's experiences do not necessarily have to match your own. If you did some reading rather than just writing you'd have seen that I gave a very specific example right in this thread. You are now going on my blocklist because I really don't have time or energy to argue with language zealots.

Imagine telling 60 different people "you're wrong and I'm right" without realizing that it's actually you who is wrong.

The large majority of my comments ITT are not in fact "questioning everything and everybody". I checked your comment history and couldn't find other comments from you ITT, and the post I responded to does not contain anything like a "very specific example". Your accusations are entirely unfounded, and frankly inflammatory.

"not realistic"? Lmao tell me you've never used Python without telling me you've never used Python. This kind of situation is so ubiquitous they've even got an xkcd comic for it https://xkcd.com/1987/

A traditional Windows install didn’t include things Microsoft doesn’t make. But any PC distributor could always include Python as part of their base Windows install, along with all the other stuff that bloats typical third-party Windows installs. They don’t, which indicates the market doesn’t want it. Your indictment of the lack of Python out of the box is less on Windows than on the “distro” served by PC manufacturers.

I wonder how much Rust's default to statically link almost everything helped here? That should make deployment of uv even easier?

I don't think this makes a meaningful difference. The installation is a `curl | sh`, which downloads a tarball, which gets extracted to some directory in $PATH.

It currently includes two executables, but having it contain two executables and a bunch of .so libraries would be a fairly trivial change. It only gets messy when you want it to make use of system-provided versions of the libraries, rather than simply vendoring them all yourself.


It gets messy not just in that way, but also because someone can have a weird LD_LIBRARY_PATH that starts to cause problems. Static linking drastically simplifies distribution, and you’d have to have distributed zero software to end users to believe otherwise. The only platform where this isn’t the case is Apple, because they natively support app bundles. I don’t know if Flatpak solves the distribution problem, because I’ve not seen a whole lot of it in the ecosystem - most people seem to generally still rely on the system package manager and commercial entities don’t seem to really target Flatpak.

When you're shipping software, you have full control over LD_LIBRARY_PATH. Your entry point can be e.g. a shell script that sets it.
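
Something like this hypothetical launcher (the binary and directory names are made up):

    #!/bin/sh
    # Hypothetical launcher: point the dynamic loader at the libs shipped next to the binary.
    HERE="$(cd "$(dirname "$0")" && pwd)"
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/mytool" "$@"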

There is not so much difference between shipping a statically linked binary, and a dynamically linked binary that brings its own shared object files.

But if they are equivalent, static linking has the benefit of simplicity: Why create and ship N files that load each other in fancy ways, when you can do 1 that doesn't have this complexity?


That’s precisely my point. It’s insanely weird to have a shell script to set up the path for an executable binary that can’t do it for itself. I guess you could go the RPATH route, but boy have I only experienced pain from that.

RPATH is painless if you don't try to be clever.

Eh, conda was already doing all this stuff, and it's shipped in a self-extracting .sh file and written largely in Python itself (at least it used to be, lol).

This hypothetical independent shop you walk into is not filled with slop because it's curated; the store is intentionally keeping its inventory to a manageable level so that it can be screened first.

If the owner stopped caring and just decided to let in any book that passed through the automated "does this book immediately and actively harm the customer?" screening machine, then you'd have something that approximates the app stores.


I've had extensive luck doing just that. Spend some time doing the initial work to see how the page works, then give the LLM examples of the HTML that should be clicked for the next page, or the CSS classes that indicate the details you're after, and then ask for a Playwright-to-YAML tool.

Been doing this for a few months now to keep an eye on the prices for local grocery stores. I had to introduce random jitter so Ali Express wouldn't block me from trying to dump my decade+ of order history.


I can certainly think of some organizations that fit into that bucket. I can also name organizations that are hyper controlling and micromanage every aspect of the interaction with their core products and services because they value consistency above all else.

I wonder if we'll have a situation where, out of two competing organizations, only one elects to use this and the other one staunchly opposes it. That will be telling.

