I tested the demo at https://moq.dev/publish/ and it's buttery as hell. Very impressive. Thanks for the great technology!

Watching the Big Buck Bunny demo at https://moq.dev/watch/?name=bbb on my mobile phone leaves a lot of horizontal black lines. (Strangely, it is OK on my PC despite using the same Wi-Fi network.) Is it due to buffer size? Can I increase it client-side, or should it be done server-side?

Also, thanks for not missing South Korea in your "global" CDN map!


Is it Chrome only? On Android Firefox it just says no browser support :(


Same here


Same on safari


Safari is still working to add full support for WebTransport.


You can enable WebTransport in Safari, under the experimental flag in privacy settings, but it's just black on the page


Yes, Safari's WebTransport implementation is still a work in progress.


Horizontal black lines? Dunno what that could be about, we render to a <canvas> element which is resized to match the source video and then resized again to match the window with CSS.


What’s that like for performance and power usage? I understand normal videos can generally be entirely hardware-accelerated so that the video doesn’t even touch the CPU, and are passed straight through to the compositor. I’m guessing with this you’re stuck with only accelerating individual frames, and there’ll be more back and forth so that resource usage will probably be a fair bit higher?

An interesting and unpleasant side-effect of rendering to canvas: it bypasses video autoplay blocking.


It's all hardware accelerated, assuming the VideoDecoder has hardware support for the codec. VideoFrame is available in WebGL and WebGPU as a texture or GPU buffer. We're only rendering after a `requestAnimationFrame` callback, so decoded frames may get automatically skipped based on the display frame rate.

I don't think the performance would be any worse than the <video> tag. The only exception would be browser bugs. It definitely sounds like the black bars are a browser rendering bug given it's fine when recorded.


Unfortunately canvas (rgb'ish) can't overlay as efficiently as <video> (yuv'ish), so there is some power cost relative to the lowest power video overlays.

It really only matters in long form content where nothing else on the page is changing though.


> It really only matters in long form content where nothing else on the page is changing though.

Did you not just describe at least 99% of all web video?


If only that were true, battery usage would be much better :) Just consider the prominence of content like tiktoks/shorts/reels/etc alone.


Oh and the autoplay restrictions for <video> don't apply when muted.


Depends on your configuration. Firefox has a “block audio and video” option. Which this bypasses.


Doesn't show up on screen capture, but there are random, quickly flickering rolling lines on my phone, kinda like analog distortion on old TVs


I've got the same issue with the black lines


The page mentions a lot of Rust code and WASM. Maybe your phone's CPU cannot run WASM fast enough?

My Samsung S20 shows no black lines.


My Samsung S24 Ultra shows black lines too, on Chrome and Samsung Internet.


Chrome on my OnePlus 10: I get flickering black lines routinely. The fact that they run from somewhere along the top down towards the right makes me wonder if it's maybe a refresh artifact? It's sort of like the rolling shutter effect


On a MacBook Air M4 with a 600 Mbps connection, it's instantaneous and amazing.


With this PC spec and internet speed, I expect it's "normal".


You’d be amazed. A very significant number of websites are anything but instant on my i9-14900K and 1000 Mbps connection. Our work identity provider/VPN app for 80 people takes about 15 seconds to start up on said machine. Apparently that’s “normal”.


You'd be surprised. I have 1G/1G and YouTube still isn't buttery smooth on my M3 Mac Pro, lol. There's always a noticeable gap between clicking play and it actually starting to stream.


Let's create a metric, call it "click to streaming time" (CTST).

This site was instantaneous. Youtube was not.

(Even if it's an M4)


I have the same experience on a Macbook Air M1 (I don't think that matters at all) and 100 MBit/s DSL.


I don't get the black lines on Android/Chrome but it doesn't respect my aspect ratio when I go full screen. Instead of adding black bars to the sides, it excludes the top and bottom of the video completely.


I am bad at CSS.


Managing aspect ratios in conjunction with managing a responsive page layout is one of the darker parts of CSS in my experience. You’re not alone.



I wish that were true, but there are so many cases where aspect-ratio still doesn’t work. It’s a pretty weak property, so things like flexbox parents will quickly cause it to be ignored.


Holy shit that starts streaming fast! like WTF


Actually, the default value of `APT::Install-Recommends` had been false, and it was changed to true in Debian 6.0 Squeeze (2011-02-06). I didn't like the change at the time because my Debian and Ubuntu systems suddenly installed more packages by default. However, now that I think of it, the distinction between recommended and suggested packages was blurry before the change, because both were opt-in. Auto-installing recommended packages while allowing the user to opt out is a better default, I guess. But I still turn off auto-installation of recommended packages on the systems I manage.
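
For anyone curious, a minimal sketch of that opt-out as an apt drop-in file (the file name here is arbitrary):

  # /etc/apt/apt.conf.d/99-no-recommends
  APT::Install-Recommends "false";
  # Suggested packages are already opt-in, but can be disabled explicitly too:
  APT::Install-Suggests "false";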


fish doesn't use SQLite, but its own plaintext format. They've been trying to migrate to other formats.[1] Currently fish stores timestamps and directories in its history.

[1] https://github.com/fish-shell/fish-shell/issues/3341


There still seem to be some remaining issues. My Cloudflare Pages site is still giving me 500 errors. I looked at the response headers and realized that when requests are served by a certain data center they fail, but if processed by another data center they succeed. I suspected some stale cache data, so I looked around the Cloudflare console but found no way to invalidate the cache in the Pages menu (the one in the domains menu didn't work). I also sent a ticket to their help center, only to be greeted by an AI. Probably waiting longer will solve this problem on its own.


Off topic, but it is good to see an article about the LIEF library on Hacker News. I recently had a need to modify the header of an ELF file and LIEF was a lifesaver. Thanks to all the authors and contributors!


History classes are always fun! I didn't know dash had the command history feature. It is just disabled by default.


Shell vi mode is part of the POSIX standard, but emacs is not.

Dash has an option to be built with libedit that supports set -o vi. It should very much be built this way.

"[set -o] vi: Allow shell command line editing using the built-in vi editor. Enabling vi mode shall disable any other command line editing mode provided as an implementation extension. This option shall be supported if the system supports the User Portability Utilities option.

"It need not be possible to set vi mode on for certain block-mode terminals."

https://pubs.opengroup.org/onlinepubs/9799919799/utilities/V3_chap02.html


Yes. The current state of things makes much more sense when one is aware of the background.


Yeah, I also used Docker (actually, Podman) as an alternative Python package manager and it worked well enough. Most of all, it felt somewhat cleaner and more reproducible than using plain virtualenv.

Of course, I migrated from it after I learned uv.


A very well written article! I admire the analysis done by the author regarding the difficulties of Python packaging.

With the advent of uv, I'm finally feeling like Python packaging is solved. As mentioned in the article, being able to have inline dependencies in a single-file Python script and running it naturally is just beautiful.

  #!/usr/bin/env -S uv run
  # /// script
  # dependencies = ['requests', 'beautifulsoup4']
  # ///
  import requests
  from bs4 import BeautifulSoup
After getting used to this workflow, I've been thinking that a dedicated syntax for inline dependencies would be great, similar to JavaScript's `import ObjectName from 'module-name';` syntax. Python promoted type hints from comment-based to syntax-based, so a similar approach seems feasible.

> It used to be that either you avoided dependencies in small Python script, or you had some cumbersome workaround to make them work for you. Personally, I used to manage a gigantic venv just for my local scripts, which I had to kill and clean every year.

I had the same fear for adding dependencies, and did exactly the same thing.

> This is the kind of thing that changes completely how you work. I used to have one big test venv that I destroyed regularly. I used to avoid testing some stuff because it would be too cumbersome. I used to avoid some tooling or pay the price for using them because they were so big or not useful enough to justify the setup. And so on, and so on.

I 100% sympathize with this.


One other key part of this is freezing a timestamp with your dependency list, because Python packages are absolutely terrible at maintaining compatibility a year or three or five later as PyPI populates with newer and newer versions. The special toml incantation is [tool.uv] exclude-newer:

  # /// script
  # dependencies = [
  #   "requests",
  # ]
  # [tool.uv]
  # exclude-newer = "2023-10-16T00:00:00Z"
  # ///
https://docs.astral.sh/uv/guides/scripts/#improving-reproduc...

This has also let me easily reconstruct some older environments in less than a minute, when I've been version hunting for 30-60 minutes in the past. The speed of uv environment building helps a ton too.


Maybe I'm missing something, but why wouldn't you just pin to an exact version of `requests` (or whatever) instead? I think that would be equivalent in practice to limiting resolutions by release date, except that it would express your intent directly ("resolve these known working things") rather than indirectly ("resolve things from when I know they worked").


Pinning deps is a good thing, but it won't necessarily solve the issue of transitive dependencies (i.e. the dependencies of requests itself, for example), which will not be pinned themselves, given you don't have a lock file.

To be clear, a lock file is strictly the better option—but for single file scripts it's a bit overkill.


1 file, 2 files, N files, why does it matter how many files?

Use a lock file if you want transitive dependencies pinned.

I can't think of any other language where "I want my script to use dependencies from the Internet, pinned to precise versions" is a thing.


If there's a language that does this right, I'm all ears. But I haven't seen one -

The use case described is for a small one off script for use in CI, or a single file script you send off to a colleague over Slack. Very, very common scenario for many of us. If your script depends on

    a => c
    b => c
You can pin versions of those direct dependencies like "a" and "b" easily enough, but 2 years later you may not get the same version of "c", unless the authors of "a" and "b" handle their dependency constraints perfectly. In practice that's really hard and never happens.

The timestamp approach described above isn't perfect, but it would result in the same dep graph, and the same results, 99% of the time.
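
For illustration, a minimal sketch combining the two ideas in PEP 723 metadata (the package names, versions and date are made up): the direct deps are pinned, and `exclude-newer` freezes whatever "c" resolves to as of that date.

  # /// script
  # dependencies = [
  #   "a==1.4.2",
  #   "b==0.9.1",
  # ]
  # [tool.uv]
  # exclude-newer = "2024-01-01T00:00:00Z"
  # ///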


Try Scala with an Ammonite script like https://ammonite.io/#ScalaScripts . The JVM ecosystem does dependencies right, there's no need to "pin" in the first place because dependency resolution is deterministic to start with. (Upgrading to e.g. all newer patch versions of your current dependencies is easy, but you have to make an explicit action to do so, it will never happen "magically")


Rust tends to handle this well. It'll share c if possible, or split dependencies. Cargo.lock preserves exact resolution


> 1 file, 2 files, N files, why does it matter how many files?

One file is better for sharing than N: you can post it in a messenger program like Slack and easily copy and paste it (this becomes annoying with more than one file), or upload it somewhere without needing to compress it, etc.

> I can't think of any other language where "I want my script to use dependencies from the Internet, pinned to precise versions" is a thing.

This is the same issue you would have in any other programming language. If possible future breakage is fine, you don't need to do it, but I can understand the use case for it.


I think it's a general principle across all of software engineering that, given the choice, you want fewer disparate locations in the codebase that need correlated changes.

Documentation is hard enough, and that's often right there at exactly the same location.


> why does it matter how many files?

Because this is for scripts in ~/bin, not projects.

They need to be self-contained.


For N scripts, you will need N lock files littering your directories and then need venvs for all of them.

Sometimes, the lock files can be larger than the scripts themselves...


One could indicate implicit time-based pinning of transitive dependencies, using the time point at which the depended-on versions were released. Not a perfect solution, but it's a possible approach.


Isn't that exactly what the above does?


I think OP was saying to look at when the package was built instead of explicitly adding a timestamp. Of course, this would only work if you specified `requests@1.2.3` instead of just `requests`.

This looks like a good strategy, but I wouldn't want it by default, since it would be very weird to suddenly have a script pull dependencies from 1999 without an explanation why.


I'm not a Python packaging expert or anything, but an issue I run into with lock files is that they can become machine-dependent (for example, different flavors of torch on some machines vs others).


Oh yeah, I completely forgot about transitive dependencies. That makes perfect sense, then! Very thoughtful design/inclusion from `uv`.


Except that, at least for the initial run, the date-based approach is the one closer to my intent, as I don't know what specific versions I need, just that this script used to work around a specific date.


Oh that's neat!

I've just gotten into the habit of using only the dependencies I really must, because the Python culture around compatibility is so awful.


This is the feature I would most like added to Rust; if you don’t save a lock file, it is horrible trying to get back to the same versions of packages.


Why wouldn't you save the lock file?


Well, of course you should, but it’s easy to forget as it’s not required. It also used to be recommended to not save it, so some people put it in their gitignore.

For example, here is a post saying it was previously recommended to not save it for libraries: https://blog.rust-lang.org/2023/08/29/committing-lockfiles.h...


Gosh, thanks for sharing! This is the remaining piece I felt I was missing.


For completeness, there's also a script.py.lock file that can be checked into version control, but then you have twice as many files to maintain, and they can potentially get out of sync as people forget about it or don't know what to do with it.


Wow, this is such an insanely useful tip. Thanks!


Why didn't you create a lock file with the versions and of course hashsums in it? No version hunting needed.


Because the aim is to have a single-file, fairly short script. Even if we glued the lock file in somehow, it would be huge!

I prefer this myself, as almost all lock files are in practice “the version of packages at this time and date”, so why not be explicit about that?


A major part of the point of PEP 723 (and the original competing design in PEP 722) is that the information a) is contained in the same physical file and b) can be produced by less sophisticated users.


That's fantastic, that's exactly what I need to revive a bit-rotten python project I am working with.


Oooh! Do you end up doing a binary search by hand and/or does uv provide tools for that?


Where would binary search come into it? In the example, the version solver just sees the world as though no versions released after `2023-10-16T00:00:00Z` existed.


I mean a binary search or a bisect over dates.


My feeling, sadly, is that because uv is the new thing, it hasn't had to handle anything but the common cases. This kinda gets a mention in the article, but is very much glossed over. There are still some sharp edges, and assumptions which aren't true in general (but are for the easy cases), and this is only going to make things worse, because now there's a new set of issues people run into.


As an example of an edge case: you have Python dependencies that wrap C libs that come in x86-64 and arm64 flavours.

Pipenv, when you create a lockfile, will only specify the architecture-specific lib that your machine runs on.

So if you're developing on an ARM Macbook, but deploying on an Ubuntu x86-64 box, the Pipenv lockfile will break.

Whereas a Poetry lockfile will work fine.

And I've not found any documentation about how uv handles this: is it the Pipenv way or the Poetry way?


PEP 751 is defining a new lockfile standard for the ecosystem, and tools including uv look committed to collaborating on the design and implementing whatever results. From what I've been able to tell of the surrounding discussion, the standard is intended to address this use case - rather, to be powerful enough that tools can express the necessary per-architecture locking.

The point of the PEP 723 comment style in the OP is that it's human-writable with relatively little thought. Cases like yours are always going to require actually doing the package resolution ahead of time, which isn't feasible by hand. So a separate lock file is necessary if you want resolved dependencies.

If you use this kind of inline script metadata and just specify the Python dependency version, the resolution process is deferred. So you won't have the same kind of control as the script author, but instead the user's tooling can automatically do what's needed for the user's machine. There's inherently a trade-off there.


Your reply is unrelated to my query - is a uv lockfile able to handle multiple arches like a Poetry lockfile?


uv works like Poetry, rather than Pipenv, in this regard.


Yeah, uv uses a platform-independent resolution for its lockfiles and supports features that Poetry does not, like:

- Specifying a subset of platforms to resolve for

- Requiring wheel coverage for specific platforms

- Conflicting optional dependencies

https://docs.astral.sh/uv/concepts/resolution/#universal-res...

https://docs.astral.sh/uv/concepts/projects/config/#conflict...


I think this is an awesome feature and will probably be a great alternative to my use of Nix for doing similar things for scripts/Python, if nothing else because it's way less overhead to get it running and play with something.

Nix, for all its benefits here, can be quite slow and otherwise pretty annoying to use as a shebang in my experience, versus just writing a package/derivation to add to your shell environment (i.e. it's already fully "built" and wrapped, but that also requires a lot more ceremony + "switching" either the OS or HM configs).


It's not a feature that's exclusive to uv. It's a PEP, and other tools will eventually support it if they don't already.


Will Nix be slow after the first run? I guess it will have to build the deps, but a second run should be fast, no?


`nix-shell` (which is what the OP seems to be referring to) is always slow-ish (not really that slow if you are used to e.g. Java CLI commands, but definitely slower than I would like) because it doesn't cache evaluations AFAIK.

Flakes have caching, but support for `nix shell` as a shebang is relatively new (Nix 2.19) and not widespread.


Agreed. I did the exact same thing with that giant script venv and it was a constant source of pain because some scripts would require conflicting dependencies. Now with uv shebang and metadata, it’s trivial.

Before uv I avoided writing any scripts that depended on ML altogether, which is now unlocked.


You know what we need? In both Python and JS, and every other scripting language, we should be able to import packages from a URL, but with a sha384 integrity check like the one in HTML. Not sure why they didn't adopt this into JS or Deno. Otherwise installing random scripts is a security risk.


Python has fully-hashed requirements[1], which is what you'd use to assert the integrity of your dependencies. These work with both `pip` and `uv`. You can't use them to directly import the package, but that's more because "packages" aren't really part of Python's import machinery at all.

(Note that hashes themselves don't make "random scripts" not a security risk, since asserting the hash of malware doesn't make it not-malware. You still need to establish a trust relationship with the hash itself, which decomposes to the basic problem of trust and identity distribution.)
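
As a rough sketch of what an entry in such a requirements file looks like (the version is real, but the digest is shown as a placeholder); installing with `pip install --require-hashes -r requirements.txt` then refuses any artifact that doesn't match:

  # requirements.txt -- digest placeholder; get the real value from `pip hash <file>`
  # or from the package index
  requests==2.32.3 \
      --hash=sha256:<64-hex-character digest of the wheel or sdist>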

[1]: https://pip.pypa.io/en/stable/topics/secure-installs/


Good point, but it's still a very useful way to ensure it doesn't get swapped out underneath you.

Transitive dependencies are still a problem though. You kind of fall back to needing a lock file or specifying everything explicitly.


Right, still a security risk, but if I come back to a project after a year or two, I can know that even if some malicious group took over a dependency, they at least didn't backport a crypto-miner or worse into my script.


The code that you obtain for a Python "package" does not have any inherent mapping to a "package" that you import in the code. The name overload is recognized as unfortunate; the documentation writing community has been promoting the terms "distribution package" and "import package" as a result.

https://packaging.python.org/en/latest/discussions/distribut...

https://zahlman.github.io/posts/2024/12/24/python-packaging-...

While you could of course put an actual Python code file at a URL, that wouldn't solve the problem for anything involving compiled extensions in C, Fortran etc. You can't feasibly support NumPy this way, for example.

That said, there are sufficient hooks in Python's `import` machinery that you can make `import foo` programmatically compute a URL (assuming that the name `foo` is enough information to determine the URL), download the code and create and import the necessary `module` object; and you can add this with appropriate priority to the standard set of strategies Python uses for importing modules. A full description of this process is out of scope for a HN comment, but relevant documentation:

https://docs.python.org/3/library/importlib.html

https://docs.python.org/3/library/sys.html#sys.meta_path
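
To make that concrete, here is a minimal (and deliberately unsafe) sketch, assuming a made-up REGISTRY base URL that serves plain `<name>.py` files; real code would want caching, integrity checks and error handling:

  import importlib.abc
  import importlib.util
  import sys
  import urllib.request

  REGISTRY = "https://example.invalid/pymodules"  # hypothetical base URL

  class URLFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
      def find_spec(self, name, path=None, target=None):
          if path is not None:  # only handle top-level imports in this sketch
              return None
          return importlib.util.spec_from_loader(name, self)

      def create_module(self, spec):
          return None  # defer to the default module creation

      def exec_module(self, module):
          # Fetch the source and execute it in the new module's namespace.
          url = f"{REGISTRY}/{module.__name__}.py"
          source = urllib.request.urlopen(url).read()
          exec(compile(source, url, "exec"), module.__dict__)

  # Appended, so it is only consulted after the normal import machinery fails.
  sys.meta_path.append(URLFinder())
After that, `import foo` for an otherwise-unknown top-level name would fetch `foo.py` from the registry; as noted above, this still does nothing for compiled extensions.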


Deno and npm both store the hashes of all the dependencies you use in a lock file and verify them on future reinstalls.


The lockfile is good, but I'm talking about this inline dependency syntax,

  # dependencies = ['requests', 'beautifulsoup4']
And likewise, Deno can import by URL. Neither includes an integrity hash. For JS, I'd suggest

    import * as goodlib from 'https://verysecure.com/notmalicious.mjs' with { integrity: "sha384-xxx" }
which mirrors https://developer.mozilla.org/en-US/docs/Web/Security/Subres... and https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

The Python/uv thing will have to come up with some syntax; I don't know what. Not sure if there's a precedent for attributes.


Where do you initially get the magical sha384 hash that proves the integrity of the package the first time it's imported?


Same way we do in JS-land: https://developer.mozilla.org/en-US/docs/Web/Security/Subres...

tl;dr use `openssl` on command-line to compute the hash.

Ideally, any package repositories ought to publish the hash for your convenience.

This of course does nothing to prove that the package is safe to use, just that it won't change out from under your nose.
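
If you'd rather not shell out to openssl, a small sketch of the same computation in Python (the file name is just the hypothetical module from the example above):

  import base64
  import hashlib

  def sri_sha384(path: str) -> str:
      """Return an SRI-style integrity string like 'sha384-...'."""
      with open(path, "rb") as f:
          digest = hashlib.sha384(f.read()).digest()
      # SRI values are the base64 encoding of the raw digest, prefixed by the algorithm.
      return "sha384-" + base64.b64encode(digest).decode("ascii")

  print(sri_sha384("notmalicious.mjs"))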


This is a nice feature, but I've not found it to be useful, because my IDE won't recognize these dependencies.

Or is it a skill issue?


What exactly do you imagine that such "recognition" would entail? Are you expecting the IDE to provide its own package manager, for example?


Generally it means "my inspections and autocomplete work as expected".


It can't possibly autocomplete or inspect based off code it doesn't actually have.


That’s the point. Many modern IDEs are in fact capable of downloading their own copy off some source and parsing it.


No, it's the fact that it's a rather new PEP, and our IDEs don't yet support it because, well, it's rather new.


> I'm finally feeling like Python packaging is solved

Not really: https://github.com/astral-sh/uv/issues/5190


This looks horrible for anything but personal scripts/projects. For anything close to production purposes, this seems like a nightmare.


Anything that makes it easier to get a script that I wrote running on a colleague's machine, without having to give them a 45-minute crash course on the current state of Python environment setup and package management, is a huge win in my book.


Don’t use it in production, problem solved.

I find this feature amazing for one-off scripts. It’s removing a cognitive burden I was unconsciously ignoring.


It's not meant for production.


There's about 50 different versions of "production" for Python, and if this particular tool doesn't appear useful to it, you're probably using Python in a very very different way than those of us who find it useful. One of the great things about Python is that it can be used in such diverse ways by people with very very very different needs and use cases.

What does "production" look like in your environment, and why would this be terrible for it?


> As mentioned in the article, being able to have inline dependencies in a single-file Python script and running it naturally is just beautiful.

The syntax for this (https://peps.python.org/pep-0723/) isn't uv's work, nor are they the first to implement it (https://iscinumpy.dev/post/pep723/). A shebang line like this requires the tool to be installed first, of course; I've repeatedly heard about how people want tooling to be able to bootstrap the Python version, but somehow it's not any more of a problem for users to bootstrap the tooling themselves.

And some pessimism: packaging is still not seen as the core team's responsibility, and uv realistically won't enjoy even the level of special support that Pip has any time soon. As such, tutorials will continue to recommend Pip (along with inferior use patterns for it) for quite some time.

> I have been thinking that a dedicated syntax for inline dependencies would be great, similar to JavaScript's `import ObjectName from 'module-name';` syntax. Python promoted type hints from comment-based to syntax-based, so a similar approach seems feasible.

First off, Python did no such thing. Type annotations are one possible use for an annotation system that was added all the way back in 3.0 (https://peps.python.org/pep-3107/); the original design explicitly contemplated other uses for annotations besides type-checking. When it worked out that people were really only using them for type-checking, standard library support was added (https://peps.python.org/pep-0484/) and expanded upon (https://peps.python.org/pep-0526/ etc.); but this had nothing to do with any specific prior comment-based syntax (which individual tools had up until then had to devise for themselves).

Python doesn't have existing syntax to annotate import statements; it would have to be designed specifically for the purpose. It's not possible in general (as your example shows) to infer a PyPI name from the `import` name; but not only that, dependency names don't map one-to-one to imports (anything that you install from PyPI may validly define zero or more importable top-level names, and of course the code might directly use a sub-package or an attribute of some module, which doesn't even have to be a class). So there wouldn't be a clear place to put such names except in a separate block by themselves, which the existing comment syntax already does.

Finally, promoting the syntax to an actual part of the language doesn't seem to solve a problem. Using annotations instead of comments for types allows the type information to be discovered at runtime (e.g. through the `__annotations__` attribute of functions). What problem would it solve for packaging? It's already possible for tools to use a PEP 723 comment, and it's also possible (through the standard library - https://docs.python.org/3/library/importlib.metadata.html) to introspect the metadata of installed packages at runtime.
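
For example, a quick sketch of that runtime introspection, using `requests` only because it happens to appear earlier in the thread:

  from importlib.metadata import metadata, requires, version

  print(version("requests"))           # installed version string
  print(requires("requests"))          # declared dependency specifiers, or None
  print(metadata("requests")["Name"])  # other core-metadata fields are available too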


> by the author

And the author is?


Any flow that does not state checksums/hashsums is not ready for production and anything but beautiful. But I haven't used uv yet, so maybe it is possible to specify the dependencies with hashsums in the same file too?

Actually, the order of the import statement is one of the things that Python does better than JS. It makes completions much less costly to calculate when you type the code. An IDE or other tool only has to check one module or package for its contents, rather than whether any module has a binding of the name so-and-so. If I understand correctly, you are talking about an additional syntax though.

When mentioning a gigantic venv ... Why did they do that? Why not have smaller venvs for separate projects? It is really not that hard to do and avoids dependency conflicts between projects, which have nothing to do with each other. Using one giant venv is basically telling me that they either did not understand dependency conflicts, or did not care enough about their dependencies, so that one script can run with one set of dependencies one day, and another set of deps the other day, because a new project's deps have been added to the mix in the meantime.

Avoiding deps for small scripts is a good thing! If possible.

To me it just reads like a user now having a new tool allowing them to continue the lazy ways of not properly managing dependencies. I mean all deps in one huge venv? Who does that?? No wonder they had issues with that. Can't even keep deps separated, let alone properly having a lock file with checksums. Yeah no surprise they'll run into issues with that workflow.

And while we are relating to the JS world: one may complain in many ways about how NPM works, but it has had automatic lock files for aaages, being the default tool in the ecosystem. And its competitors had it too. At least that part they got right for a long time, compared to pip, which does nothing of the sort without extra effort.


> Why did they do that? Why not have smaller venvs for separate projects?

What's a 'project'? If you count every throwaway data processing script and one-off exploratory Jupyter notebook, that can easily be 100 projects. Certainly before uv, having one huge venv or conda environment with 'everything' installed made it much faster and easier to get that sort of work done.


In what kind of scope are these data processing scripts? If they're in some kind of pipeline used in production, I would very much expect them to have reproducible dependencies.

I can understand it for exploratory Jupyter notebooks. But only in the truly exploratory stage. Say, for example, you are writing a paper. Reproducibility crisis. Exploring is fine, but when it gets to actually writing the paper, one needs to make one's setup reproducible, or lose credibility right away. Most academics are not aware of, or don't know how to, or don't care to, make things reproducible, leading to non-reproducible research.

I would be lying if I claimed that I personally always set up a lock file with hashsums for every script. Of course there can be scripts and things we care so little about that we don't make them reproducible.


For the (niche) Python library that I co-develop, we use this for demo scripts that live in an example/ directory in our repo. These scripts will never be run in production, but it’s nice to allow users to try them out and get a feel for how the library works before committing to installing dependencies and setting up a virtual environment.

In other words, of course, in most long-term cases, it’s better to create a real project - this is the main uv flow for a reason. But there’s value in being able to easily specify requirements for quick one-off scripts.


> Why not have smaller venvs for separate projects?

Because they are annoying and unnecessary additional work. If I write something, I won't know the dependencies in the beginning. And if it's a personal tool/script or even a throwaway one-shot, then why bother with managing unnecessary parts? I just manage my personal stack of dependencies for my own tools in a giant env, and pull imports from it or not, depending on the moment. This allows me to move fast. Of course it is a liability, but not one which usually bites me. Every few years, some dependency goes wrong, and I either fix it or remove it, but in the end the time I save far outweighs the time I would lose micromanaging small separate envs.

Managing dependencies is for production and important things. Big messy envs are good enough for everything else. I have hundreds of scripts and tools; micromanaging them on that level has no benefit. And it seems uv now offers some options for making small envs effortless without costing much time, so it's a net benefit in that area, but it's not something world-shattering which will turn my world upside down.


Well, if generating a lock file and installing dependencies is "unnecessary", then you obviously don't have anything close to a production-ready project. Any project serious about managing its dependencies will mandate hashsums for each dependency, to avoid things breaking a week or a month later without any change to the project.

If you do have a project that needs to manage its dependencies well and you still don't store hashsums and install your dependencies based on them, then you basically forfeit any credibility when complaining about bugs or changed behavior showing up without any change to the code itself, and similar things.

This can all be fine if it is just your personal project that gets shit done. I am not saying you must properly manage dependencies for such a personal project. It's just not something ready for production.

I for one find it quite easy to make venvs per project. I have my Makefiles, which I slightly adapt to the needs of the project, and then I run one single command and get all set up with dependencies in a project-specific venv, with hashsums and reproducibility. Not that much to it really, and not at all annoying to me. It can also be sourced from any other script when that script uses another project. One could also use any other task runner thingy; it doesn't have to be GNU Make if one doesn't like it.


>Any flow that does not state checksums/hashsums is not ready for production

It's not designed nor intended for such. There are tons of Python users out there who have no concept of what you would call "production"; they wrote something that requires NumPy to be installed and they want to communicate this as cleanly and simply (and machine-readably) as possible, so that they can give a single Python file to associates and have them be able to use it in an appropriate environment. It's explicitly designed for users who are not planning to package the code properly in a wheel and put it up on PyPI (or a private index) or anything like that.

>and all but beautiful

De gustibus non est disputandum. The point is to have something simple, human-writable and machine-readable, for those to whom it applies. If you need to make a wheel, make one. If you need a proper lock file, use one. Standardization for lock files is finally on the horizon in the ecosystem (https://peps.python.org/pep-0751/).

>Actually the order of import statement is one of the things, that Python does better than JS.

Import statements are effectively unrelated to package management. Each installed package ("distribution package") may validly define zero or more top-level names (of "import packages") which don't necessarily bear any relationship to each other, and the `import` syntax can validly import one or more sub-packages and/or attributes of a package or module (a false distinction, anyway; packages are modules), and rename them.

>An IDE or other tool only has to check one module or package for its contents

The `import` syntax serves these tools by telling them about names defined in installed code, yes. The PEP 723 syntax is completely unrelated: it tells different tools (package managers, environment managers and package installers) about names used for installing code.

>Why not have smaller venvs for separate projects? It is really not that hard to do

It isn't, but it introduces book-keeping (Which venv am I supposed to use for this project? Where is it? Did I put the right things in it already? Should I perhaps remove some stuff from it that I'm no longer using? What will other people need in a venv after I share my code with them?) that some people would prefer to delegate to other tooling.

Historically, creating venvs has been really slow. People have noticed that `uv` solves this problem, and come up with a variety of explanations, most of which are missing the mark. The biggest problem, at least on Linux, is the default expectation of bootstrapping Pip into the new venv; of course uv doesn't do this by default, because it's already there to install packages for you. (This workflow is equally possible with modern versions of Pip, but you have to know some tricks; I describe some of this in https://zahlman.github.io/posts/2025/01/07/python-packaging-... . And it doesn't solve other problems with Pip, of course.) Anyway, the point is that people will make single "sandbox" venvs because it's faster and easier to think about - until the first actual conflict occurs, or the first attempt to package a project and accurately convey its dependencies.

> Avoiding deps for small scripts is a good thing! If possible.

I'd like to agree, but that just isn't going to accommodate the entire existing communities of people writing random 100-line analysis scripts with Pandas.

>One may complain in many ways about how NPM works, but it has had automatic lock file for aaages.

Cool, but the issues with Python's packaging system are really not comparable to those of other modern languages. NPM isn't really retrofitted to JavaScript; it's retrofitted to the Node.JS environment, which existed for only months before NPM was introduced. Pip has to support all Python users, and Python is about 18 years older than Pip (19 years older than NPM). NPM was able to do this because Node was a new project that was being specifically designed to enable JavaScript development in a new environment (i.e., places that aren't the user's browser sandbox). By contrast, every time any incremental improvement has been introduced for Python packaging, there have been massive backwards-compatibility concerns. PyPI didn't stop accepting "egg" uploads until August 1 2023 (https://blog.pypi.org/posts/2023-06-26-deprecate-egg-uploads...), for example.

But more importantly, npm doesn't have to worry about extensions to JavaScript code written in arbitrary other languages (for Python, C is common, but by no means exclusive; NumPy is heavily dependent on Fortran, for example) which are expected to be compiled on the user's machine (through a process automatically orchestrated by the installer) with users complaining to anyone they can get to listen (with no attempt at debugging, nor at understanding whose fault the failure was this time) when it doesn't work.

There are many things wrong with the process, and I'm happy to criticize them (and explain them at length). But "everyone else can get this right" is usually a very short-sighted line of argument, even if it's true.


> It's not designed nor intended for such. There are tons of Python users out there who have no concept of what you would call "production"; they wrote something that requires NumPy to be installed and they want to communicate this as cleanly and simply (and machine-readably) as possible, so that they can give a single Python file to associates and have them be able to use it in an appropriate environment. It's explicitly designed for users who are not planning to package the code properly in a wheel and put it up on PyPI (or a private index) or anything like that.

Thus my warning about its use. And we, as a part of the population, need to learn and be educated about dependency management, so that we do not keep running into the same issues over and over again that come from non-reproducible software.

> Import statements are effectively unrelated to package management. Each installed package ("distribution package") may validly define zero or more top-level names (of "import packages") which don't necessarily bear any relationship to each other, and the `import` syntax can validly import one or more sub-packages and/or attributes of a package or module (a false distinction, anyway; packages are modules), and rename them.

I did not claim them to be related to package management, and I agree. I was making an assertion, trying to guess the meaning of what the other poster wrote about some "import bla from blub" statement.

> The `import` syntax serves these tools by telling them about names defined in installed code, yes. The PEP 723 syntax is completely unrelated: it tells different tools (package managers, environment managers and package installers) about names used for installing code.

If you had read my comment a bit more closely, you would have seen that this is the assertion I made one phrase later.

> It isn't, but it introduces book-keeping (Which venv am I supposed to use for this project? Where is it? Did I put the right things in it already? Should I perhaps remove some stuff from it that I'm no longer using? What will other people need in a venv after I share my code with them?) that some people would prefer to delegate to other tooling.

I understand that. The issue is that people keep complaining about things that can be solved in rather simple ways. For example:

> Which venv am I supposed to use for this project?

Well, the one in the directory of the project, of course.

> Where is it?

In the project directory of course.

> Did I put the right things in it already?

If it exists, it should have the dependencies installed. If you change the dependencies, then update the venv right away. You are always in a valid state this way. Simple.

> Should I perhaps remove some stuff from it that I'm no longer using?

That is done in the "update the venv" step mentioned above. Whether you delete the venv and re-create it, or have a dependency-managing tool that removes unused dependencies, I don't care, but you will know it when you use such a tool. If you don't use such a tool, just recreate the venv. Nothing complicated so far.

> What will other people need in a venv after I share my code with them?

One does not share a venv itself; one shares a reproducible way to recreate it on another machine. Thus others will have just what you have, once they create the same venv. Reproducibility is key if you want your code to run elsewhere reliably.

All of those have rather simple answers. I grant that some of these answers one learns over time, after dealing with these questions many times. However, none of it has to be difficult.

> I'd like to agree, but that just isn't going to accommodate the entire existing communities of people writing random 100-line analysis scripts with Pandas.

True, but those apparently need Pandas, so installing dependencies cannot be avoided. Then it depends on whether their stuff is one-off stuff that no one will ever need to run again later, or part of a pipeline that needs to be reliable. The use case changes the requirements with regard to reproducibility.

---

About the NPM vs. pip comparison: sure, there may be differences. None of those, however, justify not having hashsums of dependencies where they can be had. And if there is a C thing? Well, you will still download it in some tarball or archive when you install it as a dependency. Easy to get a checksum of that. Store the checksum.

I was merely pointing out a basic facility of NPM that has been there for as long as I can remember using NPM, and that still doesn't exist with pip except by using some additional packages to facilitate it (I think hashtools or something like that was required). I am not holding up NPM as the shining star that we all should follow. It has its own ugly corners. I was pointing out that specific aspect of dependency management. Any artifact downloaded from anywhere can have its hashes calculated. There are no excuses for not having the hashes of artifacts.

That Python is 19 years older than NPM doesn't have to be a negative. Those are 19 years more time to have worked on the issues as well. In those 19 years no one had issues with non-reproducible builds? I find that hard to believe. If anything, the many people complaining about not being able to install some dependency in some scenario tell us that reproducible builds are key to avoiding these issues.


>I did not claim them to be related to package management, and I agree.

Sure, but TFA is about installation, and I wanted to make sure we're all on the same page.

>I understand that. The issue is, that people keep complaining about things that can be solved in rather simple ways.

Can be. But there are many comparably simple ways, none of which is obvious. For example, using the most basic level of tooling, I put my venvs within a `.local` directory which contains other things I don't want to put in my repo nor mention in .gitignore. Other workflow managers put them in an entirely separate directory and maintain their own mapping.

>Whether you delete the venv and re-create it, or have a dependency managing tool, that removes unused dependencies, I don't care, but you will know it, when you use such a tool.

Well, yes. That's the entire point. When people are accustomed to using a single venv, it's because they haven't previously seen the point of separating things out. When they realize the error of their ways, they may "prefer to delegate to other tooling", as I said. Because it represents a pretty radical change to their workflow.

> That Pip is 19 years older than NPM doesn't have to be a negative. Those are 19 years more time to have worked on the issues as well.

In those 19 years people worked out ways to use Python and share code that bear no resemblance to anything that people mean today when they use the term "ecosystem". And they will be very upset if they're forced to adapt. Reading the Packaging section of the Python Discourse forum (https://discuss.python.org/c/packaging/14) is enlightening in this regard.

> In those 19 years no one had issues with non-reproducible builds?

Of course they have. That's one of the reasons why uv is the N+1th competitor in its niche; why Conda exists; why meson-python (https://mesonbuild.com/meson-python/index.html) exists; why https://pypackaging-native.github.io/ exists; etc. Pip isn't in a position to solve these kinds of problems because of a) the core Python team's attitude towards packaging; b) Pip's intended and declared scope; and c) the sheer range of needs of the entire Python community. (Pip doesn't actually even do builds; it delegates to whichever build backend is declared in the project metadata, defaulting to Setuptools.)

But it sounds more like you're talking about lockfiles with hashes. In which case, please just see https://peps.python.org/pep-0751/ and the corresponding discussion ("Post-History" links there).


I also write code using my phone when I'm on a bus or the subway. It requires some patience, but after getting used to it, the experience is surprisingly pleasant, especially if you're familiar with terminal-based tools. My environment consists of:

  - Galaxy S24 Ultra
  - Termius: I think it is the best terminal emulator and SSH client on Android. The sad thing is that the paid version is a bit too expensive. ($10 per month, no permanent option)
  - tmux: Mobile connections are brittle so it is a must.
  - Vim: Allows me to navigate the code freely without using arrow keys, which is really useful on the touch keyboard.
Not that big of a deal, but the thing that I think is more pleasant on the phone than on the PC is that I can use my fingerprint to log in to the remote server. The fingerprint is stored in the TPM, so it is safe. It feels magical!

Edit: The biggest pain point for me was the limited width of the smartphone screen. It is a bit hard to skim over the code quickly because most lines are severely cut off. Text wrapping helps with this, but personally I hate text wrapping. Keeping landscape mode is not an option because the code area is completely hidden when the touch keyboard is displayed. That's why foldable phones are great for coding, as they have a wider screen. My previous phone was a Galaxy Fold and it was a wonderful coding machine.


Try pairing tmux with mosh; it's how I've been working for years whenever I'm forced to admin through a brittle straw. Mosh combats lag pretty well and doesn't care if your connection drops intermittently. https://mosh.org/


I tried Mosh but it didn't fit my taste. It tries to "predict" the state of the screen before being acknowledged by the server, but sometimes the prediction is wrong and Mosh reverts the cursor movement and redraws the affected area of the terminal. For example, when I'm using split windows in Vim or tmux, Mosh allows typed characters to overflow beyond the separator, briefly, until being told "no" by the server. Personally I find this behavior very disturbing. Enduring higher lags was more bearable to me.


I can see how that's off-putting, but I've learned to ignore the occasional cosmetic hiccup and just trust that it will sync up correctly. I use it with --predict=experimental (largely undocumented), which seems to be even more aggressive, but it works great for me.


You can try eternal terminal: https://eternalterminal.dev/

I don't remember it doing any sort of prediction but the last time I used it was a while back.


Have you tried any of the various `--predict` options? At least `--predict=never`.


"...admin through a brittle straw." :D That's exactly what it feels like.


Mosh has saved me some hair-pulling, especially when on a train journey with at best spotty 3G and you get pinged about an outage.


I wish I could do it. I find even just texting annoying. Also Galaxy phone. I wonder if my fingers may just be too fat. Although I don't think they are. Actually I hate doing most things through a phone, and e.g. if a food delivery app has a desktop version I will always use that given the chance.


I have been really impressed lately using Samsung DeX on an XReal Air 2. AR glasses have really improved in recent years. It gives you a better screen than many small laptops.

For longer trips (train, airplane), add a mechanical wireless bluetooth keyboard (my choice would be a NuPhy Air 75) to feel like a king. For the occasional browser + SSH on the go, it's better (less space + better keyboard + larger screen experience) than bringing my 13" laptop (+ phone).


Gosh they look interesting. But ridiculously customer unfriendly product naming, and a website that doesn't provide clear information on international shipping just raises so many red flags for me.


Mosh was suggested in another comment, but I’ve found that et (https://eternalterminal.dev/) suits my needs better.

It does nothing to fix lag, but connection failures are handled without a hitch; the same session resumes like normal on spotty train Wi-Fi and mobile data.


Do you use a special keyboard app too, or just the default one?


Just the default one. I tried some alternative keyboards and they are better in some ways but in the end the default keyboard was enough. Termius provides input of some special keys (e.g. Ctrl, Alt, Esc, Tab, Home, End) so that's another reason why the default keyboard is enough.


Be sure to check the privacy policy on your default keyboard. I've been burned by that before. The default keyboard on my last Galaxy phone was sending every single keystroke to a third party, and checking their privacy policy showed they used that data for things like market research, guessing at my level of education, building a psychological profile, detecting my interests, etc., and that they in turn shared that data with others.

I switched to AnySoftKeyboard, and although the auto-correct/spellcheck is way worse (understandable, since they're not collecting every word everyone's typing), the customization and terminal mode are great. I'd occasionally code on my phone in Termux (the largest program written on that device was only around 2000 lines) and it did the job.


Phone keyboards are a big security risk.

Is there a way to completely block them from accessing the network?


Nothing reliable that I know of. To have any hope at all of being able to do that with Android you'd need a rooted device. Without root access, "your" phone isn't something you can reasonably hope to secure, since Google, your phone carrier, and the manufacturer all have privileged access to your device while you don't. Even with a rooted device I'd only use an app that you trust. The default Samsung keyboard that the phone came with out of the box was downright adversarial, so at least I got rid of that, but I don't think of cell phones as something I can really secure or trust in a meaningful way.


You can firewall any app/service on Android with RethinkDNS.


Just FYI, this goes for all Android users. I believe iPhone has similar capabilities but I have never tried myself.

Your phone likely accepts a physical keyboard. Mine has a USB-C port, and I can use a travel dongle (a USB-C plug with a female USB-A socket) to attach one.

I used this a few times to do some very light work when travelling. A good setup is picking up a cheap wireless keyboard/mouse combo and using the female input to get both. There are many alternatives to this too; e.g. you can also attach a dock to your phone to get all the devices your phone has the hardware to accept, and you'd be surprised what it does accept.


But how do you type?


T9 on a Nokia 3310.


Laptop is superior.. sorry


> if "rm -i" is the default the "rm" level gets disabled, because "rm -i -f" is the same as "rm -f"

You can use "\rm" to invoke the non-aliased version of the command. I made "rm -i" the default using an alias and occasionally use "\rm" to get the decreased safety level you described. I think it is more convenient that way.

