> More importantly, Lua has a crucial feature that Javascript lacks: tail call optimization. There are programs that I can easily write in Lua, in spite of its syntactic verbosity, that I cannot write in Javascript because of this limitation. Perhaps this particular JS implementation has tco, but I doubt it reading the release notes.
> [...] In my programs, I have banned the use of loops. This is a liberation that is not possible in JS or even c, where TCO cannot be relied upon.
This is not a great language feature, IMO. There are two ways to go here:
1. You can go the Python way, and have no TCO, not ever. Guido van Rossum's reasoning on this is outlined here[1] and here[2], but the high level summary is that TCO makes it impossible to provide acceptably-clear tracebacks.
2. You can go the Chicken Scheme way, and do TCO, and ALSO do CPS conversion, which makes EVERY call into a tail call, without the language user having to restructure their code to make sure their recursion happens at the tail (roughly the transformation sketched below).
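To make the CPS idea concrete, here is roughly what the conversion does to a function, written out by hand (a minimal sketch in Python, which will not actually optimize these tail calls; the point is only the shape of the transform that a compiler like Chicken applies for you):

```python
# Direct style: the multiplication happens AFTER the recursive call returns,
# so the recursive call is NOT in tail position.
def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)


# CPS style: all remaining work is packed into the continuation `k`, so every
# call (to fact_cps and to k) is a tail call. A compiler that performs this
# conversion gets tail calls everywhere without the programmer restructuring
# anything.
def fact_cps(n, k=lambda x: x):
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda result: k(n * result))


print(fact(10), fact_cps(10))  # 3628800 3628800
```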
Either of these approaches has its upsides and downsides, but TCO WITHOUT CPS conversion gives you the worst of both worlds. The only upside is that you can write most of your loops as recursion, but as van Rossum points out, most cases that can be handled with tail recursion, can AND SHOULD be handled with higher-order functions. This is just a much cleaner way to do it in most cases.
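As a concrete comparison (a sketch in Python, which has no TCO, but the same point applies in languages that do): the tail-recursive version needs an accumulator and careful structuring, while the higher-order version is shorter and has no stack to worry about.

```python
from functools import reduce

# Tail-recursive sum: requires threading an accumulator through every call,
# and in a language without TCO it still overflows the stack on large inputs.
def sum_rec(xs, acc=0):
    if not xs:
        return acc
    return sum_rec(xs[1:], acc + xs[0])


# Higher-order version: same result, no restructuring, no stack concerns.
def sum_hof(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)


print(sum_rec(list(range(100))), sum_hof(list(range(100))))  # 4950 4950
# sum_rec(list(range(100_000))) would raise RecursionError in CPython.
```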
And the downsides to TCO without CPS conversion are:
1. Poor tracebacks.
2. Having to restructure your code awkwardly to make recursive calls into tail calls.
3. Easy to make a tail call into not a tail call, resulting in stack overflows.
I'll also add that the main reason recursion is preferable to looping is that it enables all sorts of formal verification. There's some tooling around formal verification for Scheme, but the benefits to eliminating loops are felt most in static, strongly typed languages like Haskell or OCaml. As far as I know Lua has no mature tooling whatsoever that benefits from preferring recursion over looping. It may be that the author of the post I am responding to finds recursion more intuitive than looping, but my experience contains no evidence that recursion is inherently more intuitive than looping: which is more intuitive appears to me to be entirely a function of the programmer's past experience.
In short, treating TCO without CPS conversion as a killer feature seems to me to be a fetishization of functional programming without understanding why functional programming is effective, embracing the madness with none of the method.
EDIT: To point out a weakness to my own argument: there are a bunch of functional programming language implementations that implement TCO without CPS conversion. I'd counter by saying that this is a function of when they were implemented/standardized. Requiring CPS conversion in the Scheme standard would pretty clearly make Scheme an easier to use language, but it would be unreasonable in 2025 to require CPS conversion because so many Scheme implementations don't have it and don't have the resources to implement it.
EDIT 2: I didn't mean for this post to come across as negative on Lua: I love Lua, and in my hobby language interpreter I've been writing, I have spent countless hours implementing ideas I got from Lua. Lua has many strengths--TCO just isn't one of them. When I'm writing Scheme and can't use a higher-order function, I use TCO. When I'm writing Lua and can't use a higher-order function, I use loops. And in both languages I'd prefer to use a higher-order function.

[1] https://neopythonic.blogspot.com/2009/04/tail-recursion-elim...

[2] https://neopythonic.blogspot.com/2009/04/final-words-on-tail...
EDIT 3: Looking at Lua's overall implementation, it seems to be focused on being fast and lightweight.
I don't know why Lua implemented TCO, but if I had to guess, it's not because it enables you to replace loops with recursion, it's because it... optimizes tail calls. It causes tail calls to use less memory, and this is particularly effective in Lua's implementation because it reuses the stack memory that was just used by the parent call, meaning it uses memory which is already in the processor's cache.
The thing is, a loop is still going to be slightly faster than TCOed recursion, because you don't need to move the arguments to the tail call function into the previous stack frame. In a loop your counters and whatnot are just always using the same memory location, no copying needed.
Where TCO really shines is in all the tail calls that aren't replacements for loops: an optimized tail call is faster than a non-optimized tail call. And in real world applications, a lot of your calls are tail calls!
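To give a feel for what I mean by tail calls that aren't loop replacements, here's a hypothetical dispatcher (sketched in Python, where nothing gets optimized; in Lua, the call into the handler would reuse the dispatcher's stack slot):

```python
def handle_get(path):
    return "GET " + path


def handle_post(path):
    return "POST " + path


HANDLERS = {"GET": handle_get, "POST": handle_post}


def dispatch(method, path):
    # The handler call is the last thing dispatch() does, so it's a tail call
    # even though nobody is looping. With TCO it doesn't grow the stack.
    return HANDLERS[method](path)


print(dispatch("GET", "/index.html"))
```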
I don't necessarily love the feature, for the reasons that I detailed in the previous post. But it's not a terrible problem, and I think it makes sense as an optimization within the context of Lua's design goals of being lightweight and fast.
I don't think you're wrong per se. This is a "correct" way of thinking of the situation, but it's not the only correct way and it's arguably not the most useful.
A more useful way to understand the situation is that a language's major implementations are more important than the language itself. If the spec of the language says something, but nobody implements it, you can't write code against the spec. And on the flip side, if the major implementations of a language implement a feature that's not in the spec, you can write code that uses that feature.
A minor historical example of this was Python dictionaries. Maybe a decade ago, the Python spec didn't specify that dictionary keys would be retrieved in insertion order, so in theory, implementations of the Python language could do something like:
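```python
d = {}
d["a"] = 1
d["b"] = 2
d["c"] = 3

# A conforming implementation was free to hand the keys back in whatever
# order its hash table happened to produce:
print(list(d.keys()))  # e.g. ['b', 'c', 'a'] rather than ['a', 'b', 'c']
```

(The exact scrambled order here is made up; the point is only that the spec allowed any order at all.)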
But the CPython implementation did return all the keys in insertion order, and very few people were using anything other than the CPython implementation, so some codebases started depending on the keys being returned in insertion order without even knowing that they were depending on it. You could say that they weren't writing Python, but that seems a bit pedantic to me.
In any case, Python later standardized that as a feature, so now the ambiguity is solved.
It's all very tricky though, because for example, I wrote some code a decade ago that used GCC's compare-and-swap extensions, and at least at that time, it didn't compile on Clang. I think you'd have a stronger argument there that I wasn't writing C--not because what I wrote wasn't standard C, but because the code I wrote didn't compile on the most commonly used C compiler. The better approach to communication in this case, I think, is to simply use phrases that communicate what you're doing: instead of saying "C", say "ANSI C", "GCC C", "Portable C", etc.--phrases that communicate what implementations of the language you're supporting. Saying you're writing "C" isn't wrong, it's just not communicating a very important detail: what implementations of the compiler can compile your code. I'm much more interested in effectively communicating what compilers can compile a piece of code than pedantically gatekeeping what's C and what's not.
Python’s dicts for many years did not return keys in insertion order (since Tim Peters improved the hash in iirc 1.5 until Raymond Hettinger improved it further in iirc 3.6).
After the 3.6 change, they were returned in order. And people started relying on that - so at a later stage, this became part of the spec.
There actually was a time when Python dictionary keys weren't guaranteed to be in the order they were inserted, and CPython's implementation did not preserve that order.
You could not reliably depend on that implementation detail until much later, when optimizations were implemented in CPython that just so happened to preserve dictionary key insertion order. Once that was realized, it was PEP'd and made part of the spec.
I'm saying it isn't very useful to argue about whether a feature is a feature of the language or a feature of the implementation, because the language is pretty useless independent of its implementation(s).
I think, for those of us who have been in this industry for 20 years, AI isn't going to magically make us lose everything we've learned.
However, for those in the first few years of their career, I'm definitely seeing the problem where junior devs are reaching for AI on everything, and aren't developing any skills that would allow them to do anything more than the AI can do or catch any of the mistakes that AI makes. I don't see them on a path that leads them from where they are to where I am.
A lot of my generation of developers is moving into management, switching fields, or simply retiring in their 40s. In theory there should be some of us left who can do what AI can't for another 20 years until we reach actual retirement age, but programming isn't a field that retains its older developers well. So this problem is going to catch up with us quickly.
Then again, I don't feel like I ever really lived up to any of the programmers I looked up to from the 80s and 90s, and I can't really point to many modern programmers I look up to in the same way. Moxie and Rob Nystrom, maybe? And the field hasn't collapsed, so maybe the next generation will figure out how to make it work.
Okay... are you saying there are no problems with Firefox? And if not, how do you propose that users get these problems fixed without talking about them?
Yeah. I see this in every thread. Business types that aren't used to how normal human beings communicate see how actual Firefox users write, and they can never address the points. Instead they always get hung up on the tone, and the debate over the irrelevant tone becomes the primary/top thread in HN FF posts.
This feels to me like the scenes from the movie Don't Look Up where anyone actually pointing out what's happening gets told to calm down and be less aggressive.
If people saying what's wrong plainly and clearly is "vitriol" to you, then you have a problem with criticism.
> But the final bit in this post is really where I'm at: I have no idea where to go from here.
That's a good question. Mozilla has something like a half-billion dollars of assets, which is more than twice what the Linux Foundation reports. Does maintaining a web browser cost more than twice as much as maintaining an operating system? Hopefully not, but maybe it's time we find out.
full disclosure: one of the devs is a friend of mine
if for some reason you want to use webkit on desktop (linux), there's always gnome web, but in my experience it can't handle anything beyond very basic browsing (for example, a youtube video will cause it to crash)
> Get rid of the pseudo SemVer where you can deprecate functions and then break in minor releases.
I agree, but I think there's a bigger, cultural root cause here. This is the result of toxicity in the community.
The Python 2 to 3 transition was done properly, with real SemVer, and real tools to aid the transition. For a few years about 25% of my work as a Python dev was transitioning projects from 2 to 3. No project took more than 2 weeks (less than 40 hours of actual work), and most took a day.
And unfortunately, the Python team received a ton of hate (including threats) for it. As a natural reaction, it seems that they have a bit of PTSD, and since 3.0 they've been trying to trickle in the breaking changes instead of holding them for a 4.0 release.
I don't blame them--it's definitely a worse experience for Python users, but it's probably a better experience for the people working on Python to have the hate and threats trickle in at a manageable rate. I think the solution is for people like us who understand that breaking changes are necessary to pile love on maintainers who do it with real SemVer, and to try to balance out the hate with support and encouragement.
I had a client who in 2023 was still on 2.7.x. When I found a few huge security holes in their code, I told them I couldn't ethically continue to work on their product if they wouldn't upgrade Python, Django, and a few other packages, and they declined to renew my contract. As far as I know, they're still on 2.7.x. :shrug:
That’s probably true. I do think part of the anger is that a lot of the changes didn’t clearly improve the code around them. The obvious example is the change to print from a statement to a function. It makes the language a little cleaner, but it also breaks existing code for little practical benefit. More insidious were the breaks with strings and byte strings. It was a good and necessary change that could also lead to weird quiet breakages.
At least for me, the real blocker was broad package support.
Maintainers should think carefully about whether their change induces lots of downstream work for users. Users will be mad if they perceive that maintainers didn’t take that into account.
> The obvious example is the change to print from a statement to a function. It makes the language a little cleaner, but it also breaks existing code for little practical benefit.
To be clear: I literally do not remember a single example of this breaking anything after running 2to3. There was some practical benefit (such as being able to use print in callbacks) and I don't think it breaking existing code is meaningful given how thoroughly automated the fix was.
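For anyone who never actually ran the tool: the fix really was mechanical. Something like `2to3 -w yourmodule.py` (the filename here is obviously a placeholder) rewrites the statement form into the function form for you:

```python
# Python 2 source, before the migration (print as a statement):
#
#     print "processed %d records" % count
#
# The same line after running the converter (print as a function call):
count = 3
print("processed %d records" % count)
```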
I do get the impression that a lot of the complaints are from people who did not do any upgrades themselves, or if they did, didn't use the automated tools. This is just such an irrelevant critique. This is a quintessential example of bikeshedding: the only reason you're bringing up `print` is because you understand the change, not because it's actually important in any way.
> Maintainers should think carefully about whether their change induces lots of downstream work for users. Users will be mad if they perceive that maintainers didn’t take that into account.
Sure, but users in this case are blatantly wrong. You can read the discussions on each of the breaking changes, they're public in the PEPs. The dev team is obviously very concerned with causing downstream work for users, and made every effort, very successfully, to avoid such work.
If your impression is that maintainers didn't take into account downstream work for users, and your example is print, which frankly did not induce downstream work for users, you're the problem. You're being pretty disrespectful to people who put a lot of work into providing you a free interpreter.
I think we essentially agree. My comments about maintainers weren’t referencing the Python language maintainers. The print change certainly shouldn’t have blocked anyone.
More interesting is how long it took core libraries to transition. That was my primary blocker. My guess is that there were fairly substantial changes to the CPython API that slowed that transition.
Other changes to strings could be actually dangerous if you were doing byte-level manipulations. Maybe tools could help catch those situations. Even if they did, it took some thought and not just find/replace to fix. The change was a net benefit, but it’s easy to see why people might be frustrated or delay transition.
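A couple of the quiet differences I mean (a Python 3 snippet, with the Python 2 behaviour noted in the comments):

```python
data = b"abc"

# Python 2: data[0] was the one-byte string "a".
# Python 3: indexing bytes gives you an int -- code that compared it to "a"
# would quietly stop matching.
print(data[0])  # 97

# Python 2: "a" + b"bc" silently produced "abc".
# Python 3: mixing str and bytes raises instead of coercing.
try:
    "a" + data
except TypeError as exc:
    print(exc)  # can only concatenate str (not "bytes") to str
```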
Your definition of "core libraries" is likely a lot broader than mine. I'm old, and I remember back in the day when Perl developers started learning the hard way that CPAN isn't the Perl standard library.
JavaScript's culture has embraced pulling in libraries for every single little thing, which has resulted in stuff like the left pad debacle, but that very public failing is just the tip of the iceberg for what problems occur when you pull in a lot of bleeding edge libraries. The biggest problems, IMO, are with security. These problems are less common in Python's culture, but still fairly common.
I've come onto a number of projects to help them clean up codebases where development had become slow due to poor code quality, and the #1 problem I see is too many libraries. Libraries don't reduce complexity, they offload it onto the library maintainers, and if those library maintainers don't do a good job, it's worse than writing the code yourself. And it's not necessarily library maintainers' fault they don't do a good job: if they stop getting paid to maintain the library, or never were paid to maintain it in the first place, why should they do a good job of maintaining it?
The Python 2 to 3 transition wasn't harder for most core libraries than it was for any of the rest of us: if anything, it was easier for them because if they're a core library they don't have as many dependencies to wait on.
There are exceptions, I'm sure, but I'll tell you that Django, Pillow, Requests, BeautifulSoup, and pretty much every other library I use regularly, supported both Python 2 AND 3 before I even found out that Python 3 was going to have significant breaking changes. On the flip side, many libraries I had to upgrade had been straight up abandoned, and never transitioned from 2 to 3 (a disproportionate number of these were OAuth libraries, for some reason). I take some pride in the fact that most of the libraries that had problems with the upgrade were ones that had been imported when I wasn't at the company, or ones that I had fought against importing because I was worried about whether they would be maintained. It's shocking how many of these libraries were fixable not with an upgrade, but by removing the dependency and writing <100 lines of my own code, including tests.
I'd hope the lesson we can take away from this isn't, "don't let Python make any breaking changes", but instead, "don't import libraries off Pypi just to avoid writing 25 lines of your own code".
The core libraries to me include all the numerical and other major scientific computing libraries. I’m guessing those were laggards due to things like that string/byte change and probably changes to the CPython API.
Did you ever look into why the transition took so long for OAuth libraries? Did you consider just rewriting one yourself?
Ah, I'm not so aware of the numerical/scientific computing space beyond numpy--I will say the numpy transition was pretty quick, though.
I did take the approach of writing my own OAuth using `requests`, which worked well, but I don't think I ever wrote in such a general way to make it a library.
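Roughly the shape of it (a minimal sketch of the token-exchange step with `requests`, not the code I actually shipped; the URL and credential values are placeholders, not any real provider's):

```python
import requests

# Placeholder provider details -- every provider names and documents these
# slightly differently, which is part of why generic libraries kept getting
# in the way.
TOKEN_URL = "https://auth.example.com/oauth/token"
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
REDIRECT_URI = "https://myapp.example.com/oauth/callback"


def exchange_code_for_token(code):
    """Trade the authorization code from the OAuth callback for an access token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```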
Part of the problem is that OAuth isn't really a standard[1]. There are well-maintained libraries for Facebook and Google OAuth, but that's basically it--everyone else's OAuth is following the standard, but the standard is too vague so they're not actually compatible with each other. You end up hacking enough stuff around the library that it's easier to just write the thing yourself.
The problem with the Google and Facebook OAuth libraries is that there were a bunch of them--I don't think any one of them really became popular enough to become "the standard". When Python 3 came out, there were a bunch of new Google and Facebook OAuth libraries that popped up. I did actually port one Facebook OAuth library to Python 3 and maintain it briefly, but the client dropped support for Facebook logins because too few users were using it, and Facebook kept changing data usage requirements. When the client stopped needing the library, I stopped maintaining it. It was on GitHub publicly, but as far as I know I was the only user, and eventually when I deleted the repo nobody complained.
I don't say anything unless asked, but if asked I always recommend against OAuth unless you're using it internally: why give your sign up data to Google or Facebook? That's some of your most valuable data.
A technique I used on a project was to change the URL, and have the old URL return a 426 with an explanation, a link to the new URL, and a clear cutoff date. This reliably breaks the API for clients so that they can't ignore it, while giving them an easy temporary fix.
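Something along these lines (a minimal sketch using Flask rather than whatever that project actually used; the paths, date, and docs URL are made up):

```python
from flask import Flask, jsonify

app = Flask(__name__)


# Old URL: keep it routable, but make every call fail loudly with
# instructions instead of quietly serving data.
@app.route("/api/v1/orders")
def orders_v1():
    body = jsonify(
        error="This endpoint has moved.",
        new_url="/api/v2/orders",
        docs="https://api.example.com/docs/migrating-to-v2",
        cutoff_date="2025-06-30",
    )
    return body, 426  # 426 Upgrade Required


# New URL: the real implementation lives here.
@app.route("/api/v2/orders")
def orders_v2():
    return jsonify(orders=[])


if __name__ == "__main__":
    app.run()
```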
Clients weren't happy, but ultimately they did all upgrade. Our last-to-upgrade client even paid us to keep the API open for them past the date we set--they upgraded 9 months behind schedule, but paid us $270k, so not much to complain about there.
I suspect it's not so much that it was considered more cost-effective, and more that it wasn't considered at all. My impression was that nobody was even allocated to work on the transition until 8 months in, because that's when we started getting emails from their devs, and the upgrade took them less than a week when they actually did it.
No--the goal was to break the API so users noticed, with an easy fix. A lot of users weren't even checking the HTTP status codes, so it was necessary to not return the data to make sure the API calls broke.
We did roll this out in our test environment a month in advance, so that users using our test environment saw the break before it went to prod, but predictably, none of the users who were ignoring the warnings for the year before were using our test environment (or if they were, they didn't email us about it until our breaking change went to prod).
Jesus, what a terrible idea. This is such a terrible idea that I would not hire this guy based on this post alone.
What I want from code is for it to a) work, and b) if that's not possible, to fail predictably and loudly.
Returning the wrong result is neither of the above. It doesn't draw attention to the deprecation warnings as OP intended--instead, it causes a mysterious and non-deterministic error, literally the worst kind of thing to debug. The idea that this is going to work out in any way calls into question the writer's judgment in general. Why on earth would you intentionally introduce the hardest kind of bug to debug into your codebase?
This problem is actually even worse than the article identifies, because broad definitions of what a "risk" is result in broad exclusions.
The most pernicious of these problems is that women--yes, more than half the earth's population--are considered a high-risk group because researchers fear menstrual cycles will affect test results. Until 1993 policy changes, excluding women from trials was the norm. Many trials have not been re-done to include women, and the policies don't include animal trials, so many rat studies, for example, still do not include female rats--a practice which makes later human trials more dangerous for (human) female participants.
The exclusion of women from clinical trials is one of those things that makes me really angry; there are many women in my life who've been adversely affected by various medications and essentially been palmed off about it, made to feel like they're making it up when there's obviously a problem at hand.
It will be one of those things future historians of medicine will judge our time harshly for in my opinion, and rightly so.