> Discrimination is not rooted in economic efficiency so I don’t follow the argument that market forces would correct it.
It absolutely is in this case. The whole reason to target ads is to make the people who receive them more likely to engage with them. For instance, including men, elderly people, and children in the target demographic for a preschool teacher job advertisement would make that advertisement significantly less efficient, which is why it's not done.
Forcing companies to disallow targeting of ads because some people are offended by the population's job preferences is absurd.
It took a long time for the medical profession to become more gender-balanced, despite the imbalance not necessarily being economically efficient. There’s inertia: people don’t like changing the status quo. I don’t know if fixing the ad targeting changes anything given that the bias is on the advertiser side, but it could conceivably change the candidate pool that is being selected from.
This is basically just a consequence of people being a long-lived species.
The question is whether the side effects of artificially speeding up the process won't negate the original intent.
Also, the very premise may be wrong. The authors of anti-discrimination statutes seem to be awfully certain of things such as "men can take care of babies in nurseries as well as women can". We do not know if this is, in fact, statistically true. It is more of an egalitarian article of faith.
There was discrimination for very long periods of time. For example, Jews weren’t allowed to hold many professions in Europe for a very long time. Black people in America were slaves and continue to feel the effects of discrimination today. Discrimination still exists in other cultures today. The idea that capitalism solves discrimination magically does not appear to be borne out in any evidence I can find. Economic efficiency takes advantage of societal changes and the removal of discrimination, not the other way around.
"The idea that capitalism solves discrimination magically does not appear to be borne out in any evidence I can find."
We should distinguish formal legal disabilities ("Jews are prohibited from X by law") from the informal discrimination that is the target of modern anti-discrimination law ("Sean Murphy does not want to employ any goddamn Englishmen"). Emancipation has a reasonably good, though not perfect, record. Anti-discrimination is a much newer idea that is much less proven in practice, though for plenty of people it sounds convincing on paper.
If you look at European Jews specifically, upon formal emancipation they were able to establish themselves very quickly, both in business and in academia. In fact, much of the subsequent 20th-century anti-Semitism was borne out of jealousy of their success.
You won't find many aftereffects of the long-lived Chinese Exclusion Act or Japanese Internment Camps on the current well-being of Asian Americans either.
As for women, they now outnumber men in higher education by a considerable margin and, in the young cohorts, outearn them. By the logic of affirmative action, there should probably be one for men...
It is true that not every group in the world was able to catch up once its shackles were removed, but plenty of them actually were, and there was nothing magical about it.
Notably, the one exceptional group that mostly didn't catch up - American blacks - seems to be struggling even with all sorts of formal crutches constructed with the intent to help them. For example, the diversity programs at Harvard et al. seem to be mostly exploited by recent immigrants from Africa instead of generational American blacks.
Then how do you explain that const isn’t deep and a const container can end up mutating state? Pretending that C++ has a consistent philosophy is amusing, and pretending this happened because of pedagogy is amusing too. It happened because in C assignment is a copy, and C++ inherited this regardless of how dumb it is as a default for containers.
In C++, regular types have the property that const is deep: if you have a const std::vector<int>, then you can't modify any of the integers contained in this container. Naturally, for flexibility reasons, not all types are regular, pointers being the prominent exception and things like std::string_view being modern examples of non-regular types.
I feel like you could benefit from watching Scott Meyers on the silliness in C++ if you feel like there’s a consistent and logical feel to the language. A lot of this is C++-isms masquerading as sensible ideas through official terms ("regular" and "non-regular" types).
Doing my own research: ChatGPT summarizes the state as generally unions improve wages and working conditions for employees much more than they pay in premiums. This has gone down since the 1970s but is still a noticeable effect. Indeed, the 40-hour work week comes from unions. There is a negative effect on profitability, but that’s subject to interpretation:
> The negative effect on profitability from unionization may reflect that unions raise labour costs (via higher wages/benefits) and may impose work rules or other constraints that reduce flexibility. The classic model: higher labour cost → lower margins, unless offset by higher productivity or price increases. But the productivity and growth effects are less clear: many studies find little or no negative effect on productivity or capital structure, suggesting that unions may shift the distribution of returns (towards workers) rather than clearly kill growth.
So it may be worth revisiting the research you cited so decisively against unions as it likely contradicts your belief about them.
No, but one step further than OP went, making unsubstantiated claims that actually contradict the actual research that paints a much more complicated picture.
>... the actual research that paints a much more complicated picture.
Given that ChatGPT is still very much in a "trust, but verify" state (on a daily basis it confidently states falsehoods about subject matter I'm highly familiar with, for instance), I'm wondering if you followed up to confirm that the data at the sources provided by ChatGPT accurately reflected what it told you.
If you're going to insist that others revisit their research, I would hope that you're making a good faith effort towards doing the same.
PyTorch is still pretty dominant in cloud hosting; I’m not aware of anyone not using it (usually by way of vLLM or similar). It’s also completely dominant for training; I’m not aware of anyone using anything else there.
It’s not dominant in self-hosting, where llama.cpp wins, but there’s also not really that much self-hosting going on (at least compared with the volume of requests that hosted models are serving).
I don’t think Fil-C supplants Rust; Rust still has a place in things like kernel development, where Fil-C wouldn’t be accepted since it wouldn’t work there. But Rust today also has significantly better performance and memory usage, so it makes more sense for greenfield projects that might otherwise consider C/C++. Not to mention that Rust as a language is drastically easier and faster to develop in, thanks to a modern package manager, a good, fast, cohesive standard library, true cross-platform support, and static catching of all the issues that would otherwise cause Fil-C to crash at runtime, in addition to having better performance without effort.
Fil-C is an important tool for securing traditional software, but it doesn’t yet compete with Rust in the places where Rust is competing with C and C++ on greenfield projects (and it may never - that’s ok - it’s still valuable to have a way to secure existing code without rewriting it).
And I disagree with the characterization of Graydon’s blog. It’s literally praising Fil-C and saying it’s a valuable piece of tech in the landscape of language dev and worth paying attention to as a serious way to secure a huge amount of existing code. The only position Graydon takes is that safety is a critically important quality of software and Fil-C is potentially an important part of the story of moving the industry forward.
> And leverage the C ecosystem, by transpiling to C
I very much doubt this would work reliably on arbitrary C compilers, as the interpretation of the standard gets really wonky and certain constructs that should work might not even compile. Typically such things target GCC because it has such a large backend of supported architectures. But LLVM supports a largely overlapping set too - that’s why building the Linux kernel under Clang is supported and why Rust can support so many microcontrollers. For Rust, that’s also why there’s the rustc_codegen_gcc effort, which uses GCC as the backend instead of LLVM to flesh out the supported architectures further. But generally transpilation is used as a stopgap in this space, not an ultimate target, for lots of reasons, not least of which is that there are optimizations that are legal in another language but not in C, and transpilation would inhibit them.
> Rust is fast, uses little memory, but is verbose and hard to use (borrow checker).
It’s weird to me: in my experience, the borrow checker was about as hard to pick up as the first time I came across list comprehensions. In essence it’s something new I’d never seen before, but once I got it, it receded into the background noise and is trivial to deal with most of the time, especially since the compiler infers most lifetimes anyway. Resistance to learning is different from something being difficult to learn.
Well "transpiling to C" does include GCC and clang, right? Sure, trying to support _all_ C compilers is nearly impossible, and not what I mean. Quite many languages support transpiling to C (even Go and Lua), but in my view that alone is not sufficient for a C replacement in places like the Linux kernel: for this to work, tracing GC can not be used. And this is what prevents Fil-C and many other languages to be used in that area.
Rust borrow checker: the problem I see is not so much that it's hard to learn, but requires constant effort. In Rust, you are basically forced to use it, even if the code is not performance critical. Sure, Rust also supports reference counting GC, but that is more _verbose_ to use... It should be _simpler_ to use in my view, similar to Python. The main disadvantage of Rust, in my view, is that it's verbose. (Also, there is a tendency to add too many features, similar to C++, but that's a secondary concern).
> Rust also supports reference counting GC, but that is more _verbose_ to use... It should be _simpler_ to use in my view, similar to Python. The main disadvantage of Rust, in my view, is that it's verbose.
I think there's space for Rust to become more ergonomic, but its goals limit just how far it can go. At the same time I think there's space to take Rust and make a Rust# that goes further toward the Swift/Scala end of the spectrum, where things like auto-cloning of references are implemented first, and that can consume Rust libraries. From the organizational point of view, you can see it as a mix between nightly and editions. From a user's point of view, you can look at it as a mode that makes refactoring faster and onboarding easier, and as a test bed for language evolution. Not being Rust itself, it would also allow for different stability guarantees (you can have breaking changes every year), which also means you can be bolder in trying things out, knowing you're not permanently stuck with them. People who care about performance, correctness, and reuse can still use Rust. People who would be well served by Swift/Scala get access to Rust's libraries and toolchain.
> (Also, there is a tendency to add too many features, similar to C++, but that's a secondary concern).
These two quoted sentiments seem contradictory: making Rust less verbose to interact with reference counted values would indeed be adding a feature.
Someone, maybe Tolnay?, recently posted a short Go snippet that segfaults because the virtual function table pointer and data pointer aren't copied atomically or protected by a mutex. The same thing works in Swift, because neither is thread-safe. Swift is also slower than Go unless you pass -Ounchecked, making it even less safe than Go. C#/F# are safer from that particular problem and more performant than either Go or Swift, but have suffered from the same deserialization attacks that Java does. Right now, if you want true memory and thread safety, you need to limit a GC language to zero concurrency, use a borrow checker (i.e. Rust), or be purely functional, which in production would mean Haskell. None of those are effortless, and which is easiest depends on you and your problem. Rust is easiest for me, but I keep thinking if I just write enough Haskell it will all click. I'm worried, if my brain starts working that way, about the impacts on things other than writing Haskell.
Replying to myself because a vouch wasn't enough to bring the post back from the dead. They were partially right and educated me; the downvotes were unnecessary. MS did start advising against dangerous deserializers 8 years ago. They were only deprecated three years ago, though, and only removed last year. Some of the remaining ones are only mostly safe, and then only if you follow best practices. So it isn't a problem entirely of the past, but it has gotten a lot better.
Unless you are writing formal proofs, nothing is completely safe. GC languages had found a sweet spot until increased concurrency started uncovering thread-safety problems. Rust seems to have found a sweet spot that is usable despite the grumbling. It could probably be made a bit easier: the compiler already knows when something needs to be Send or Sync, and it could just do that invisibly, but that would lead people to write code with lots of locking, which is slow and generates deadlocks too often. This way the wordiness of shared mutable state steers you towards avoiding it except when a functional design pattern wouldn't be performant. If you have to use Mutex a lot in Rust, stop fighting the borrow checker and listen to what it is saying.
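To make "the compiler already knows" concrete, here's a minimal sketch: Send is enforced at every thread boundary, which is why a non-atomic Rc is rejected where an Arc is accepted.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    let rc = Rc::new(1);
    // Rejected: `Rc<i32>` is not `Send`, so the compiler refuses to let
    // its non-atomic refcount cross a thread boundary.
    // thread::spawn(move || println!("{rc}"));
    let _ = &rc; // keep the sketch warning-free

    // Accepted: `Arc<i32>` is `Send` because its refcount is atomic.
    let arc = Arc::new(1);
    thread::spawn(move || println!("{arc}")).join().unwrap();
}
```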
> C#/f# are safer from that particular problem and more performant than either go or swift, but have suffered from the same deserialization attacks that java does.
Yes. I do like Swift as a language. The main disadvantages of Swift, in my view, are: (A) the lack of an (optional) "ownership" model for memory management, so you _have_ to use reference counting everywhere, which limits performance. This is measurable: I converted some micro-benchmarks to various languages, and Swift does suffer on the memory-management-intensive tasks [1]. (B) Swift is currently too Apple-centric. Sure, this might become a non-issue over time.
The borrow checker involves documenting the ownership of data throughout the program. That's what people are calling "overly verbose" and saying it "makes comprehensive large-scale refactoring impractical" as an argument against Rust. (And no it doesn't, it's just keeping you honest about what the refactor truly involves.)
The annoying experience with the borrow checker is when you follow the compiler errors after making a change until you hit a fundamental ownership problem a few levels away from the original change, one that precludes the change entirely (like ending up with a self-referential borrow). This can bite even experienced developers, depending on how many layers of indirection there are (and sometimes the fix of adding a single Rc or Cell to a field isn't applicable, because it happens in a library you don't control). I do still prefer hitting that wall to having it compile and ending up with rare incorrect runtime behaviour (or, with any luck, a segfault), but it is more annoying than "it just works because the GC dealt with it for me".
There are also limits to what the borrow checker is capable of verifying. There will always be programs which are valid under the rules the borrow checker is enforcing, but the borrow checker rejects.
It's kinda annoying when you run into those. I think I've also run into a situation where the borrow checker itself wasn't the issue, but rather the way references were created in a pattern match caused the borrow checker to reject the program. That was also annoying.
Polonius hopefully arrives next year and reduces the burden here further. Partial field borrows would be huge, so that something like obj.set_bar(obj.foo()) would work even when foo() returns a borrow of obj; see the sketch below.
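A minimal sketch of the current limitation (the struct and method names here are made up): a method taking &mut self conflicts with a reference derived from one of its own fields, even though the two borrows touch disjoint data.

```rust
struct Obj {
    foo: Vec<i32>,
    bar: i32,
}

impl Obj {
    fn set_bar(&mut self, v: &[i32]) {
        self.bar = v.len() as i32;
    }
}

fn main() {
    let mut obj = Obj { foo: vec![1, 2, 3], bar: 0 };

    // Rejected today, even though `set_bar` never touches `foo`:
    // error[E0502]: cannot borrow `obj` as mutable because it is
    //               also borrowed as immutable
    // obj.set_bar(&obj.foo);

    // Today's workarounds: clone, or destructure into disjoint field borrows.
    let tmp = obj.foo.clone();
    obj.set_bar(&tmp);
    let Obj { foo, bar } = &mut obj;
    *bar = foo.len() as i32;
}
```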
Given the troubles with shipping Polonius, I imagine that there isn't much more room for improvements in "pure borrow checking" after Polonius, though more precise ways to borrow should improve ergonomics a lot more. You mentioned borrowing just the field; I think self-referential borrows are another.
The borrow checker is an approximation of an ideal model of managing things. In the general case, the guidelines that the borrow checker establishes are a useful way to structure code (though not necessarily the only way), but sometimes the borrow checker simply doesn't accept code that is logically sound. Rust is statically analyzed with an emphasis on safety, so that is the tradeoff made for Rust.
> Quite many languages support transpiling to C (even Go and Lua)
Source? I’m not familiar with official efforts here. I see one in the community for Lua but nothing for Go. It’s rare for languages to use this as anything other than a stopgap or a neat community PoC. But my point was precisely this: if you’re only targeting GCC/LLVM, you can just use their backends directly rather than transpiling to C, which only buys you some development velocity at the beginning (it’s easier to generate C from your frontend than the intermediate representation) at the cost of a worse binary output (since you have to encode the language’s semantics on top of the C abstract machine, which isn’t necessarily free). Specifically, this is why transpile-to-C makes no sense for Rust: it’s already got all the infrastructure to call the compiler internals directly without having to go through the C frontend.
> Rust borrow checker: the problem I see is not so much that it's hard to learn, but requires constant effort. In Rust, you are basically forced to use it, even if the code is not performance critical
You're only forced to use it when you’re storing references within a struct. In like 99% of all other cases the compiler will correctly infer the lifetimes for you. Not sure when the last time was you tried to write rust code.
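A small illustration of that split (the function and struct names are made up): elision covers ordinary signatures, and the annotation only becomes mandatory once a struct stores a reference.

```rust
// Elided: the compiler infers `fn first_word<'a>(s: &'a str) -> &'a str`.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// Storing a reference in a struct is where the lifetime must be written out.
struct Parser<'a> {
    input: &'a str,
}

fn main() {
    let text = String::from("hello world");
    println!("{}", first_word(&text)); // "hello"
    let p = Parser { input: &text };
    println!("{}", p.input);
}
```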
> Sure, Rust also supports reference counting GC, but that is more _verbose_ to use... It should be _simpler_ to use in my view, similar to Python.
Any language targeting the performance envelope rust does needs GC to be opt in. And I’m not sure how much extra verbosity there is to wrapping the type in Rc/Arc, unless you’re referring to the need to throw in a RefCell/Mutex to support in-place mutation as well, but that goes back to there not being an alternative easy way to simultaneously have safety and no runtime overhead.
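For reference, the opt-in looks roughly like this (a minimal sketch): Rc for shared ownership plus RefCell for in-place mutation, and that wrapper stack is the "verbosity" being debated.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared ownership (Rc) plus interior mutability (RefCell),
    // the opt-in alternative to the default ownership model.
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    let alias = Rc::clone(&shared); // a second owner; refcount bumped

    alias.borrow_mut().push(4); // in-place mutation through the alias
    println!("{:?}", shared.borrow()); // [1, 2, 3, 4]
}
```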
> The main disadvantage of Rust, in my view, is that it's verbose.
Sure, but compared to what? It’s actually a lot more concise than C/C++ if you consider how much boilerplate dancing there is with header files and compilation units. And if you factor in how few people actually know what the rule of 0 is and how to write exception-safe code, there’s drastically less verbosity, and the verbosity is impossible to use incorrectly. Compared to Python, sure, but then go use something like otterlang [1], which gives you close to Rust performance with a syntax closer to Python. But again, it’s a different point on the Pareto frontier: there’s no one language that could rule them all, because the design criteria are orthogonal and conflict with each other. And no one has figured out how to have a cohesive GC that transparently and progressively lets you move between no GC, refcounting GC, and tracing GC, despite foundational research a few years back showing that refcounting and tracing are part of the same spectrum and that high-performing implementations of both converge on the same set of techniques.
I agree transpiling to C will not result in the fastest code (and of course not the fastest toolchain), but having the ability to convert to C does help in some cases. Besides supporting some more obscure targets, I found it useful when building a language, for unit tests [1]. One of the targets, in my case, is the XCC C compiler, which can run in WASM and convert to WASM, and so I built the playground for my language using that.
> transpiling to C (even Go and Lua)
Go: I'm sorry, I thought TinyGo internally converts to C, but it turns out that's not true (any more?). That leaves https://github.com/opd-ai/go2c which uses TinyGo and then converts the LLVM IR to C. So, I'm mistaken, sorry.
> You're only forced to use it when you’re storing references within a struct.
Well, that's quite often, in my view.
> Not sure when the last time was you tried to write rust code.
I'm not a regular user, that's true [2]. But I do have some knowledge in quite many languages now [3] and so I think I have a reasonable understanding of the advantages and disadvantages of Rust as well.
> Any language targeting the performance envelope rust does needs GC to be opt in.
Yes, I fully agree. I just think that Rust has the wrong default: it uses single ownership / borrowing by _default_, and RC/Arc is more like an exception. I think most programs could use RC/Arc by default, and only use ownership / borrowing where performance is critical.
> The main disadvantage of Rust, in my view, is that it's verbose.
>> Sure, but compared to what?
Compared to most languages, actually [4]. Rust is similar to Java and Zig in this regard. Sure, we can argue the use case of Rust is different than eg. Python.
> I'm not a regular user, that's true [2]. But I do have some knowledge in quite many languages now [3] and so I think I have a reasonable understanding of the advantages and disadvantages of Rust as well.
That is skewing your perception. The problem is that how you write code changes after a while, and two things happen: you learn to write code that leverages the compiler-inferred lifetimes better, and the lifetimes fade into the noise. It only seems really annoying, difficult, and verbose at first, which is what can skew your perception if you don’t actually commit to writing a lot of code and reading others’ code so that you become familiar with it.
> Compared to most languages, actually [4]. Rust is similar to Java and Zig in this regard. Sure, we can argue the use case of Rust is different than eg. Python.
That these are the languages you’re comparing it to is a point in Rust’s favor: it’s targeting a significantly lower-level, higher-performance class of language. So Java is not comparable at all. Zig, however nice, is fundamentally not a safe language (more like C with fewer razor blades) and is inappropriate from that perspective. Like I said, it fits a completely different Pareto frontier: it’s strictly better than C/C++ on every front (even with the borrow checker it’s faster and less painful to develop in), and people are considering it in the same breath as Go (also unsafe and not as fast), Java (safe but not as fast), and Python (very concise but super slow, and code that is often low quality historically).
Which language would you classify as not corp owned?
It’s also weird to include Java and Swift in that list, considering both AFAIK are maintained by separate foundations. Java from Sun is now predominantly OpenJDK, with some remaining proprietary Sun/Oracle bits, and OpenJDK is the reference open-source implementation used by most everyone.
> Java from Sun is now predominantly OpenJDK, with some remaining proprietary Sun/Oracle bits, and OpenJDK is the reference open-source implementation used by most everyone.
Note that Oracle contributes around 90% of the work to OpenJDK. If they decided to stop working on it, there would be a big gap to fill.
That’s generally true of most languages. Rust is struggling with this right now.
I’d say though that Oracle is highly unlikely to stop working on Java and Google is still invested in the JDK even though they’re trying to shift new code in this space towards Kotlin (another “corp owned” language)
For a significant portion of time Python was funded by Google and Meta, and maybe some other corps.
Zig doesn’t have any serious adoption in the industry yet but if/when it does I’d expect corps to be hiring the language devs.
JS is governed by a consortium, but it’s filled primarily with Google and Apple engineers.
Same goes for C/C++: lots of Apple, MS, and Google engineers.
Elixir I’m not sure about. Rust maintainers, as it turns out, were largely employed by Amazon until the most recent culling.
It’s not surprising. This is technically difficult work and if the language is important to a corp they’ll hire the maintainers. There needs to be a funding source and in the industry that typically means a for profit company paying the salary of the people moving things forward. Indeed - it’s one of the things Rust is struggling with for now.
They fund it cos they want to use it for their thing. Does not mean they own them. They are all community governed projects.
Rust, Julia, Typescript on the other hand are governed by Corps. They are not community projects.
Elixir is BDFL (a good one) last I checked. Don't know if they became a company or foundation.
Zig is, for all purposes, a good example of a community-governed project. It's in production at Bun and TigerBeetle. But also, it's not yet production-ready (v1.0), so their current trend makes sense.
But I could've been wrong with JS and C. Not sure about their governance now that I think about it.
This is patently wrong on at least several of these.
Rust is explicitly a community project having been born out of a non-profit, and if you’re discounting corp-funded but community driven that’s definitely Rust. If not, please indicate the corp that’s driving Rust.
Zig is a BDFL project like Python was (not sure how it is these days) - community contributes sure, but Andrew makes the big calls and directional changes.
> Rust is explicitly a community project having been born out of a non-profit, and if you’re discounting corp-funded but community driven that’s definitely Rust. If not, please indicate the corp that’s driving Rust.
Non-profit doesn't mean community project. The Rust Foundation is a non-profit 501(c)(6), which is the non-profit category for trade associations and the like. It's not a charity categorization. It's run by corporate members and works only for the members, which are - surprise - corporates. A community member like you or me doesn't have any say (unless you have $325k per year to pay) - https://rustfoundation.org/get-involved/. This is the same case with the Linux Foundation as well. It's NOT a community project. The only difference is, Linus has more say cos the trademark is his.
The PSF and the Zig Software Foundation are charity/community projects cos they are non-profit 501(c)(3)s, categorized as public charities working for the good of the people. You and I can have more say in them. NOT THE CASE WITH RUST.
>Which language would you classify as not corp owned?
I would like to respectfully disagree with you there as well.
The above was the context. I was replying to this which opened the conversation.
Not to mention, end users and consumers don't get a say in corp-funded projects. Everything works as long as it aligns with the goals of the corp, not otherwise.
I assume training-set components also have priorities: low-priority data is seen only a few times at the beginning of pretraining, while higher-priority data is trained on multiple times, all the way to the end.
Ha ha nice one - when your startup is Facebook you'll need that, not for your 12 users.
The reason startups get to their super-Kubernetes, 6-layer, mega-AWS-powered, ultra-cached, hyper-pipelined, ultra-optimised, web-queued application with no users is because "but technology X has support for an eventually consistent in-memory caching layer!!"
What about when we launch and hit the front page of HN how will the site stay up without "an eventually consistent in-memory caching layer"?
No, it only uses the same LLVM compiler passes, and you enable certain optimizations locally via macros if you want to allow reordering in a given expression.
I think the other replies are overcomplicating this.
+ is a binary operation, and a+b+c can’t be interpreted without knowing whether one treats + as left-associative or right-associative. Let’s assume the former: a+b+c really means (a+b)+c.
If + is commutative, you can turn (a+b)+c into (b+a)+c or c+(a+b) or (commuting twice) c+(b+a).
But that last expression is not the same thing as (c+b)+a. Getting there requires associativity, and floating point addition is not associative.
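Spelling that out: commutativity alone only lets you swap the operands of an individual +, so starting from (a+b)+c the reachable forms are

```latex
(a+b)+c \;=\; (b+a)+c \;=\; c+(a+b) \;=\; c+(b+a)
```

and (c+b)+a is not among them: it pairs a with a sum containing c at the top level, which requires regrouping, i.e. associativity.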
"a+b+c" doesn't describe a unique evaluation order. You need some parentheses to disambiguate which changes are due to associativity vs commutativity. a+(b+c)=(c+b)+a should be true of floating point numbers, due to commutativity. a+(b+c)=(a+b)+c may fail due to the lack of associativity.
You're supposed to do (a+b) first to demonstrate the effect, because a floating-point subtraction that results in a number near zero is sensitive to the rounding of the operations before it (worst case, a non-zero difference comes out as zero), which can introduce a huge relative error when a and b are very similar numbers.
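A concrete instance of that worst case (a minimal sketch): at magnitude 1e16 the gap between adjacent f64 values is 2.0, so an added 1.0 is lost to rounding before the subtraction ever happens.

```rust
fn main() {
    let big = 1.0e16_f64;
    // The spacing between adjacent f64 values at 1e16 is 2.0,
    // so `big + 1.0` rounds back to exactly `big`...
    let sum = big + 1.0;
    // ...and the subtraction that "should" recover 1.0 yields 0.
    println!("{}", sum - big); // 0
}
```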
IEEE 754 doesn't (usually) distinguish between different NaN encodings for the purposes of semantics--if the result is a NaN, it doesn't specify which NaN the result is. Most hardware vendors implement a form of NaN propagation: when both inputs are NaN, one of the operands is returned, for example, always the left NaN is returned if both are NaN.
As a side note: all compilers I'm aware of make almost no guarantees on preserving the value of NaN payloads, hence they consider floating-point operations to be fully commutative, and there's no general way to guarantee that they evaluate in exactly the order you specified.
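A small probe of that behaviour (deliberately platform-dependent, which is the point): build two quiet NaNs with different payloads and see whose bits survive an addition.

```rust
fn main() {
    // Two quiet NaNs differing only in payload.
    let a = f64::from_bits(0x7ff8_0000_0000_0001);
    let b = f64::from_bits(0x7ff8_0000_0000_0002);

    // IEEE 754 only requires the result to be *a* NaN; which payload
    // comes back depends on the hardware's propagation rule and on how
    // the compiler ordered the operands.
    let r = a + b;
    println!("{:#018x}", r.to_bits());
}
```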
For those to be equal you need both associativity and commutativity.
Commutativity says that a*b = b*a, but that's not enough to allow arbitrary reordering. When you write a*b*c, depending on whether * is left- or right-associative, that means either a*(b*c) or (a*b)*c. If those are equal, we say the operation is associative. You need both properties to allow arbitrary reordering. If an operation is only commutative, you can turn a*(b*c) into a*(c*b) or (b*c)*a, but there is no way to put a in the middle.
We’re in very nitpicky terminology weeds here (and I’m not the person you’re replying to), but my understanding is “commutative” is specifically about reordering operands of one binary op (4+3 == 3+4), while “associative” is about reordering a longer chain of the same operation (1+2+3 == 1+3+2).
Edit: Wikipedia actually says associativity is definitionally about changing parens[0]. Mostly amounts to the same thing for standard arithmetic operators, but it’s an interesting distinction.
It is not a nit, it is fundamental: a•b•c is associativity, specifically operator associativity.
Rounding and eventual underflow in IEEE mean that an expression X•Y, for any algebraic operation •, produces, if finite, a result (X•Y)·(1 + ß) + µ, where |µ| cannot exceed half the smallest gap between numbers in the destination’s format, |ß| < 2^-N, and ß·µ = 0 (µ ≠ 0 only when underflow occurs).
And yes, that applies to binary operations only.
a•b•c is really (a•b)•c assuming left operator associativity, one of the properties that IEEE doesn't have.
IEEE 754 floating-point addition and multiplication are commutative in practice, even if there are exceptions with NaNs etc.
But remember that commutativity is about the operations (+, ×), which are binary operations: a+b=b+a and a·b=b·a. You can still get accumulated rounding errors from iterated applications of those binary operations.