Hacker News | saghm's comments

The obvious difference between UNIX tools and LLMs is the non-determinism. You can't necessarily reason about what the output will be, and then continue to pipe into another LLM, etc., and eventually `eval` the result. From a technical perspective you can do this, but the hard part seems like it would be making sure it doesn't do something you really don't want it to do. I'd imagine that any deviations from your expectations in a given stage would be compounded as you continue to pipe along into additional stages that might have similar deviations.

I'm not saying it's not worth doing, considering how the software development process we've already been using as an industry ends up with a lot of bugs in our code. (When talking about this with people who aren't technical, I sometimes like to say that the reason software has bugs in it is that we don't really have a good process for writing software without bugs at any significant scale, and it turns out that software is useful for enough stuff that we still write it knowing this). I do think I'd be pretty concerned with how I could model constraints in this type of workflow though. Right now, my fairly naive sense is that we've already moved the needle so far on how much easier it is to create new code than review it and notice bugs (despite starting from a place where it already was tilted in favor of creation over review) that I'm not convinced being able to create it even more efficiently and powerfully is something I'd find useful.


I'm not sure what your definition of "nice" is, but mine doesn't include saying most of what's here: https://en.wikipedia.org/wiki/James_Watson#Comments_on_race

> In 2007, the scientist, who once worked at the University of Cambridge's Cavendish Laboratory, told the Times newspaper that he was "inherently gloomy about the prospect of Africa" because "all our social policies are based on the fact that their intelligence is the same as ours - whereas all the testing says not really".

> While his hope was that everybody was equal, he added, "people who have to deal with black employees find this is not true".

Yeah, pretty racist


In 2013, I sat in on one of his talks at the Salk Institute. This guy was one of the most openly racist and sexist people I've ever seen. He spent 5 minutes shitting on the former NIH head for not funding him because she was a "Hot blooded Irish woman"

This is the sort of turn-of-the-century Mr. Burns type racism that I don't think most Americans even remember.


I always wonder with that kind of racist explanation, how the line of reasoning goes.

Suppose for the sake of argument, there's a place where everyone has 10 IQ points less, on average, than the West.

The Flynn effect is about 14 points over a few decades.

How do you square those things? Did the West not have a society a few decades ago? Is there some reason you can't have civilization with slightly dumber people? There was a time when kids were malnourished in the west, and possibly dumber as a result. Also, not everyone in society makes decisions. It tends to be very few people, and nobody thinks politicians are intelligent either.

I've never heard an explanation of intelligence that had any actual real-world impact on a scale that matters to society.

The explanation would have to have quite a lot of depth to it, as you have to come up with some sort of theory connecting how people do on a test to whatever you think makes a good society.


In clean game-theoretic terms, without making any moral or ideological claims about “who is smarter”, we’ll treat underlying advantage as any positional asset (intelligence, wealth, charisma, skill, social capital, etc.). The question is: If a subset of players has an advantage in a repeated, large-group game, how do they best play to maximize payoff and stability?

Here's the strategy ChatGPT came up with (among many others):

What Not to Say (Avoid These)

Don’t describe intelligence or talent as intrinsic, innate, or permanent. This triggers resentment and identity defense.

Don’t use language that signals “I am ahead of you.”

Don’t use your advantage to win every interaction—save leverage for important conflicts.

People tolerate talent. They hate being made aware of being lower in the hierarchy.

_____

Is it possible the backlash to Watson could be viewed from this game-theoretic perspective, rather than as proof that he was racist and wrong?


Hmmm, let's see.

How many people died in wars in the 20th century? How many of them did NOT originate in Europe and Asia?

How much of the climate change that has fouled up the earth we depend on is NOT attributable to economic activity in the West?

Is there a Western/Asian country where late-stage capitalism and the devaluation of the common has not taken hold?

I could go on...

Are these evidence of intelligence? This is not a rhetorical question.


Arguably, wars can be generally indicative of intelligence: higher-ability groups are more likely to choose war when their greater power raises the expected payoff of fighting. Climate change is also related to intelligence, as it can be argued that more advanced societies end up consuming/producing more and thus create more climate-related waste. The end result might not be desirable, but it's probably something these advanced societies can deal with.

I'm not sure I understand your point around late-stage capitalism and the devaluation of the common...

Are you really arguing that the western world has not been more advanced?


The European/Asian wars of the 20th century (ironically started by people who thought of themselves as superior races) wiped out tens of millions of lives and an untold amount of wealth. They led to the collapse of entire empires and nations. Surely you are not claiming that the wars were a net positive, are you? One indicator of a lack of intelligence is engaging in actions that are against your own interests.

Also, with climate change, may I remind you of a quote from Agent Smith in the Matrix trilogy:

> I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species. I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet.

Industrialization followed this pattern. Shitting where you live is a textbook case of stupid.

> The end result might not be desirable, but it's probably something these advanced societies can deal with.

The people dying in extreme floods and fires tell me otherwise, and it's likely only going to get worse.


> whereas all the testing says not really

This part is (still) true. Is that fact racist?


On top of that, I personally know several women scientists who had to put up with his misogyny first-hand.

There was irony involved.

> They were practically throwing stuff when I got there and explained that lexicographical comparison of the strings would not work.

Version numbers can go to 10!?!


Getting failures later after coercing something to a reference is even easier than that; just dereference a null pointer when passing it to an argument that takes a reference; no warnings or errors! https://godbolt.org/z/xf5d9jKeh

If you are working with C++ in this day and age, regardless of which compiler you use to output your actual binaries, you really owe it to yourself to compile as many source files as possible with other diagnostic tools, foremost clang-tidy.

It will catch that and a lot of iffy stuff.

If you want to go deeper, you can also add your own diagnostics, which can have knowledge specific to your program.
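As a starting point, a minimal `.clang-tidy` config might look like this (the check groups are real clang-tidy check families; which ones to enable is my own suggestion, and the static analyzer's null-dereference check is what would flag the example above):

```yaml
# Enable the bug-prone and static-analyzer check families;
# clang-analyzer-core.NullDereference catches null-to-reference coercions.
Checks: 'bugprone-*,clang-analyzer-*,performance-*'
WarningsAsErrors: 'clang-analyzer-core.NullDereference'
```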

https://clang.llvm.org/extra/clang-tidy/QueryBasedCustomChec...


The funny thing is that if you enable ubsan and -O, it optimizes it to unconditionally call __ubsan_handle_type_mismatch_v1; I wonder if it would be tractable to warn when emitting unconditional calls to ubsan traps...

Interestingly, GCC (but not Clang) detects the UB but doesn't warn about it, and emits "ud2" with -O2: https://godbolt.org/z/61aYox7EP

I disavow AI because while neither it nor Brenda is perfect, Brenda consistently follows the instructions and can have an actual conversation with me to take feedback into account going forward. An AI will happily participate in those conversations, but whether it actually improves based on that feedback is not at all consistent. It's also a lot easier to find other humans who work meaningfully differently than Brenda if I prefer to hire someone with a different working style, whereas right now the issues I describe above with BrendaBot are going to be essentially the same with any other AI I try to use instead.

Or even worse, it's public posturing with full knowledge that the call isn't likely to happen, and it wouldn't resolve anything even if it did.

Yep, and the logical chain itself can often make it pretty clear where the discrepancy lies. In order for it to have a noticeable effect, you'd need to be looking at people smart enough to correctly identify in advance the circumstances that will make them happy and then be able to influence things on average more than factors outside their control influence them. I don't think most "smart" people are more smart than life is random, without even getting into how common the requisite level of self-awareness is.

> It is entirely their fault. If no one agrees to do performative research, the problem will be solved.

Right, and the prisoner's "dilemma" isn't a real thing; everyone knows it's their own fault for not just all picking the decision that gives them all the best outcome. Every individual within a network effect is obviously responsible for the outcomes the entire system produces.


Everyone is responsible for their own actions, yes.

They're not responsible for the situations that end up encouraging certain actions though, and they shouldn't be blamed for not being able to solve the collective action problem[1]. I'd argue that the only blame that's fair to place on them in situations like this is from direct results of their individual actions, not the propagation of incentives that are beyond their power to change regardless of their own individual decisions.

If you're willing to blame someone for not acting against their own individual interest, doesn't it make more sense for it to be the people who are going out of their way to reward others for acting in that way?

[1]: https://en.wikipedia.org/wiki/Collective_action_problem


Without any specific implementation of a constraint it certainly can happen, although I'm not totally sure that it's something to be concerned about in terms of a DOS as much as a nuisance when writing code with a bug in it; if you're including malicious code, there are probably much worse things it could do if it actually builds properly instead of just spinning indefinitely.

Rust's macros are recursive intentionally, and the compiler implements a recursion limit that IIRC defaults to 64, at which point it will error out and mention that you need to increase it with an attribute in the code if you need it to be higher. This isn't just for macros though, as I've seen it get triggered before with the compiler attempting to resolve deeply nested generics, so it seems plausible to me that C compilers might already have some sort of internal check for this. At the very least, C++ templates certainly can get pretty deeply nested, and given that the major C compilers are pretty closely related to their C++ counterparts, maybe this is something that exists in the shared part of the compiler logic.


C++ also has constexpr functions, which can be recursive.

All code can have bugs, error out and die.

There are lots of good reasons to run code at compile time, most commonly to generate code, especially tedious and error-prone code. If the language doesn't have good built-in facilities to do that, then people will write separate programs as part of the build, which adds system complexity, which is, in my experience, worse for C than for most other languages.

If a language can remove that build complexity, the semantics still need to be clear enough to the average programmer. For example, Nim's macro system originally was highly appealing (and easy) to me as a compiler guy, until I saw how other people find even simple examples completely opaque-- worse than C macros.


D doesn't have macros, quite deliberately.

What it does have are two features:

1. compile time evaluation of functions - meaning you can write ordinary D code and execute it at compile time, including handling strings

2. a "mixin" statement that has a string as an argument, and the string is compiled as if it were D source code, and that code replaces the mixin statement, and is compiled as usual

Simple and easy.


So if someone goes and pirates something on their own time on a whim, it's a criminal issue, but if 100 people collectively pirate a few orders of magnitude more stuff because their boss's boss's boss's boss told everyone to, it's just a licensing issue? Here's my suggestion: either throw out all the convictions that have ever occurred for software piracy and allow those people to sue for reparations, or charge the people making those backroom deals with extortion and obstruction of justice. Either it should be a crime for everyone or no one; being rich enough to bribe your way out of it isn't just, and it's preposterous to claim otherwise.
