"I have spent more than $500 on pants before, because I'm lucky to be in a position where these things are a matter of taste and whim rather than budget, and don't really affect my finances much whether I buy them or not."
I don't think I have to explain to you how the gap between what you said, and what I wrote above, is what is causing offense here. You likely deserve 100% of your success, but it's just common sense to obscure the specifics of it if you are way out of band in relative terms.
It's like saying "You know, I never really get ill" at the cancer ward. Sure, it's true, but read the room.
Well, after all it’s HN and this is the kind of content that attracts much of the users. I’d certainly be more careful with that wording on a website that caters to a very different audience, but it’s not long ago when indiehackers posts of people “bragging” about their successes were consistently at the top of the front page.
Not convinced I misread the room, especially considering the upvotes.
> don't really affect my finances too much whether I do or don't buy them
Isn't this the same brag as before?
I can't tell how this is different from throwing some numbers in the mix; either way, the person relating their personal experience expresses that they have fuck-it-bucks.
Not naming numbers is precisely the point, because it obfuscates the real size of the gap, which in the end is what everything is about. The gap creates the offense. Everybody knows there are rich people, but being confronted with exactly how rich, down to a specific number, is the offensive part (if done by that rich person without any clear reason).
I'm not sure why people keep piling on to pretend this is such a normal thing; this is literally why people don't discuss salaries despite it technically being in their own interest: specifics ground the fuzzy notion of inequality in reality like nothing else.
The offensive post inflates the perceived inequality from "$500 is too much for pants" to "10k means nothing to me", while my version leaves the specifics out of the conversation. In my version, the person could put the level of "too expensive for pants" at 1k, still an order of magnitude lower than the offensive post.
Finally, my version explicitly acknowledges that this is a privileged position to be in, because that signals awareness that it is an exceptional situation (which I'm not sure the author of the offensive post is aware of, even now).
Just the convenience of having an ordinal number to say? Rather than saying "chapter 0, chapter 1, chapter 2" one can say "the fourth chapter"? Or is it the fact that the chapter with number 4 has 3 chapters preceding it?
At first glance I find this all rather meaningless pedantry.
If I use ordinal numbers to count, then counting tells me the number of objects. Sometimes I want to know the number of objects.
EDIT: Yeah, I don't know why book chapter labels shouldn't start with "0". It seems fine to me. They could use letters instead of numbers for all I care.
When I'm counting letters it's more convenient to go "one, two, three." When I'm finding the offset between letters it's more convenient to go "zero, one, two." Neither of these methods is going to displace the other.
Definitions are fine, and I agree that "A" is the first letter. But that's no use to people who need to think clearly about the offset between "A" and "C" right now. Should I tell them they're wrong, they have to count to three and then subtract one? Because the dictionary says so?
Offset is an answer to the question "where does the Nth memory location start?". The answer is "after N-1 locations". It's the count of locations the reader needs to skip to reach the start of the Nth memory location.
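To make the two conventions concrete, here's a tiny illustrative sketch (variable names made up by me, not from anyone's post): counting gives you "how many", offsets give you "how far to skip".

```rust
fn main() {
    // Counting: "one, two, three" -> three letters (1-based ordinal).
    let letters = ['A', 'B', 'C'];
    let count = letters.len(); // 3

    // Offset: "zero, one, two" -> how many letters to skip
    // to get from 'A' to 'C' (0-based).
    let offset = ('C' as u8) - ('A' as u8); // 2

    println!("count = {}, offset = {}", count, offset);
    assert_eq!(count, 3);
    assert_eq!(offset, 2);
}
```

Same three letters, two different questions; neither convention is "wrong", they just answer different things.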
I don't know, but I feel like you are making a point out of something arbitrary. When I listen to an audiobook, everyone always says "Chapter 1", not "the first chapter", so why is this important?
I think extreme attention to arbitrary meaningless details is how we ended up with most of the rules in language that we are starting to collectively detest.
I'm not sure that's fair. It's like saying gamblers 'want' to gamble, or addicts 'want' to do drugs. It's deeper, more evil, and involves less autonomy than 'wanting' implies.
This isn't our politics in general. We do not see absolute glee at being mean and cruel from people with power on the left. Biden spent four years actively refusing to be vindictive towards Trump.
This is specifically Trump activating the politics of hatred on the right.
AFAIK it's theorized that much of the GDP growth of the US, as well as the growth of important stock market index funds, is mostly a result of a handful of tech companies surfing the AI hype.
In that case, a prolonged recession may occur (that would've occurred anyway), and the effect will be felt throughout the economy.
But, again, that's just a general recession triggered by the AI bubble bursting, i.e. AI no longer propping up the economy, so that in itself is not a bad thing. What the results would be in terms of severity or impact I wouldn't know; I don't think anyone knows.
Why do you think it would be a prolonged recession? The dot-com bust, for example, was just two quarters of GDP decline, followed by solid growth. 2007 was much worse because it was a crisis of financial institutions. The AI bubble may be bigger than dot-com, but feels more like it in that it is a narrow section of the economy. Even narrower than dot-com.
Well, I think nobody knows this stuff, so don't take my word for it. I think prolonged makes sense because AI is holding off a depression, but that depression does not have a singular obvious cause. I think there are a bunch of reasons to be pessimistic about the global economy, including of course (geo)politics, (trade) wars, extremism, stagnating production, etc. Unlike the dot-com bubble, current AI might not actually be a useful-but-overvalued tech. It may just not be that useful at all. In that case, a rally like dot-com's is out.
I am no expert here, what I remember is mostly from CS courses, but isn't the entire point of a formal program proof that you can reason about the combinatorics of all data and validate hypotheses about them?
It's one thing to say "objects of this type never have value X for field Y" or "this function only works on types U and V", but it's a lot more impressive to say "in this program, states X and Y are never reached simultaneously" or "in this program, state X is always followed by state Y".
This is super useful for safety systems, because you can express safety as these kinds of statements, which are falsifiable by the proof system. E.g.: "the ejection seat is only engaged after the cockpit window is ejected" or "if the water level exceeds X, the pump is always active within 1 minute".
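Ordinary Rust can at least gesture at the ejection-seat rule, even without a proof system. This is just an illustrative typestate sketch (all names invented by me, not from any real avionics code): engaging the seat is only a method on the "window already ejected" state, so ordering violations fail to compile rather than being proven after the fact.

```rust
// Typestate sketch: each state of the cockpit is its own type.
struct WindowClosed;
struct WindowEjected;

impl WindowClosed {
    // Consuming `self` means you can't keep using the old state.
    fn eject_window(self) -> WindowEjected {
        WindowEjected
    }
}

impl WindowEjected {
    // `engage_seat` only exists on `WindowEjected`, so the compiler
    // rejects engaging the seat before the window is gone.
    fn engage_seat(self) -> &'static str {
        "seat engaged"
    }
}

fn main() {
    let cockpit = WindowClosed;
    // cockpit.engage_seat(); // does not compile: no such method
    let open = cockpit.eject_window();
    println!("{}", open.engage_seat());
}
```

This encodes an ordering property in types; the timing property ("pump active within 1 minute") is exactly the kind of thing a type system alone can't express, which is where real proof tools come in.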
> isn't the entire point of a formal program proof that you can reason about the combinatorics of all data and validate hypotheses about them?
You're right. Validating input is part of this process. The compiler can trivially disprove that the array can be safely indexed by every valid integer a user can input. Adding a runtime check to ensure we have a valid index isn't a very impressive use of formal proofs, I admit; it's just a simple example that clearly demonstrates how SPARK can prove memory-safety properties.
But isn't the entire point of Rust's verbose type system to declare valid states at compile time? I don't exactly see why this can't be proven in Rust.
> But isn't the entire point of Rust's verbose type system to declare valid states at compile time?
Different type systems are capable of expressing different valid states with different levels of expressivity at compile time. Rust could originally express constraints that SPARK couldn't and vice-versa, and the two continue to gain new capabilities.
I think in this specific example it's possible to write a Rust program that can be (almost) verified at compile time, but doing so would be rather awkward in comparison (e.g., custom array type that implements Index for a custom bounded integer type, etc.). The main hole I can think of is the runtime check that the index is in bounds since that's not a first-class concept in the Rust type system. Easy to get right in this specific instance, but I could imagine potentially tripping up in more complex programs.
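The "custom array type plus bounded integer type" idea mentioned above can be sketched in a few lines. This is only my rough take on it (type names are made up): the single runtime check lives in the constructor, and after that every indexing operation is in range by construction.

```rust
use std::ops::Index;

// A bounds-checked index: a `BoundedIdx<N>` can only hold values < N.
#[derive(Clone, Copy)]
struct BoundedIdx<const N: usize>(usize);

impl<const N: usize> BoundedIdx<N> {
    // The one runtime check, at the boundary where user input comes in.
    fn new(i: usize) -> Option<Self> {
        if i < N { Some(Self(i)) } else { None }
    }
}

// A fixed-size array that is only indexable by a matching BoundedIdx.
struct Arr<T, const N: usize>([T; N]);

impl<T, const N: usize> Index<BoundedIdx<N>> for Arr<T, N> {
    type Output = T;
    fn index(&self, idx: BoundedIdx<N>) -> &T {
        // In range by construction: idx.0 < N was checked in `new`.
        &self.0[idx.0]
    }
}

fn main() {
    let arr = Arr([10, 20, 30]);
    if let Some(i) = BoundedIdx::<3>::new(2) {
        println!("{}", arr[i]); // prints 30
    }
    // Out-of-range indices are rejected at the boundary, not at use sites.
    assert!(BoundedIdx::<3>::new(5).is_none());
}
```

As the comment above notes, this isn't a proof: the `i < N` check is still a runtime check, and nothing stops you from writing a buggy `index` impl. It just shrinks the surface where a bounds bug can live.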
I am not a Rust expert either, but just a general remark: using a programming language like Rust as it's intended to be used, i.e. functional/imperative programming of the problem domain, does not lend itself well to proving or disproving the kinds of statements I showed above.
Yes, you could theoretically construct a Rust program that does not compile if some theorem does not hold, but this is often (unless the theorem is about types) not a straightforward Rust program for the problem at hand.
I also think that, although Rust is blurring the lines a bit, equating formal verification with type-checking is not a valid stance. A type checker is a specific kind of formal verification that operates on a program, but it will only ever verify a subset of all hypotheses: those about object types.
> In practice, however, LLMs don't seem to have any problem interpreting natural language instructions
I can think of a couple of reasons this may be the case.
1. There is a subset of English that you use unknowingly that has a socially accepted formal definition and so can be used as a substitute for a programming language. LLMs have learned this definition. Straying from this subset, or expecting a different formal definition, will result in errors.
2. The level of detail in your English description is such that ambiguity genuinely does not arise. Unlikely; you would not consider that "natural language".
3. English is not ambiguous when describing program features, and formal definitions can be skipped. Unlikely, because the entire product-owner role is built on the frequently exclaimed "that's not what I meant!".
I think it's #1, and I think that makes the most sense: through massive amounts of statistical data, LLMs have learned which natural language instructions cause which modifications in codebases, for a giant set of generic problems they have training data on.
The moment you do something new though, all bets are off.
> The time you spend learning Rust is so much time saved afterwards because the language is specifically designed to reduce the risk of programming errors
Great vision, challenging the "scale" of current AI solutions is super valid, if only for the reason that humans don't learn like this.
Architecture: despite other comments, I am not so bothered by the mmap (if read-only) but rather by the performance claims. If your total DB is 13 KB you should be answering queries at amazing speeds, because you're just running code on in-cache data at that point. The performance claim at that point means nothing, because what you're doing is not performance-intensive.
Claims: a frontal attack on the current paradigm would at least have to include real semantic queries, which I don't think is what you're currently doing; you're doing language analytics like NLP. Maybe this is how you intend to solve semantic queries later, but since it's not what you're doing now, that should be clear from the get-go. Especially because the "scale" of the current AI paradigm has nothing to do with how tokenization happens, but with how the statistical model is trained to answer semantic queries.
Finally, the example of "Find all Greek-origin technical terms" is a poor one because it is exactly the kind of "knowledge graph" question that was answerable before the current AI hype.
Nevertheless, love the effort, good luck!
(oh and btw: I'm not an expert, so if any of this is wrong, please correct me)
I'm not sure why you keep spinning this as a valid response to anything.
This is the full quote of the parent:
> As for people getting sick and dying, they either don’t care, or they want people to get sick and die.
Let's break it down. Let's say some of your actions are causing harm; there are basically three options:
1. you don't know this is happening
2. you know, but continue because you don't care, and you can make money not caring
3. you know, and somehow this is beneficial to you, unlikely but possible
(The default option, which is always available, is to stop operations, which they have obviously also not done.)
Since DuPont obviously knew this was causing harm, #1 is out, so #2 and #3 remain. This is just deduction by elimination, not a value judgement.
No amount of spinning this argument is going to change this. I think your last line here makes it obvious who's straw-manning.