Hacker News | porridgeraisin's comments

Not too much window to see anyway

Yeah noticed it too. I wonder why

I don't see why iOS vs Android market share is relevant here. Any company making a mobile app will not ignore a (very affluent) 30% of the market, so all consumer apps _will_ have an iOS version.

I don't know anything about mobile share of overall SW development.


Steam Machine is x86

There's also another way to interpret that. Take me, for example: I've had the same PC for 5 years now, and before that I had the same one for 10 years. Basically, I run it till I really, really can't. I can afford replacements more often than once every 10 years, but I don't buy one simply because it's not necessary. In that situation, if I had a similar fondness for Valve, I might go ahead and buy it and cut short my not-for-affordability-reasons wait.

I know a lot of people that behave this way with phone purchases.


I've heard this for many years now, can you give me a concrete example or two?

Note: I'm just curious, not even from the US


Not bringing it back is crazy though!

In India too, many colleges dropped their entrance exams and used 12th-standard marks to admit people that year. But the next year it was back to normal.


TIL that in the Python REPL, `_` automatically holds the previous expression's result. That's cool.
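The mechanism behind it is `sys.displayhook`, which the REPL calls on each expression's result: it prints the repr and stashes the value in `builtins._`. Calling it directly from a script shows the same behavior:

```python
import builtins
import sys

# The REPL invokes sys.displayhook on every expression result.
# The default hook prints repr(value) and stores value in builtins._
sys.displayhook(2 + 3)    # prints 5 and sets builtins._ to 5

result = builtins._ * 10  # _ now holds the previous result, 5
print(result)             # prints 50
```

In the REPL itself you just type `_` directly, since `builtins` names are always in scope.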

First of all, consider asking "why's that?" if you don't know a fairly basic fact; there's no need to go all reddit-pretentious with "citation needed" as if we were deep in a knowledgeable discussion of some niche detail and suddenly came across a surprising fact.

Anyways, a nice way to understand it is that the LLM needs to "compute" the answer to a given question. Some questions need more compute to answer (think complexity theory). The only way an LLM can do "more compute" is by outputting more tokens, because each token takes a fixed amount of compute to generate - the network is static. So if you encourage it to output more and more tokens, you're giving it the opportunity to solve harder problems. Apart from humans encouraging this via RLHF, it was also found (in the DeepSeekMath paper) that RL with GRPO on math problems automatically encourages this (it increases sequence length).
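A toy sketch of the point (all names hypothetical, and it deliberately ignores that attention cost actually grows with context length): in an autoregressive decode loop, total compute scales linearly with the number of tokens emitted, so longer outputs are the model's only knob for "thinking harder".

```python
# Toy model: each generated token costs one fixed-size forward pass.
def generate(prompt, n_new_tokens, cost_per_pass=1.0):
    """Return (tokens, total_compute) for a decode loop of n_new_tokens steps."""
    tokens = list(prompt)
    total_compute = 0.0
    for _ in range(n_new_tokens):
        total_compute += cost_per_pass  # same cost every step: the net is static
        tokens.append("<tok>")          # stand-in for the sampled token
    return tokens, total_compute

terse = generate(["Q"], 10)[1]     # 10 units of compute
verbose = generate(["Q"], 100)[1]  # 100 units: 10x the "thinking" budget
```

Encouraging longer outputs (via RLHF, or the RL+GRPO effect mentioned above) is, in this picture, just raising `n_new_tokens`.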

From a marketing perspective, this is anthropomorphized as reasoning.

From a UX perspective, they can hide this behind thinking... ellipses. I think GPT-5 on chatgpt does this.


A citation would be a link to an authoritative source. Just because some unknown person claims it's obvious that's not sufficient for some of us.

Expecting every little fact to have an "authoritative source" is just annoying faux intellectualism. You can ask someone why they believe something and listen to their reasoning, then decide for yourself whether you find it convincing, without invoking such a pretentious phrase. There are conclusions you can think through and reach without an "official citation".

Yeah. And in general - not taking a potshot at who you replied to - the only people who place citations/peer review on that weird faux-intellectual pedestal are people who don't work in academia. As if publishing something in a citeable format automatically makes it a fact that doesn't need to be checked for reason. Give me any authoritative source, and I can find you completely contradictory or obviously falsifiable publications from their lab. Again, not a potshot; that's just how it is - lots of mistakes do get published.

I was actually just referencing the standard Wikipedia annotation that means something approximately like “you should support this somewhat substantial claim with something more than 'trust me bro'”

In other words, 10 pages of LLM blather isn’t doing much to convince me a given answer is actually better.


I approve this message. For the record I'm a working scientist with (unfortunately) intimate knowledge of the peer review system and its limitations. I'm quite ready to take an argument that stands on its own at face value, and have no time for an ipse dixit or isolated demand for rigor.

I just wanted to clarify what I thought was intended by the parent to my comment, especially since I thought the original argument lacked support (external or otherwise).


People love to assert all kinds of meritless things about AI as if they were self-evident when they are anything but.

What's FEX? I wasn't able to find it with a Google search (didn't try too hard, admittedly).


An x86-to-ARM compatibility layer they're using to run Windows games on the Machine/Frame.

Steam Machine is x86_64

The VR headset isn't, and it's capable of both running games standalone and displaying games through the dongle from whatever PC you run.
