
What's unhinged about a periodic integrity check? It doesn't seem much different from a startup/boot check. If you're talking about security, you've come to the wrong OS.

"Not to mention, the idea of the OS owning the machine "

I agree--back when computers had <=4MB of RAM, I would've called hogging unused memory for some selfish speculative future use "professional malpractice".


"What's more, Microsoft never paid the really big bucks like the FAANG companies"

I never knew this open secret. In my day MSFT was very glamorous, and I guess something like Oracle played the role you're ascribing to MSFT now. I wonder what their strategy was? (I tend to doubt this was a careless/unexamined decision.) Maybe they figured that paying extra for individuals doesn't get you much if you have enough structure in place--a Bill Belichick approach to hiring. Is the connection you're making (FAANG salaries == better products) accepted as true?


I only have my own observations of their products and secondhand info, but my understanding is that Microsoft simply doesn't care about engineering. They have a sales pitch (product idea), then they build and ship the MVP that can earn money. If something sells, they figure they can solve scaling by throwing enough money at it. Classic b-tier tech company (and startup) garbage. They never work out the unit economics, etc.

FAANG (at least the few I’m familiar with) tend to be engineering companies. They hire talented engineers who can work from first principles and build products with profitable unit economics that solve interesting new problems. I don’t think Microsoft even knows what software engineering would mean.


Good question. For a long time I think the justification was location: Microsoft is in Seattle, and it’s only the Bay Area that is getting inflated salaries.

Can I ask what product you chose?

For in-house monitoring it's tricky, because pretty much every vendor who makes more than bare-bones ones goes out of business or discontinues the product 12-18 months after you've bought it (Air Mentor, Awair, BlueAir, EdiGreen, Foobot, the list goes on). The best ones I've found are QingPing's: colour LCD touch-screen display with WiFi access, they've been around for years, regularly update the firmware and hardware, actually provide real product support, and have things like MQTT integration if you're using HA.
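If you do wire one of these into HA via MQTT, the useful bit is usually a small threshold check on the published readings. This is a minimal sketch, assuming a JSON payload with `co2_ppm`/`pm25`/`tvoc_ppb` fields -- the actual topic names, field names, and units vary by model and firmware, so inspect your broker's traffic first:

```python
import json

# Hypothetical alert thresholds; pick values that suit your own rooms.
ALERT_THRESHOLDS = {"co2_ppm": 1000, "pm25": 35, "tvoc_ppb": 500}

def parse_reading(raw: str) -> dict:
    """Parse a JSON sensor payload, keeping only the metrics we track."""
    data = json.loads(raw)
    return {k: float(v) for k, v in data.items() if k in ALERT_THRESHOLDS}

def over_threshold(reading: dict) -> list:
    """Return the metrics that exceed their alert thresholds."""
    return [k for k, v in reading.items() if v > ALERT_THRESHOLDS[k]]

# Example payload (made up for illustration):
sample = '{"co2_ppm": 1250, "pm25": 12, "tvoc_ppb": 640}'
print(over_threshold(parse_reading(sample)))  # -> ['co2_ppm', 'tvoc_ppb']
```

In a real setup you'd call `parse_reading` from an MQTT client's message callback (e.g. paho-mqtt's `on_message`) rather than on a hardcoded string.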


I have a few of those around my house as well. However, I've noticed that my VOC readings are not consistent, even if nothing in the room has changed. I've reached out to their support about it, but they're not much help. One thing I have noticed is a correlation between VOCs and CO2: one (CO2) seems to impact the other (VOC)...which I don't think is supposed to be the case. I was digging through the forums a while back, but the only conclusion I came to is that you can't trust the VOC readings on these (or most consumer) devices...just too many variables, and the sensors don't know/measure the full picture. It still bothers me, though, to look into our son's room and see elevated VOC measurements.

There was a moment where the VOC measurement was stuck at an elevated level. The suggested solution was to blow some air in there to knock (presumably) dust off the sensor, which worked. It could be that the VOC sensor is not great, but it could also be that it gets dirty.

I'm ignorant of the tech here. But I have noticed that ctrl-F search doesn't work for me on these longer chats, which is what made me think they were doing something like virtual scrolling. I can't understand how the UI can get so slow if a bunch of the page is being swapped out.

Ctrl-A for select all doesn't work either. I actually wondered how they broke that.

I don't see how to interpret your claims. How do you yourself know that you're right when you "recognize" Claude or ChatGPT? How do you know how much of the text you don't recognize as any LLM is actually LLM-generated? My recollection is whenever I've seen data on this--the educators who think they can spot students cheating--the conclusion is people are really bad at identifying LLM-generated content.

I'm not claiming to be able to spot 100% of LLM-written output.

However the default tone and output style of Claude and ChatGPT are very obvious.

> My recollection is whenever I've seen data on this--the educators who think they can spot students cheating--the conclusion is people are really bad at identifying LLM-generated content.

If you can share that data we can discuss it, but there's nothing really to discuss without a source.

Among people who review a lot of user-submitted content, it becomes easy to spot the consistent voice of LLM writing. Wikipedia has a full page on it: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing


I don't know how, but it works beautifully for me on Windows 11. What I mean is, I have been using Windows for decades and I do not like any changes at all; they are all forced on me. But this change successfully turned me around. I find I rarely use File Explorer/file managers any more and access most applications and documents through the search.

I do remember it sucking on previous versions. I did use winaero tweaker to turn off the web results (and many other annoyances).


When was this golden age of western civilization again? Like 10 years ago? Are you suggesting we were in this golden age then? I mean, the paper this link is discussing is from 2014, so I guess it was more like 15 years ago that the golden age sunsetted?

The golden age of western civilisation was n - 5 years ago, where n is the year the speaker got their first job.

The speaker, as in the person who started talking about golden ages? Well, surely if they think there was a golden age, it couldn't be now, what with all the late-stage capitalism and rising phobias and supremacies and people voting the incorrect people into office or countries out of unions.

What do you mean when was it again? I don't understand your questions or how they relate to what I wrote.

They are insinuating that the consensus you're talking about never existed as you have described it.

If anything, I think the Internet has made it easier to expose bad science. People like Andrew Gelman and websites like PubPeer have had a huge impact on the practice of the social sciences (psychology especially) just using blogs. In the past he would have been ignored. Journals and authors do their best to ignore, dismiss, and discredit him now. Having a direct voice to the public is what saves him.

Nobody is looking at that, they're watching TikTok and ReelShorts

That would be strange and misguided because I didn't talk about a consensus, I was talking about a mechanism for consensus. And consensus has existed many times on many issues now, and then.

Right, the mechanism you mentioned, reason, never existed. That's how I read their comment, anyway.

Yes I have noticed people get extremely emotional and upset at the suggestion that not everything in society may be monotonically improving.

Sorry for being flippant. My analysis is that the mix of reason and emotion is unchanged over time. Take the case of this management science paper. What is irrational about defending a bad paper you wrote when it brings you all the accolades and benefits Andrew has described? The authors' personal goals aren't aligned with the public's goal of getting good science. That's not a failure of reason. Maybe it's selfish. That's different.

That's great, I'd love to see this analysis of yours.

Certainly, knowing how many (and which) people are working on a problem you are looking at, and how long it will take you to solve it, are critical skills for a working researcher. What kind of answer are you looking for? It's hard to quantify. Most people suck at this type of assessment as PhD students and then get better as time goes on.

The link has an entire section on "The infeasibility of finding it by brute force."
