
I have a coworker who usually says that billions of flies can't be wrong. Maybe we should just start eating what they eat.

Using Windows as a server feels like using your lounge room as a commercial kitchen. I can never shake the feeling that this isn't a serious place to do business.

I have this impression from years of using both Windows and Linux servers in prod.


This might not be sufficient for a lot of folks, and I do notice a bit of struggle here and there sometimes, but for someone like me who mostly uses one window at a time on the Mac, or two screens when I have an external monitor set up along with my laptop, this thing kinda does it for me (but then I have never been a heavy "tiling" user at all) – https://support.apple.com/en-in/guide/mac-help/mchl9674d0b0/...

> LLMs specialize in self-apologetic catastrophe

Quote of the year right there


In Windows, hardware vendors have a bad habit of installing useless stuff. The latest trick is that the motherboard contains a payload that gets automatically installed when installing Windows. I had no idea it was even possible. This 'feature' can apparently be disabled from the BIOS, but it needs to be done before installing Windows.

I happen to have both of those DLLs, but I had already disabled all ASUS-related services. I use this script to disable all services starting with "Asus" on startup. [1]

To disable the MyASUS auto-installer in the BIOS, go to Advanced; there is an option to disable auto-downloading of MyASUS in Windows. [2]

[1]: https://gist.github.com/Ciantic/76ade5f2731cbe87b70d17ff2898...

[2]: https://github.com/sammilucia/ASUS-G14-Debloating/blob/main/...


I've written a similar `useDerivedState` hook before, which is basically formalizing the lastValue !== value / setLastValue pattern that the docs teach you.
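
For reference, a minimal sketch of what I mean (hypothetical signature, simplified):

  import { useState } from "react";

  // Derived state that resets whenever the dependency changes, by storing the
  // last seen dependency in state and adjusting during render (the pattern the
  // React docs describe).
  function useDerivedState<T, D>(compute: (dep: D) => T, dep: D): [T, (next: T) => void] {
    const [state, setState] = useState(() => compute(dep));
    const [lastDep, setLastDep] = useState(dep);

    if (lastDep !== dep) {
      // Dependency changed from the outside: reset the derived state.
      setLastDep(dep);
      setState(compute(dep));
    }

    return [state, setState];
  }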

But there is a major blind spot. The reason you want the dependencies is to reset the state when the outside demands it. But the only way you can reset such a state is if the dependency _changes_. So it's not possible to reset the state back to the _same_ value as before.

To do that, you either need to manually manage the component lifecycle with a `key={..}` on the outside, or, you need to add e.g. a `version={N}` counter as an extra prop, to handle the edge case. Except, at that point, it makes more sense to rely on `version` entirely.

The 'proper' solution I've found is to actually write code to do what the policy and etiquette demands. E.g. for a number input, you can't aggressively reformat the contents because that's unusable, but you can check if the current text parses to the same value as before (if it is a valid number). There is no way to hide this sort of nuance with a generic state hook, it's too context-specific.

What is most useful is to treat the bridge between controlled and uncontrolled state as its own wrapper component, e.g. called <BufferedInput>, which has a render prop to actually render the <input> itself. It accepts a `value / onChange` on the outside, but passes on a different `text / onChange` to the inside. Give it a `parse`, `format` and `validate` prop that take functions, and you can clean up a lot of messy input scenarios.
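
A rough sketch of the shape I mean, with hypothetical prop names (not a drop-in implementation):

  import { useState, type ReactNode } from "react";

  type BufferedInputProps<T> = {
    value: T;                                  // controlled value from the outside
    onChange: (value: T) => void;
    parse: (text: string) => T | null;         // null = not parseable
    format: (value: T) => string;
    validate?: (value: T) => boolean;
    render: (text: string, onText: (text: string) => void) => ReactNode;
  };

  function BufferedInput<T>(props: BufferedInputProps<T>) {
    const { value, onChange, parse, format, validate, render } = props;
    const [text, setText] = useState(() => format(value));
    const [lastValue, setLastValue] = useState(value);

    // Resync the buffer when the outside value changes, but only if the current
    // text doesn't already parse to it (so we don't clobber mid-edit input).
    if (lastValue !== value) {
      setLastValue(value);
      if (parse(text) !== value) setText(format(value));
    }

    const onText = (next: string) => {
      setText(next);
      const parsed = parse(next);
      if (parsed !== null && (!validate || validate(parsed))) onChange(parsed);
    };

    return <>{render(text, onText)}</>;
  }

The strict-equality resync assumes primitive values (numbers, strings); anything fancier would need a comparator prop.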


If it's not worth writing, it's not worth reading.

When working with AI for software engineering assistance, I use it mainly to do three things -

1. Do piddly algorithm-type stuff that I've done 1000 times and isn't complicated. (Could take or leave this; it's often more work than just doing it from scratch.)

2. Pasting in gigantic error messages or log files to help diagnose what's going wrong. (HIGHLY recommend.)

3. Give it high level general requirements for a problem, and discuss POTENTIAL strategies instead of actually asking it to solve the problem. This usually allows me to dig down and come up with a good plan for whatever I'm doing quickly. (This is where real value is for me, personally.)

This allows me to quickly zero in on a solution, but more importantly, it helps me zero in strategically too, with less trial and error. It lets me have the equivalent of an in-person whiteboard meeting (as I can paste images/text to discuss too) where I've got someone else to bounce ideas off of.

I love it.


I completely disagree with the idea of moving fast at any cost. There's a saying in the Marines (I think, maybe the Navy)...

"slow is smooth, smooth is fast"

It's about first getting the basics right, so you go slow, which then allows you to perform smoothly, and when you perform smoothly you can then accelerate and move quickly. This applies to software development, in my experience. I think codebases have this quality of snowballing, and the snowball can either be good or bad. At first you add code quickly, which means that your snowball will be one of complexity later. If instead you slow down at the beginning and deal with complexity early, the snowball becomes productivity later.

This is hard to explain in non-technical terms though, and it's a hard sell for business- and marketing-oriented people, mostly because they can't immediately connect the dots to profit (understandably, but also it's just so obvious that companies hit the same problems over and over again). The business value is indeed there, after you snowball into compounded productivity because you architected an application that scales smoothly.


Whenever I ask myself "should I use YAML?" I answer myself "Norway".

As the person who personally ran 10.6 v1.1 at Apple (and 10.5.8), you are wrong(ish).

The new version of the OS was always being developed in a branch/train, and fixes were backported to the current version as they were found. They weren't developed linearly / one after another. So yeah, if you are comparing the most stable polished/fixed/stagnant last major version with the brand new 1.0 major version branch, the newer major is going to be buggier. That would be the case with every y.0 vs x.8. But if you are comparing major OS versions, Snow Leopard was different.

Snow Leopard's stated goal internally was reducing bugs and increasing quality. If you wanted to ship a feature you had to get explicit approval. In feature releases it was bottom up "here is what we are planning to ship" and in Snow Leopard it was top down "can we ship this?".

AFAIK Snow Leopard was the first release of this kind (the first release I worked on was Jaguar or Puma), and was a direct response to taking 8 software updates to stabilize 10.5 and the severity of the bugs found during that cycle and the resulting bad press. Leopard was a HUGE feature release and with it came tons of (bad) bugs.

The Apple v1.1 software updates always fixed critical bugs, because:

1. You had to GM / freeze the software to physically create the CDs/DVDs around a month before the release. Bugs found after this process required a repress (can't remember the phrase we used), which cost money and time and scrambled effort at the last minute and added risk. This means the bar was super high, and most "bad, but not can't use your computer bad" bugs were put in v1.1...which was developed concurrently with the end of v1.0 (hence why v1.1s came out right away)

2. Testing was basically engineers, internal QA, some strategic partners like Adobe and MS, and the Apple Seed program (which was tiny). There was very little automated testing. Apple employees are not representative of the population and QA coverage is never very complete. And we sometimes held back features from seed releases when we were worried about leaks, so it wasn't even the complete OS that was being tested.

A v1.1 was always needed, though the issues they fixed became less severe over time due to larger seeds (aka betas), recovery partitions, and better / more modern development practices.


> Am I right in assuming that this works only with local text files

One of the screenshots shows a .xlsx in the “Temporary Resources” area.

Also: I haven’t checked, but for a “Local-first” app, I would expect it to leverage Spotlight text importers from the OS, and run something like

  mdimport -t -d3 *file*
on files it can’t natively process.
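
Purely illustrative (assuming a Node/TypeScript shell for the app, which I haven't checked), wrapping that could look roughly like:

  import { execFile } from "node:child_process";

  // Ask the OS's Spotlight importer to extract text/attributes from a file the
  // app can't parse natively. Note: mdimport's debug output may land on stderr.
  function importViaSpotlight(path: string): Promise<string> {
    return new Promise((resolve, reject) => {
      execFile("mdimport", ["-t", "-d3", path], (err, stdout, stderr) => {
        if (err) reject(err);
        else resolve(stdout + stderr);
      });
    });
  }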

When you're not just an IC, you have other priorities. That means your IC work can be derailed at any moment. _That_ means you can't take on work anywhere near the critical path or you're just blocking others or handing things off.

Reviews? Sure. Design meetings? Sure. But taking critical work will end up causing issues.


I've scaled websockets before, it isn't that hard.

You need to scale up before your servers become overloaded, and new connections basically get routed to the newly brought-up server. It is a different mentality than scaling stateless services, but it isn't super duper hard.
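
A minimal sketch of the routing idea (hypothetical types, not any particular load balancer): send new connections to the backend you just brought up, and leave established stateful connections where they are.

  type Backend = { id: string; connections: number; capacity: number; addedAt: number };

  // Pick a backend for a *new* websocket connection. Existing connections stay
  // pinned to their original backend; only new ones flow to the newest server.
  function pickBackend(backends: Backend[]): Backend {
    const withRoom = backends.filter(b => b.connections < b.capacity);
    if (withRoom.length === 0) throw new Error("scale up before you get here");
    return withRoom.sort((a, b) => b.addedAt - a.addedAt)[0];
  }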


I made the logic / color space behind that.

There aren't actually 4, but that 4th one you perceive... Notice, e.g., that search doesn't do this; it's a stubborn Android engineering politics thing.

I can't even begin to explain how this happened, but, tl;dr: that 4th color was a huge problem to everyone involved. But, once it became a Big Thing, engineer middle managers...sigh. Not worth trying to explain. The amount of chicanery was really astonishing, odds are I'll never work at a BigCo again.

Now, they're stuck with it, even though VPs had been insisting it be fixed since day -100.

To all of your points, it sure looks like he's imitating the same logic to get that off-white...but then isn't using the important part: gotta use my/Google's color space.

That's the magic to get contrast without even having to see the colors or measure ratios or any of that BS. (tl;dr: HCT color space; it's CAM16 strapped to L*a*b*'s L*, and then you can describe contrasting colors by a delta in tone / L*)
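
To illustrate the tone-delta idea (my sketch here, not Google's code): because tone is L*, you can guarantee contrast by construction; as I understand the Material guidance, a tone gap of roughly 40 gives at least 3:1 and roughly 50 gives at least 4.5:1, regardless of hue and chroma.

  // Pick a contrasting tone without ever measuring a WCAG ratio: just keep a
  // minimum tone (L*) delta from the background tone.
  function contrastingTone(backgroundTone: number, minDelta = 50): number {
    return backgroundTone >= 50
      ? Math.max(0, backgroundTone - minDelta)    // dark foreground on light background
      : Math.min(100, backgroundTone + minDelta); // light foreground on dark background
  }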


I've called this ritual-taboo programming for decades. It happens for user interfaces and APIs when the documentation is absent, or only consists of examples. If there's no reference documentation, everything is a copy of something someone else did. Nobody understands how it really works.

Now, for some interfaces, this isn't too bad. Most people don't know why US AC power plugs are polarized, or what the ground prong is for. Electricians have to, but users do not.

For more complex interfaces, it means that many functions will either be misused or undiscovered. This is the source of the plaint that only 10%-20% of a product's features are used.

On user interface design, the classic is "Tog on Interface", Bruce "Tog" Tognazzini, 1992. That's from the Mac UI era. A more modern take is "The Gamer's Brain", by Celia Hodent, designer of Fortnite's UX.

Software internal documentation seems to suffer today from a mindset that comments are unnecessary and waste space. Especially in the Javascript era. It's worst in languages that don't have data declarations. There's no place to properly document the data.

Rust has good conventions and tools for documenting data. There's a well defined place where the documentation for each structure and field goes, and reasonable tools for checking it and turning that into documentation. If you fail to do this, when you publish your crate on crates.io, the documentation pages will come up blank, which screams "loser".

Rust is weak on function parameter documentation. There should have been a defined place where each formal parameter gets a comment, which then appears in the documentation of the call.

Most other languages don't take such a hard line. More should.


Sorry to hear. For the benefit of others, I read this as the importance of sizing the bet relative to your portfolio and nobody else’s, and is broadly applicable.

If someone bets a million bucks on stock A, but is worth a billion bucks, then that’s not $1m conviction, it’s <1% conviction. And that information is then factored into the size of my bet.

Unfortunately I also learned this the hard way.


I am always glad to see people pursuing creating their own programming language. More people should give it a try, IMHO.

A tricky thing that comes up with Rust comparisons is that often, Rust has a feature that's weird or hard to use, but it's because that's the only solution that makes sense within the constraints Rust has placed upon itself. Clicking some links gets me to https://git.yzena.com/Yzena/Yc/src/branch/master/docs/yao/de...

> Yao's biggest goals are: Correctness, Convenience, Performance, in that order.

Having clear goals is a great thing when making a language, because they give you guidance on what is appropriate to include and what is not.

Rust has certainly demonstrated having similar goals, but in a slightly different order: Correctness, Performance, and then Convenience. So it wouldn't shock me if Yao could do some things better than Rust, in accordance with its goals. But that also means that sometimes, Rust would be useful where Yao cannot be. Everything is tradeoffs.

Incidentally, I actually think that figuring out what your values are, and what your needs are, is a great way to pick a programming language. Decide what matters to you as an engineer, and then find a language that shares similar values. I gave a conference talk a few years back on this idea, and how I viewed Rust's values at the time: https://www.infoq.com/presentations/rust-tradeoffs/

This was based off of bcantrill's Platform as a Reflection of Values, which was very influential on me. https://www.youtube.com/watch?v=Xhx970_JKX4

If you've ever heard about Oxide's focus on values, this is some of the older background on that.


Finally, a positive use case for monopoly money:

Playing games with it


https://personal.utdallas.edu/~liebowit/keys1.html

That's the original paper that kicked it off; I think there were a few follow-ups.

This wiki page has a snippet from a site talking about it that is no longer online:

https://wiki.c2.com/?TheFableOfTheKeys

http://www.mwbrooks.com/dvorak/dissent.html

edit to add:

https://reason.com/1996/06/01/typing-errors/

Vs

https://dvorak-keyboard.com/anti-dvorak/


Real security researchers know that requiring symbols and upper case letters actually reduces security. Those requirements are explicitly rejected by the latest NIST recommendations:

https://pages.nist.gov/800-63-3/sp800-63b.html

So I'm basically agreeing with you, that a lot of people "in security" are just cargo culting.


They could have tilted the example even more in their favor by using CSVs that had a delimiter inside a string (“Smith, John”), or quoted newlines. That’s an “edge case” where I know I cannot lean on CLI tooling and need to use a real language.
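
For the delimiter-inside-a-quoted-field case specifically, a quick TypeScript sketch of why a naive split falls over (quoted newlines need the same stateful treatment, just across lines):

  const row = '42,"Smith, John",NY';

  console.log(row.split(","));   // [ '42', '"Smith', ' John"', 'NY' ]  -- wrong

  // A tiny stateful parser that respects quotes and "" escapes.
  function parseCsvRow(line: string): string[] {
    const fields: string[] = [];
    let current = "";
    let inQuotes = false;
    for (let i = 0; i < line.length; i++) {
      const ch = line[i];
      if (inQuotes) {
        if (ch === '"' && line[i + 1] === '"') { current += '"'; i++; } // escaped quote
        else if (ch === '"') inQuotes = false;
        else current += ch;
      } else if (ch === '"') inQuotes = true;
      else if (ch === ",") { fields.push(current); current = ""; }
      else current += ch;
    }
    fields.push(current);
    return fields;
  }

  console.log(parseCsvRow(row)); // [ '42', 'Smith, John', 'NY' ]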

Some programmers have serious math envy. This can be good if they are self-aware about it and keep it in check, because it makes them better programmers. Otherwise they can be a pain to work with. Seniors should be people who have dealt with this aspect of their own talent, not juniors who are promoted in spite of it or because of it.

Generally speaking, advertising is necessary and can be done in a way that benefits everyone. In practice, how advertising is actually done (and particularly how it's done online) is utterly despicable.

> The entire industry is rotten to the core.

I tend to agree.


Honestly it follows the design of the rest of the language. An incomplete list:

1. They wrote it to replace C++ instead of Objective-C. This is obvious from hearing Lattner speak, he always compares it to C++. Which makes sense, he dealt with C++ every day, since he is a compiler writer. This language does not actually address the problems of Objective-C from a user-perspective. They designed it to address the problems of C++ from a user-perspective, and the problems of Objective-C from a compiler's perspective. The "Objective-C problems" they fixed were things that made Objective-C annoying to optimize, not annoying to write (except if you are a big hater of square brackets I suppose).

2. They designed the language in complete isolation, to the point that most people at Apple heard of its existence the same day as the rest of us. They gave Swift the iPad treatment. Instead of leaning on the largest collection of Objective-C experts and dogfooding this for things like ergonomics, they just announced one day publicly that this was Apple's new language. Then proceeded to make backwards-incompatible changes for 5 years.

3. They took the opposite approach of Objective-C, designing a language around "abstract principles" vs. practical app decisions. This meant that the second they actually started working on a UI framework for Swift (the theoretical point of an Objective-C successor), 5 years after Swift was announced, they immediately had to add huge language features (view builders), since the language was not actually designed for this use case.

4. They ignored the existing community's culture (dynamic dispatch, focus on frameworks vs. language features, etc.) and just said "we are a type obsessed community now". You could tell a year in that the conversation had shifted from how to make interesting animations to how to make JSON parsers type-check correctly. In the process they created a situation where they spent years working on silly things like renaming all the Foundation framework methods to be more "Swifty" instead of...

5. Actually addressing the clearly lacking parts of Objective-C with simple iterative improvements which could have dramatically simplified and improved AppKit and UIKit. 9 years ago I was wishing they'd just add async/await to ObjC so that we could get modern async versions of animation functions in AppKit and UIKit instead of the incredibly error-prone chained didFinish:completionHandler: versions of animation methods. Instead, this was delayed until 2021 while we futzed about with half a dozen other academic concerns. The vast majority of bugs I find in apps from a user perspective are from improper reasoning about async/await, not null dereferences. Instead the entire ecosystem was changed to prevent nil from existing and under the false promise of some sort of incredible performance enhancement, despite the fact that all the frameworks were still written in ObjC, so even if your entire app was written in Swift it wouldn't really make that much of a difference in your performance.

6. They were initially obsessed with "taking over the world" instead of being a great replacement for the actual language they were replacing. You can see this from the early marketing and interviews. They literally billed it as "everything from scripting to systems programming," which generally speaking should always be a red flag, but makes a lot of sense given that the authors did not have a lot of experience with anything other than systems programming and thus figured "everything else" was probably simple. This is not an assumption, he even mentions in his ATP interview that he believes that once they added string interpolation they'd probably convert the "script writers".

The list goes on and on. The reality is that this was a failure in management, not language design though. The restraint should have come from above, a clear mission statement of what the point of this huge time-sink of a transition was for. Instead there was some vague general notion that "our ecosystem is old", and then zero responsibility or care was taken under the understanding that you are more or less going to force people to switch. This isn't some open source group releasing a new language and it competing fairly in the market (like, say, Rust for example). No, this was the platform vendor declaring this is the future, which IMO raises the bar on the care that should be taken.

I suppose the ironic thing is that the vast majority of apps are just written in UnityScript or C++ or whatever, since most of the App Store is actually games and not utility apps written in the official platform language/frameworks, so perhaps at the end of the day ObjC vs. Swift doesn't even matter.


In databases, never rely on data you don't control. "Natural" keys are an example of this.

Names can be natural keys, but you don't control them. You don't control when or how a name changes, or even what makes a valid name.

Addresses change. Or disappear. Or somehow can't be ingested by your system suddenly.

Official registration numbers (SSNs, license plate numbers, business numbers etc) seem attractive, but once again you don't control them. So if the license plate numbering scheme changes in some way that breaks your system, too bad. Or people without an SSN. Or people in transition because an SSN needs to be changed in a government system somewhere. Or any other number of things that happen in a government office that affect you, yet you have no control over.

Phone numbers? Well, we've already seen that mess with many messenger platforms.

Fingerprints? Guess what? They evolve over time, and your system will eventually break.

Retrofitting a system that relies on "natural" keys that have broken SUCKS.

Use a generated unique key system that YOU ALONE control.

The first rule of software design is: Don't try to be clever. You're not clever enough to see all of the edge cases that will eventually bite you.
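
A tiny sketch of what that looks like in practice (hypothetical record shape, not a schema recommendation): the generated key is the identity, and every "natural" identifier is just a mutable, possibly-missing attribute.

  import { randomUUID } from "node:crypto";

  type Person = {
    id: string;      // generated by us, never changes, never reused
    name: string;    // changes; not a key
    ssn?: string;    // may be absent, may be corrected upstream; not a key
    phone?: string;  // recycled by carriers; not a key
  };

  function newPerson(attrs: Omit<Person, "id">): Person {
    return { id: randomUUID(), ...attrs };
  }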


You can, in a somewhat convoluted way, justify 401 Unauthorized as valid, based on the fact that most systems which control access to resources have an (often implicit) policy that users whose identity is not known are not allowed to access anything.

Therefore the request is unauthorized because the server wasn't able to authenticate the user. But that's still not consistent with 403, so it's not very satisfying.

But this also speaks to one of the nubs of the terminology issue. "Actors" are authenticated, "Actions" are authorized.
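
A minimal sketch of that distinction (hypothetical helper, not any framework's API):

  // 401: the actor was never authenticated ("I don't know who you are").
  // 403: the actor is authenticated but the action isn't authorized
  //      ("I know who you are, and the answer is no").
  function statusFor(user: { id: string } | null, allowed: (id: string) => boolean): number {
    if (user === null) return 401;
    if (!allowed(user.id)) return 403;
    return 200;
  }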


About the _expansionists vs extinctionists_ duality Musk is promoting, I'd say it all falls into a Yin and Yang kind of proposition: answers come from an equilibrium between opposite forces, rather than choosing one or the other.

About the fear of being awake (as in woke), it sure reveals just how some are mostly afraid of being afraid. I'll gladly take a trans woke AI I can reason with over anything else, because I can't stand the opposite, aggression, at an intimate level (as I believe AI is a potentially very intimate tech): BWTB (better woke than bigot).


“Do you care to know why I’m in this chair… why I earn the big bucks? I'm here for one reason and one reason alone. I'm here to guess what the music might do a week, a month, a year from now, that’s it, nothing more.” John Tuld, Margin Call

I think about this movie all the time now that I’ve been in the corporate world for a while and I have realized one consistent outcome: the leaders who “guess” right a lot survive a hell of a lot longer than those who get it wrong.


macOS 14.2 implemented something like this in Core Audio, but it is not user-facing (and it is also documented extremely poorly). You can create a "Tap" that can capture audio from a particular application, a subset of applications, or an output device. This can then be added to a private or public Aggregate Device (depending on the Tap being private or public).

https://developer.apple.com/documentation/coreaudio/4160724-...

