There is no secret ingredient; Yandex just put more effort into supporting languages that are spoken in ex-USSR countries, because that's the most important market for them.
Other translation tools do not consider, e.g., Kyrgyzstan an important market, and therefore do not put much effort into supporting Kyrgyz.
I meant it as a job-seeker proving they spent time doing a sample project on a real OS, such as QNX, instead of studying leet-code trivia and thinking they're ready for a job. As an interviewer, I would always choose the one who has skin in the game.
I imagine it's not, but can also see automated systems flagging the number as being the recipient of messages for over a hundred different financial institutions.
Maybe not but I bet that this many unique credit card emails eventually tripped over some threshold in a risk model. It’s too hard to adjust the model for one person, and putting in an exception for this one person means they take on additional risk if that person then goes on to actually do bad things.
To be clear I’m not saying it’s ok. Google should make it right and then invest in a scalable way to not keep doing this.
The most interesting idea in my opinion is biased reference counting [0].
An oversimplified (and possibly wrong) explanation of it goes like this:
Problem:
- each object needs a reference counter, because of how memory management in Python works
- we cannot modify ref counters concurrently from multiple threads, because that leads to incorrect counts
- we cannot make every ref counter atomic, because atomic operations have too large a performance overhead
Therefore, we need the GIL.
Solution proposed in [0]:
- let's have two ref counters for each object: one normal, one atomic
- the normal ref counter counts references created from the thread where the object was originally created; the atomic one counts references from other threads
- empirically, objects are mostly accessed from the thread that created them, so this lets us avoid paying the atomic-operations penalty most of the time (see the sketch below)
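Here is a minimal Python sketch of that idea, purely as an illustration of my own (possibly wrong) understanding; CPython's real implementation is in C, and all the names below are made up:

    import threading

    class BiasedRefCounted:
        """Toy illustration of biased reference counting (not CPython's real code)."""

        def __init__(self):
            self.owner_tid = threading.get_ident()  # thread that created the object
            self.local_refcount = 1                 # touched only by the owner thread, no synchronization
            self.shared_refcount = 0                # touched by other threads, must be synchronized
            self._shared_lock = threading.Lock()    # stand-in for an atomic counter

        def incref(self):
            if threading.get_ident() == self.owner_tid:
                # Fast path: plain non-atomic increment, taken most of the time.
                self.local_refcount += 1
            else:
                # Slow path: synchronized increment, only for cross-thread references.
                with self._shared_lock:
                    self.shared_refcount += 1

        def decref(self):
            if threading.get_ident() == self.owner_tid:
                self.local_refcount -= 1
            else:
                with self._shared_lock:
                    self.shared_refcount -= 1
            # The object can be freed once both counters add up to zero
            # (the real scheme merges the two counters more carefully than this).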
Anyway, that's what I understood from the articles/papers. See my other comment [1] for the links to write-ups by people who actually know what they're talking about.
AFAIK the initial prototype, called nogil, was developed by Sam Gross, who also wrote a detailed article [0] about it.
He also had a meeting with the Python core developers. The notes from this meeting [1], taken by Łukasz Langa, provide a more high-level overview, so I think they are a good starting point.
I always wondered who even decided that averaging the input is a good idea.
It sounds sensible at first glance, but if you think about it a little more, it actually doesn't make any sense.
The average of two inputs is basically garbage: it doesn't do what either pilot wants, and it breaks the feedback loop for both of them.
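As a toy illustration of why (deliberately oversimplified, not how the actual flight control laws work): if each stick commands pitch between -1.0 (full nose down) and +1.0 (full nose up), two opposite full deflections simply cancel out.

    def blended_pitch_command(stick_a, stick_b):
        # Hypothetical simplification: dual inputs are just averaged.
        return (stick_a + stick_b) / 2

    # One pilot pulls full back while the other pushes full forward:
    print(blended_pitch_command(+1.0, -1.0))  # 0.0 -- neither pilot gets what they commanded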
After watching tons of Mentour Pilot videos (who, by the way, covered [0] this incident) I am convinced that this feature shouldn't exist at all.
And no, I don't think that I'm smarter than people who originally designed this system. I just think that this particular feature was not designed at all. It seems like an afterthought. Like, "hey, there is this corner case that we haven't thought about, what should we do if both pilots input something on the controls? - well, let's just average it, kinda makes sense, right?"
> After watching tons of Mentour Pilot videos (who, by the way, covered [0] this incident) I am convinced that this feature shouldn't exist at all.
There is some selection bias at play here. We don't know how many situations happened where averaging the input was the right thing to do and avoided an accident, as Mentour Pilot does not make videos about those.
I'm not saying averaging is good. I have no idea. But a number of videos about crashes (which I watch and think are awesome) is not a good basis for forming beliefs.
> I don't think that I'm smarter than people who originally designed this system.
This sentence says one thing; the other sentences in your comment say the opposite. It certainly reads like you think you're smarter than those people. Which, as far as I know, could be true, no idea. My point is that a disclaimer does nothing if you actually make the mistake you know you should avoid.
Yeah, that's a good point though; of course in most situations it is more urgent, so this choice makes sense. BUT in this case not being aware of the dual input made the GPWS situation worse. So in this particular case it was not.
Personally I would use a different type of alarm for dual input, like a big red light somewhere. Or just not allow dual input at all (always requiring the use of the takeover button).
I mean, what choice, besides averaging, would make sense? Completely disregarding one pilot's input seems worse, and averaging is what happens in a mechanically connected system. The crucial difference is that in that case the pilots can feel that this is happening. I don't know what sort of force feedback the Airbus sidesticks provide, but this lack of feedback seems to me to be the real root of the problem, not the averaging itself.
Disregarding one pilot's input seems better: one pilot can correctly fly the plane while the other does nothing, vs. two pilots getting confused and flying the plane into the ground. Even better would be a system that somehow follows the "I have the stick" procedure, although I don't know if that is possible.
You are right though that either way force feedback makes sense. You could even just add a buzz when there's dual input, like when you take your hands off the wheel in a car with lane-keeping assist.
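For what it's worth, here is a rough sketch of what an "I have the stick" style arbitration could look like. This is purely hypothetical toy logic, not how Airbus actually implements its priority takeover button:

    class StickArbiter:
        """Toy model: exactly one stick has authority at a time; the takeover
        button transfers it instead of averaging conflicting inputs."""

        def __init__(self):
            self.active = "captain"  # which stick is currently in control

        def press_takeover(self, side):
            # Pressing the takeover button gives that side exclusive authority
            # (and should come with a loud, unambiguous annunciation).
            self.active = side

        def pitch_command(self, captain_input, first_officer_input):
            # Only the active stick's input is forwarded; the other is ignored.
            return captain_input if self.active == "captain" else first_officer_input

    arbiter = StickArbiter()
    print(arbiter.pitch_command(+1.0, -1.0))   # +1.0, captain has authority
    arbiter.press_takeover("first_officer")
    print(arbiter.pitch_command(+1.0, -1.0))   # -1.0, first officer took over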
Say one of the pilots is suicidal, or had a heart attack and is unconscious while holding the stick in the wrong direction: how does the airplane know which input to ignore?