> Sure some few adults can learn languages as fast as kids, but you completely missed my main points around gatekeeping that language skills always has on adults and less so on kids.
Adults in general are actually way faster at learning languages than kids if you control for time actually spent learning the language; it's just that adults generally have to fit language learning in around a full-time job (and are also full of shame/embarrassment).
Can't concur. As a kid I learned foreign languages effortlessly, compared to now as an expat. And every other expat here my age shares the same experience: their 8-year-olds already speak the host country's language better than they do.
As another expat, I'd concur with him, with an asterisk. The thing is: your kids are surrounded by the language nonstop. Depending on your situation it may be spoken at school, and it's certainly spoken endlessly by their friends, teachers, and so on. But "you" (speaking in generalities about expats, not necessarily literally you)? Unless you happen to have a local wife, you probably speak it extremely rarely, there's a reasonable chance you can't even read it if the script is non-Latin, and there's no real need to move beyond that.
After living in one country for a rather long time, my fluency was still basically non-existent beyond simple greetings, shopping/eating, and other basic necessities. By contrast, somewhat recently I took a major interest in another language, one that's generally considered extremely difficult, and I've reached at least basic fluency in about 3 years. The difference? I immersed myself in it: my music playlist is overwhelmingly in that language, I've watched endless series and movies in it, I've made an effort to read books in it, and any time I find another speaker I make sure to use the opportunity to talk with him. If I were in a country where it was the native language, I'd probably be near fluent by now.
On the contrary, an object moving across your field of vision will produce a certain level of motion blur in your eyes. The same object recorded at 24fps and then projected or displayed in front of your eyes will produce a different level of motion blur, because the object is no longer moving continuously across your vision but instead moving in discrete steps. The exact character of this motion blur can be influenced by controlling what fraction of that 1/24th of a second the image is exposed for (vs. having the screen black); in film terms, the shutter angle.
The most natural level of motion blur for a moving picture to exhibit is not the one traditionally exhibited by 24fps film, but it is equally not none (unless your motion picture is recorded at such a high frame rate that it substantially exceeds the temporal resolution of your eyes, which is rather infeasible).
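To make the shutter-angle arithmetic concrete, here's a back-of-the-envelope sketch (the frame width and object speed are illustrative, not from any standard):

```python
# Rough sketch: how far an object smears across the frame during one exposure.
# Shutter angle convention: 360 degrees = the entire frame interval is exposed.
def blur_px(speed_px_per_s: float, fps: float, shutter_deg: float) -> float:
    exposure_s = (shutter_deg / 360.0) / fps  # exposed fraction of the frame
    return speed_px_per_s * exposure_s

# An object crossing a 1920px-wide frame in one second:
print(blur_px(1920, 24, 180))  # classic 180-degree shutter: 40px of smear
print(blur_px(1920, 24, 360))  # fully open shutter: 80px
print(blur_px(1920, 24, 45))   # narrow shutter, choppy staccato look: 10px
```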
Was it actually repeating packets or was it sending out pause frames?
In my experience, USB Ethernet adapters send out pause frames, which shit-tier switches replicate to all ports in direct contravention of the Ethernet specifications.
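If you have a capture, it's easy to check which it was. A minimal sketch with scapy (the pcap filename is a placeholder; scapy dissects the 0x8808 MAC Control payload as raw bytes by default):

```python
# Count 802.3x pause frames in a capture. A pause frame is addressed to
# 01:80:c2:00:00:01 with EtherType 0x8808, opcode 0x0001, followed by a
# 2-byte pause time in quanta. Compliant switches must never forward these.
from scapy.all import rdpcap, Ether, Raw

for pkt in rdpcap("capture.pcap"):
    if Ether in pkt and pkt[Ether].type == 0x8808 and Raw in pkt:
        payload = bytes(pkt[Raw])
        opcode = int.from_bytes(payload[:2], "big")
        if opcode == 0x0001:  # PAUSE
            quanta = int.from_bytes(payload[2:4], "big")
            print(f"pause frame from {pkt[Ether].src}: {quanta} quanta")
```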
USB A->C cables are supposed to have an Rp pullup on CC1 and leave CC2 disconnected. Huawei made some A->C cables which (incorrectly, and in violation of the spec) have Rp pullups on both CC lines, which is how you signal that you're a power-sourcing Debug Accessory.
Your Pixel 4A is entering debug accessory mode (DebugAccessory.SNK state in the USB-C port state machine); other devices probably don't support debug accessory mode and just shrug.
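A toy model of what the sink's state machine concludes from its CC pins (heavily simplified from the Type-C spec's connection state machine):

```python
# Heavily simplified sketch of a USB-C sink's attach logic: what it concludes
# from the terminations it sees on its two CC pins.
def sink_state(cc1: str, cc2: str) -> str:
    cc1_rp, cc2_rp = cc1 == "Rp", cc2 == "Rp"
    if cc1_rp and cc2_rp:
        return "DebugAccessory.SNK"  # Rp on BOTH pins: debug accessory mode
    if cc1_rp != cc2_rp:
        return "Attached.SNK"        # Rp on exactly one pin: normal attach;
                                     # which pin it is gives cable orientation
    return "Unattached.SNK"

print(sink_state("Rp", "open"))  # spec-compliant A->C cable: Attached.SNK
print(sink_state("Rp", "Rp"))    # the bad Huawei cable: DebugAccessory.SNK
```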
> In my tests with assorted 24-bit sRGB monitors, a difference of 1 in a single channel is almost always indistinguishable (and this might be a matter of monitor tuning); even a difference of 1 simultaneously in all three channels is only visible in a few places along the lerps. (Contrast all those common shitty 18-bit monitors. On those, even with temporal dithering, the contrast between adjacent colors is always glaringly distracting.)
Now swap the sRGB primaries for the Rec.2020 primaries. This gives you redder reds, greener greens, and slightly bluer blues (sRGB blue is already pretty good).
This is why Rec.2020 specifies a minimum of 10 bits per channel: it stretches out the chromaticity space, so you need additional precision.
This is "just" Wide Colour Gamut, not HDR. But even retaining the sRGB gamma curve, mapping sRGB/Rec.709 content into Rec.2020 without loss of precision requires 10-bit precision.
Swap out the gamma curve for PQ or HLG and then you have extended range at the top. Now you can go super bright without "bleeding" the intensity into the other colour channels. In other words: you can have really bright things without them turning white.
Defining things in terms of absolute brightness was a bit of a weird decision (probably influenced by how e.g. movie audio is mixed against the 0dBFS = 105dB(SPL) reference level that theaters are supposed to be calibrated to). But pushing additional range above the SDR reference levels is reasonable, especially if you expect that range to be used judiciously and/or don't expect displays to be able to sustain their maximum values across the whole screen continuously.
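To make the absolute-brightness point concrete, here's the ST 2084 (PQ) inverse EOTF (constants from the spec, quoted from memory); note where the 100-nit SDR reference level lands:

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> code value.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

print(round(pq_encode(100), 3))    # ~0.509: SDR reference sits mid-scale
print(round(pq_encode(1000), 3))   # ~0.752: highlights use the upper range
print(round(pq_encode(10000), 3))  # 1.0: the absolute ceiling
```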
Being that prescriptive is fundamentally unworkable in practice. Propagating unknown attributes is precisely what made possible the deployment of 32-bit AS numbers (originally RFC 4893; unaware routers pass the `AS4_PATH` attribute along without needing to comprehend it), large communities (RFC 8092), the Only To Customer attribute (RFC 9234), and others.
A BGP Update message is mostly just a container of Type-Length-Value attributes. As long as the TLV structure is intact, you can pass those TLVs along without problems to any peers the route is destined for; a rough sketch of the wire format is below.
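A simplified sketch of that wire format and the pass-through rule (real implementations have far more bookkeeping):

```python
# BGP path attributes per RFC 4271: flags(1) type(1) length(1 or 2) value.
OPTIONAL, TRANSITIVE, PARTIAL, EXT_LEN = 0x80, 0x40, 0x20, 0x10

def walk_attributes(data: bytes):
    i = 0
    while i < len(data):
        flags, type_code = data[i], data[i + 1]
        if flags & EXT_LEN:                    # extended length: 2-byte field
            length, i = int.from_bytes(data[i + 2:i + 4], "big"), i + 4
        else:
            length, i = data[i + 2], i + 3
        yield flags, type_code, data[i:i + length]
        i += length

def propagate_unknown(flags: int) -> bool:
    # Unknown optional transitive attributes get passed along (with the
    # Partial bit set); unknown optional non-transitive ones are dropped.
    return bool(flags & OPTIONAL) and bool(flags & TRANSITIVE)
```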
The problem is fundamentally three things:
1. The original BGP RFC suggests tearing down the connection upon receiving an erroneous message. This is a terrible idea, especially for transitive attributes: you'll just reconnect and your peer will resend the same message, flapping over and over, and the bad attribute likely isn't even your peer's fault in the first place. The modern recommendation (RFC 7606) is Treat-as-Withdraw, i.e. remove any matching routes from that peer from your routing table.
2. A lack of fuzz testing and the like by BGP implementers (Arista, in this case).
3. Even among vendors that have done such testing, a number have decided (IMO stupidly) to require you to turn these robustness features on explicitly.
PNG solved this problem when BGP was still young: each section of an image document is marked as to whether understanding it is necessary to process the payload. So image transform and palette data are intrinsic, but metadata is not. Adding EXIF, for instance, is thus made trivial: no browser needs to understand it, so it can be added without breaking the distribution mechanism.
This is also how BGP (mostly) solved it. Each attribute has a 'transitive' bit: unknown attributes with the bit set are passed on, ones without it are discarded.
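In PNG's case the bit is literally the case of the first letter of the chunk name; a minimal sketch:

```python
# PNG encodes "critical vs. ancillary" in the chunk name itself: an uppercase
# first letter (bit 5 clear) means "you must understand this chunk" (IHDR,
# PLTE, IDAT, IEND), lowercase means "safe to skip or copy through" (eXIf,
# tEXt, and friends).
import struct

def walk_chunks(png: bytes):
    i = 8                                     # skip the 8-byte PNG signature
    while i < len(png):
        (length,) = struct.unpack(">I", png[i:i + 4])
        name = png[i + 4:i + 8]
        critical = not (name[0] & 0x20)       # case bit of the first letter
        yield name.decode("ascii"), critical
        i += 12 + length                      # length + name + data + CRC
```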
You're suggesting that being liberal in what you accept is necessary for forward evolution of the protocol, but I think you're presenting a false dichotomy.
In practice there are many ways to allow a protocol to evolve, and being liberal in what you accept is just about the worst way to achieve that. The most obvious alternative is to version the protocol, and have each node support multiple versions.
Old nodes will simply not receive messages for a version of the protocol they do not speak. The subset of nodes supporting a new version can translate messages into older versions of the protocol where that makes sense, and they can make an intelligent decision about it precisely because they speak the new protocol. This allows the network to function as a single entity even when only a subset is able to communicate on the newer protocol.
With strict versioning and compliance with the specification, reference validators can be built and fitted as barriers between subnetworks, so that problems in one are less likely to spread to others. It becomes trivial for anyone to quickly detect problems in the network.
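A toy sketch of the translation idea (the message shapes and version numbers here are invented purely for illustration):

```python
# Toy sketch: a node that speaks v2 can downconvert for v1-only peers,
# because it understands the new fields well enough to fold or drop them.
def downgrade(msg: dict, peer_version: int):
    if msg["version"] <= peer_version:
        return msg                       # peer already speaks this version
    if msg["version"] == 2 and peer_version == 1:
        # Pretend v2 added a structured 'tags' field unknown to v1 nodes:
        # a v2-aware node can decide it is safe to strip, rather than
        # forwarding bytes it does not understand.
        v1 = {k: v for k, v in msg.items() if k != "tags"}
        v1["version"] = 1
        return v1
    return None                          # no safe translation: withhold
```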
The thing (Fast)CGI had that plain HTTP proxying doesn't have (and that lots of web frameworks/libraries a bit too tied to HTTP, like Go's net/http, also lack) is the SCRIPT_NAME / PATH_INFO distinction: the part of the path already consumed by routing vs. the part left for the handler.
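WSGI inherited the split from CGI, so a Python sketch shows it directly (the mount point and path here are illustrative):

```python
# If an app is mounted at /blog and a request comes in for /blog/2024/post,
# the server hands it:
#   SCRIPT_NAME = "/blog"       (the prefix already consumed by routing)
#   PATH_INFO   = "/2024/post"  (what's left for the app to interpret)
# which lets the app build correct self-referencing URLs without hardcoding
# where it happens to be mounted.
def app(environ, start_response):
    body = (f"mounted at: {environ.get('SCRIPT_NAME', '')}\n"
            f"remaining:  {environ.get('PATH_INFO', '')}\n").encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```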
> To resolve this issue, Google could implement two immutable identifiers within its OpenID Connect (OIDC) claims:
> 1. A unique user ID that doesn’t change over time.
> 2. A unique workspace ID tied to the domain.
1. is the OIDC `sub` claim! I strongly suspect that in the 0.04% of accounts where the anonymously quoted engineer reports the `sub` claim changing, what actually happened is that some provisioning/onboarding/offboarding system deleted and recreated the account.
2. is sensible, and is essentially an immutable variant of the `hd` claim.
If services are not respecting the `sub` claim in this case, then they are giving the new Google account access to the old account's data. Companies probably wouldn't complain about this, because they think it's the expected, reasonable behaviour. It's also likely that in many scenarios the same human is behind the different accounts, e.g. someone who leaves a company and later returns.
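Concretely, keying on `sub` (and checking `hd`) looks something like this with the google-auth library; `find_account_by_sub` and `create_account` are hypothetical placeholders for your own persistence layer:

```python
# Key accounts on the stable `sub` claim; email is display data, not a key.
from google.oauth2 import id_token
from google.auth.transport import requests

def login(token: str, client_id: str, expected_domain: str):
    claims = id_token.verify_oauth2_token(token, requests.Request(), client_id)
    if claims.get("hd") != expected_domain:        # workspace domain check
        raise PermissionError("wrong workspace")
    account = find_account_by_sub(claims["sub"])   # hypothetical lookup
    if account is None:
        account = create_account(sub=claims["sub"],  # hypothetical creation
                                 email=claims["email"])
    return account
```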
Reinforcing the face-palm at the heart of this: anyone who decided to, you know, just use email instead of asking why an immutable ID changed... probably just enabled information leakage. Seriously, I'm so thankful that my colleagues would be principled about this and ask questions instead of just doing something to make it "work", where "work" means some GSuite user probably logged into some other defunct GSuite user's RP account.
The slides claim that this problem happens almost 600 times per week. There's no way it makes sense to manually validate all of those sessions.
The secure thing would be to kick those users out and tell them to go figure out with Google why their account IDs keep changing. The easy and more profitable solution is to just use the email address as an account ID and keep the customers.
Google did re-open the bug, so there may be something wrong on Google's side, but for 99% of companies, just using the `sub` value as it's intended won't cause anyone any headaches.
1. The slides cite an anonymous, context-free, single-sentence claim that they change, with no indication of why they changed or whether the change was valid.
2. The sub can change, while keeping the same email, because it is in fact a different user. Just using the email is categorically wrong.
Again, so much hysteria, and I have serious doubts about the entire, thus far unsubstantiated, premise.
Let me be more clear: I _do not believe that claim at all_. I can find no other evidence of it. I've worked with RPs that allow Google auth and have likewise never experienced this or heard of it happening.