I'm not going to voice support for compulsory national service, but at least its purpose isn't to purge the social identity of an entire ethnic minority. Which, if you actually dig a little deeper into the Tibet issue, is exactly what China is trying to achieve (and having some success at, too).
So China has a free press now to find such proof! How silly of us. This myopic, appeasing view of an authoritarian regime has been shown time and again throughout history not to end well.
So what you are saying is that if there's no free press, it's valid reasoning to just invent things out of thin air and assume they're true, like in this case with "forced labor".
No. Some states take in billions more in federal tax dollars than they send in, with states like Alabama and Kentucky often at the top of the list. Without California and New York floating most red states, they'd be sunk.
This is why, when a red state chooses to lower taxes or otherwise increase its reliance on federal help, I laugh at the hypocrisy of the bootstraps party and wonder why my federal taxes, paid in a blue state that isn't propped up by federal support, should go to such irresponsible people.
The suggestion is that these cities are potentially paying more money to the federal government than they get back in federal aid. The population is irrelevant.
When I saw Doom 2016 running on OpenGL I was shocked. I would’ve never thought you could write an AAA game with it. It made me wonder why game developers use DX instead.
If you want to target Xbox, you need to write a D3D renderer anyway (although the Xbox API has some significant differences, if I understand correctly). There's little point writing an OpenGL renderer if your target platforms are Windows, Xbox, and PlayStation (which has its own graphics API).
Also, my understanding is that on Windows, OpenGL generally runs into more issues with driver bugs than D3D does.
I work for a large video games company and we usually write two backends: DirectX for Windows and Xbox, and a special one for PlayStation. That is changing slightly with Stadia, which forces us to write a Vulkan renderer too. But on the few games that have it, DX12 performance on Windows is better than that of the Vulkan backend, so we haven't released it to the public.
So... no, because the gaming press is ruthless :P We've had one graphical bug in one of our games (which I'm certain everyone on the internet has seen by this point), and in reality it only happened if you played on one specific GeForce card, and only when using beta Nvidia drivers. But of course no one cared that the bug only happened on experimental drivers - it was somehow proof of how broken our games are. If this (the Vulkan backend) led to any graphical bugs in our games, I can almost guarantee they would be paraded everywhere, no matter how clearly we mark it as experimental and unfinished.
> Also, my understanding is that on Windows, OpenGL generally runs into more issues with driver bugs than D3D does.
I'm not sure if that's the case anymore, but it definitely has a bad rep on Windows. Initially (throughout XP and maybe some of Vista) OpenGL support on Windows was done by an OpenGL -> DirectX translation layer, so performance was always worse in OpenGL mode unless a game's Direct3D implementation was especially awful. This stopped being the case when NVIDIA started shipping a full OpenGL driver. (I'm not sure when AMD/ATI started shipping theirs.)
> Initially (throughout XP and maybe some of Vista) OpenGL support on Windows was done by an OpenGL -> DirectX translation layer
Initially (Windows 95), OpenGL support was provided directly by the OS. Starting with Windows 98, Microsoft stopped updating the OGL version of their reference driver, so users were stuck with OGL 1.1 unless the graphics card driver shipped with a custom OpenGL implementation.
So whenever an application uses an OGL version higher than v1.1, it is provided by the graphics card driver and that has nothing to do with DirectX. There is no translation layer in that case (unless of course, that's what the driver does internally, but that's up to the manufacturer).
TL;DR: Custom OGL drivers have shipped with every graphics card that supported OGL on Windows since 1998.
That hasn't been the case, then, because I clearly remember OpenGL being translated to DirectX in the XP/Vista days. Whether it was because anything >1.1 triggered that, or because the driver vendors chose translation over native, I don't know.
Original link seems to be dead but Slashdot references Vista layering OpenGL on top of Direct3D:
That comment references something else entirely - namely the OS's Aero rendering system. However, this didn't affect applications that didn't use the Aero glass scheme (it was an option during window creation), or apps running in full-screen mode.
The Aero glass scheme was hardware accelerated and only worked with Direct3D, thus any OGL context created for a window using this renderer would have to run via Direct3D.
This is a very special case and as noted earlier, easily circumvented by simply not using this feature in your app.
Because game developers mostly don't pick the backend; game engine developers do. The vast majority of game developers pick a game engine, and that drives most of their other technical decisions. There are really only about a dozen game engines with enough market share to matter, and a decent chunk of the biggest ones were built on top of DX for various reasons. OpenGL, while a great concept, was a fairly flawed execution for quite a while (it's gotten a lot better in the last 10 years or so), so I can at least partially understand why, in the past, someone who didn't care at all about cross-platform support might have steered clear of it.
> One of the recent experiences of Linux Plumbers Conference convinced me that if you want to be part of a true open source WebRTC based peer to peer audio/video interaction, you need an internet address that’s not behind a NAT.
UPnP is completely useless for CGNAT, and PCP would only work if the CGNAT gateway supported it, which most don't (or have it disabled).
I don't think the great difficulty with NAT concerns CPE NAT, i.e. your local network, since it's trivial to forward ports (or port ranges) manually there. Most problems are with CGNAT, i.e. on mobile broadband or with some cable and DSL providers.
Where I am the number of ISPs that support PCP is the same as the number of ISPs that support IPv6 - zero.
My point wasn't that IPv6 is currently the answer, my point was that it's impossible to host anything or be reachable by anyone for A LOT of people in the world.
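To make "reachable" concrete: WebRTC's ICE machinery starts by asking a STUN server what your address looks like from outside the NAT. Here's a minimal sketch of a STUN binding request (per RFC 5389) using only the stdlib; the Google server name is just one commonly used public instance, an assumption on my part. If the reflexive address it returns doesn't match any local interface address, you're behind some NAT, and under CGNAT the mapped port generally can't be forwarded at all.

```python
import os
import socket
import struct

MAGIC = 0x2112A442  # STUN magic cookie (RFC 5389)

def build_binding_request() -> bytes:
    """20-byte STUN header: Binding request, zero-length attribute section."""
    txid = os.urandom(12)
    return struct.pack("!HHI12s", 0x0001, 0, MAGIC, txid)

def parse_xor_mapped_address(resp: bytes) -> tuple[str, int]:
    """Walk the attributes and un-XOR the mapped IPv4 address."""
    pos = 20  # skip the header
    while pos + 4 <= len(resp):
        atype, alen = struct.unpack_from("!HH", resp, pos)
        if atype == 0x0020:  # XOR-MAPPED-ADDRESS
            _, family, xport = struct.unpack_from("!BBH", resp, pos + 4)
            (xaddr,) = struct.unpack_from("!I", resp, pos + 8)
            ip = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC))
            return ip, xport ^ (MAGIC >> 16)
        pos += 4 + alen + (-alen % 4)  # attributes are 32-bit aligned
    raise ValueError("no XOR-MAPPED-ADDRESS in response")

def public_address(server=("stun.l.google.com", 19302)) -> tuple[str, int]:
    """Ask a STUN server for our server-reflexive (public) address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2.0)
    s.sendto(build_binding_request(), server)
    return parse_xor_mapped_address(s.recv(1024))
```

This only tells you the mapping; whether a peer can actually reach it (the hole-punching part) is a separate question, which is exactly where CGNAT tends to fail.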
"A study in the Journal of Applied Physiology found that men had an average of 26 lbs. (12 kilograms) more skeletal muscle mass than women. Women also exhibited about 40 percent less upper-body strength and 33 percent less lower-body strength, on average, the study found."
Just breaking this down so we can map the possibilities and not fall apart into binaries: First, the female gender has, until very recently, been literally hobbled and socialized to avoid sweat and any kind of physical effort. My grandmother wore girdles and her mother wore corsets, and they both only wore heels. So we are new to sport and training, and we are catching up. Give us a century. Second, it depends on how you define stronger: "The hypothesis that the survival advantage of women has fundamental biological underpinnings is supported by the fact that under very harsh conditions females survive better than males even at infant ages when behavioral and social differences may be minimal or favor males." https://www.pnas.org/content/115/4/E832.full and https://www.bbc.co.uk/bbcthree/article/7b6484fb-3b00-46d6-a5...
This is delusional nonsense. Just because women have better endurance and survival advantages in specific situations like very long distance running and conditions in that paper doesn’t mean they aren’t physically weaker than men by massive margins. And it certainly doesn’t mean that their weaker bodies are due to social conditioning.
I always wondered if these physical switches are actually reliable, and reading your comment actually terrifies me. This behavior could cause irreversible social damage during, say, high steak business meetings. One could absolutely sue the microphone manufacturer for this.
Oh I love them high steak business meatings, with rare proposals and well done outcomes.
on topic: all mechanical switches fail, and they have multiple ways to fail, from subtle contact jitter to loose/broken springs to whatever else could break
I upvoted your 'steak' puns, but wanted to point out that "all mechanical switches fail" is not necessarily a useful observation. All things fail, and it's absolutely possible to design a product where either
a) the switches are trivially repairable (like switching out RAM on my Lenovo, which requires exactly two Philips screws), or
b) the switches are designed such that their mean time to failure is far longer than that of any of the other critical components, which is absolutely doable using the right materials and tolerances.
It's not as though audio on a laptop is pumping enormous amounts of current that presents a serious electrical challenge in that respect, and it's unlikely that the mechanism is going to be used 1000 times per day for 10 years.
Digital built-in microphones use DMIC, which aiui is a one-wire interface where the microphone just sends a delta-sigma bitstream. Implementing the switch through a multiplexer or logic gate essentially kills the signal 100%.
Analog electrets can't be just shorted, because that causes a loud BANG when you switch due to the bias voltage, so you use a capacitor in series, which only shorts the AC portion. Because of the impedances involved, this only gives you 40-60 dB of attenuation, which isn't enough for a good ADC.
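A quick back-of-the-envelope check on that 40-60 dB figure, modeling the AC-coupled short as a voltage divider between the source impedance and the shunt capacitor. The component values here are my own illustrative assumptions (2.2 kOhm source/bias impedance, 10 uF mute capacitor), not from any particular design:

```python
import math

def shunt_attenuation_db(f_hz: float, r_source: float, c: float) -> float:
    """Attenuation when a series capacitor switches the signal node to ground:
    divider of the source impedance against the capacitor's (complex) impedance."""
    z_c = 1 / (2j * math.pi * f_hz * c)  # shunt capacitor impedance
    return 20 * math.log10(abs(z_c / (r_source + z_c)))

# assumed, illustrative values: 2.2 kOhm source impedance, 10 uF mute cap
print(round(shunt_attenuation_db(1000, 2.2e3, 10e-6), 1))  # ~ -42.8 dB at 1 kHz
```

That lands right in the quoted 40-60 dB range; note the attenuation also gets worse toward low frequencies, where the capacitor's impedance rises.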
Similar for XLR microphones (hot+cold are shorted, not disconnected, because of phantom power).
Why do you need to short it? Just move the wires from "on" to "high impedance" (similar to what a pair of scissors would do when applied to the cable).
The assumption is you want a click/pop-free mute. If you don't, then just SPDT the signal input to ground, problem solved. But if you do want it to be pop-free, you can't be disturbing the DC bias path (as explained above), so that isn't an option.
No, the output of electret capsules is generally wired as a common-source amplifier at Ugs ~0 V with an N-JFET. Without bias there will be negligible output (essentially only the capacitive coupling from the gate to the output): if you SPDT the bias voltage to ground, you have a >100 MOhm source impedance (the capsule) fighting a couple kOhm (bias + input resistance) through perhaps 5 pF or so.
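As a sanity check on those numbers, here's a rough complex-impedance divider model of that feedthrough path. The values (100 MOhm capsule, 5 pF coupling, 2.2 kOhm node, 1 kHz) are the illustrative figures from the comment, not a measured circuit:

```python
import math

def feedthrough_db(f_hz: float, r_source: float, c_gd: float, r_load: float) -> float:
    """Residual signal with the JFET unbiased: the capsule (r_source) couples
    into the low-impedance input node (r_load) only through c_gd."""
    z_c = 1 / (2j * math.pi * f_hz * c_gd)  # coupling capacitor impedance
    return 20 * math.log10(abs(r_load / (r_load + r_source + z_c)))

# the rough figures from the comment: 100 MOhm capsule, ~5 pF, a couple kOhm
print(round(feedthrough_db(1000, 100e6, 5e-12, 2.2e3)))  # ~ -94 dB at 1 kHz
```

So the unbiased feedthrough sits far below the 40-60 dB you get from merely AC-shorting a powered capsule, which is the point of killing the bias instead.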
Anyway the thing is connected to an ADC. The computer can sense when the switch is turned on/off, and turn off (or flush) the audio pipeline at the appropriate times.
How does the ADC distinguish the transient from switching the bias voltage from an intentional signal? (Yes, if you see the entire waveform, this is quite easy, but because of the low frequency, this would incur another ~20-50 ms in latency, which is unacceptable).
Ramping the bias voltage requires additional components (cheapest way these days would probably be a separate DAC integrated into the audio codec, but then you are back to not having a physical kill switch) and also incurs extra delay for turning off and on (probably 100-200 ms).