The "code poem" at the beginning of the post reminds me of the "Bit Shift Variations in C-Minor" [1] by Robert S K Miles (chiptune music in 214 bytes of C; featured in the computerphile video "Code Golf & the Bitshift Variations").
As you expected, you probably shouldn't read too much into these calculations. ;-)
The Shannon-Nyquist sampling theorem guarantees that we (as in the DAC in your computer) can perfectly reconstruct the analogue signal from its samples, as long as the signal is bandlimited to frequencies below the Nyquist frequency, i.e., 24 kHz at a 48 kHz sample rate. No matter how crooked the sample points may look to you. And 440 Hz is way below the 24 kHz limit.
Sure, this doesn't take quantisation into account, but 16 bit is sufficient to encode the difference in amplitude at the individual sample points between 440 and 440.8175 Hz with plenty of headroom (about 210 digital steps at 109 samples). Indeed, the smallest frequency difference that quantisation would completely swallow after 109 samples is about 0.001 Hz (modulo mistakes in my hasty calculations). And this doesn't even take dithering into account: dithering essentially gives you an infinite dynamic range (depending on your definition of dynamic range) in exchange for a higher noise floor. Of course your signal is likely also longer than 109 samples.
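For anyone who wants to redo the arithmetic, here is a minimal sketch of how I'd check it (my own toy calculation, assuming full-scale sines and comparing the two tones at the same sample indices; the exact step counts depend on amplitude and on where in the cycle you look, so don't expect it to match my hasty numbers to the digit):

    import math

    FS = 48000            # sample rate in Hz
    N = 109               # roughly one period of 440 Hz at 48 kHz
    F1, F2 = 440.0, 440.8175

    def sample(freq, n):
        # full-scale sine, amplitude 1.0
        return math.sin(2 * math.pi * freq * n / FS)

    # Largest per-sample difference between the two tones over N samples,
    # expressed in 16-bit quantisation steps (full scale = 32767).
    max_diff = max(abs(sample(F1, n) - sample(F2, n)) for n in range(N + 1))
    print(f"max per-sample difference: ~{max_diff * 32767:.0f} quantisation steps")

    # Smallest frequency offset from 440 Hz that still moves at least one
    # sample by a full 16-bit step within N samples (ignoring dithering).
    df = 1 / (2 * math.pi * (N / FS) * 32767)
    print(f"smallest offset resolvable within {N} samples: ~{df:.4f} Hz")

Either way, the point stands: the 0.8175 Hz offset in question is orders of magnitude larger than anything quantisation could hide, even before dithering enters the picture.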
See this excellent video [1] by Xiph.Org's Chris Montgomery for a whirlwind overview of digital signal processing.
Here is a recent update from Christian Schaller on PipeWire [1]. It looks quite promising; it's especially good to see that PipeWire comes with shims implementing the ALSA, PulseAudio, and JACK APIs, so it should be a drop-in replacement.
There are ways around this. See "content-aware chunking", e.g. implemented using rolling hashes [1]. This is for example what rsync does.
The idea is to make blocks (slightly) variable in size. Block boundaries are determined based on a limited window of preceding bytes. This way a change in one location will only have a limited impact on the following blocks.
Rolling hashing is really only useful for finding nonaligned duplicates.
There isn't a way to advertise some "rolling hash value" in a way that allows other people with a differently-aligned copy to notice that you and they have some duplicated byte ranges.
Rolling hashes only work when one person (or two people engaged in a conversation, like rsync) already has both copies.
I think you misunderstood how the rolling hash is used in this context. It's not used to address a chunk; you'd use a plain old cryptographic hash function for that.
The rolling hash is used to find the chunk boundaries: Hash a window before every byte (which is cheap with a rolling hash) and compare it against a defined bit mask. For example: check whether the lowest 20 bits of the hash are zero. If so, you'd get chunks with about 2^20 bytes (1 MiB) average length.
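To make that concrete, here is a minimal sketch of content-defined chunking along those lines (a toy illustration with made-up parameters, not the exact scheme rsync or any particular tool uses): a simple Rabin-Karp style rolling hash over a fixed window, with a cut wherever the low bits of the hash are all zero.

    WINDOW = 48          # bytes of context the rolling hash looks at
    MASK_BITS = 20       # cut when the low 20 bits are zero -> ~1 MiB average chunks
    MASK = (1 << MASK_BITS) - 1
    BASE = 257
    MOD = (1 << 61) - 1  # large prime modulus for the polynomial hash

    def chunk_boundaries(data: bytes):
        """Yield (start, end) offsets of content-defined chunks in data."""
        pow_out = pow(BASE, WINDOW - 1, MOD)  # weight of the byte leaving the window
        h, start = 0, 0
        for i, b in enumerate(data):
            if i >= WINDOW:
                # Remove the byte that falls out of the window ...
                h = (h - data[i - WINDOW] * pow_out) % MOD
            # ... and pull the new byte in.
            h = (h * BASE + b) % MOD
            # The hash depends only on the last WINDOW bytes, so the same
            # content yields the same cut decision regardless of its offset.
            if (h & MASK) == 0 and i + 1 - start >= WINDOW:
                yield (start, i + 1)
                start = i + 1
        if start < len(data):
            yield (start, len(data))

MASK_BITS controls the average chunk size (about 2^MASK_BITS bytes) and WINDOW controls how much context each cut decision depends on; real implementations usually also enforce minimum and maximum chunk sizes.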
If I discover that the file I want to publish shares a range with an existing file, that does very little because the existing file has already chosen its chunk boundaries and I can’t influence those. That ship has sailed.
I can only benefit if the a priori chunks are small enough that some subset of the identified match is still addressable. And then I may only get half or two thirds of the improvement I was after.
> that does very little because the existing file has already chosen its chunk boundaries
If they both used the same rolling hash function on the same or similar data, regardless of the initial and final boundary and regardless of when they chose the boundaries, they will share many chunks with high probability. That's just how splitting with rolling hashes works. They produce variable-length chunks.
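A quick, self-contained demo of that claim with toy parameters (the tiny window, 6-bit mask, and inserted prefix are made up for illustration; real systems use much larger chunks): split a blob and a misaligned copy of it using the same kind of rolling-hash cut rule as in the sketch above, then count how many chunk hashes the two have in common.

    import hashlib
    import random

    WINDOW, MASK = 16, (1 << 6) - 1   # tiny window, ~64-byte average chunks
    BASE, MOD = 257, (1 << 61) - 1
    POW_OUT = pow(BASE, WINDOW - 1, MOD)

    def chunks(data: bytes):
        """Split data into content-defined chunks (same cut rule as above)."""
        h, start, out = 0, 0, []
        for i, b in enumerate(data):
            if i >= WINDOW:
                h = (h - data[i - WINDOW] * POW_OUT) % MOD
            h = (h * BASE + b) % MOD
            if (h & MASK) == 0 and i + 1 - start >= WINDOW:
                out.append(data[start:i + 1])
                start = i + 1
        if start < len(data):
            out.append(data[start:])
        return out

    random.seed(0)
    original = bytes(random.randrange(256) for _ in range(8192))
    edited = b"a few bytes inserted at the front" + original  # differently aligned copy

    a = {hashlib.sha256(c).hexdigest() for c in chunks(original)}
    b = {hashlib.sha256(c).hexdigest() for c in chunks(edited)}
    print(f"{len(a & b)} of {len(a)} chunk hashes shared despite the misalignment")

Only the chunks touched by the insertion differ; once the cut positions resynchronise, every later chunk is byte-identical in both copies and therefore hashes to the same value.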
The idea is that on non-random data you are able to use a heuristic that creates variable-sized chunks that fit the data. The simplest way seems to be to detect padding zeros and start a new block on the first following non-zero byte. There are probably other ways; knowing the data type should help.
That seems fairly unlikely. Not a lot of big files have zero padding, and if they did, then compress them. That will reduce your transfers more than any range substitution ever will.
If you worry about artifacts introduced by sample rate conversion, you shouldn't use a lossy format in the first place. The sample rate converter used by Opus (i.e., the speex resampler used in the opus-tools library) is completely transparent and does not introduce any audible artifacts. As per [1], the distortion caused by any lossy codec even at the highest bitrates is larger than that caused by re-sampling.
As for playback, most likely your sound card is already running at 48kHz; 44.1kHz may actually not be supported properly by your DAC (I guess since it requires a higher quality anti-aliasing filter). As [1] continues to explain, Opus is essentially shifting the burden of resampling to the encoding rather than the decoding side of things.
That being said, Opus technically supports odd sample rates such as 44.1kHz, but this has to be signalled in a side-channel. See [1] downwards.
Yep, I've read all that before. I didn't mean to focus the discussion on the resampling; what I was trying to get across is that this codec acts differently than codecs that have been extensively tested and ABXed at high bitrates for years. I didn't even mention other factors like how it injects noise into bands on purpose (where you can also find references claiming that's a benefit and not a downside, of course). It was about a year ago, but beyond my own ABX testing I looked around quite a bit, and didn't see many high-bitrate tests out there. All the focus seemed to be on the 64 kbit/s range.
This should not matter to me personally, as I have proven to myself that pretty low bitrates are transparent to me, regardless of the codec. But... I have the same psychosis that a lot of people have, where I think I can hear differences when I know which is which.
If space were an issue I'd use 90kbit/s opus (that was the threshold for me in my testing). It's actually pretty amazing, but since I have the storage space, I archive FLAC and carry around 256kbit/s vorbis, and don't even question the quality. It's easier to use more space than to fix my faulty perception!
Well, it's not a scan, it's a recording from a micro-electrode array (MEA) implanted directly into the brains of patients whose skulls were already opened for (I guess) surgery.
As others have pointed out, this article is not very convincing. I don't agree with the point that WASM is somehow more suitable for nefarious purposes than obfuscated JavaScript. I suppose that if anything, the execution model of WASM is much simpler than that of JS and it should thus be much easier to analyse.
> This prevents the user from escaping the scam by pressing keys like ESC or the CTRL+ALT+DELETE combination, or others as shown in the table.
The part about CTRL+ALT+DELETE is just nonsense. This key combination is handled directly by the Windows kernel and cannot be captured by a user-space application. Hence the "Press CTRL+ALT+DELETE to log in" prompt. [1]
I remember receiving an email from the BSI ("Bundesamt für Sicherheit in der Informationstechnik"; engl. "Federal Office for Information Security") regarding a misconfigured NTP server that could be abused for NTP reflection attacks.
The functions of the BSI are explained in English here [1] based on the following law [2]. I guess initiatives such as informing about the NTP problem fall into what is listed under §3.2.
The night began with a low-frequency hum that began in the northwest, then subsided into a softening, then continued on to the north, ending in a violent thud, which was followed by a pause of silence, followed by a long shower of thunder.
[1] http://txti.es/bitshiftvariationsincminor