It seems like there should be limits to how many waveforms you could combine without needing more spectrum.
Let's say I wanted to email N distinct 1MB attachments to N users. Before sending, every user generates a unique key and sends it to me. I then use my super math to encode/compress the N distinct 1MB attachments into a single combined 1MB attachment, which I send to all N users. Each user then uses his unique key to decode/decompress the combined attachment and, voilà, he gets the distinct 1MB attachment intended for him.
Now scale that up to very high values of N. Linearly. While the keys are changing constantly. And keeping the combined attachment fixed at 1MB.
Is my analogy way off? If not, I don't see how this would be possible.
This is a bit off. Consider it rather like this: you can transmit a sine wave of varying phase and/or amplitude. With a single transmitter and no multi-path interference, each receiver sees the exact same wave, plus some noise. Shannon's noisy-channel coding theorem sets the limit on how much information you can transfer that way.
Now consider what happens when you have multiple coordinated transmitters transmitting with different phases and amplitudes. Each receiver receives the sum of these waves at relative offsets determined by its differing distances to the transmitters, so each receiver will decode a different symbol.
That's the simplest interference pattern you'll see. It's easy to see that the wave a receiver gets depends on its location (e.g. notice the bands of 180-degree flipped phase). As a thought experiment, you could imagine varying the phase and amplitude of the two transmitters such that receivers in two different places would see either similar or different waveforms. There is almost certainly a limit to how many users you could support with N transmitters, but with good enough math and feedback it's potentially fairly high, which is what these guys claim they can do.
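A toy numeric version of that thought experiment (my own sketch, not anything from the article): two transmitters half a wavelength apart send the identical signal, and two receivers at different locations see completely different sums, purely because of path-length phase offsets.

```python
import cmath
import math

WAVELENGTH = 1.0  # carrier wavelength in metres (arbitrary choice)

def received(tx_positions, tx_signals, rx_position):
    """Sum of the transmitted phasors at a receiver, each rotated by the
    phase accumulated over its path length. Positions are points in the
    plane encoded as complex numbers."""
    total = 0j
    for pos, sig in zip(tx_positions, tx_signals):
        dist = abs(rx_position - pos)
        total += sig * cmath.exp(-2j * math.pi * dist / WAVELENGTH)
    return total

txs = [0.0, 0.5j]           # two transmitters, half a wavelength apart
signals = [1 + 0j, 1 + 0j]  # both send the same phase and amplitude

# Broadside receiver: paths nearly equal, waves add -> magnitude near 2.
# End-fire receiver: paths differ by half a wave -> they cancel, near 0.
for rx in (10 + 0j, 10j):
    print(rx, abs(received(txs, signals, rx)))
```

Same transmitted signals, two locations, two entirely different received waveforms; the precoding trick is running this in reverse, choosing the transmit signals so that each location sees the waveform you want it to.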
There is an important detail: in this system, you need N transmitters to send the information to the N receivers.
All the transmitters use the same frequency, so the difficult part is to "mix" and "synchronize" the N transmissions in such a way that each one of the N receivers sees only the data it needs.
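The "mix" step can be sketched as zero-forcing precoding (my assumption about how to illustrate it; the article doesn't give the actual algorithm): if you know the complex channel gain from every transmitter to every receiver, you can solve a linear system so that the channel itself unmixes the transmissions at each receiver's location.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4  # N transmitters serving N receivers on the same frequency

# Hypothetical flat-fading channel: H[k, j] is the complex gain from
# transmitter j to receiver k (in a real system this comes from feedback).
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# One distinct symbol intended for each receiver.
symbols = np.array([1, -1, 1j, -1j])

# The "mix": precode so that H applied to the transmit vector
# reproduces exactly the intended symbol at each receiver.
x = np.linalg.solve(H, symbols)

# Over the air, receiver k sees the sum H[k, :] @ x -- which is symbols[k].
print(np.round(H @ x, 6))
```

All N transmitters radiate simultaneously on one frequency; the waveforms interfere everywhere, but at each receiver's particular location the interference resolves to just that receiver's symbol. The catch is that H changes as people move, so the feedback and re-solve loop has to keep up.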
I think your analogy refers to the data limit of a single channel. According to the article, DIDO creates a different channel for each user, so the data rate in each channel isn't affected.