Right, and I guess this is in particular a problem that Linux/BSD distributions face, where they want to apply small patches for interoperability and the like, but still want to offer it to their users as "Rust".
As someone from the EU: Also a blocker for all these Cookie notices (and I guess also Cookies).
I've been test-driving Mozilla's experimental Fenix browser. It already supports Tracking Protection (which happens to block most ads as well) and it's in general a really nice browser, but I can't stand using it for anything that isn't basically Wikipedia, because of those annoying Cookie notices.
Half-offtopic: I find it very interesting that Raspbian sticks to LXDE. I thought that was going to slowly disappear, with LXQt getting the developer attention and there being no path forward towards HiDPI and, well, Wayland (because neither GTK2 nor Openbox supports these).
LXQt currently still uses Openbox, but you can replace it with KWin rather easily. I don't know what is then still missing to create a proper LXQt Wayland session, but it seems feasible.
I guess one does not really need HiDPI on a Raspberry Pi, but yeah, Wayland would be nice.
You'll have to nudge the F-Droid maintainers for that. They grab the source code and compile it themselves. The app developer isn't really involved, they just provide the source code and ideally ensure that it can be easily built by others.
Though you may want to give them a week or two before you go nudge them. They might build it on their own when they find time for it.
As aasasd pointed out, it exists, but isn't part of the core distribution.
The openSUSE folks include it (Geeko) in the version that you find in their repositories, so that's why you might think that it is included by default.
> This is a common error. Macroscopic "everyday" objects don't have a definite position and momentum. Macroscopic objects are quantum objects. But when the mass is big enough, the position and momentum can be defined simultaneously with an error that is so small that you can just ignore the uncertainty and approximate them as classical objects.
To put this into simpler terms:
Whenever we measure something, we need to throw something at it and then have that something rebound and hit us again.
In most experiments, we throw photons and have them rebound into our eyes.
Throwing a photon against a "classical object" - a chair, a ladder, a bacterium - is like throwing a tennis ball against a skyscraper. The throw isn't entirely without effect, but the effect is very much negligible.
But when trying to measure quanta, you're now throwing your tennis ball at a football, or at another tennis ball. You're going to be lucky if it rebounds at all, instead of just pushing the object that you're trying to measure out of the way. (You also don't have any smaller balls to throw.)
That's why when you measure something in quantum physics, you only know that it has this exact value in the moment that you measure it. It's going to be pushed away because you threw something at it, so after your measurement it has a different value.
You also can't observe it over a longer period, so there's no way to know whether it had your measured value only in that moment or for a long time beforehand.
This is not a correct description at all of QM complementary observables. This is a purely classical explanation (and was one of the first layman "explanations" back in 1920, but that was 100 years ago and QM is much better understood now).
Could you elaborate on that? From my extremely limited knowledge it does seem like a just-so explanation (what you're responding to), but I'm not sure why.
Yet the observer effect is not the reason why we can't know an object's position and velocity at the same time. There are two ways we can see that this supposed explanation is a red herring:
I don’t think this analogy holds up. Consider the double slit experiment: throw a bunch of basketballs at a wall and see what pattern of hits they leave by looking at where they hit the wall. If the wall is being looked at (observed), we see one pattern. If we look away, conduct the experiment, then check it, we find another.
To me that suggests the act of “observance” affects the probability distribution of likely states. If a tree falls in a forest and no one is around, then it doesn’t really fall; it just has a probability of having fallen that is not resolved until someone goes to check. How does your analogy account for those effects? To me, it looks like quantum collapse is causing the states of these objects to become “resolved” where at first they were “unresolved”, which suggests we live in a universe that knows how to save on memory and is fundamentally probabilistic.
If you ever ran into a space leak in Haskell, you would see how having unresolved thunks can use more memory than eager evaluation.
But that has some merit to it in that you can describe QC as merging equivalent paths and then sampling from a wave distribution afterwards.
One fun variant on the double slit experiment is taking a coherent laser beam (everything is in phase) and splitting it, sending it through two paths, A and B, then merging it and shining it on the wall.
If the two path lengths are equal, there is no effect from splitting it. But if we make B take slightly more time, we can get an interference pattern. If we shift it by half a wavelength, the light will cancel out!
Now if you insert a polarizing filter along path B, then when you merge the streams, you could tell which path the light came from, and the interference pattern disappears. This is not exactly measuring which path it took, but making it possible to tell if you added a sensor.
Observation is not required; just making the streams distinguishable is enough.
But now if we add another polarizing filter downstream we can erase the distinction between them, and now you get interference effects again!
Adding a polarizer is a nice variant of the experiment. I don't think I'd heard of it before. I like it, but I disagree with the expected result.
If slit A has no polarizer and slit B has a polarizer, then on the "wall" you will see the sum of 50% of the interference pattern and 50% of the diffraction pattern of A (I'm not sure about the 50%-50% split, something like that). I.e. you will see the interference pattern, but it will not be so sharp: the black lines will not be so black, the white lines will not be so white.
I think it's better to put a horizontal polarizer on A and a vertical polarizer on B. If you don't add any other polarizer, you will see the sum of the diffraction patterns of A and B, without interference lines.
If you put a polarizer, the result depends on the direction:
* If it is horizontal you will see only the diffraction pattern of A (without interference lines).
* If it is vertical you will see only the diffraction pattern of B (without interference lines).
* At 45° you will see the interference pattern like in the original double slit experiment.
* At the other 45° you will see the inverted interference pattern: the black lines will be white and the white lines will be black. (All of this bounded by the diffraction pattern.)
* At other angles, you get some mix of the diffraction patterns and the interference patterns.
It would be nice to see an experimental realization of this.
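In lieu of an experimental realization, the polarizer cases above can at least be sanity-checked with Jones vectors. A minimal sketch (the unit amplitudes and the ideal-analyzer model are my own simplifying assumptions, not from an actual setup):

```python
import numpy as np

# Slit A is horizontally polarized, slit B vertically; delta is the phase
# difference between the two paths at a point on the screen.
def intensity(delta, alpha=None):
    # Total field at the screen: sum of the two slit contributions.
    E = np.array([1.0, 0.0]) + np.exp(1j * delta) * np.array([0.0, 1.0])
    if alpha is None:
        # No analyzer: orthogonal polarizations can't interfere, so the
        # intensity is flat in delta (no interference lines).
        return float(np.sum(np.abs(E) ** 2))
    # An analyzer at angle alpha projects both fields onto a single axis,
    # restoring (or inverting) the interference.
    axis = np.array([np.cos(alpha), np.sin(alpha)])
    return float(np.abs(axis @ E) ** 2)

for label, alpha in [("no analyzer", None), ("horizontal", 0.0),
                     ("45 deg", np.pi / 4), ("-45 deg", -np.pi / 4)]:
    vals = [intensity(d, alpha) for d in (0, np.pi / 2, np.pi)]
    print(label, [round(v, 2) for v in vals])
```

The intensity works out to 1 + sin(2α)·cos δ: flat with no analyzer or with a horizontal one (only slit A's diffraction pattern), fringes 1 + cos δ at 45°, and the inverted fringes 1 − cos δ at the other 45°, matching the list above.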
There are two walls. One wall has the two slits; the other wall is where the particles/waves/balls/whatever collide and form the interference pattern (or not).
You don't need someone observing the second wall to get the interference patterns. You can replace the person with a photographic plate, the CCD sensor of a camera, or other equipment. All of them are more precise, reliable and even cheaper than a graduate student with paper and pencil.
The problem is if you try to add some type of equipment to the first wall to collect information about how the particles/waves/balls/whatever passed through it. Whatever equipment you add will disturb the flow, and it will kill the interference pattern.
This is not a technological problem. It is how the universe works. If you propose some particular method (like using light to detect the balls), you will sooner or later find that there is something that gets broken (see the former comment).
An important detail is that if you use a macroscopic object like a basketball, the slit size and the slit separation must be tiny (less than a millionth of the size of the nucleus of an atom, probably much less). So your intuition about how things work at the macroscopic level is not a good guide to how things work at the microscopic level. At the macroscopic level you can approximate the basketball as a perfect classical solid. It's just an approximation, a very good approximation.
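To put a rough number on "tiny": the relevant scale is the de Broglie wavelength. A back-of-the-envelope sketch (the basketball's mass and speed are my own assumed figures):

```python
# de Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34            # Planck's constant, J*s
mass, speed = 0.6, 10.0  # assumed basketball: 0.6 kg at 10 m/s
wavelength = h / (mass * speed)
nucleus_size = 1e-15     # rough size of an atomic nucleus, m

print(f"de Broglie wavelength: {wavelength:.2e} m")   # ~1.1e-34 m
print(f"fraction of a nucleus: {wavelength / nucleus_size:.0e}")
```

That comes out around 10^-34 m, roughly 10^-19 of a nuclear radius, which is why no real-world slit will ever diffract a basketball.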
> This is not a technological problem. It is how the universe works. If you propose some particular method (like using light to detect the balls), you will sooner or later find that there is something that gets broken (see the former comment).
What confuses me in various explanations like this is that the whole 'act of observing affects what you observe' thing seems to be rather particular, in that it turns the wave-like behavior into particle-like behavior, which strikes me as rather weird/counter-intuitive. Why don't we just get slightly different interference patterns? Or some spectrum of effect between wave-like and particle-like?
Is my confusion mostly a result of the limits of the analogies presented to me as a layman?
>Consider the double slit experiment: throw a bunch of basketballs at a wall and see what pattern of hits they leave by looking at where they hit the wall. If the wall is being looked at (observed), we see one pattern.
If the basketball has an energy of 1 quantum, and the energy used to observe it is 1 quantum or more (shining light on it to see the result in real time), then the pattern is different due to interference. If we don't use any energy to see the result in real time, then the result is different due to non-interference.
I could be wrong, but I have a different understanding on how all of that works. You keep talking about basketballs instead of waves or probability fields and I guess this is where we diverge.
> If the basketball has an energy of 1 quantum, and the energy used to observe it is 1 quantum or more (shining light on it to see the result in real time)
How would we observe the light that bounced off the basketball? Would we need to hit it with more light in order to detect where that light is? How would we detect that second particle of light; would we hit it with a third? And so on.
The answer is that we don't need to shine light to see the basketball. We can detect the basketball itself; for example, if the basketball were representing a photon of light, we could cover the wall with photomultiplier tubes ( https://en.wikipedia.org/wiki/Photomultiplier_tube )
As sibling comments have pointed out, the parent is wrong in saying that observing the wall will change the pattern. Rather, it's observing the slits that will change the pattern.
If we don't observe the slits, but we do mark the point on the wall where the basketball hits, and we do this over and over again, then the marks on the wall will show an interference pattern. Note that we're not throwing anything at the basketball: we're just waiting for it to hit the wall on its own. Also note that the marks themselves don't change anything; we could note them down on some paper instead, or type them into a spreadsheet, or whatever.
What if we do observe the slits, e.g. by putting a baseball in one and a cricket ball in the other? In this case, we'll detect the basketball hitting the wall and either a baseball or cricket ball. After many goes, the pattern on the wall made by the basketball will have two peaks (one in front of each slit), not an interference pattern. This seems analogous to your 'bounce a photon off it' explanation.
However, what if we got rid of the cricket ball? Half the time we would detect the baseball hitting the wall too, the other half we wouldn't (when the basketball went through the other slit). Yet the basketball will still make the two-peak-no-interference pattern, even though we didn't interact with it half of the time!
In fact, we could randomise which slit we put the baseball in, and mark only those goes that the basketball didn't hit the baseball, and we would still see two peaks without an interference pattern, even though those basketballs didn't hit anything (they always went through empty slits)!
This hopefully shows that your explanation (known as the observer effect) doesn't explain the interference pattern in the double-slit experiment.
That's a nice explanation but doesn't it give the impression that if we could find a better way to do that experiment, we could find a way around the problem, when instead it's a fundamental limit on what we can know about a quantum system?
No, under most interpretations of QM, things literally behave differently at that scale. Under Copenhagen, the wave literally collapses into a fixed position/momentum. The pre-measurement wave isn't a statement of our ignorance of the system but rather a description of reality. Many worlds is even more serious in its quantum literalism. Far from pushing around the subject of your experiment with a too-big measuring device, you're actually branching worlds where all predictions of the wave function occur.
To me, many worlds + time (as an inviolate observed vector) being merely a consequence of our inability to observe without moving forward in time, driven by our entropic, process-driven consciousness, seems by far the most comprehensive explanation of observable phenomena.
That observational uncertainty increases as the probability of direct interaction decreases (distance, time) strongly supports the hypothesis that observable phenomena are dictated strongly by the presentation and characteristic relationship of the observer to the phenomenon.
We know on the micro scale that all possible states exist simultaneously.
It seems logical, even axiomatic then that on the macro scale the same applies, but that we can only observe the bandwidth of states in which it is possible for us to exist to make the observation.
To claim that this state uncertainty is magically resolved in all cases and coherently for all possible observers into a single set of states seems an extraordinary claim requiring extraordinary evidence.
> The pre measurement wave isn't a statement of our ignorance of the system but rather a description of reality.
The post-measurement particle is a description of our ignorance, not a description of reality.
It still evolves according to the Schrödinger equation (which degrades to Newtonian dynamics for sharp and narrow waves), but for historical reasons we choose to talk about it as if it were a little billiard ball, not a wave that has merely been sharpened and narrowed down by the interaction we call measurement.
The problem is that fundamentally, there is a fixed amount of information that there is, that has to be distributed over two dimensions. Particles that are constrained to a small area (e.g. photons going through a slit, electrons bound to an atom) simply do not have a well-defined momentum. In fact the effect is something that you can experience with a sharp enough camera lens: as you close the aperture (therefore forcing the light going through it to be in a specific place) you slowly lose resolving power as the light stops behaving nicely and diffracts around/through the aperture.
I agree with both. This explanation is easier to understand, but it makes it look like a technological problem that can be solved, instead of a fundamental property of the universe.
I think david927's intuition is more correct here. The uncertainty in the position and momentum is intrinsic to quantum mechanics - it's built into the 'wave function'.
The suggestion that one pushes something away by throwing something else at it builds on a purely classical intuition, and wouldn't require quantum mechanics to explain if this were all we observed. The uncertainty in quantum mechanics is fundamental (to quantum mechanics) and emerges through a different, as yet unknown, mechanism.
3Blue1Brown has an extremely good explanation[1] of the intrinsic uncertainty, and why it's separate from measurement uncertainty. (the previous episode[2] is a recommended prerequisite for background on how the Fourier Transform works)
> emerges through a different, as yet unknown, mechanism.
In 3Blue1Brown's explanation[1], he shows how the intrinsic uncertainty is an inherent trade-off between measuring position and frequency. A short wave packet only a few wavelengths long correlates with a narrow (precise) range of positions, but also correlates well with a very wide range of frequencies. A Heisenberg-like uncertainty exists any time you are working with wave packets whose length is near the wavelength. 3Blue1Brown gives a very good example using Doppler radar.
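That Fourier trade-off is easy to check numerically. A sketch (the grid size and packet widths are arbitrary choices of mine, in units with hbar = 1): the narrower a Gaussian packet is in position, the wider its FFT is in spatial frequency, with the product pinned at the Gaussian minimum of 1/2.

```python
import numpy as np

N = 4096
x = np.linspace(-50, 50, N)
dx = x[1] - x[0]

def spreads(sigma):
    """Spread of a Gaussian packet in position and in spatial frequency."""
    psi = np.exp(-x**2 / (4 * sigma**2))         # Gaussian wave packet
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize
    # Frequency-space amplitude via FFT (magnitudes are what matter here).
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
    dk = k[1] - k[0]
    phi = np.fft.fftshift(np.fft.fft(psi))
    phi /= np.sqrt(np.sum(np.abs(phi)**2) * dk)  # normalize
    # Standard deviations (both distributions are centered at zero).
    sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
    sigma_k = np.sqrt(np.sum(k**2 * np.abs(phi)**2) * dk)
    return sigma_x, sigma_k

for s in (0.5, 1.0, 2.0):
    sx, sk = spreads(s)
    print(f"sigma_x={sx:.3f}  sigma_k={sk:.3f}  product={sx * sk:.3f}")
```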
Yes, I like these sources too. Good for building intuition. I would just add that Heisenberg-like here means that both systems share features of wave mechanics. Doppler type effects aren't quantum mechanical though.
When I suggest the mechanism is unknown, I mean that Heisenberg uncertainty is a postulate of quantum mechanics. In other words the fundamental reason that quantum mechanics should appeal to wave mechanics isn't really established - we don't really know yet the fundamental objects and interactions that lead to quantum mechanics (despite much effort).
I'm not sure about the historical part, but nowadays the uncertainty principle is not an independent postulate. It's deduced from the non-commutation of the operators that measure the position and the momentum of a particle. This can be done in the wave representation or in the matrix representation.
Moreover, similar calculations can be done with other measurements that don't commute. One that is very important is the spin of a particle along the x, y, and z axes.
Another is the polarization of a photon in directions that are at 45°. For example, most (all?) of the experiments on the EPR paradox are done with polarization instead of position-momentum, because polarization is much easier to measure. https://en.wikipedia.org/wiki/EPR_paradox
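For the spin case, the non-commutation is a one-liner to verify. A minimal check (standard Pauli matrices, in units of hbar/2): [sigma_x, sigma_y] = 2i·sigma_z, which is nonzero, so spin components along different axes obey an uncertainty relation.

```python
import numpy as np

# Pauli matrices for spin measurements along x, y, z.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# The commutator [A, B] = AB - BA is nonzero, hence the observables
# can't be simultaneously sharp.
commutator = sigma_x @ sigma_y - sigma_y @ sigma_x
print(np.allclose(commutator, 2j * sigma_z))  # True
```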
It's a good point that uncertainty relations exist for all kinds of physical observables. But whether they're expressed as commutation relations, as in Heisenberg's original formulation, or in whatever formulation you choose (wave mechanics, matrix mechanics, the Dirac representation, QFT, or anything else one can think of), it's still asserted rather than derived from an underlying set of fundamental physical objects and interactions.
Why unknown? Heisenberg's uncertainty principle can be derived mathematically, using a property of the Fourier transform. It has nothing to do with disturbing the system during measurement.
I'd say that's more a mathematical statement than physical derivation. The effort of subjects like string theory is to lay down fundamental objects and interactions from which other theories (quantum mechanics, gravity) emerge. But I don't think there is a final word at the moment of what the fundamental theories than result in quantum mechanics should look like.
From an accessibility point of view, it's also recommended to avoid links that span only such half-sentences.
Screen reader users will often navigate your page by cycling through the links that are on the page and then they'll get only the link-text read out, not the surrounding text.