> Bohr postulated a distinction between the quantum world and the world of everyday objects. A “classical” object is an object of everyday experience. It has, for example, a definite position and momentum, whether observed or not. A “quantum” object, such as an electron, has a different status; it’s an abstraction. Some properties, such as electrical charge, belong to the electron abstraction intrinsically, but others can be said to exist only when they are measured or observed.
This is a common error. Macroscopic "everyday" objects don't have a definite position and momentum. Macroscopic objects are quantum objects. But when the mass is big enough, the position and momentum can be defined simultaneously with an error that is so small that you can just ignore the uncertainty and approximate them as classical objects.
(Looking at them as classical objects is just a good approximation, like ignoring the Moon's gravitational pull in most everyday situations.)
Anyway, the measurement problem is a real problem and nobody knows how to solve it. The current fad is to use decoherence to explain it. It is a promising idea, so I hope that in a few years/decades/centuries we can give a good explanation of measurement that avoids anything that looks like magic.
> This is a common error. Macroscopic "everyday" objects don't have a definite position and momentum. Macroscopic objects are quantum objects. But when the mass is big enough, the position and momentum can be defined simultaneously with an error that is so small that you can just ignore the uncertainty and approximate them as classical objects.
To put this into simpler terms:
Whenever we measure something, we need to throw something at it and then have that something rebound and hit us again.
In most experiments, we throw photons and have them rebound into our eyes.
Throwing a photon against a "classical object" - a chair, a ladder, a bacterium - is like throwing a tennis ball against a skyscraper. Your throw does have some effect, but it's very much negligible.
But when trying to measure quanta, you're now throwing your tennis ball at a football, or at another tennis ball. You'll be lucky if it rebounds at all, instead of just pushing the object that you're trying to measure out of the way. (You also don't have any smaller balls to throw.)
That's why when you measure something in quantum physics, you only know that it has this exact value in the moment that you measure it. It's going to be pushed away because you threw something at it, so after your measurement it has a different value.
You also can't observe it over a longer period, so there's no way to know whether it was at the measured position only in that moment or for a long time beforehand.
This is not a correct description at all of QM complementary observables. This is a purely classical explanation (and was one of the first layman "explanations" back in 1920, but that was 100 years ago and QM is much better understood now).
Could you elaborate on that? From my extremely limited knowledge it does seem like a just-so explanation (what you're responding to), but I'm not sure why.
Yet the observer effect is not the reason why we can't know an object's position and velocity at the same time. There are two ways we can see that this supposed explanation is a red herring:
I don’t think this analogy holds up. Consider the double slit experiment: throw a bunch of basketballs at a wall and see what pattern of hits they leave by looking at where they hit the wall. If the wall is being looked at (observed), we see one pattern. If we look away, conduct the experiment, then check it, we find another.
To me that suggests the act of “observation” affects the probability distribution of likely states. If a tree falls in a forest and no one is around, then it doesn’t really fall, it just has a probability of having fallen that is not resolved until someone goes to check. How does your analogy account for those effects? For me, it looks like quantum collapse is causing the states of these objects to become “resolved” where at first they were “unresolved”, and this suggests we live in a universe that knows how to save on memory and is fundamentally probabilistic.
If you've ever run into a space leak in Haskell, you would see how having unresolved thunks can use more memory than eager evaluation.
But that has some merit to it in that you can describe QC as merging equivalent paths and then sampling from a wave distribution afterwards.
One fun variant on the double slit experiment is taking a coherent laser beam (everything is in phase) and splitting it, sending it through two paths, A and B, then merging it and shining it on the wall.
If the two path lengths are equal, there is no effect from splitting it. But if we make B take slightly more time we can get an interference pattern. If it gets shifted by half a wavelength, the light will cancel out!
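A minimal numerical sketch of that (idealized: unit amplitudes, lossless recombination, and a 633 nm wavelength picked only for concreteness): the recombined intensity follows the phase difference between the two paths, with full brightness at zero delay and complete cancellation at half a wavelength.

```python
import numpy as np

wavelength = 633e-9                                   # assumed laser wavelength, m
extra_path = np.array([0, 0.25, 0.5]) * wavelength    # extra length of path B

# Recombine two unit-amplitude fields; path B picks up a phase delay.
phase = 2 * np.pi * extra_path / wavelength
intensity = np.abs(1 + np.exp(1j * phase)) ** 2 / 4   # normalized to 1 at zero delay

print(intensity)   # [1.0, 0.5, ~0.0]: equal paths add up, half a wavelength cancels out
```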
Now if you insert a polarizing filter along path B, when you merge the streams you could tell which path the light came from, and the interference pattern disappears. This is not exactly measuring which path it took, but making it possible to tell if you added a sensor.
Observation is not required, just making the streams distinguishable.
But now if we add another polarizing filter downstream we can erase the distinction between them, and now you get interference effects again!
Adding a polarizer is a nice variant of the experiment. I don't think I've heard it before. I like it, but I disagree with the expected result.
If slit A has no polarizer and slit B has a polarizer, then on the "wall" you will see the sum of 50% of the interference pattern and 50% of the diffraction pattern of A (I'm not sure about the 50%-50% split, but something like that). I.e. you will see the interference pattern, but it will not be as sharp: the black lines will not be as black and the white lines will not be as white.
I think it's better to put a horizontal polarizer on A and a vertical polarizer on B. If you don't add any other polarizer, you will see the sum of the diffraction patterns of A and B, without interference lines.
If you add another polarizer before the wall, the result depends on its direction:
* If it is horizontal you will see only the diffraction pattern of A (without interference lines).
* If it is vertical you will see only the diffraction pattern of B (without interference lines).
* At 45° you will see the interference pattern, like in the original double slit experiment.
* At the other 45° you will see the inverted interference pattern: the black lines will be white and the white lines will be black. (All of this bounded by the diffraction envelope.)
* At other angles, you get some mix of the diffraction patterns and the interference patterns.
It would be nice to see an experimental realization of this.
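In the meantime, here is a sketch of the expected result using Jones vectors (idealized assumptions: unit amplitudes from each slit, perfect polarizers, and the single-slit diffraction envelope ignored). With a horizontal polarizer on A and a vertical one on B, the intensity behind an analyzer at angle theta comes out as 1 + sin(2·theta)·cos(Δφ), which reproduces the cases listed above: no fringes at 0° or 90°, ordinary fringes at 45°, inverted fringes at the other 45°.

```python
import numpy as np

# Phase difference between slit A and slit B at different points on the screen.
dphi = np.linspace(0, 2 * np.pi, 9)

def intensity(theta_deg):
    """Screen intensity behind an analyzer at angle theta.
    Slit A carries horizontal polarization, slit B vertical (unit amplitudes)."""
    t = np.radians(theta_deg)
    amp = np.cos(t) * 1.0 + np.sin(t) * np.exp(1j * dphi)  # both fields projected onto the analyzer axis
    return np.abs(amp) ** 2

print(intensity(0))    # flat: only slit A gets through, no fringes
print(intensity(90))   # flat: only slit B gets through, no fringes
print(intensity(45))   # 1 + cos(dphi): the usual fringes
print(intensity(135))  # 1 - cos(dphi): inverted fringes
```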
There are two walls. One wall has two slits; the other wall is where the particles/waves/balls/whatever collide and form the interference pattern (or not).
You don't need someone observing the second wall to get the interference pattern. You can replace the person with a photographic plate, the CCD sensor of a camera, or other equipment. All of them are more precise, reliable and even cheaper than a graduate student with paper and pencil.
The problem is if you try to add some type of equipment to the first wall to collect information about how the particles/waves/balls/whatever passed through it. Whatever equipment you add, it will disturb the flow and it will kill the interference pattern.
This is not a technological problem. It is how the universe works. If you propose to use some particular method (like using light to detect the balls) you will sooner or later find that there is something that gets broken (see the former comment).
An important detail is that if you use a macroscopic object like a basketball, the slit size and the slit separation must be tiny (less than a millionth of the size of the nucleus of an atom, probably much less). So your intuition about how things work at the macroscopic level is not a good guide to how things work at the microscopic level. At the macroscopic level you can approximate the basketball as a perfect classical solid. It's just an approximation, a very good approximation.
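To put a rough number on "tiny" (a back-of-the-envelope sketch; the basketball's mass and speed are made-up illustrative values): the relevant scale is the de Broglie wavelength h/(mv).

```python
h = 6.626e-34   # Planck's constant, J*s
m = 0.6         # assumed basketball mass, kg
v = 10.0        # assumed throwing speed, m/s

wavelength = h / (m * v)   # de Broglie wavelength of the basketball
print(wavelength)          # ~1.1e-34 m, roughly 19 orders of magnitude below a nuclear radius
```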
> This is not a technological problem. It is how the universe works. If you propose to use some particular method (like using light to detect the balls) you will sooner or later find that there is something that gets broken (see the former comment).
what confuses me in various explanations like this is that the whole 'act of observing affects what you observe' thing seems to be rather particular in that it turns the wave-like behavior into particle-like behavior, which strikes me as rather weird/counter-intuitive. Why don't we just get slightly different interference patterns? Or some spectrum of effect between wave-like and particle-like?
Is my confusion mostly a result of the limits of the analogies presented to me as a layman?
>Consider the double slit experiment: throw a bunch of basketballs at a wall and see what pattern of hits they leave by looking at where they hit the wall. If the wall is being looked at (observed), we see one pattern.
If the basketball has an energy of 1 quantum, and the energy used to observe it is 1 quantum or more (shining light to see the result in real time), then the pattern is different due to interference. If we don't use any energy to see the result in real time, then the result is different due to non-interference.
I could be wrong, but I have a different understanding on how all of that works. You keep talking about basketballs instead of waves or probability fields and I guess this is where we diverge.
> If the basketball has an energy of 1 quantum, and the energy used to observe it is 1 quantum or more (shining light to see the result in real time)
How would we observe the light that bounced off the basketball? Would we need to hit it with another light in order to detect where that light is? How would we detect that second particle of light; would we hit it with a third? And so on.
The answer is that we don't need to shine light to see the basketball. We can detect the basketball itself; for example, if the basketball were representing a photon of light, we could cover the wall with photomultiplier tubes ( https://en.wikipedia.org/wiki/Photomultiplier_tube )
As sibling comments have pointed out, the parent is wrong in saying that observing the wall will change the pattern. Rather, it's observing the slits that will change the pattern.
If we don't observe the slits, but we do mark the point on the wall where the basketball hits, and we do this over and over again, then the marks on the wall will show an interference pattern. Note that we're not throwing anything at the basketball: we're just waiting for it to hit the wall on its own. Also note that the marks themselves don't change anything; we could note them down on some paper instead, or type them into a spreadsheet, or whatever.
What if we do observe the slits, e.g. by putting a baseball in one and a cricket ball in the other? In this case, we'll detect the basketball hitting the wall and either a baseball or cricket ball. After many goes, the pattern on the wall made by the basketball will have two peaks (one in front of each slit), not an interference pattern. This seems analogous to your 'bounce a photon off it' explanation.
However, what if we got rid of the cricket ball? Half the time we would detect the baseball hitting the wall too, the other half we wouldn't (when the basketball went through the other slit). Yet the basketball will still make the two-peak-no-interference pattern, even though we didn't interact with it half of the time!
In fact, we could randomise which slit we put the baseball in, and mark only those goes that the basketball didn't hit the baseball, and we would still see two peaks without an interference pattern, even though those basketballs didn't hit anything (they always went through empty slits)!
This hopefully shows that your explanation (known as the observer effect) doesn't explain the interference pattern in the double-slit experiment.
That's a nice explanation but doesn't it give the impression that if we could find a better way to do that experiment, we could find a way around the problem, when instead it's a fundamental limit on what we can know about a quantum system?
No, under most interpretations of QM, things literally behave differently at that scale. Under Copenhagen, the wave literally collapses into a fixed position/momentum. The pre-measurement wave isn't a statement of our ignorance of the system but rather a description of reality. Many-worlds is even more serious in its quantum literalism: far from pushing around the subject of your experiment with a too-big measuring device, you're actually branching worlds where all predictions of the wave function occur.
To me, many worlds plus time (as an inviolate observed vector) being merely a consequence of our inability to observe without moving forward in time, based on our entropic, process-driven consciousness, seems by far the most comprehensive explanation of observable phenomena.
That observational uncertainty increases as the probability of direct interaction decreases (with distance, with time) strongly supports the hypothesis that observable phenomena are dictated strongly by the presentation and characteristic relationship of the observer to the phenomenon.
We know on the micro scale that all possible states exist simultaneously.
It seems logical, even axiomatic then that on the macro scale the same applies, but that we can only observe the bandwidth of states in which it is possible for us to exist to make the observation.
To claim that this state uncertainty is magically resolved in all cases and coherently for all possible observers into a single set of states seems an extraordinary claim requiring extraordinary evidence.
> The pre-measurement wave isn't a statement of our ignorance of the system but rather a description of reality.
The post-measurement particle is a description of our ignorance, not a description of reality.
It still evolves according to the Schrödinger equation (which reduces to Newtonian dynamics for sharp, narrow waves), but for historical reasons we choose to talk about it as if it were a little billiard ball, not a wave that has merely been sharpened and narrowed by the interaction we call measurement.
The problem is that, fundamentally, there is only a fixed amount of information, and it has to be distributed between position and momentum. Particles that are constrained to a small area (e.g. photons going through a slit, electrons bound to an atom) simply do not have a well-defined momentum. In fact the effect is something that you can experience with a sharp enough camera lens: as you close the aperture (therefore forcing the light going through it to be in a specific place) you slowly lose resolving power as the light stops behaving nicely and diffracts around/through the aperture.
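A rough numerical sketch of that trade-off (not a lens simulation: it assumes a Gaussian "aperture" profile and arbitrary units): confine the field more tightly in position and its spread in spatial frequency (i.e. diffraction angle) grows, with the product pinned near 1/2.

```python
import numpy as np

x = np.linspace(-40, 40, 2 ** 13)                               # position across the aperture (arbitrary units)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))   # angular spatial frequency

def spreads(width):
    psi = np.exp(-x ** 2 / (4 * width ** 2))    # Gaussian aperture amplitude, rms width = width
    phi = np.fft.fftshift(np.fft.fft(psi))      # far-field (momentum-space) amplitude
    px = np.abs(psi) ** 2 / np.sum(np.abs(psi) ** 2)
    pk = np.abs(phi) ** 2 / np.sum(np.abs(phi) ** 2)
    return np.sqrt(np.sum(px * x ** 2)), np.sqrt(np.sum(pk * k ** 2))

for width in (2.0, 1.0, 0.5):
    sx, sk = spreads(width)
    print(f"sigma_x={sx:.3f}  sigma_k={sk:.3f}  product={sx * sk:.3f}")  # product stays ~0.5
```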
I agree with both. This explanation is easier to understand, but it makes it look like a technological problem that can be solved, instead of a fundamental property of the universe.
I think david927's intuition is more correct here. The uncertainty in the position and momentum is intrinsic to quantum mechanics - it's built into the 'wave function'.
The suggestion that one pushes something away by throwing something else at it builds on a purely classical intuition, and wouldn't require quantum mechanics to explain if this were all we observed. The uncertainty in quantum mechanics is fundamental (to quantum mechanics) and emerges through a different, as yet unknown, mechanism.
3Blue1Brown has an extremely good explanation[1] of the intrinsic uncertainty, and why it's separate from measurement uncertainty. (the previous episode[2] is a recommended prerequisite for background on how the Fourier Transform works)
> emerges through a different, as yet unknown, mechanism.
In 3Blue1Brown's explanation[1], he shows how the intrinsic uncertainty is an inherent trade-off of trying to measure both position and frequency. A short wave packet only a few wavelengths long correlates with a narrow (precise) range of positions, but also correlates well with a very wide range of frequencies. A Heisenberg-like uncertainty exists any time you are working with wave packets whose length is near the wavelength. 3Blue1Brown gives a very good example using Doppler radar.
Yes, I like these sources too. Good for building intuition. I would just add that Heisenberg-like here means that both systems share features of wave mechanics. Doppler type effects aren't quantum mechanical though.
When I suggest the mechanism is unknown, I mean that Heisenberg uncertainty is a postulate of quantum mechanics. In other words the fundamental reason that quantum mechanics should appeal to wave mechanics isn't really established - we don't really know yet the fundamental objects and interactions that lead to quantum mechanics (despite much effort).
I'm not sure about the historical part, but nowadays the uncertainty principle is not an independent postulate. It's deduced from the non-commutation of the operators that measure the position and the momentum of a particle. This can be done in the wave representation or in the matrix representation.
Moreover, similar calculations can be done with other measurements that don't commute. One that is very important is the spin of a particle along the x, y, and z axes.
Another is the polarization of a photon in directions that are at 45°. For example, most (all?) of the experiments on the EPR paradox are done with polarization instead of position-momentum, because polarization is much easier to measure. https://en.wikipedia.org/wiki/EPR_paradox
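If it helps to see the non-commutation part concretely, here's a small numerical sketch using the spin example (Pauli matrices and a randomly chosen state, purely illustrative): the Robertson bound ΔA·ΔB ≥ |⟨[A,B]⟩|/2 follows from the commutator and checks out numerically.

```python
import numpy as np

# Pauli matrices: spin measurements along x and y (in units of hbar/2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                        # random normalized spin state

def stddev(op):
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ op @ psi).real
    return np.sqrt(mean_sq - mean ** 2)

commutator = sx @ sy - sy @ sx                    # = 2i * sigma_z, not zero
bound = 0.5 * abs(np.vdot(psi, commutator @ psi))

print(stddev(sx) * stddev(sy), ">=", bound)       # Robertson uncertainty relation holds
```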
It's a good point that uncertainty relations exist for all kinds of physical observables. But whether they're expressed as commutation relations or as in Heisenberg's original formulation, or whatever formulation you choose (wave mechanics, matrix mechanics, the Dirac representation, QFT, or anything else one can think of), it's still asserted rather than derived from an underlying set of fundamental physical objects and interactions.
Why unknown? Heisenberg's uncertainty principle can be derived mathematically, using a property of the Fourier transform. It has nothing to do with disturbing the system during measurement.
I'd say that's more a mathematical statement than a physical derivation. The effort of subjects like string theory is to lay down fundamental objects and interactions from which other theories (quantum mechanics, gravity) emerge. But I don't think there is a final word at the moment on what the fundamental theories that result in quantum mechanics should look like.
> Macroscopic objects are quantum objects. But when the mass is big enough, the position and momentum can be defined simultaneously with an error that is so small that you can just ignore the uncertainty and approximate them as classical objects.
Exactly. And, due to quantum tunnelling, there's a teeny tiny chance I could walk through a wall, but because there are a lot of particles in me that all have to tunnel a relatively large distance, and the probability of even one of my particles tunnelling that far is so tiny, it won't happen.
Intuitively, the difference between quantum and classical objects is a lot like the central limit theorem. Add up a bunch of independent uncertainties, all with similar distributions, and the total's relative spread (its uncertainty compared to its size) ends up very small.
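A quick classical illustration of that intuition (this is just the law of large numbers standing in for the real quantum argument; the means and spreads below are made-up numbers): the fractional spread of a sum of many similar independent fluctuations shrinks roughly like 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (10, 1_000, 100_000):
    # n independent contributions, each with mean 1.0 and spread 0.5
    totals = rng.normal(loc=1.0, scale=0.5, size=(100, n)).sum(axis=1)
    print(n, totals.std() / totals.mean())   # relative spread falls roughly like 1/sqrt(n)
```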
I skimmed a paper about twenty years ago that pretty much explained the difference to me.
Macroscopic objects are subject to quantum mechanical cascades. The end result is that the probability constraints for an object composed of an enormous number of interacting particles rapidly go to zero.
The example was a playing card stood on a knife edge. The probability cascade causes it to fall one direction or the other. We can't predict which side it will fall over on. But it will fall.
I thought the current fad was to use holography to explain it. The problem is that to measure something in quantum mechanics requires separating the world into two systems. To make exact measurements, the observing system has to be infinitely large to avoid quantum fluctuation, and the experiment has to be repeated infinitely often because it is probabilistic.
To deal with the infinite size, particle physicists push the observer systems out to infinity. This is fine for scattering experiments because collider detectors are so large and far away from a collision it might as well be infinity, but this poses a problem for quantum gravity and cosmology.
It seems that the most popular approaches to dealing with it have been based on Juan Maldacena's AdS/CFT correspondence, which is a toy model that allows those observers off at infinity to live on the boundary of spacetime and to describe the behavior of the interior entirely in terms of the projection of that behavior onto the boundary.
The challenge is that our universe doesn't look like Anti de Sitter space, but rather like (plain) de Sitter space, and so there is no natural boundary to project onto, which has led some physicists to question the legitimacy of quantum mechanics' need to divide the world into two systems.
The measurement problem seems like an artifact of that, and what's really happening is that measurement causes the two systems to become entangled. Wave function collapse would then be the subjective result of the two systems entering a mixed state, because the observed system can no longer be described independently of the system that just observed it.
A route other than "dividing the world into two categories" has been available since the early days of quantum mechanics: if you believe in many worlds, you don't have that problem. It is important to recognize that we are not talking about physics here, we are talking about philosophy.
If anything, the greatest shock of quantum mechanics is how it shows that "questions we naturally want to ask about the natural world" and "totally unresolvable philosophical disputes" are not completely disjoint.
And that is exactly why it isn't a paradox in quantum mechanics. Quantum mechanics is consistent for the domain of questions it was designed to ask: particle scattering experiments. It's when people push it into philosophy or other unintended domains that they run into problems.
Quantum mechanics is also totally consistent for materials science problems, astrophysical problems (pending gravity), chemical problems (including human beings), and so on. It's really the philosophy that has a quantum mechanics problem. ;)
A paradox is an internal inconsistency. No one claims that quantum mechanics is the fundamental theory of everything. Like the convergence of a Taylor series, there is a range of energies within which it produces correct calculations, and if you go outside that the result is undefined.
Strictly speaking that is a QFT problem, or really a Standard Model problem, more so than a quantum mechanics problem. In order for it to be a quantum mechanics problem it would have to happen everywhere in quantum mechanics, and the single-electron Schrödinger-equation hydrogen atom definitely has no (theoretical) problem.
> with an error that is so small that you can just ignore the uncertainty and approximate them as classical objects.
I’d say that’s the gist of it, that we cannot “just ignore” the uncertainty because it’s too small, because if you do that then your model and the real world are indeed different. Also, at the end of it all what does “too small” mean? “Too small” compared to what? To a galaxy? To a super-nova? To a planet? To a cat? To the things we try to “discover” at CERN? To things smaller than them? To say nothing of the fact that comparing a number to physical stuff will eventually bring you head on against Zeno’s paradox, one way or the other.
I agree though that using the “too small” trick does generally allow us to do great things, like send stuff to the confines of the solar system or to build nuclear bombs, i.e. it allows us to be efficient, but that does not mean that by being efficient our models are also identical representations of what reality really is, so to speak.
What I’m saying is that maybe the “mathematization” of the physical world is a leaking abstraction, and that maybe we’d be better off by saying “we’ll never really know what the world is made up of”. But the problem is that they haven’t awarded science Nobel prizes to people saying “there’s really no way for us to learn how the Universe really works”, at least not that I know of, at best you’re seen as a mysticist when saying that, at worst as a know-nothing or a cynic.
> To say nothing of the fact that comparing a number to physical stuff will eventually bring you head on against Zeno’s paradox, one way or the other.
Zeno's paradoxes are soluble by basic calculus. Once you distinguish between countable and uncountable infinities, the problem of crossing a bounded interval in finite time ceases to be paradoxical.
This is basically to say I don't think this is a particular problem for the resolution of outstanding inconsistencies in theoretical physics.
>Zeno's paradoxes are soluble by basic calculus. Once you distinguish between countable and uncountable infinities, the problem of crossing a bounded interval in finite time ceases to be paradoxical.
They're not mathematically paradoxical, but that doesn't necessarily mean that the paradoxes are solved, because there's more than just math going on. A lot of the paradoxes hinge on the question of whether it is in fact possible to traverse an infinite series of positions in space or moments in time. I have no idea whether it is or it isn't, but the issue isn't settled by calculus. Calculus allows you to figure out what the result would be if such a traversal were to occur.
No, calculus does in fact resolve them. More specifically, formalizing continuity and completeness obviates the issue. Like I said, if you distinguish between countable and uncountable infinities, there is no longer a paradox.
The only reason it appears to be paradoxical is because you're mandating someone move from a real coordinate (a, b, c) to another real coordinate (a', b', c') on the interval [x, y] while also passing through the set of all real points between them, without first defining a notion of distance or time. That's not possible for the same reason you can't ask someone to count all reals on an interval, because continuity implies uncountability. Between any pair of real numbers is another real number, and it takes an equal amount of effort (and time) to count any given number.
At first glance, this seems like a paradox because we can clearly move from (a, b, c) to (a', b', c'), yet we shouldn't be capable of any movement whatsoever. Calculus solves this problem by formalizing Zeno's demand as a geometric series with a notion of distance. The requirement is that you move from one position to another position while passing through every halfway position between them. Equip the vector space ℝ^3 with the Euclidean metric so you have a metric space (defined distance). Then we have the sequence of steps
(a, b, c) -> (a, b, c) + ((a', b', c') - (a, b, c))/2 -> ... -> (a', b', c')
More concretely: an infinite expansion such as 0.99999999... is equal to 1. Each half step will take only half as long to traverse as the half step preceding it, once you've defined Euclidean distance on a continuous space. The first step to formalizing sequences and series like this is constructing the real numbers as a continuous set and distinguishing between different types of infinities. Then you can define limits, and from there you're essentially done.
Note that at no point am I talking about what happens when you reach 1, or (a', b', c'), or anywhere else. I'm just explaining how you reach it in finite time. If you can get arbitrarily close to a point, you can get to the point itself.
I guess I should be more technical and say that real analysis solves this problem, because what's really doing the heavy lifting here is the topology induced by defining a metric on ℝ^3 in combination with the notion of limits.
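To make the convergence explicit, here is a tiny sketch (the numbers are illustrative: each step covers half of what remains, so step k covers 1/2^k of the whole interval, and at constant speed takes 1/2^k units of time).

```python
# Achilles crosses a unit interval: step k covers 1/2**k of the interval
# and (at constant speed) takes 1/2**k units of time.
position, total_time = 0.0, 0.0
for k in range(1, 51):
    step = 0.5 ** k
    position += step
    total_time += step
    if k in (1, 5, 20, 50):
        print(k, position, total_time)   # both partial sums approach 1.0 and never exceed it
```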
Clearly, not all infinite sequences can be summed. So e.g., 1, -1, 1, -1, … has no sum.
Now suppose that Achilles takes alternate forward and backward steps a (countably) infinite number of times. The first step takes one second, the second step takes half a second, and so on. (Each step covers the same distance.) Where does he end up after 2 seconds?
There’s no sensible answer to that question. Does that mean that Achilles can’t in fact traverse that particular sequence of steps? But then, why should he be unable to traverse a particular infinite sequence of steps merely because its sum is undefined? After all, the result of each individual step is perfectly well defined. If it’s possible in general to traverse infinite sequences, what stops him traversing that one?
To me, this just seems like Zeno’s paradox all over again. The mathematical treatment is more sophisticated, but the underlying paradox remains.
Zeno himself probably wouldn’t have distinguished carefully between summing an infinite sequence and spatially or temporally traversing it, since both notions would have seemed equally absurd from his point of view. Modern mathematics has shown us that the former isn’t in fact absurd. But Zeno’s paradoxes are arguably about the latter.
> Clearly, not all infinite sequences can be summed. So e.g., 1, -1, 1, -1, … has no sum.
Your geometric series is not a summation of the steps or positions, but rather the time required to complete each step. Therefore your example is characterized by an identical geometric series to the model I used in my previous comment.
More generally, Zeno’s paradox can be succinctly resolved by citing the monotone convergence theorem: every bounded, monotone sequence converges. The time required to complete the infinite series of half steps converges, because (again, with the definition of a metric) the time required to complete each individual step decreases commensurate with the change in distance.
>Therefore your example is characterized by an identical geometric series to the model I used in my previous comment.
I am not sure what you mean here. You can calculate the sum of the time series, but you can't calculate Achilles' final position, which is the question at issue. The question remains: if it's possible in general to traverse an infinite sequence of steps in space, why is it not possible to traverse the one that I specified? "Solving" Zeno's paradox by admitting the possibility of traversing an infinite series of points in space or time seems to give rise to paradoxes just as deep as the originals.
> The time required to complete the infinite series of half steps converges, because (again, with the definition of a metric) the time required to complete each individual step decreases commensurate with the change in distance.
Yes, that was Aristotle's observation and a key part of his proposed solution to the paradox. The problem is that this explains why it's possible to sum the series, not why it's possible to traverse it. You seem to be taking the position that any series that cannot be summed cannot be traversed. But why should that be so?
Thinking about this a bit more, I think what I'm trying to say is that Zeno's paradox is more about supertasks than it is about the problem of computing the sum of an infinite series. There's a nice summary article that I found here:
What??? Every physicist believes that the uncertainty principle applies to any object, from galaxies to elementary particles. So if you try not to apply the uncertainty principle to a particle collision at CERN, they will think that you are crazy. But if you try to add the uncertainty principle to the simulation of the movement of the objects in a galaxy, they will think that you are crazy too, because the difference is very small and the calculations are much more complicated.
There are some applications of the uncertainty principle to neutron stars, and IIRC to the background radiation. Nobody thinks that the uncertainty principle doesn't apply to big objects; it's just that in most cases the difference is ridiculously small.
Perhaps the “mathematization” of the physical world is a leaking abstraction, perhaps not. We don't know. If you can prove that “we’ll never really know what the world is made up of” you will get a Nobel prize. But you will need a real proof, not handwaving.
"Too small" means either it can not be detected experimentally, or even in principle, depending on whether you are allowing gedankenexperiments.
I don't think it is reasonable to expect our models to be identical to reality. Otherwise, they wouldn't be models, they would be reality. A theory is correct if it produces experimentally verified predictions.
No. It would take far more digits of accuracy than we can measure.
For example, here's an object of 1 kg mass. The product of the uncertainty of its position and momentum is at least h-bar / 2, or h / 4 pi, which is about 5e-35 kg m^2/s. We're going to measure the position to one wavelength of visible light (say, 500 nm wavelength, so 5e-7 m). That means that we need to be able to measure the momentum to an accuracy of 1e-28 kg m/s, which for a mass of 1 kg means measuring the velocity to within an accuracy of 1e-28 m/s. Good luck with that...
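A quick check of that arithmetic (using the standard value of ħ and the 500 nm position uncertainty assumed above):

```python
hbar = 1.054571817e-34    # reduced Planck constant, J*s
mass = 1.0                # kg
delta_x = 500e-9          # position pinned down to one wavelength of visible light, m

delta_p = (hbar / 2) / delta_x   # minimum momentum uncertainty, kg*m/s
delta_v = delta_p / mass         # corresponding velocity uncertainty, m/s
print(delta_p, delta_v)          # ~1.05e-28 kg*m/s and ~1.05e-28 m/s
```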
One of my homework assignments in a quantum mechanics class was to calculate the uncertainty of a cheetah that was running at 65 mph (and given its weight).
You can in theory but the numbers get so small that they're clearly unmeasurable.
I believe gus_massa was just saying that we simply define out of existence this supposed problem with measurement (for macro objects). This is done because even though the 'furniture of the world'/everyday objects/macroscopic objects are still quantum entities, at this level, error rates are such that they can be treated in a classical manner.