The article is actually rather short; it points out the differences in the mathematical notation used in the development of the (then new) calculus, which is an interesting topic in itself since it has been a source of some confusion up to this day.
For example, using the dot to denote the (time) derivative is extremely confusing for many beginning students of analytical mechanics because of how casually it is mixed with the other notation adopted in most of the literature. A first attempt to parse an expression containing the time derivative of the partial derivative of a function (the Lagrangian) with respect to the "time derivative" of a generalized coordinate, the latter derivative being denoted by the dot, can be quite challenging.
My advice is to see the dot, at least in such contexts, merely as a kind of diacritic used to denote another independent variable - with the reminder that this new variable will, in some other appropriate context, be taken to be the time derivative of the other one (i.e. the one without the dot).
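For concreteness, the kind of expression I have in mind is the usual Euler-Lagrange equation (written out in LaTeX just to fix notation); the troublesome object is the partial derivative with respect to the dotted symbol, sitting inside a total time derivative:

```latex
% Euler-Lagrange equation for a Lagrangian L(t, q, \dot{q}):
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot{q}}\right)
  - \frac{\partial L}{\partial q} = 0
\]
```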
> to denote another independent variable [which] will ... be taken to be the time derivative of [another variable]
That makes no sense to me. If one variable is a function of another then by definition it is not an independent variable.
And I don't see why the dot notation should be any more confusing than any of the other myriad arbitrary squiggles, spatial relationships, fontifications, and punctualizations which are assigned semantic meaning in mathematics. The whole enterprise is a freakin' mess (referring to the notation, not mathematics in general).
I knew that would be confusing! Let me give you an example of the context where the dot should be seen just as a "reminder" rather than a differentiation symbol. A Lagrangian (say, in one dimension) is a function of three independent variables, and these variables are customarily denoted t, q, and q-dot, which obscures the fact that they are independent variables - something that would be conveyed much more clearly by using, say, a, b, c instead. The even more confusing part comes later, when one differentiates with respect to q-dot; it is this wrong impression of "differentiating by a derivative" that could also have been easily avoided by seeing the dot as just a kind of diacritic.
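To spell out the renaming I mean (the concrete Lagrangian below is just a standard one-particle textbook example, not something taken from the article):

```latex
% The Lagrangian as a function of three independent slots (a, b, c),
% e.g. one particle in one dimension with potential V:
\[
  L(a, b, c) = \tfrac{1}{2}\, m\, c^{2} - V(b)
\]

% Partial differentiation is done slot by slot; nothing is being
% "differentiated by a derivative":
\[
  \frac{\partial L}{\partial c}(a, b, c) = m\, c
\]

% Only afterwards is the third slot evaluated along a trajectory,
% c = \dot{q}(t), which is where the dot reacquires its usual meaning:
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\left[\frac{\partial L}{\partial c}\bigl(t,\, q(t),\, \dot{q}(t)\bigr)\right]
  = m\, \ddot{q}(t)
\]
```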
I've never seen the overdot in a presentation of Lagrangian mechanics mean anything other than a time derivative (i.e. a dependent variable). Can you point me to an example where it denotes an independent variable? Because that would be really weird.
Sure: what I was referring to in my example was the q-dot that appears in the "denominator" of the partial derivative of the Lagrangian.
Ah. I think I get it now. I misinterpreted what you meant by the word "see" (in "My advice is to see the dot..."). I thought you meant, "understand that in fact q-dot is an independent variable" when in fact you meant (AFAICT) "pretend that q-dot is an independent variable even though in fact it is not."
No, it is not. I cannot exactly pin down where the mistake in the linked paper is after looking at it for five minutes, but I am pretty sure it is related to his choice of r(t) in equation 4 as a piecewise polynomial, so that at t = T there is a discontinuity in the 4th derivative, which is non-physical: without a force tangential to the dome at the apex, the object would never move away from the apex of the dome. If you only look at lower-order derivatives like velocity and acceleration, then there is no discontinuity and everything seems superficially fine.
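For reference, and assuming the linked paper is the usual "dome" construction, the piecewise solution at issue looks roughly like this (my own reconstruction, with the constants chosen so the check works out, not a quote from the paper):

```latex
% A non-trivial solution of \ddot{r} = \sqrt{r} that sits at rest on the
% apex until t = T and then slides off (needs amsmath for "cases"):
\[
  r(t) =
  \begin{cases}
    0, & t \le T, \\[4pt]
    \dfrac{(t - T)^{4}}{144}, & t \ge T.
  \end{cases}
\]
% Check: for t > T, \ddot{r} = (t - T)^{2}/12 = \sqrt{r}, and r, \dot{r},
% \ddot{r}, \dddot{r} all vanish at t = T. The fourth derivative, however,
% jumps from 0 to 24/144 = 1/6 at t = T - the discontinuity mentioned above.
```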
I find it fairly hilarious, since it looks like the standard Divide by Zero algebra problem, but then error after error compounds the result.
The first error appears to be in an un-numbered equation:
sin(theta)=dh/dr which is equal to 0 at r=0!!
which is then substituted into the differential equation:
F=g dh/dr = sqrt(r) = d^2r/dt^2 again equal to ZERO at r=0
Of course this is also an abuse of Leibniz notation (mathematicians squirm whenever physicists do this sort of "algebra" on differentials); the equation is then integrated twice and differentiated twice (ignoring constants of integration) to get the expressions for r(t) and a(t).
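For anyone who wants to see that "algebra on differentials" spelled out, here is one way the double integration goes (a sketch, with the constants of integration set to zero as noted):

```latex
% Starting from the equation of motion along the surface:
\[
  \frac{\mathrm{d}^{2} r}{\mathrm{d}t^{2}} = \sqrt{r}
\]
% Multiply both sides by \dot{r} and integrate once (integration
% constant set to zero):
\[
  \tfrac{1}{2}\,\dot{r}^{2} = \tfrac{2}{3}\, r^{3/2}
  \quad\Longrightarrow\quad
  \dot{r} = \tfrac{2}{\sqrt{3}}\, r^{3/4}
\]
% Separate variables and integrate again (constant again set to zero):
\[
  4\, r^{1/4} = \tfrac{2}{\sqrt{3}}\, t
  \quad\Longrightarrow\quad
  r(t) = \frac{t^{4}}{144}
\]
% Differentiating twice then gives a(t) = t^{2}/12.
```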
The second key is the square root, which allows for multiple solutions of the differential equation (one of which is the normal r = a = 0), but that's just part of the side show. Physics produces all sorts of "non-physical" results from integrations, differential equations, etc. A classic example is a negative-time solution to a parabolic trajectory.
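Here is a quick numerical sanity check of that non-uniqueness (a throwaway Python sketch; the function names are mine): both candidate solutions satisfy the same equation and the same initial conditions.

```python
import numpy as np

# Two candidate solutions of r'' = sqrt(r) with r(0) = 0 and r'(0) = 0:
# the trivial one and the quartic one from the integration above.
def r_trivial(t):
    return np.zeros_like(t)

def r_quartic(t):
    return t**4 / 144.0

t = np.linspace(0.0, 5.0, 1001)
dt = t[1] - t[0]

for name, r in [("trivial", r_trivial), ("quartic", r_quartic)]:
    values = r(t)
    # Second derivative by central differences on the interior points.
    r_dd = (values[2:] - 2 * values[1:-1] + values[:-2]) / dt**2
    residual = np.max(np.abs(r_dd - np.sqrt(values[1:-1])))
    print(f"{name}: max |r'' - sqrt(r)| = {residual:.2e}")

# Both residuals come out tiny, yet the solutions differ: the right-hand
# side sqrt(r) is not Lipschitz at r = 0, so the usual uniqueness theorem
# (Picard-Lindelof) does not apply there.
```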
I would add the use of the statistics of continuous probability distributions to say that there is in fact ZERO probability that any ball is at r = 0, just to complete the silliness. Perhaps he's willing to co-author a follow-up?
Newtonian mechanics does not support the idea of a "spontaneous movement". On the other hand, the theory is incomplete (by Gödel), so it is inevitable that there are assertions that cannot be proved (or disproved) based on just Newton's Laws.
It's incomplete not because of Gödel; it's incomplete because Newtonian mechanics doesn't even attempt to describe most forces. It describes what forces are and how they act, but then it is left up to the reader to experimentally determine what forces there are in the universe. The only specified forces are gravitational.
That's not quite true. The third law (every action has an equal and opposite reaction) places hard constraints on what forces are possible. For example, if there is an apple resting on the table in front of me I can observe that it is not accelerating despite being in a gravitational field. I can deduce therefore that something must be exerting an upward force to counter the force of gravity, and that it's probably the table.
Of course, Newton doesn't let me prove it's the table. It could be, for example, an apple-suspending demon which only acts in the presence of tables (and other solid objects). Newton does not allow me to eliminate that possibility (but Occam does).
Well, in a way it does, as long as those forces are "conservative", which includes the gravitational field that you mentioned as well as the (static) electric field.
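To be explicit about the term, "conservative" here means derivable from a scalar potential; the standard definition and two familiar examples (written in LaTeX):

```latex
% A force field is conservative when it is the negative gradient of a
% scalar potential:
\[
  \mathbf{F}(\mathbf{r}) = -\nabla V(\mathbf{r})
\]
% Examples: Newtonian gravity and the static Coulomb force,
\[
  V_{\mathrm{grav}}(r) = -\frac{G M m}{r},
  \qquad
  V_{\mathrm{Coulomb}}(r) = \frac{q_{1} q_{2}}{4 \pi \varepsilon_{0}\, r}.
\]
```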
On the other hand, you are correct: it is not a goal of classical mechanics as a mathematical theory to describe any specific physical force in particular. In that sense it can be seen as "incomplete" - as a physical theory. But classical mechanics as a (modern) physical theory is not only "incomplete", it is also plain "wrong"! (But that is a topic for another time, perhaps.)