
> Step 1: Let S1 = 1 - 1 + 1 - 1 ...

Because it's not so clearly stated in the article, step 1 is already flawed.

> the algebraic rules that apply to regular numbers do not apply to non-converging infinite sums

I'm sure the author knows this but they don't even apply to all converging infinite sums. (Only absolutely converging ones.) E.g. 1 - 1/2 + 1/3 - 1/4 ... converges to a value, but you can also re-arrange the terms to get any value you want.
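A concrete instance of this: taking two positive terms for every negative term of 1 - 1/2 + 1/3 - ... produces a rearrangement that converges to (3/2) ln 2 rather than ln 2. A quick Python sketch (the number of blocks is an arbitrary truncation):

```python
import math

def rearranged_sum(blocks):
    """Sum the alternating harmonic series rearranged as two positive
    terms followed by one negative term:
    1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ..."""
    s = 0.0
    for k in range(1, blocks + 1):
        s += 1.0 / (4 * k - 3) + 1.0 / (4 * k - 1) - 1.0 / (2 * k)
    return s

print(rearranged_sum(100_000))   # close to 1.0397...
print(1.5 * math.log(2))         # (3/2) ln 2 = 1.0397...
```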



And, strangely enough, the value that 1 - 1/2 + 1/3 - 1/4 + 1/5... converges to if evaluated left to right is the natural log of 2. It doesn't converge very quickly, though. It takes a billion terms to reach just six decimal places of precision.


Actually, it only takes a million terms to reach six decimal places.
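A quick numerical check of the convergence rate (Python sketch; the term count is a free parameter). The error of the alternating series after N terms is roughly 1/(2N), so a million terms gives about six correct decimal places:

```python
import math

def alt_harmonic(n):
    """Partial sum of 1 - 1/2 + 1/3 - ... up to n terms."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

err = abs(alt_harmonic(1_000_000) - math.log(2))
print(err)   # roughly 5e-7, i.e. about six correct decimal places
```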

Why is it strange that this sum is ln 2?


It's strange that it converges to a number we can describe succinctly, as opposed to a random number that has no relation to anything else.


Oh, this is actually surprisingly easy to see with elementary calculus knowledge.

High school algebra tells us that

1 - x + x^2 - ... + (-1)^n x^n = 1 + (-x) + (-x)^2 + ... + (-x)^n = (1 - (-x)^{n+1})/(1-(-x)).

Thus, for the infinite sum we have:

1 - x + x^2 - x^3 + ... = lim_{n -> inf} (1 - x + x^2 - ... + (-1)^n x^n) = lim_{n -> inf} (1 - (-x)^{n+1})/(1-(-x)) = 1/(1+x), for |x| < 1.

But then if you integrate the infinite sum term by term, you'll get:

\int (1 - x + x^2 - x^3 + ...) = x - x^2/2 + x^3/3 - ...

On the other hand, if you integrate 1/(1+x), you'll get precisely log(1+x). Now, one would want to argue that:

log(1+x) = x - x^2/2 + x^3/3 - ...

and here this is actually true, but in general this may not be true that the integral of the sum of infinite series is equal to sum of the integrals of each term -- what you need here is the notion of uniform convergence, but fortunately 1 - x + x^2 - ... converges uniformly.
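Inside the radius of convergence this is easy to check numerically; a Python sketch comparing a truncated series against log(1+x) at an arbitrary point x = 0.5 (the truncation depth is also arbitrary):

```python
import math

def log1p_series(x, n):
    """Partial sum x - x^2/2 + x^3/3 - ... up to n terms."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n + 1))

x = 0.5
print(log1p_series(x, 50))   # agrees with log(1 + x) to many decimal places
print(math.log(1 + x))
```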

But, since Leonhard Euler wouldn't really be pedantic about stuff like this, as long as the result is correct, neither should the occasional math hacker. Thus, we put x = 1 in both sides of the equality and we get 1 - 1/2 + 1/3 - ... = log(2).

Apart from integrating term by term, there's one more problem with the above reasoning, and there are bonus points for people who notice it (hint: uggc://ra.jvxvcrqvn.bet/jvxv/Nory'f_gurberz )


The natural logarithm of 2 is only described so succinctly because that number (and related ones) were so common and useful that we introduced a shorthand to denote the limiting process which tends to it. I think you'd find, were the definition expanded completely, that its description is not so succinct after all. :)


Now that you mention it, I seem to remember us working through examples of this in our elementary calculus class. But I had also forgotten that they needed to be absolutely converging.


Actually, I didn't know that, and it surprises me. Intuitively I would have guessed that converging infinite sums would be well-behaved under normal algebraic manipulations. But you learn something new every day :-)

Can you point me to a reference?


The basic idea is pretty simple. If Σa_i is conditionally convergent — that is, Σa_i converges but Σ|a_i| does not — then there are an infinite number of both positive and negative a_i. See [1] below for a hint on how to prove this; it's straightforward.

Given that, let's say we want to rearrange the a_i in Σa_i to converge to some arbitrary real number M. Take all the positive a_i in order and call that set A_+ and take all the negative a_i in order and call that set A_-. From [1] we know that both A_+ and A_- have an infinite number of elements.

You then construct a new series like so. Take the fewest elements from A_+ (in the order they appear in the original series) such that their sum is greater than M. Next, add to that sum the fewest number of elements from A_- (also in order) such that this new sum is smaller than M. Go back to taking elements from A_+ until you're just larger than M and then back to taking elements from A_- until you're just under M. Repeat this ping-pong maneuver ad infinitum to construct a new series.

You can prove that this new series converges (conditionally) to M.
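The ping-pong construction is easy to simulate for the alternating harmonic series. A Python sketch (the target M and the number of terms are arbitrary choices):

```python
def rearrange_to(M, n_terms):
    """Greedily rearrange 1 - 1/2 + 1/3 - ... to approach M:
    take the next unused positive term while the sum is below M,
    otherwise the next unused negative term."""
    s = 0.0
    p, q = 0, 0   # count of positive / negative terms used so far
    for _ in range(n_terms):
        if s <= M:
            p += 1
            s += 1.0 / (2 * p - 1)   # positives: 1, 1/3, 1/5, ...
        else:
            q += 1
            s -= 1.0 / (2 * q)       # negatives: -1/2, -1/4, ...
    return s

print(rearrange_to(1.5, 200_000))    # close to 1.5
print(rearrange_to(-1.0, 200_000))   # close to -1.0
```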

[1]: Here's a hint. What can we say about Σ|a_i| if we know that Σa_i converges but has only a finite number of negative terms?


> If Σa_i is conditionally convergent then there are an infinite number of both positive and negative a_i.

That's true, but that's not enough for your proof -- e.g. \sum (-1)^n/n^2 also has infinitely many positive and negative terms, but converges absolutely.

What you need is that the sum of all elements in at least one of A_+, A_- (and thus also in the other) diverges (the order you take the sum in doesn't matter here, as you surely know).


I left out lots of other details, too, because it was meant to be an outline for the original commenter, not a proof. In any case,

  \sum_{i=1}^{\infty} \frac{(-1)^i}{i^2}
is not conditionally convergent, so it is not a counterexample to my original statement. Here I take conditionally convergent to mean "the series converges, but it does not converge absolutely." :)

What's more, if Σa_i is conditionally convergent then the sums of both A_+ and A_- diverge. You're right that one has to use this fact in the full proof.


http://mathworld.wolfram.com/ConditionalConvergence.html gives a nice intuitive explanation:

"A series is said to be conditionally convergent iff it is convergent, the series of its positive terms diverges to positive infinity, and the series of its negative terms diverges to negative infinity. […] The Riemann series theorem states that, by a suitable rearrangement of terms, a conditionally convergent series may be made to converge to any desired value, or to diverge. The Riemann series theorem can be proved by first taking just enough positive terms to exceed the desired limit, then taking just enough negative terms to go below the desired limit, and iterating this procedure. Since the terms of the original series tend to zero, the rearranged series converges to the desired limit. A slight variation works to make the new series diverge to positive infinity or to negative infinity."


The problem is that 1 - 1 + 1 - 1 ... is not convergent. The sum is not infinite, but it is not a finite number either; it simply does not exist.

The correct method is to calculate the partial sum of the first N terms: S_N = sum_{n=1}^{N} (-1)^{n+1}. When N is even the numbers of 1s and -1s are equal and the partial sum is 0. When N is odd there is an additional 1 and the partial sum is 1. So the partial sums have no limit, and therefore 1 - 1 + 1 - 1 ... is not convergent.
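This is easy to see numerically; a Python sketch of the oscillating partial sums:

```python
def grandi_partial(n):
    """Partial sum of 1 - 1 + 1 - 1 + ... with n terms."""
    return sum((-1) ** (k + 1) for k in range(1, n + 1))

print([grandi_partial(n) for n in range(1, 9)])  # [1, 0, 1, 0, 1, 0, 1, 0]
```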

Conditionally convergent series like 1-1/2+1/3-1/4+1/5-1/6+ ... =ln(2) are much better. (The problem is that 1+1/2+1/3+1/4+1/5+1/6+ ... = infinity.) You can operate with them almost freely, but you can't rearrange the order of an infinite number of terms.

The unconditionally convergent series like 1+1/4+1/9+1/16+...=pi^2/6 are almost like finite summations. You can do whatever sensible operation you want and the result will be correct.
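For instance, applying the same two-positives-per-negative reshuffling that changes the value of the alternating harmonic series to the absolutely convergent 1 - 1/4 + 1/9 - 1/16 + ... = pi^2/12 leaves its sum unchanged. A Python sketch (the block count is an arbitrary truncation):

```python
import math

def rearranged_abs(blocks):
    """Rearrange the absolutely convergent series 1 - 1/4 + 1/9 - ...
    as two positive terms per negative term; the sum is unaffected."""
    s = 0.0
    for k in range(1, blocks + 1):
        s += (1.0 / (4 * k - 3) ** 2
              + 1.0 / (4 * k - 1) ** 2
              - 1.0 / (2 * k) ** 2)
    return s

print(rearranged_abs(100_000))   # close to 0.8224...
print(math.pi ** 2 / 12)         # the original sum, unchanged
```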


OK, but what you said was:

"they [normal rules of algebra] don't even apply to all converging infinite sums"

I know that 1 - 1 + 1... is not convergent, but that's a non-sequitur relative to the claim that you made.

> The unconditionally convergent series like 1+1/4+1/9+1/16+...=pi^2/6 are almost like finite summations. You can do whatever sensible operation you want and the result will be correct.

Yes, that's what I thought.


I guess any introduction to real analysis should cover that.

http://www.amazon.com/Calculus-Vol-One-Variable-Introduction...

Unfortunately, I don't know any online references off the top of my head.




