> I have recently read an article about calculating the Nth Fibonacci number in O(log N) arithmetic operations.
Either you assume fixed-precision integers, in which case you can just use a lookup table and call it O(1), or you assume arbitrary-precision integers, in which case you cannot do better than an O(n) algorithm: the Nth Fibonacci number has Θ(n) digits (the sequence grows exponentially), so merely writing the output takes linear time.
I see this all over the place, and it's frustrating every time.
I believe OP is aware of this, as he specifically uses the phrase "arithmetic operations" instead of "time", so adding two very large numbers still counts as a single operation. It's true that you can't have an O(log N) time algorithm, as arithmetic operations on arbitrarily large numbers cannot be performed in constant time.
A similar situation arises in, say, mergesort, when we say that the algorithm takes O(n log n) comparisons. That's not the same as an O(n log n) time algorithm: if you sort n distinct integers, each integer needs at least log2(n) bits to represent, and comparing two integers may require reading all of those bits.
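To make the distinction concrete, here's a toy sketch (hypothetical `counter` argument, not from the article) that counts comparisons in mergesort; the counter goes up by one per comparison no matter how wide the keys are, which is exactly why "O(n log n) comparisons" and "O(n log n) time" are different claims:

```python
# Mergesort that counts comparisons. counter is a one-element list
# used as a mutable tally across recursive calls.
def merge_sort(xs, counter):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], counter)
    right = merge_sort(xs[mid:], counter)
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one comparison, regardless of how big the keys are
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

For n = 64 the counter never exceeds n·log2(n) = 384, but each of those comparisons on b-bit keys can itself cost Θ(b) bit operations in the worst case.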
In which case, as I said, just use a lookup table, which is constant-time under those same constraints, and has the bonus of automatically caching frequently used values for you.
And as the Fibonacci sequence is exponential, any lookup table will be relatively small. (The maximum number of elements in your lookup table is linear w.r.t. the number of digits of your (fixed-precision) integers. I mean, even a 128-bit return value only requires 185 elements in the table, for a total of 2960 bytes.)
> A similar situation arises in, say, mergesort
This is why you have to specify exactly what you mean by the big-O notation you use. Do you mean comparisons? Time? Time assuming everything but the stated variables is constant? Etc.
>It's true that you can't have an O(log N) time algorithm, as arithmetic operations cannot be performed in constant time.
Yep. You can perform some operations in sublinear time w.r.t. the number of bits, however, by exploiting hardware-level parallelization. It would be interesting to see what the best (in terms of big-O) possible implementation of fib(n) would be, given constant-time operations for a fixed-length machine word.
> This is why you have to specify exactly what you mean by the big-O notation you use.
He did specify it, it's O(log N) arithmetic operations.
Note: I don't mean "assuming arithmetic operations take constant time, we can compute F(n) in O(log n) time". In that reading you can correctly object that, since arithmetic operations don't take constant time in arbitrary-precision arithmetic, we must be using fixed precision (or something essentially equivalent), in which case a lookup table takes O(1) time and O(precision) space. I mean "F(n) can be computed with O(log n) arithmetic operations". If I perform those operations over arbitrary-precision integers with simple (schoolbook) arithmetic, the operands grow to O(n) bits, so each multiplication costs O(n^2) bit operations and computing F(n) takes O(n^2 log n) time overall.
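For the record, one standard way to get O(log n) arithmetic operations is the fast-doubling method (a sketch, not necessarily what the article used), which uses the identities F(2m) = F(m)·(2·F(m+1) − F(m)) and F(2m+1) = F(m)² + F(m+1)²:

```python
# Fast-doubling Fibonacci: O(log n) additions/multiplications,
# on Python's arbitrary-precision ints.
def fib(n: int) -> int:
    def doubling(k: int):
        # Returns the pair (F(k), F(k+1)).
        if k == 0:
            return (0, 1)
        a, b = doubling(k // 2)        # a = F(m), b = F(m+1), m = k // 2
        c = a * (2 * b - a)            # F(2m)
        d = a * a + b * b              # F(2m+1)
        if k % 2 == 0:
            return (c, d)
        return (d, c + d)
    return doubling(n)[0]
```

Each level of the recursion does a constant number of arithmetic operations, and there are O(log n) levels, but the operands near the top have O(n) bits, which is where the extra time cost comes from.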