let total = (1..).map(|x| x*x).take(10).fold(0, |acc, x| acc+x);
You could replace the fold with ".sum()" if you use the unstable std::iter::AdditiveIterator. Or you could incorporate the squaring map into the fold, but then we're no longer comparing the same thing.
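For what it's worth, here's a sketch of that second variant, with the squaring folded straight into the accumulator (so, as noted, no longer an apples-to-apples comparison with the pipeline versions):

```rust
fn main() {
    // Squaring done inside the fold itself, rather than in a separate map step.
    let total: u32 = (1..).take(10).fold(0, |acc, x| acc + x * x);
    println!("{}", total); // 385
}
```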
int total = recurrence!"a[n-1] + 1"(1)
    .map!(x => x*x)
    .take(10)
    .sum();
Replace that last sum() with reduce() from std.parallelism and you can sum them using multiple threads (obviously not useful with a set this small, but the documentation for reduce() coincidentally includes a benchmark of the sum of squares running about 2.5x faster when done in parallel).
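Rust's standard library has no parallel reduce, but the same split-and-combine idea can be sketched with plain threads (the two-way split here is purely illustrative, not a benchmark):

```rust
use std::thread;

fn main() {
    // Split the range in half, square-and-sum each half on its own thread,
    // then combine the partial sums -- the shape of a parallel reduce.
    let handle = thread::spawn(|| (1u64..=5).map(|x| x * x).sum::<u64>());
    let upper: u64 = (6u64..=10).map(|x| x * x).sum();
    let lower = handle.join().unwrap();
    println!("{}", lower + upper); // 385
}
```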
You beat me to the J solution, but here it is in APL:
+/2*⍨⍳10
No need for an increment, since the index generator starts at 1 by default. You can set it to start at 0 too, if you work that way.
+ addition
/ reduce
2 (to be fed to the power function * for squaring)
* power (exponentiation)
⍨ commute
⍳ iota - index generator
I would consider C++ with this library over Haskell just because you get the world of C++, but with a bit more of the expressiveness I like in the functional languages.
The syntax is like math notation: it takes getting used to, but once you do, you can really do a lot with a little - very expressive. I find myself in the J interpreter or APL workspace hacking away at solutions much more than in any other language. I am not a web developer.
don't confuse standard libraries, libraries and languages :)
Prelude is functional, C++ STL and the C library are imperative.
There is a valid argument that a 'good ol' fashioned' bit of procedural code (a for loop) makes this same algorithm easier to read than either of these (it certainly makes it shorter), since it says explicitly what it is doing rather than assuming (usually a safe assumption) that programmers understand and trust things like sum, map and take/limit.
i'm thinking of something like:
int iTotal = 0;
for( int i = 0; i < 10; ++i )
{
    iTotal += i * i;
}
That argument is not invalid, but the counter-argument is that the functional version does a much better job of capturing the intention of the code, rather than the procedure of it. The intention is to get the sum of the squares of the first 10 numbers counting from 1. My gut tells me that I have fewer bugs when I'm capturing intention rather than process, but I won't claim that it's definitely true.
(Not to say this is an example of bug-due-to-process-over-intention, but you do have a bug – you're summing from 0 to 9 instead of 1 to 10.)
why is it not valid? these things are part of the libraries involved. as languages both Haskell and C++ are extremely free form to the point where you can abuse them into behaving like domain specific languages.
or do you mean the bit about it being less code and easier to read?
well done on catching the bug. :)
this is my own fault for being sloppy and not paying attention.
You (understandably, due to the double-negative) misread! I said it is not invalid. Which was really a weaselly way of saying that it's valid but I don't really agree with it – sorry about that!
It would be nice to see a performance comparison here. I wouldn't be surprised if Haskell would win, given the fact that lazy evaluation is its bread and butter.
I'm sure that Haskell is fabulously performant here, but I'm not sure that it would be either significantly faster or slower than a C++ library that properly implements laziness. I can't speak for Streams, but Rust makes use of lazy iterators pervasively (all implemented entirely in the stdlib), and I'd expect the equivalent Rust code:
let total = (1..).map(|x| x*x).take(10).sum();
to have no more overhead than the ideal Haskell code (though I could be oblivious to some very important GHC optimization... when posed this question my Haskell-using friends simply respond with "it's complicated").
It's certainly complicated, and my answer probably does not apply to such trivial example code that uses mostly stdlib functions. But the potential gain for Haskell is loop fusion. The user could implement all of those functions themselves (sum, take, etc.) and loop fusion would still bind them into a single performant loop that the lower-level optimizers turn into straight-line code, whereas in procedural languages only stdlib functions and certain patterns of general functions get the fusion treatment. Worse, throw an unfortunate alias into the procedure and the whole thing collapses back into sequential loops.
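As a rough illustration of what fusion buys you, the fused pipeline is conceptually equivalent to a single hand-written loop like this (a sketch of the intended end result, not actual compiler output):

```rust
fn main() {
    // One loop, no intermediate sequences materialized between
    // the "map", "take", and "sum" stages.
    let mut total: u64 = 0;
    for x in 1u64..=10 {
        total += x * x;
    }
    println!("{}", total); // 385
}
```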
Hell, if anything, on a short sequence the naive (one-loop-at-a-time) approach might be faster if it keeps the instruction and data caches more coherent.
Laziness slows everything down by a constant factor. In this example, Haskell would perform about the same as C++, because of optimizations that turn operations on linked list cells into loops (arriving at a result much like the external iterators of Rust and the expression template based iterators of C++).