> I think what's far more common is prototyping something in Python or R, and then realizing that 99% of its CPU time is already spent in wrapped C code. And then you call it a day and move on to other things.
Someone has to write those libraries.
> This is why Julia hasn't taken off nearly as quickly as I initially expected
Not sure what you were expecting, but consider:
- the most conservative userbase estimate I would believe is 20k users (based on mailing list subscriptions, website stats, and download numbers).
- sustained package ecosystem growth: http://pkg.julialang.org/pulse.html
Yes, but the discussion is about the "extremely common" case of prototyping, which definitely should not require library building, and usually should not require veering much from established libraries.
> Not sure what you were expecting
The data community can coalesce around a tool extremely quickly, in a matter of a year or two. Spark is about the same age as Julia and already has a thriving ecosystem around it. In 2004 R was a fairly esoteric analysis tool, but by 2008-2010 it was the de facto data science language. Python made similar advances in the data science community in just a few years.
Julia? I don't know a single person who uses it day-to-day, but I know a lot of people who tried very hard to adopt it (myself included). The critical mass simply isn't there: people aren't building packages because the users don't exist, and the users don't exist because the packages don't exist. Your chart shows linear growth in package count, which implies a constant rate of new packages, i.e. a flat level of development activity. That's not a growing language. This is what a growing language looks like: http://blog.revolutionanalytics.com/2010/01/r-package-growth...
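To spell out the point about the chart: if the cumulative package count grows linearly, the number of new packages per year is roughly constant, whereas a healthy, growing ecosystem shows the yearly additions themselves increasing. A toy sketch (the numbers are made up, Python just for convenience):

```python
# Toy numbers, purely illustrative -- not real package counts for any language.
linear = [50 * t for t in range(1, 6)]                     # cumulative count grows linearly
exponential = [round(50 * 1.6 ** t) for t in range(1, 6)]  # cumulative count compounds yearly

def yearly_additions(counts):
    """New packages added each year, given cumulative yearly totals."""
    return [later - earlier for earlier, later in zip(counts, counts[1:])]

print(yearly_additions(linear))       # [50, 50, 50, 50]   -> flat development activity
print(yearly_additions(exponential))  # [48, 77, 123, 196] -> accelerating activity
```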