I like the ML languages, and like many others I have spent a lot of time with F#.
I would love to spend more time with it, but even though Microsoft gives it a fair amount of support (nowhere near as much as C#), the community is just too small (and seems to have gotten smaller).
Anecdotally we are using F# more than ever before, and it works for us in a large organisation. The code is fast, F# keeps up with the .NET features that matter for performance (e.g. spans), and honestly it has kept getting better over the years. In fact I find some of the newer features like Span and SIMD/intrinsics synergise with existing F# features (e.g. inline). C# IMO still hasn't quite caught up, but it is getting there. Compared to Go/Java/etc I can usually get faster code out of the .NET runtime as well, especially in our domain, which requires large-scale computation. If I were to move to something else for our domain it would be C++ or Rust. By piggybacking off the .NET platform, F# gets a decent ecosystem, easy cross-platform onboarding, and a lot "for free" (e.g. CLI and GC improvements).
Community is an interesting thing, and for some people I guess it is important. For me, having coded for quite some time and seen communities come and go, a language is just a tool; I don't care about being known or setting an example per se. If the tool on balance lets me write faster code with fewer errors, more quickly, and it can be handed to generic teams (e.g. ex-Python or JS devs) with some in-house training, it's a win. Personally I just keep building large, interesting systems with F#; it's a tool, and once you get the hang of its quirks (it does have some small ones) it's quite a good one that hits that sweet spot IMO.
My feeling, however, is that with AI/LLMs, communities and syntax in general are in decline and matter less, especially for niche languages. The language matters less than the platform, ecosystem, etc. It's easier than ever to learn a language, for example, and to get help with it. Any zero-cost abstraction can also be emulated with more code generation, as much as I would hate reviewing it. What matters more is whether you can read and review the code easily, whether the platform offers what you need to deliver software to your requirements, and whether people can pick it up.
Interesting take. I agree with you mostly, but regarding "community" I am thinking more of the side effects, in terms of _other_ people developing interesting libraries etc.
I don't know if AI can change that, but when using Python there is a feeling that there is an awesome, high-quality library for just about anything.
The retail price of eggs was a significant talking point in the campaign for the last US presidential election (which I'd forgotten until I saw this post).
That's because a huge number of chickens were culled due to bird flu. As you can see from the chart, egg prices have plummeted as chicken populations have recovered.
I have some experience. Variants of regularization are a must. There are just too few samples and too much noise per sample.
In a related problem, covariance matrix estimation, variants of shrinkage are popular. The most straightforward one is linear shrinkage (Ledoit–Wolf).
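To make that concrete, here is a minimal sketch of linear shrinkage toward a scaled identity target, alongside scikit-learn's built-in Ledoit–Wolf estimator. The data is synthetic and the fixed intensity `delta` is an illustrative choice, not part of the original comment:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Synthetic "returns": 250 observations of 50 assets, i.e. few samples
# relative to the number of parameters in the covariance matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((250, 50))

# Manual linear shrinkage: blend the sample covariance S with a scaled
# identity target:  Sigma = (1 - delta) * S + delta * mu * I,
# where mu = trace(S) / p and delta in [0, 1] is the shrinkage intensity.
S = np.cov(X, rowvar=False)
p = S.shape[0]
mu = np.trace(S) / p
delta = 0.2  # fixed, purely illustrative; Ledoit-Wolf estimates this
sigma_shrunk = (1 - delta) * S + delta * mu * np.eye(p)

# scikit-learn's estimator picks the asymptotically optimal intensity.
lw = LedoitWolf().fit(X)
print("estimated shrinkage intensity:", lw.shrinkage_)
```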
Excepting neural nets, I think most people doing regression simply use linear regression with the above kinds of touches, adapted to the domain.
Particularly in finance, you fool yourself too much with more complex models.
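As a concrete example of that kind of lightly regularised linear model, here is a short ridge regression sketch; the data is synthetic, and the penalty strength `alpha` is a placeholder you would normally pick by cross-validation:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic noisy data: 40 features, of which only 5 actually matter.
rng = np.random.default_rng(1)
n, p = 100, 40
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0
y = X @ beta + 2.0 * rng.standard_normal(n)  # heavy noise

# Ridge adds an L2 penalty that shrinks coefficients toward zero;
# alpha sets the penalty strength (cross-validate it in practice).
model = Ridge(alpha=10.0).fit(X, y)
print(model.coef_[:8].round(2))
```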
Yes, these are good points, and probably the most important ones as far as the maths is concerned, though I would say regularisation methods are really standard things one learns in any ML/stats course.
Ledoit–Wolf shrinkage is indeed more exotic and very useful.
> There are just too few samples and too much noise per sample.
Call it 2000 liquid products on the US exchanges, with many years of data. Even if you downsample from per-tick to 1-minute bars, it doesn't feel like you'd be struggling for a large in-sample period?
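As a rough back-of-envelope (the session length and year count here are assumptions, just to put a scale on "many years of data"):

```python
# Approximate sample count for 2000 products at 1-minute bars.
products = 2000
days_per_year = 252  # typical US trading days per year
bars_per_day = 390   # 6.5-hour US equity session * 60 minutes
years = 10           # "many years", assumed

total = products * days_per_year * bars_per_day * years
print(f"{total:,} one-minute bars")  # 1,965,600,000
```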
It sounds like you are assuming the joint distribution of returns in the future is equal to that of the past, and assuming away potential time dependence.
These may be valid assumptions, but even if they are, "sample size" is only meaningful relative to the per-sample variance, and that variance can be quite large for financial data. In some cases it is even infinite!
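To illustrate the infinite-variance case (my example, not the commenter's): with Cauchy-distributed samples the running mean never settles down no matter how much data you add, whereas for normal samples it converges quickly:

```python
import numpy as np

rng = np.random.default_rng(2)
cauchy = rng.standard_cauchy(1_000_000)  # undefined mean, infinite variance
normal = rng.standard_normal(1_000_000)  # finite variance

# The normal running mean settles near 0; the Cauchy one keeps jumping.
for n in (1_000, 100_000, 1_000_000):
    print(f"n={n:>9,}  cauchy mean={cauchy[:n].mean():+8.3f}  "
          f"normal mean={normal[:n].mean():+8.3f}")
```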
They may have been referring to (for example) reported financial results or news events, which are more infrequent but can have an outsized impact on market prices.
It's to the point that it's embarrassing to use in a professional setting.
You are sharing your screen and suddenly you open the stupid "news" menu, which pops up with tabloid headlines and, at best, the weather.
I try my best to use this software and disable what I can, but after an update to the OS or Edge you have to reconfigure stuff, and suddenly it has added back some horrible defaults.
> I would love to spend more time with it, but even though Microsoft gives it a fair amount of support (nowhere near as much as C#), the community is just too small (and seems to have gotten smaller).
Looking at https://www.tiobe.com/tiobe-index/, the numbers fall off pretty quickly after the top 5-7.
I'm guessing it is the same for OCaml, even if the language itself is nice.