That is probably right for most jobs in the industry, but not always. You can use these algorithms in side projects (I did this in Go, for instance, because it didn't have sets implemented and I wanted to understand the language better).
Or maybe you lust for a career in the gaming industry, which, yes, has engines ready to use, but you might have to do optimisations of your own. Or work with SoCs and embedded systems. Maybe you would like to work on Linux core tools one day.
I think there is a broader world than that of the common jobs in Silicon Valley.
I always joke that the best thing I learned in university is the ability to learn. Most of the actual things that were taught there were pretty standard stuff (compilers, algorithms, a bit of functional programming, monads, formal methods, linear algebra and other math, etc.).
Almost none of that is stuff I use on a daily basis, and I've frankly forgotten most of it. And I work with successful programmers who skipped college entirely and never learned any of that. I don't find myself building compilers that often, and while I enjoy the functional programming renaissance in languages over the last ten or so years, I had to relearn a lot of that stuff, as I hadn't touched any of it in 20 years. A lot of the expert systems and Bayesian belief network stuff from the nineties got obsoleted by machine learning later.
While I've forgotten most of that stuff, I remember enough to get back into it when I need to. Which has happened a couple of times. Mostly, I just hit Wikipedia, read up on a bunch of things and then figure out what tools and libraries are appropriate.
And there are a bunch of things that I was taught that didn't click until I learned them properly by doing them in practice. Like dealing with concurrency. Which, as it turns out, is less about formal methods (and temporal logic, which was a pet topic of my teacher) and more about engineering practical solutions to very real problems, which my teacher had never experienced because he was an academic. I knew all the lingo but hadn't really experienced any of the pain. Nothing like debugging a misbehaving system where you can see all this play out in real life.
I never cease to be amazed by the amount of stuff I have to learn on new projects and have dealt with some amazingly niche stuff over the years. That's stuff from computer science, medical stuff, material science, legal stuff, and more. If you do software for companies working in some niche field, you end up absorbing a lot of knowledge about what they do, how they do it, and why.
Traditionally, the master's degree is where you start specializing and becoming an industry expert.
The undergrad degree is intended to teach you how to learn and then how to learn within your specific specialization without becoming an expert in the specialization.
> And I work with successful programmers that skipped college entirely and never learned any of that.
I also noticed this and questioned what more I have to offer. But once in a while there is a complex problem, and then you can see that some developers run into their limits, while others (mostly those with university degrees) can go further.
> a broader world than that of the common jobs in the silicon valley.
I'd hope it was the opposite: common jobs outside the valley ought to be all the plumbing of comesouttas into goesintos, and the valley ought to be concentrating on the fewer problems where deeper skills yield much higher values?
(I left the valley in the 1980s, so this may be just rose-coloured nostalgia)
Go still doesn't have a Set type. The usual thing to do is to use a map from your type to either an empty struct or maybe a bool, if I got that right (I've only been using Go for a couple of months so far).
Did you implement your own Set type and primitive functions?
This is funny to me, because Go is often touted as a language that has "everything you need" in the stdlib. Of course, I'm coming from the nodejs world, where before you can start working you first need to evaluate and choose: typescript or not typescript, and if yes what build tool chain, which test runner, which assertion lib, which mocking lib, etc etc. It wasn't really that long ago that you had to choose which async lib (now you just need to choose which async color).
Thankfully a lot of that stuff is in nodejs now, and there's a commitment to maintain parity with Web APIs which also helps, but there's still a lot of half-done work and path-dependencies on outside libs.
> This is funny to me, because Go is often touted as a language that has "everything you need" in the stdlib.
My usual example of something missing from the standard library is
    func max(a, b int) int {
        if a > b {
            return a
        } else {
            return b
        }
    }
math.Max only operates on two float64s (due to the absence of function overloads; many standard library functions like this predate generics), so if you want to avoid casting (and/or potential loss of precision if you're working with int64), you have to reinvent the wheel.
Go 1.21 is the version where you can really see them pushing towards (re)implementing things that were previously missing due to the lack of generics.
Like the new slices package, brand new in 1.21, meant to provide common operations on slices.
I wouldn't be surprised to see more and more new additions leveraging generics in next releases.
Good to know! I admittedly haven't written much Go in the past year, and while min / max were some of the most egregious examples, I'm sure there are plenty more that still exist.
Most languages implement sets using maps. The only major difference you can optimise for is that in typical map usage, looked-up keys are likely to exist, whereas sets are mostly used for membership checks where presence is unknown. Barring that, you can just as well use a map with a dummy value.
Can you give a reference for that? It sounds very unlikely to me, although I've only read/gone through two relevant standard library implementations: F# and OCaml.
Neither of them use dummy values as you suggest. They have similar implementations (both are AVL trees) but sets really do contain only one value while maps contain two. I would be surprised if many other languages did as you suggest.
That is true of the internal implementation details, but HashSet provides a different API; it is not just a type alias. So you don't have to worry about the semantics of what the values mean (like the bools in Go), and it has no size overhead.
From the comments in CPython's Objects/setobject.c:

> Unlike the dictionary implementation, the lookkey function can return NULL if the rich comparison returns an error.

> Use cases for sets differ considerably from dictionaries where looked-up keys are more likely to be present. In contrast, sets are primarily about membership testing where the presence of an element is not known in advance. Accordingly, the set implementation needs to optimize for both the found and not-found case.