jlouis's comments | Hacker News

Talented people eventually get paid what they're worth. It doesn't matter where they live. If you have a brain, you rank up. Quickly.

The Just World Fallacy? In my Hacker News?

It's more likely than you think.


I don't think it's a spectrum.

Languages have features/constructs. It's better to look at what those are. And far more importantly: how they interact.

Take something like subtyping for instance. What makes this hard to implement is that it interacts with everything else in your language: polymorphism, GADTs, ...

Or take something like garbage collection. Its presence or absence has a large say in everything done in the language. Rust stands out as not being GC'ed, while Go, OCaml and Haskell all are. That by itself creates some interesting behavior. With a GC, if we hand something to a function and get something back, we don't care whether the thing we handed over got changed or not. But in Rust, we do. We can avoid allocations and keep references if the callee didn't change the thing after all. This permeates the whole language.
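A minimal Rust sketch of that point (the function names are made up for illustration): the signature tells you whether the callee can change what you handed it, so the caller can keep its reference and skip a defensive copy.

    // Borrows the data read-only: no copy, and the compiler guarantees
    // `checksum` cannot change it, so the caller keeps using its reference.
    fn checksum(data: &[u8]) -> u32 {
        data.iter().map(|&b| b as u32).sum()
    }

    // Takes `&mut`: the signature itself admits the data may be changed.
    fn normalize(data: &mut Vec<u8>) {
        data.sort_unstable();
    }

    fn main() {
        let mut payload = vec![3, 1, 2];
        let before = checksum(&payload); // hand it over, keep the reference
        normalize(&mut payload);         // mutation is visible in the type
        let after = checksum(&payload);
        println!("{before} {after}");
    }

In a GC'ed language none of this shows up in the types; you just pass the value and hope the callee behaved.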


It's not isolation that hampers throughput. That's a red herring. In fact, isolation increases throughput, because it reduces synchronization. A group of isolated tasks is embarrassingly parallel by definition.

The throughput loss stems from a design that requires excessive communication. But such a design will always be slow, no matter your execution model. Modern CPUs simply don't cope well when cores need to shuttle data between them. Neither does a GPU.
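A small sketch of what "embarrassingly parallel by definition" looks like, in Rust (the workload is just a placeholder): each task owns its part of the problem, so the only synchronization is the final join.

    use std::thread;

    fn main() {
        let inputs: Vec<u64> = (0..8).collect();

        // No shared mutable state between tasks: nothing to lock, nothing to
        // coordinate, so the tasks scale with the number of cores.
        let results: Vec<u64> = thread::scope(|s| {
            let handles: Vec<_> = inputs
                .iter()
                .map(|&n| s.spawn(move || (0..1_000_000u64).fold(n, u64::wrapping_add)))
                .collect();
            handles.into_iter().map(|h| h.join().unwrap()).collect()
        });

        println!("{results:?}");
    }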


But the isolation is what necessitates (or at least encourages) a design that requires more communication, isn't it?

Not a priori.

The grand design of BEAM is that you copy data rather than pass it by reference. A copy operation severs a data dependency by design: once the copy is handed somewhere, that part can operate in isolation. And modern computers are far better at copying data around than people think. The exception is big-blocks-of-data(tm), but binaries are read-only in BEAM and thus not copied.
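That's BEAM semantics; as a rough analogue in Rust terms (explicit where BEAM is implicit, and the names are made up): copying the small data before handing it over severs the dependency, while a big read-only blob is shared behind a reference count, much like BEAM's reference-counted binaries.

    use std::sync::{mpsc, Arc};
    use std::thread;

    fn main() {
        let small = vec![1u32, 2, 3];
        // Large read-only blob: shared by reference count rather than copied.
        let big: Arc<[u8]> = Arc::from(vec![0u8; 1 << 20]);

        let (tx, rx) = mpsc::channel();

        // Hand over an explicit copy of the small data; the receiver owns an
        // independent value, and the sender keeps using its own without any
        // further coordination.
        tx.send((small.clone(), Arc::clone(&big))).unwrap();

        let worker = thread::spawn(move || {
            let (mut mine, blob) = rx.recv().unwrap();
            mine.push(4); // mutating the copy affects nobody else
            mine.len() + blob.len()
        });

        println!("sender still has {:?}", small);
        println!("worker result: {}", worker.join().unwrap());
    }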

Sure, if you set up a problem which requires a ton of communication, then this model suffers. But so does your GPU if you do the same thing.

As Joe Armstrong said: our webserver is a thousand small webservers, each serving one request.

Virtually none of them have to communicate with each other.
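Outside BEAM the same shape is a handler per connection that shares nothing with its siblings; a bare-bones Rust sketch (OS threads standing in for Erlang processes, address and response hard-coded for illustration):

    use std::io::{Read, Write};
    use std::net::TcpListener;
    use std::thread;

    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:8080")?;

        // One tiny "webserver" per connection. Each handler owns its socket
        // and talks to nobody else, so there is nothing to synchronize on.
        for stream in listener.incoming() {
            let mut stream = stream?;
            thread::spawn(move || {
                let mut buf = [0u8; 1024];
                let _ = stream.read(&mut buf);
                let _ = stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok");
            });
        }
        Ok(())
    }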


You can be functional "in spirit" rather than purely functional. OCaml and Standard ML fall into this category. OCaml has loops, for instance. You just might not see many loops in code written by OCaml developers, because there's frankly no need for them in a lot of places. You often want to lift the abstraction level of iteration to an arbitrary data structure, so that you get freedom of implementation. See Applicative and Monad.
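Not OCaml, but the same move expressed in Rust terms (a hypothetical helper): the iteration lives behind an abstraction, so the caller never writes the loop and the producer keeps its freedom of implementation.

    // The caller programs against `IntoIterator` rather than a concrete
    // collection, so a Vec, a range, or a lazy generator all work unchanged.
    fn mean<I>(xs: I) -> Option<f64>
    where
        I: IntoIterator<Item = f64>,
    {
        let (sum, n) = xs.into_iter().fold((0.0, 0u64), |(s, n), x| (s + x, n + 1));
        if n == 0 { None } else { Some(sum / n as f64) }
    }

    fn main() {
        println!("{:?}", mean(vec![1.0, 2.0, 3.0]));
        println!("{:?}", mean((1..=100).map(f64::from)));
    }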

Rocq is an excellent example of something OCaml was designed for. FFTW3 is another great example. Unison too.

Generally, you want stuff where you have to build a fairly large core from scratch. Most programs out there don't really fit that too well nowadays. We tend to glue things together more than write from nothing.


Given the track record of digitalization in Denmark, you can rest assured this will be implemented in the worst possible way.

This is Denmark. The country that reads the EU legislation requesting the construction of a CA in order to avoid centralizing the system, then legally bends the EU's rules and decides it's far better to create a centralized solution. That is, the intent is a public-key cryptosystem with three bodies, the state being the CA. But no, the state should hold both the CA and the keys in escrow. Oh, and then it decides that the secret should be a PIN, such that law enforcement can break it in 10 milliseconds.

I think internet verification is at least 10 years too late. Better late than never. I just lament the fact we are going to get a bad solution to the problem.


It's my favorite line of the whole thing. There's so much which can be derived from that single statement.

In addition to the other comments: the internal in-memory representation of the data can be Float32, but on disk it is encoded through some form of entropy encoding. Typically, some of the earlier steps are preparation for the entropy encoder: you make the data more amenable to entropy encoding through a rearrangement that's either fully reversible (lossless) or near-reversible (lossy).
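A toy example of such a fully reversible rearrangement (delta encoding; real codecs use far more elaborate transforms and work on the Float32 bit patterns): it doesn't shrink anything by itself, it just turns slowly varying values into many small numbers that the downstream entropy coder can pack tightly.

    // Delta encoding: a lossless rearrangement that prepares data for an
    // entropy coder by replacing each value with its difference from the
    // previous one.
    fn delta_encode(xs: &[i32]) -> Vec<i32> {
        let mut prev = 0;
        xs.iter()
            .map(|&x| {
                let d = x - prev;
                prev = x;
                d
            })
            .collect()
    }

    // The exact inverse, so no information is lost.
    fn delta_decode(ds: &[i32]) -> Vec<i32> {
        let mut acc = 0;
        ds.iter()
            .map(|&d| {
                acc += d;
                acc
            })
            .collect()
    }

    fn main() {
        let samples = vec![1000, 1002, 1001, 1005, 1004];
        let deltas = delta_encode(&samples);
        assert_eq!(delta_decode(&deltas), samples);
        println!("{deltas:?}"); // [1000, 2, -1, 4, -1]
    }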


Facebook is one major bait and switch strategy.

In the first step, you get everyone to invest in your platform. You provide some valuable services to people, and they sign an implicit contract as a result.

In the second step, you reap what you sow. You switch the platform entirely and change its core nature and functionality. It's hard to stop using Facebook when everyone else is using Facebook, and this fact means you can do things which would normally have people leave your platform in droves.

This ruling limits the extent to which you can run such a bait-and-switch campaign. It's somewhat remarkable, because it extends basic consumer rights to cover tech companies, even when there's neither a direct product nor a subscription in place. Personally, I think it's long overdue.


How has Facebook changed its core nature?


I'd say the major switch happened at the point where you lost control over your feed. It's not populated because you opted in to updates from a specific person or organization; it's populated by an algorithm. Furthermore, at no point in time were any of these new features opt-in. Instead, they were enabled without your consent. Facebook has a long history of enabling features for people that are not in their interest in the slightest.

I should also say that it's more general than Meta. Google is also notorious for doing stuff like this. It's about time we start legislating against it.


It's not going to be missing the next time around. Usually the file is missing due to some concurrency problem where the file only comes into existence a little later. A process restart certainly fixes this.

If the problem persists, a larger part of the supervision tree is restarted. This eventually escalates to a crash of the full application if nothing in the Erlang release can proceed without it.

The key point is that there's a very large class of errors caused by the concurrent interaction of different parts of the system. These problems often go away on the next try, because the probability of them occurring is low.
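The crude, non-BEAM version of "just restart it" is a bounded retry; a Rust sketch (file name and timings are made up): if the file shows up a moment later, the retry succeeds; if it never does, the error is escalated instead of being papered over. A supervisor does the same kind of escalation, but for whole process subtrees.

    use std::fs;
    use std::io;
    use std::thread;
    use std::time::Duration;

    // Retry a transient failure a few times, then escalate. This is the poor
    // man's version of an Erlang supervisor restarting a child.
    fn read_config_with_retry(path: &str, attempts: u32) -> io::Result<String> {
        for attempt in 1..=attempts {
            match fs::read_to_string(path) {
                Ok(contents) => return Ok(contents),
                Err(e) if attempt == attempts => return Err(e),
                Err(_) => thread::sleep(Duration::from_millis(100)),
            }
        }
        Err(io::Error::new(io::ErrorKind::InvalidInput, "attempts must be at least 1"))
    }

    fn main() {
        match read_config_with_retry("app.conf", 5) {
            Ok(cfg) => println!("loaded {} bytes", cfg.len()),
            Err(e) => eprintln!("giving up: {e}"),
        }
    }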

