It's just a matter of using the right tool for the right job. There are a lot of things Docker can do that an OTP application already handles (and in a way that's better integrated with your application). Same with an orchestration system (e.g. Kubernetes).
You can use Docker to provide a common/consistent base on which you'd build the nodes running your OTP applications. If you have zero non-OTP dependencies, then this is overkill (you'd be better off deploying directly to some server, be it a VM or bare-metal, and managing configuration/deployment on that level), but most real-world applications do have other Dockerizable dependencies (databases come to mind, though there's ongoing debate on whether or not putting Postgres in a Docker container is actually smart). Also, using Docker to abstract the runtime environment for BEAM does help considerably with deployment to various PaaS platforms (e.g. Heroku, Bluemix, Elastic Beanstalk, etc.).
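To illustrate that "common/consistent base" idea, here's a minimal sketch of a Dockerfile for a BEAM node (the image tag, release name, and paths are illustrative, not prescriptive):

```dockerfile
# A minimal common base for BEAM nodes; tag and release name are illustrative.
FROM erlang:26-slim

# Copy a pre-built OTP release into the image
COPY _build/prod/rel/myapp /opt/myapp

# EPMD (4369) plus a distribution port, if nodes need to cluster across containers
EXPOSE 4369 9100

CMD ["/opt/myapp/bin/myapp", "foreground"]
```

The point isn't the specific base image; it's that every node, whatever PaaS it lands on, boots from the same runtime environment.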
Personally, I lean toward a couple big servers dedicated to BEAM instances (running all the OTP apps), and a couple big servers dedicated to some SQL database (usually Postgres). Whether you maintain those servers as Docker containers or EC2 instances or DO droplets or physical servers is an implementation detail that doesn't really matter that much as long as it works and it's easy for you (or your DevOps / sysadmin teams) to maintain.
On that note: I don't think it makes sense to have a bunch of little containerized microservices each running a separate BEAM and a separate set of OTP applications. OTP already takes care of that sort of orchestration, and OTP applications already are (or can be) microservices. Just throw 'em all into a big server and let BEAM's scheduler and OTP's supervision model do their jobs.
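To make the "OTP apps already are microservices" point concrete, here's a minimal Erlang sketch (the child modules are hypothetical) of a top-level supervisor running several independent services inside one BEAM instance. Each child is restarted independently on failure, much like an orchestrator restarting one crashed container:

```erlang
-module(platform_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% one_for_one: a crashing child is restarted without touching its
    %% siblings, analogous to an orchestrator restarting one failed container
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Children = [
        #{id => billing, start => {billing_server, start_link, []}},
        #{id => search,  start => {search_server,  start_link, []}},
        #{id => mailer,  start => {mailer_server,  start_link, []}}
    ],
    {ok, {SupFlags, Children}}.
```

In practice each of these would usually be its own OTP application in one release, but the restart semantics are the same either way.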
If you get to choose your infrastructure, that may be fine. If you don't, and you work in even a medium-sized company, you likely work with what they have, which is increasingly Kubernetes.
That said, they still don't do the same things. OTP does not take care of orchestration if you must run more than one node.
You still benefit from OTP's supervision and scheduling when using Docker or k8s; the difference is in what other features you need and the scale you're running at.
No one is saying to always deploy to k8s, but the two don't overlap.
Nor does using BEAM make it uniquely fitted to running on physical machines compared to other languages like Java, Go, C++, Rust, ... If your application is suited to running on a few droplets, the language you are using is not likely the deciding factor for that.
> If you don't and you work in even a medium size company you likely work with what they have, which is increasingly kubernetes.
And that's totally fine, per my second and third paragraphs.
> OTP does not take care of orchestration if you must run more than 1 node.
No, but it (with BEAM) gets you most of the way there. The only thing not explicitly handled is spinning up the machine running that node (though you can certainly have the app itself drive that, e.g. by using the AWS API to spin up another EC2 instance with an AMI preconfigured to boot up and start ERTS). Once the machine is running, the application can use that node as a spawn() target and start throwing processes at it.
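Once the new machine's ERTS is up, that hand-off from "infrastructure" to "OTP" is only a couple of lines of Erlang. A minimal sketch, assuming the freshly booted instance registered itself under a known node name and shares our Erlang cookie (the node name and worker module here are hypothetical):

```erlang
%% Sketch: attach a freshly booted node and start throwing processes at it.
%% Node name and my_app_worker are hypothetical; both nodes must share a cookie.
attach_and_dispatch(Payload) ->
    Node = 'worker@10.0.0.7',
    pong = net_adm:ping(Node),                    %% joins Node to the cluster
    Pid = spawn(Node, my_app_worker, start, []),  %% remote spawn, same API as local
    Pid ! {work, self(), Payload}.                %% messages route transparently
```

From the application's perspective, the remote node is just another spawn target; the distribution layer handles the rest.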
> Nor does using BEAM make it uniquely fitted to running on physical machines compared to other languages like java, go, C++, Rust
BEAM (or at least some sort of Erlang VM), in conjunction with an application framework that can capitalize on it (like OTP), is exactly what pushes the needle toward running on a couple big machines instead of a bunch of little machines (and again, whether those big machines are containers or VMs or physical servers is an implementation detail). Java and Go and C++ and Rust don't have Erlang's/BEAM's process model; they're dependent on the OS to provide concurrency and isolation (though there's no particular reason why a JVM implementation couldn't support an Erlang-like process model, on that note; similar deal with a .NET CLR implementation, or any other VM).
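The scale difference of that process model is easy to demonstrate: spawning a million BEAM processes on one node is routine, because each starts with a heap of a few hundred words rather than an OS thread. A quick sketch you could paste into an Erlang shell:

```erlang
%% Each process costs roughly 2-3 KB at spawn, so a million of them is a
%% few GB total: comfortable on one big server, impossible with OS threads.
Pids = [spawn(fun() -> receive stop -> ok end end)
        || _ <- lists:seq(1, 1000000)],
[P ! stop || P <- Pids].
```

This is the property that makes "a couple big machines" viable: the VM itself multiplexes the concurrency the OS would otherwise have to provide.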
The compelling reason to go with a bunch of small machines instead of a couple big machines is resource isolation (e.g. enforcing memory/disk/CPU quotas for specific applications). It's definitely possible to do this on an OS process level (i.e. by applying those quotas to the BEAM processes), but if you're already using something like Kubernetes, there's no reason not to use that particular hammer to smack that particular nail.
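For the OS-process-level route, modern Linux makes those quotas fairly painless via cgroups. For example, with systemd (a sketch; the unit name, limits, and release path are illustrative):

```shell
# Run a BEAM release under cgroup-enforced quotas, no container required.
# MemoryMax caps resident memory; CPUQuota caps CPU time (200% = 2 cores).
systemd-run --unit=myapp \
    -p MemoryMax=4G \
    -p CPUQuota=200% \
    /opt/myapp/bin/myapp foreground
```

That gets you the isolation argument for small machines without giving up the big-machine deployment model.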