Hacker News

> The only time you need to consider a client-server setup is: Where you have multiple physical machines accessing the same database server over a network. In this setup you have a shared database between multiple clients.

Am I misunderstanding this or is this not the vast, vast majority of all cases?



Most of the time people separate the app and database into two different VMs that the infrastructure team then runs on the same box.

edit: This is done not because of any considered technical reason, but because that's how people learned to deploy apps.


It's because the very first step in scaling will usually be separate machines for webserver and database.

And it costs almost nothing to write it that way from the start, but it's a pain to separate them out later.

I'd call that a considered technical reason.
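Writing it that way from the start can be as small as reading the database address from configuration instead of hard-coding localhost. A minimal sketch (the `DB_HOST`/`DB_PORT` names are my own illustration, not anything from the thread):

```python
import os

def database_address() -> tuple[str, int]:
    """Return (host, port) for the database.

    Defaults to localhost, so the app works on a single box today;
    moving the DB to its own machine later is just a config change.
    """
    host = os.environ.get("DB_HOST", "localhost")
    port = int(os.environ.get("DB_PORT", "5432"))
    return host, port
```

The app code never changes when the database moves; only the environment does.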


And so it is, but I have never gotten that as an answer when asking.

edit: I feel like I should be more specific here. I would only call it a considered technical reason if it was actually considered. The fact that it is possible to come up with good reasons is not relevant if no thought went into it at the time of design/development.


The technical reasons are to limit the blast radius, simplify permission management, and make scaling easier.

For blast radius, you have things like patching (you don't want to break the database by accidentally updating a shared dependency along with your app), resource management (you don't want your DB to eat all the RAM or I/O and starve your app), and maintenance (if someone botches it, there's less to break at once).

You don't have to use VMs, but they're one way to do it. Container orchestrators achieve many of the same goals, and automatically restart workloads if a physical machine fails, too.
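One common way to express that split with an orchestrator is a compose file that runs the app and the database as separate services. This is a hypothetical sketch; the image names, credentials, and environment variables are illustrative assumptions, not details from the thread:

```yaml
# Hypothetical docker-compose sketch: app and database as separate
# services, so either one can later move to its own machine.
services:
  app:
    image: example/my-web-app:latest   # illustrative image name
    environment:
      DB_HOST: db        # the app reaches the database by service name
      DB_PORT: "5432"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example       # placeholder secret
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Each service gets its own resource limits, its own update cadence, and its own restart policy, which is exactly the blast-radius separation described above.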


I did not say that good reasons to do it that way don't exist. I said it is usually not a considered choice.


If you are running your webserver and your database on the same box, how are you big enough to have an infrastructure team?


Most organisations run a lot of server applications, and most of them don't use many resources considering the size of servers these days.


That would never fly in production for any sort of enterprise application with uptime requirements.


And yet it has been done just about everywhere I have worked in the past. The justification is usually that VMware will move the machines to another host if one goes down.



