The connection handling section surprised me: Postgres uses a process per connection! That came as a shock, and not in a good way.
One could use connection pooling. Quoting from the tuning guide [0]:
max_connections sets exactly that: the maximum number of client
connections allowed. This is very important to some of the
below parameters (particularly work_mem) because there are some
memory resources that are or can be allocated on a per-client
basis, so the maximum number of clients suggests the maximum
possible memory use. Generally, PostgreSQL on good hardware can
support a few hundred connections. If you want to have
thousands instead, you should consider using connection pooling
software to reduce the connection overhead.
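To put rough numbers on the quoted point (purely illustrative, not a recommendation): work_mem is a per-sort/per-hash budget allocated inside each backend, so the connection count multiplies it. A minimal sketch, with made-up settings:

    # Illustrative only: both settings below are hypothetical, not advice.
    max_connections = 300   # hypothetical server setting
    work_mem_mb = 32        # hypothetical per-sort/per-hash budget

    # Worst case if every connection runs one work_mem-sized operation at once.
    # A complex plan can hold several such allocations, so this is a floor, not a cap.
    print(f"~{max_connections * work_mem_mb / 1024:.1f} GB potential work_mem use")
    # -> ~9.4 GB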
For the available options, see "Replication, Clustering, and Connection Pooling" [1].
You either install the pooling software on the client machines themselves and "connect locally" to the bouncer, or run a dedicated bouncer tier (a server pair) in front of the database connections.
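As a sketch of the "connect locally" variant: the application only has to point at the bouncer's port instead of Postgres itself. This assumes psycopg2 and a PgBouncer on the same host with its default port 6432; database name and credentials are placeholders.

    import psycopg2

    # Connect to a local PgBouncer (default port 6432) rather than to the
    # Postgres server directly on 5432. dbname/user/password are placeholders;
    # the bouncer's own config maps the logical database name to the real server.
    conn = psycopg2.connect(host="127.0.0.1", port=6432,
                            dbname="appdb", user="app", password="secret")
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
    conn.close()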
Different design goals:
MySQL: a shared-memory (threaded) model built for lots of connections running small, fast queries, e.g. PHP opening one connection per page view (a pattern a client-side pool avoids; see the sketch below).
Postgres: a dedicated per-process memory model aimed at long-running, complex queries.
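A minimal sketch of that client-side pooling pattern, assuming psycopg2 and placeholder connection parameters: each request borrows an already-open connection (and its already-forked backend) instead of paying the connection setup cost every time.

    import psycopg2.pool

    # Bounded client-side pool; pool sizes and credentials are placeholders.
    pool = psycopg2.pool.SimpleConnectionPool(
        minconn=2, maxconn=10,
        host="127.0.0.1", dbname="appdb", user="app", password="secret")

    def handle_request():
        conn = pool.getconn()      # reuse an existing connection/backend
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT now()")
                return cur.fetchone()[0]
        finally:
            pool.putconn(conn)     # return it to the pool instead of closing

    print(handle_request())
    pool.closeall()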
So PG has a process pool instead of a thread pool. (Neither database spawns processes or threads willy-nilly for each request: Postgres forks one backend process per connection, and that backend then serves every query sent over that connection.)
This means PG pays explicit IPC overhead, versus the quick and seductive path (to the dark side?) of simply sharing memory between threads. Safety vs. speed.
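The process-per-connection model is easy to observe directly: each session is served by its own backend process, and pg_backend_pid() reports that process's PID. A small sketch, again with placeholder credentials:

    import psycopg2

    # Connection parameters are placeholders.
    params = dict(host="127.0.0.1", dbname="appdb", user="app", password="secret")

    def backend_pid(conn):
        with conn.cursor() as cur:
            cur.execute("SELECT pg_backend_pid()")
            return cur.fetchone()[0]

    a = psycopg2.connect(**params)
    b = psycopg2.connect(**params)
    print(backend_pid(a), backend_pid(b))   # two different PIDs, one backend per connection
    a.close(); b.close()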