This is really a matter of architecting your database correctly. In all the years I used PostgreSQL under write-heavy and delete-heavy loads, I never had a problem with vacuum, but I also understood how the database worked internally and designed my data models to fit that when performance mattered. The same goes for other database engines. If you can make it fast on MySQL, it just means you are doing it wrong on PostgreSQL.
There is not a database in existence that allows you to be oblivious to the underlying organization while still giving good write/delete performance. PostgreSQL is no different in that regard.
> If you can make it fast on MySQL, it just means you are doing it wrong on PostgreSQL.
This is quite the statement, and it is inconsistent with your later point. Postgres never updates data in-place: under MVCC, every UPDATE writes a new row version and leaves the old one behind for vacuum to clean up. Certain workloads can therefore never be as fast on Postgres as on a different storage engine such as MySQL/TokuDB.
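To make the in-place point concrete, here is a minimal toy model (illustrative only, nothing like the real implementation) of PostgreSQL-style MVCC: UPDATE and DELETE never overwrite or free a row, they mark old versions dead and, for UPDATE, append a new version, so dead tuples accumulate until VACUUM reclaims them. The `MvccTable` class and its method names are invented for this sketch.

```python
# Toy sketch of MVCC row versioning: writes are append-only,
# and reclaiming space is a separate vacuum step.

class MvccTable:
    def __init__(self):
        # Each version: {"key": ..., "value": ..., "dead": bool}
        self.versions = []

    def insert(self, key, value):
        self.versions.append({"key": key, "value": value, "dead": False})

    def update(self, key, value):
        # No in-place write: mark the live version dead, append a new one.
        for v in self.versions:
            if v["key"] == key and not v["dead"]:
                v["dead"] = True
        self.versions.append({"key": key, "value": value, "dead": False})

    def delete(self, key):
        # DELETE also only marks versions dead; space is not freed yet.
        for v in self.versions:
            if v["key"] == key and not v["dead"]:
                v["dead"] = True

    def vacuum(self):
        # VACUUM physically removes dead versions, reclaiming space.
        self.versions = [v for v in self.versions if not v["dead"]]


t = MvccTable()
t.insert("a", 1)
t.update("a", 2)
print(len(t.versions))  # 2: one live version plus one dead, for one logical row
t.vacuum()
print(len(t.versions))  # 1: the dead version is gone
```

A storage engine that updates rows in place (InnoDB, or TokuDB's fractal trees) pays this cost differently, which is why some update-heavy workloads favor those engines.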
> There is not a database in existence that allows you to be oblivious to the underlying organization while still giving good write/delete performance. PostgreSQL is no different in that regard.