Why does it need a PostgreSQL server? For just a handful of users, isn't SQLite the leaner yet sufficient choice?
How does it compare to GoToSocial, which requires 50-100MB of RAM? They're also in alpha, and I like their approach of keeping the web UI separate.
Author here - it's just to reduce the support surface area. I know I'll eventually need PostgreSQL's full-text indexing and GIN indexes for hashtags/search, and I'll probably also want upserts and other specialised queries, so it's easier to target one DB I know is very capable.
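For anyone unfamiliar with the upsert pattern mentioned above, here's a rough sketch (the hashtag table is hypothetical, not Takahē's actual schema). The same `INSERT ... ON CONFLICT` syntax works in PostgreSQL and, since version 3.24, in SQLite, so the demo runs against an in-memory SQLite database:

```python
import sqlite3

# Hypothetical hashtag counter table -- illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hashtag (name TEXT PRIMARY KEY, uses INTEGER NOT NULL)")

def record_use(tag):
    # One atomic statement instead of a racy SELECT-then-INSERT/UPDATE pair.
    conn.execute(
        "INSERT INTO hashtag (name, uses) VALUES (?, 1) "
        "ON CONFLICT (name) DO UPDATE SET uses = uses + 1",
        (tag,),
    )

record_use("fediverse")
record_use("fediverse")
uses = conn.execute("SELECT uses FROM hashtag WHERE name = 'fediverse'").fetchone()[0]
print(uses)  # -> 2
```

Without upsert support you'd be writing the check-then-write logic (and its race handling) in application code, which is exactly the "reimplementing db features" problem.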
For reference, when I say "small to medium", in my head that means "up to about 1,000 people right now".
People were getting priced out of hosting instances with "only" 10-20k users, and the instance-hosting services quote plans of <= 4k users, with the 4k end costing over US$100/month - while even the "low end" 1-200 user instances get 4 cores, 5TB of monthly bandwidth, etc.
The general sense I've got is that Mastodon - the default software, at least - is extremely resource-heavy for relatively low user counts. My assumption/hope is that this is mostly because the server software has never really been under sufficient pressure to improve, and Takahē seems to indicate there's at least some room for improvement on the server side (i.e. the performance problems aren't entirely protocol/architecture problems).
Is there any advantage to using a traditional db as opposed to a graph db since json-ld is just a text representation of graph nodes?
I was thinking the easiest path would be to have the server deal with all the ActivityPub stuff and expose something like a GraphQL interface for a bring-your-own-client implementation. Of all the stuff GraphQL has been shoehorned into, this seems like a valid fit - like they were made for each other.
For better or worse, many servers are targeting Mastodon API compatibility to be able to leverage the existing clients. Adding GraphQL increases surface area without solving the bigger issue of creating the clients.
I didn't get as far as looking into the Mastodon client API, but that makes perfect sense; I'd just assumed it was an overlay on a more general API.
Mostly I was thinking about how one could implement this in the most efficient way, and graph databases/GraphQL were literally designed for this stuff.
I tried swapping it for SQLite and successfully ran the test suite about a week ago, but I haven't retried that against the large number of more recent changes.
SQLite is magical and incredibly lean, but it is not leaner than Postgres if you need real database features. You end up reimplementing a lot of features in code that belong in the db.
This doesn't match my experience from the last few years. SQLite in WAL mode is extremely capable.
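For context, WAL mode is a one-pragma switch (sketch below; the file path is just a throwaway temp file). It lets readers proceed concurrently with a writer instead of blocking on the rollback journal, which is a big part of why SQLite holds up under real workloads:

```python
import os
import sqlite3
import tempfile

# WAL mode needs a file-backed database; for :memory: the pragma
# reports "memory" and has no effect.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# A single pragma switches the journal to write-ahead logging.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # -> wal
```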
The only thing I really miss from PostgreSQL is its larger set of built-in functions for things like date handling - but SQLite custom functions are very easy to register when you need them.
It also has excellent JSON features - JSON may be stored as text rather than a binary format like JSONB in PostgreSQL, but the SQLite JSON functions crunch through it at multiple GBs per second, so it doesn't seem to matter.
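To illustrate the text-based JSON handling (the `posts` table and document are made up for the example), SQLite's json1 functions parse ordinary TEXT columns on the fly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (body TEXT)")  # JSON stored as plain text
conn.execute(
    "INSERT INTO posts VALUES (?)",
    ('{"type": "Note", "content": "hello", "tags": ["a", "b"]}',),
)

# json_extract / json_array_length parse the text at query time --
# no binary storage format required.
row = conn.execute(
    "SELECT json_extract(body, '$.type'), json_array_length(body, '$.tags') "
    "FROM posts"
).fetchone()
print(row)  # -> ('Note', 2)
```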