So to be that guy, it doesn't look like you've got replication and sharding at the same level of ease as MongoDB. According to the last few slides, you're targeting that at some point in the future; maybe this year? That's one of the selling points of MongoDB.
How does this compare to putting a thin Node.JS veneer over Postgres's JSON type? The server could convert the cursor from PG into the binary format that MongoDB clients expect. It would have been less work.
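Something along those lines, very roughly (assuming the 'pg' and 'bson' npm packages and a hypothetical docs table with a json column named data; the actual MongoDB wire protocol with its cursor framing is where the real work would be, and isn't shown):

    // Rough sketch: read JSON documents out of Postgres and re-encode them as
    // BSON, the binary format MongoDB drivers expect. The 'docs' table and its
    // 'data' json column are hypothetical.
    import { Client } from 'pg';
    import { serialize } from 'bson';

    async function dumpDocsAsBson(): Promise<Uint8Array[]> {
      const pg = new Client({ connectionString: process.env.DATABASE_URL });
      await pg.connect();
      try {
        const res = await pg.query('SELECT data FROM docs');
        // node-postgres hands json/jsonb columns back as plain JS objects,
        // so each row can be serialized straight to BSON.
        return res.rows.map((row) => serialize(row.data));
      } finally {
        await pg.end();
      }
    }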
I wonder if the future stack isn't an optionally-typed language querying an optionally-SQL database.
PostgreSQL with jsonb lets you get the best of both worlds: schemaless when you want it, relational when you need it, and ACID all the time.
I found it strange that they decided to map a document onto a relational schema. I would only map the "id" property of the document to a pkey, and then dump all the rest into a jsonb column, per document type.
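Roughly this, for a hypothetical users document type (sketched with the 'pg' npm package; only the id becomes a real column, the rest goes into jsonb, and a GIN index keeps it queryable):

    import { Client } from 'pg';

    // One table per document type: "id" is a real primary key, everything
    // else is dumped into a jsonb column.
    async function createUsersTable(pg: Client): Promise<void> {
      await pg.query(`
        CREATE TABLE IF NOT EXISTS users (
          id  text  PRIMARY KEY,   -- the document's id
          doc jsonb NOT NULL       -- all remaining fields, schemaless
        )`);
      // GIN index so containment queries on arbitrary fields stay fast.
      await pg.query('CREATE INDEX users_doc_idx ON users USING gin (doc)');
    }

    // Relational when you need it, schemaless when you want it, e.g.:
    //   SELECT id, doc->>'email' FROM users WHERE doc @> '{"active": true}';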
One reason may be that Postgres doesn't have built-in operators for updating an individual field within a JSON doc. (The workarounds are complex and/or inefficient.)
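Right: at least as of 9.4 there's no jsonb_set, so the usual workaround is read-modify-write of the whole document in the application, something like this (hypothetical users(id, doc jsonb) table, 'pg' npm package):

    import { Client } from 'pg';

    // Change a single field by rewriting the entire document: fetch it, patch
    // it in JS, and write the whole jsonb value back inside one transaction.
    async function setEmail(pg: Client, id: string, email: string): Promise<void> {
      await pg.query('BEGIN');
      try {
        // Lock and read the full document just to touch one key.
        const res = await pg.query(
          'SELECT doc FROM users WHERE id = $1 FOR UPDATE', [id]);
        const doc = res.rows[0].doc;
        doc.email = email;
        // The whole value gets rewritten and re-indexed, not just one field.
        await pg.query('UPDATE users SET doc = $1 WHERE id = $2',
                       [JSON.stringify(doc), id]);
        await pg.query('COMMIT');
      } catch (err) {
        await pg.query('ROLLBACK');
        throw err;
      }
    }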
This is great. All complaints about Mongo's durability aside, it wins out over many other databases because of its ease-of-use -- its small DSL certainly beats out having to learn a whole new language (SQL, etc.). Now, combine that with real durability, real transactions, etc. If this isn't the database for the 80%... I don't know what is.
(My one open question, and something this page doesn't touch on, is how easy is it to get ToroDB up and running? Mongo makes that part easy as well!)
They claim that they don't use the JSON type in order to save on storage and I/O, but they don't show any data to back up the I/O savings. In fact, they don't provide any evidence of improved performance over the JSON type at all.
Storage has become so cheap that Toro's savings are negligible considering you now must maintain their service layer on top of your normal Postgres configuration. I don't see how a bit of storage savings warrants any of those tradeoffs unless they can show a significant performance improvement as well. And if silence tells me anything, it is that this is probably slower than Postgres' JSON type.
I'm pretty sure negativity toward MongoDB's (lack of) data integrity is incapable of crossing the threshold into gratuity. MongoDB being synonymous with lost data might as well be a universal constant.
I think that's a reasonable question to ask. Personally, though, I think that in this case the modest but positive comedy value makes it not gratuitous.