At the same time, the way we measure mass is by measuring weight and having the balance approximate a conversion. I imagine most balances use sea level gravity for that but I can't be bothered to do research. :D
Sorry - didn't get a chance to go through the documentation much yet... A problem we are looking to catch early in the development process is statements we don't want folks to run, even if they are technically correct. For example, dropping a column or altering the datatype/nullability in such a way that the table goes into reorg-pending. I know we can write our own regex to look for that specific syntax, but I've not yet found a tool with those kinds of rules either built out already or easy to add/maintain.
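As a stopgap, the regex approach can at least be wired into CI or a pre-commit hook. A minimal sketch, assuming a hand-maintained pattern list - the patterns and messages below are examples to tune, not an authoritative list of everything that triggers reorg-pending:

```python
import re

# Hypothetical deny-list of statements that are valid SQL but that we
# don't want in migration scripts (e.g. changes that put a Db2 table
# into reorg-pending state). Adjust the patterns to your own shop's rules.
FORBIDDEN = [
    (re.compile(r"\bALTER\s+TABLE\b.*\bDROP\s+COLUMN\b", re.I | re.S),
     "dropping a column puts the table into reorg-pending"),
    (re.compile(r"\bALTER\s+TABLE\b.*\bALTER\s+COLUMN\b.*\bSET\s+DATA\s+TYPE\b", re.I | re.S),
     "changing a column's data type may put the table into reorg-pending"),
    (re.compile(r"\bALTER\s+TABLE\b.*\bALTER\s+COLUMN\b.*\b(SET|DROP)\s+NOT\s+NULL\b", re.I | re.S),
     "changing a column's nullability may put the table into reorg-pending"),
]

def check_sql(sql: str) -> list[str]:
    """Return one warning per forbidden pattern found in the script."""
    return [msg for pattern, msg in FORBIDDEN if pattern.search(sql)]
```

The obvious weakness is that regex doesn't parse SQL (comments, string literals, and dialect quirks will produce false positives/negatives), which is exactly why a rule plugged into a real linter's parse tree is the better long-term home for this.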
It's pretty new functionality in sqlfluff, but it now supports user defined plugins for org-specific rules if you want to forbid something more obscure. Documentation is sketchy, but you can see the proof of concept here: https://github.com/sqlfluff/sqlfluff/tree/main/plugins/sqlfl...
I see a lot of value in using separate users in both dev/test and production environments. That gives you an "easy" way to physically separate your database/schema into multiple databases with minimal changes to your application beyond pointing your connection pools/config at the new database endpoint. We do this often, separating the task of breaking up our monolith databases into two phases: logical, then physical.
Would it double the number of connections? If you are doing the same volume of total work and not looking to increase concurrency, wouldn't you end up with something closer to two connection pools at ~1/2 the size each, rather than one connection pool at your old size?
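A back-of-envelope way to see this is Little's law: connections needed ≈ throughput × mean query time. A quick sketch with made-up numbers (the qps split, query time, and headroom factor are all illustrative assumptions):

```python
import math

def pool_size(qps: float, mean_query_sec: float, headroom: float = 1.2) -> int:
    # Little's law: concurrent connections ~= arrival rate * service time,
    # plus a safety margin. All numbers here are illustrative.
    return math.ceil(qps * mean_query_sec * headroom)

# One database handling all traffic:
combined = pool_size(qps=1000, mean_query_sec=0.02)  # 24 connections

# The same total work split 60/40 across two databases:
split = pool_size(qps=600, mean_query_sec=0.02) + pool_size(qps=400, mean_query_sec=0.02)  # 15 + 10
```

The split pools add up to roughly the original size, with only a small overhead from rounding up and giving each pool its own headroom - not a doubling.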
If you are writing a ton of small files (we have billions of audit blobs we write) the API PUT costs can quickly creep up on you. We pay much more for those than for the actual storage. If you want to use tags on your objects, they charge you per tag per object per month - again, another huge cost. We missed that when pricing S3 out, and needed a project to pull out all of the tags we had. We are currently working on batching up multiple blobs into one larger blob to hopefully reduce our API costs by an order of magnitude. This is purely a cost decision for us, adding complexity to our application and its operation. S3 seems better suited for fewer, larger files. Our backups and other use cases like that work perfectly.
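A rough cost model shows why batching helps: request charges scale with object count, storage charges don't. The prices below are placeholders, not quoted S3 list prices - plug in your region's published rates:

```python
PUT_COST_PER_1K = 0.005       # $ per 1,000 PUT requests (placeholder)
STORAGE_PER_GB_MONTH = 0.023  # $ per GB-month (placeholder)

def monthly_cost(objects_written: int, avg_size_kb: float, batch: int = 1) -> float:
    # Batching N blobs per object divides the PUT count by N;
    # total bytes stored (and storage cost) stay the same.
    puts = objects_written / batch
    gb = objects_written * avg_size_kb / (1024 * 1024)
    return puts / 1000 * PUT_COST_PER_1K + gb * STORAGE_PER_GB_MONTH

unbatched = monthly_cost(1_000_000_000, avg_size_kb=2)             # 1B tiny audit blobs
batched   = monthly_cost(1_000_000_000, avg_size_kb=2, batch=100)  # 100 blobs per object
```

With these placeholder numbers the unbatched bill is dominated by PUT requests, and batching 100:1 cuts that component by two orders of magnitude - at which point storage becomes the dominant line item again.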
Does that really guarantee in-order processing, or just that messages can be picked up in order? If you have multiple consumers on your queue and consumer A picks up message x from the head of the queue, then consumer B picks up message y next, it is possible for y to be processed before x. Maybe consumer A is slow (GC pause?) for some reason, and now we are processing out of order, even though the queuing infrastructure does not see it. If you truly need to guarantee strict processing order (x must complete before you start processing y), I think you may need to build that into your app. Or I misread, which is very possible.
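A toy demonstration of the race, with Python's in-process queue standing in for the real queue infrastructure and a sleep standing in for the GC pause:

```python
import queue, threading, time

q = queue.Queue()
for msg in ["x", "y"]:
    q.put(msg)

completed = []
lock = threading.Lock()

def consumer(delay: float):
    msg = q.get()       # messages ARE handed out in FIFO order...
    time.sleep(delay)   # ...but this consumer stalls (simulated GC pause)
    with lock:
        completed.append(msg)

a = threading.Thread(target=consumer, args=(0.5,))  # dequeues "x", then stalls
b = threading.Thread(target=consumer, args=(0.0,))  # dequeues "y", finishes fast
a.start()
time.sleep(0.1)  # make sure A dequeues first
b.start()
a.join(); b.join()

# completion order is ["y", "x"] even though delivery order was ["x", "y"]
```

Delivery order and completion order are different guarantees; enforcing the latter usually means app-level sequencing (e.g. one consumer per ordering key, or tracking per-key sequence numbers).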
https://en.wikipedia.org/wiki/Gram