I've used apistar quite extensively in two different projects (one launched) and so far I like it.
One disadvantage is that you can't use WSGI middleware, but after rewriting our middleware as components they ended up being much neater. I'm a fan of the bottom-up approach where you define the components you want in a route. We have some routes which just return the database query, and with a custom renderer it gets rendered in the JSON format we expect.
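For anyone curious, the pattern looks roughly like this -- a minimal sketch assuming apistar's 0.5-style class components; the session class and route are made up for illustration:

    from apistar import App, Component, Route

    class FakeSession:
        # stand-in for a real database session (made up for illustration)
        def query_users(self):
            return [{"id": 1, "name": "alice"}]

    class SessionComponent(Component):
        # apistar injects by matching the handler's type annotation
        def resolve(self) -> FakeSession:
            return FakeSession()

    def list_users(session: FakeSession) -> list:
        # the route just returns the query result; the (default or
        # custom) renderer turns it into the JSON response body
        return session.query_users()

    app = App(
        routes=[Route('/users', method='GET', handler=list_users)],
        components=[SessionComponent()],
    )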
We use different files for "soft" requirements, dev/test requirements, and a freeze.txt containing all production requirements; we default to installing from freeze.txt but can update from the other files. It's similar to this[1], but we generate the freeze.txt in a temporary virtualenv.
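The generation step is nothing fancy, roughly this (paths made up):

    # regenerate freeze.txt from the "soft" pins in a throwaway virtualenv
    python -m venv /tmp/freeze-env
    /tmp/freeze-env/bin/pip install -r requirements.txt
    /tmp/freeze-env/bin/pip freeze > freeze.txt
    rm -rf /tmp/freeze-env

    # the default install path uses the exact production pins
    pip install -r freeze.txt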
In one project I've also started using a generated offline cache and PIP_FIND_LINKS to remove the dependency on PyPI for known installs; it will be interesting to see whether it turns out to be a good idea.
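The cache itself is just pip's own knobs (directory name made up):

    # populate a local cache once, while PyPI is reachable
    pip download -r freeze.txt -d ./wheelhouse

    # later installs resolve from the cache and never touch PyPI
    export PIP_FIND_LINKS=./wheelhouse
    pip install --no-index -r freeze.txt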
This is funny; I've actually inherited a project where the original developer had this idea. He used the same function to encrypt everything: even the news posts available on the front page were encrypted. The passwords went through the same encryption functions and, needless to say, not a one-way hash, so they were fully decryptable...
Sorry, that is not quite what was intended. I've revised the text to say:
If your database supports low cost encryption at rest (like AWS Aurora), then enable that to secure data on disk. Make sure all backups are stored encrypted as well.
i.e. this kind of encryption costs very, very little and gives you physical security if you need it.
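For Aurora specifically it's a single flag at cluster-creation time (a sketch; identifiers and credentials are placeholders):

    aws rds create-db-cluster \
        --db-cluster-identifier mycluster \
        --engine aurora-mysql \
        --master-username admin \
        --master-user-password 'REPLACE_ME' \
        --storage-encrypted
    # snapshots and automated backups of an encrypted cluster
    # are stored encrypted as well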
At home I use Dropbox for some files and Resilio Sync for others.
At work we make heavy use of version-controlled configuration management, where we can recreate any machine by just rerunning the Ansible playbook, plus duply backups for databases and other storage.
While duply was trivial to set up, nice to work with, and much more stable than any of the solutions we were using previously, if I were to do it again with more than a handful of machines I would likely have looked into reversing the flow with a pull-based backup, just to have a better overview, since I don't trust `duply verify` monitoring to catch all possible issues.
Cloud backup is managed by a server fetching the data and then backing it up with duply.
We also run an rsync of all disk image snapshots from one DC to another (and vice versa), but that is more of a disaster-recovery measure in case our regular backups fail or were not properly set up for vital data, since it would take more effort to recover from those backups.
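For reference, a duply setup is little more than a profile and a cron entry (all values below are made up):

    # ~/.duply/db/conf
    GPG_KEY='_KEY_ID_'
    GPG_PW='_GPG_PASSPHRASE_'
    TARGET='sftp://backup@backuphost//backups/db'
    SOURCE='/var/backups/db'
    MAX_AGE=3M

    # run from cron
    duply db backup   # full or incremental, per profile settings
    duply db verify   # compare the backup archive against SOURCE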
While I agree with you, I'd like to caution some users against rushing into dockerising everything in their production environment. If your environment setup is not repeatable and you don't have your configuration management under control, then you have other problems, and using Docker is just going to add another layer of abstraction on top of your mess that your DBA doesn't know how to deal with when things hit the fan. In particular, I can imagine an improper understanding of Docker volumes biting some people, but Docker also has some questionable defaults for networking (the userland proxy, rewriting iptables).
That being said, we currently use Docker for some of our production databases, mainly for almost-idle services (MongoDB for Graylog, ZooKeeper for Kafka), but I have had no problem using it for some moderately sized services with a couple of thousand writes per second on Redis/Kafka (which is nothing for them).
We're still using non-containerised versions of the databases that need dedicated bare-metal servers, mostly because I don't see the risk/benefit being worth it, but I'd love to hear someone's war stories about running larger-scale databases in Docker.
For development, I don't think there's anything better for databases: it beats manual setup, Vagrant boxes, and shared development servers by a long shot. I feel that educating everyone on your team in how to use it is well worth the investment. docker-compose makes setting up even a fairly complicated development environment a breeze.
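e.g. a made-up but typical docker-compose.yml for a dev environment:

    version: '3'
    services:
      postgres:
        image: postgres:10
        environment:
          POSTGRES_PASSWORD: devonly
        ports:
          - "5432:5432"
      redis:
        image: redis:4
        ports:
          - "6379:6379"

Then `docker-compose up -d` gives every developer the same databases with one command.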
This. It depends more on the product and the requirements. I've worked at companies that are just a few employees and are running databases larger than this. I've also worked for a company with ~100 employees whose only 'database' is an in-memory cache and a text file a few kilobytes big.
Completely agree. For example, Chrome has been observed flushing full state data to disk at regular intervals, even with no changes, since 2010[1], which eats battery and SSD lifetime. Firefox does this too, as far as I know.
Not to mention the battery impact of poorly written JavaScript SPAs or advertisements.
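On the Firefox side the session-store write cadence is at least tunable, if I remember the pref correctly:

    // about:config -- default is 15000 (ms); raising it trades
    // crash-recovery freshness for fewer disk writes
    browser.sessionstore.interval = 300000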
It's hard to know what the right call is here. In most cases the battery-life hit seems to be fairly minimal (I've never noticed it in any browser, even back when I used Chrome), and preventing data loss is, well, fairly important -- at least it can be, depending on the site.
OTOH there are almost certainly sites that trigger very bad behavior here. While I've never dug into it too much, it wouldn't surprise me if session store were part of why e.g. IRCCloud or Slack have such high power usage (although I don't know for certain -- it could just as easily be something else).
My understanding is that SSD lifetime concerns are largely misplaced (or at the very least, only relevant for a fairly small subset of users) and that even reasonably old SSDs can handle well into petabytes of writes -- which is far above what this behavior can reasonably approach. But power usage concerns are totally legitimate IMO.
Full disclosure: I'm not unbiased here, I work on Firefox (but have never touched session store).
I don't understand why some developers insist on using SemVer without following the spec; in my opinion, that is the only 'problem' with SemVer.
If you need a version and don't care about following the spec, just use a timestamp or datetime, replace it where required with a sed command, and move on. But please don't claim to use SemVer if it's just an arbitrary number for you.
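The whole thing can be as dumb as this (file name and placeholder made up):

    # stamp the build with a datetime instead of pretend-semver
    VERSION=$(date -u +%Y%m%d.%H%M%S)
    sed -i "s/0\.0\.0\.dev0/${VERSION}/" mypackage/version.py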
And if you're in some context where you're forced to use SemVer (npm, cargo, etc) but you can't be arsed to think about backwards compatibility, just bump the major version with every release.
Ah, it's moved on quite a bit since then. I started using it around then too, but these days compiler bugs are very rare indeed, in my experience anyway.
Put it this way: the only compiler bugs I've hit have been C code-gen errors, maybe two in the past year of pretty heavy use. All of those were because I was doing something silly with templates.