
I've mentioned this before on HN, but we had a single Postgres machine, the primary read/write server, that did 4k QPS 24/7. It had DDR RAM as storage on a PCIe card, of course, but this was before "SSD" was a thing. It was for a site that hosted portfolios of images, for both the people in the images and the people who took them. The front-end data (the images and text) was, iirc, 3TB. Sometimes we'd need a server in a new location, so a locked metal briefcase was carried from the DC where the front-end data lived to our offices, where one of our "IT" people would then carry it on to the new location and offload it onto the servers there.

Anyhow, that database server was probably ~$35,000 all in. That's 5 months of your current AWS spend. One of the things I did during that time was take a server two generations newer, a $35,000 1U Dell with 512GB of RAM, mirror the Postgres database into tmpfs, and enable replication; then we set that machine as primary. The new machine didn't break a sweat. So much so that one of the things the DBA (really very awesome and nice; hi, Chuck, if you're out there!) and I did was set Postgres to use no more than 640KB of memory, then run the entire site, at 4k QPS, on that Postgres instance with 640KB of memory (not counting the 280GB of tmpfs storage, of course!), just to prove it would work. It did, although some of the bookkeeping queries (not sure what they're called) took a very long time, and would have had to be refactored to use less temporary memory.
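For the curious: a modern Postgres will actually accept settings in that neighborhood (shared_buffers has a hard floor of 128kB, work_mem of 64kB). Here's a minimal sketch of the clamp, assuming psycopg2 and a superuser connection; the connection string is made up, and ALTER SYSTEM (9.4+) postdates whatever we actually edited by hand back then:

    import psycopg2

    # Hypothetical connection; the real instance and credentials are long gone.
    conn = psycopg2.connect("dbname=portfolio user=postgres")
    conn.autocommit = True  # ALTER SYSTEM refuses to run inside a transaction

    cur = conn.cursor()
    # Clamp the shared cache to roughly the famous 640KB.
    # shared_buffers only takes effect after a server restart.
    cur.execute("ALTER SYSTEM SET shared_buffers = '640kB'")
    # Per-sort/per-hash working memory; anything bigger spills to temp
    # files, which is why those bookkeeping queries got so slow.
    cur.execute("ALTER SYSTEM SET work_mem = '64kB'")
    cur.execute("SELECT pg_reload_conf()")  # picks up work_mem immediately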

Anyhow, my point is, there are people out there who can do things cheaper, or faster, or more efficiently than whatever you've got going on right now. Your statistics on "per second" usage and the like don't sound too demanding. If you could squirrel away $500/month for a few months, and ask around for someone who can rack metal and has peering, there are people (including me) who could get you co-lo in <16U[0] with redundancy, where your only monthly infra charges would be the co-lo fees.

[0] Old, extremely beefy servers are generally large (4U), but dirt cheap for what you get. Ex: an 80-thread, 512GB RAM, 8-SAS-bay HP server, $800 shipped. And I bought those 6 years ago. However: a 5950X, 128GB RAM, 24-SATA-port build can be had for <$2,000 (I'm guessing based on what I paid a few years ago), and that's roughly equivalent in power (a kernel compile takes 3 seconds longer on the 5950X, but it uses 1/4 the power at the wall). The reason I tagged on <16U is that at most you're gonna need 4x4U: two "front end" and two "back end" machines, with duties split and everything redundant in the rack. I haven't looked in a while to see what's available on eBay as far as more density, but for sure 16U or less!

The issue becomes: how do you find someone who knows how to do all that and is willing to work for next to nothing because they believe in the NGO/NFP? Maybe there's a tech forum that people like that read, who knows.

Good luck, and thank you for doing things to help other people. I hope it all works out in the end.

email in profile.



RAM as storage for a DB? So, data loss on reboot? Sounds like a very specific use case.


https://en.wikipedia.org/wiki/Fusion-io

Maybe I misspoke. At one point there was battery-backed DDR RAM on PCIe cards, but by the time I came around they were using Fusion-io PCIe devices, which I guess were NAND flash, not DDR. Or, alternatively, that is how it was explained during onboarding: "it's like DDR on a PCIe card, so the IOPS are 1000x that of 10k SAS drives."

Unless you're talking about our tmpfs experiment; then yeah, the use case was "genewitch heard Bill Gates say 640K should be enough for anyone; here's a super-beefy machine to test that theory; theory tested." We didn't run the site live on that machine for more than 10 minutes or so; we switched back to the Fusion-io-backed server immediately. It was a proof of concept of one of the things we could do with these new servers: read replicas with the DB in tmpfs, for extreme speed and no IO blocking.
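If anyone wants to play with the same idea, the mechanics look roughly like this on a modern Postgres (12+). Everything here is a hedged sketch: the primary address, replication user, and sizes are invented, and our original setup predates pg_basebackup's -R convenience flag:

    import subprocess

    PRIMARY_HOST = "10.0.0.10"   # hypothetical primary
    TMPFS_DIR = "/mnt/pg_tmpfs"  # data directory living entirely in RAM

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Carve out a RAM-backed filesystem big enough for the whole DB.
    run(["sudo", "mkdir", "-p", TMPFS_DIR])
    run(["sudo", "mount", "-t", "tmpfs", "-o", "size=280g", "tmpfs", TMPFS_DIR])
    run(["sudo", "chown", "postgres:postgres", TMPFS_DIR])
    run(["sudo", "chmod", "700", TMPFS_DIR])  # Postgres insists on 0700

    # 2. Clone the primary into it; -R writes the standby config so the
    #    copy comes up as a streaming read replica.
    run(["sudo", "-u", "postgres", "pg_basebackup", "-h", PRIMARY_HOST,
         "-U", "replicator", "-D", TMPFS_DIR, "-R", "-X", "stream"])

    # 3. Start it. Reads never block on IO; a reboot throws the copy
    #    away, which is fine for a disposable replica.
    run(["sudo", "-u", "postgres", "pg_ctl", "-D", TMPFS_DIR, "start"])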



