Why you cannot trust this model, 101: because its practitioners understand the value of the labor/product, capitalize on the OSS ideal, and then push it on the unsuspecting.
I don't know how many bait-and-switch episodes from Google I need to see, but by 2008 I was done with them -- for the first time. Ad nauseam.
If you boycotted Google and didn't use it, they would wither and die and stop being something to worry about. The same goes for their immediate Google-specific free offerings that are sucker bait, for their self-aggrandizing standards/inclusions, and for buying up the overt talent. But whatever. The HN community is full of dupes and buy-ins, culture whores, and fifth columnists. True belief in OSS died when everyone else took it and got rich.
So... you save $9k a year in recurring costs, and with $68k in up-front equipment costs it will be more than five years before you break even.
And that's assuming you never need to scale up or down quickly, and that you're limited to one colo instead of being able to expand to multiple regions the way you can with AWS.
And that's not even taking into account the cost of the brain power to make sure your hardware stays up and running.
Rolling your own stuff in a colo doesn't sound like a very good idea in this case. But I guess it's job security if you're the sysadmin.
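Rough arithmetic, for anyone who wants to plug in their own numbers (the two figures below are just the ones from this thread, not independently verified):

```python
# Back-of-the-envelope break-even check using the figures quoted above:
# ~$68k of up-front hardware vs. ~$9k/year saved over AWS.
upfront_hardware_cost = 68_000   # one-time colo/server spend ($)
annual_savings_vs_aws = 9_000    # recurring savings vs. AWS ($/year)

break_even_years = upfront_hardware_cost / annual_savings_vs_aws
print(f"Break-even after ~{break_even_years:.1f} years")  # -> ~7.6 years
```

And that's before any of the scaling events mentioned above reset the clock.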
> And that's not even taking into account the cost of the brain power to make sure your hardware stays up and running.
Although, as I said upthread, I agree that AWS is very likely ideal for this particular deployment size, let me try to dispel this oft-repeated myth.
Modern server hardware takes almost no "brain power" (or effort of any kind) to keep up and running.
We aren't living in the days of the early dot-com boom where Linux-on-Intel in the datacenter could mean flimsy cases, barely rack-mountable, with nary a redundant part to be seen.
Applying some up-front "brain power", one can even choose and configure hardware so as to provide things like server-level redundancy, if that's important and/or preferable to intra-server redundancy (think Hadoop), or the ability to abandon failed mechanical disks in place instead of ever having to replace one.
This is the main "sweet spot" for AWS (or "cloud" infrastructure in general): small scale.
I am generally a strong proponent of using one's own hardware in a colo or on-premises, instead of or in addition to the cloud (primarily for "base" workload).
However, if the entirety of your needs can fit into a single rack, even I will advocate for AWS, since "convenience" is, perhaps, not strong enough a word.
I do think your server and storage prices are around $25k too high, but that's easy to do buying brand-name gear and/or not negotiating with multiple vendors on price (which is particularly tough at low volume unless you're a startup with a credible growth story). And that's assuming such an expensive CPU (in comparison to so little RAM) isn't foolishly profligate, along with the other hardware choices. Of course, this underscores the point (on which we agree) that, as a rule, it's just not worth that much time and effort for so little.
I'll take your word on the AWS pricing, as it's fairly predictable, if very tedious to perform the prediction. The main "gotchas" I've found people run into are forgetting to add in EBS costs for EC2 instance types without (or without comparable) local storage and underestimating data transfer costs.
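If it helps, those two gotchas are easy enough to sanity-check with a few lines. The per-GB rates below are only illustrative (roughly published us-east-1 list prices), and the volume and transfer sizes are placeholders, so substitute your own:

```python
# Rough check for the two commonly forgotten AWS line items:
# EBS storage and data transfer out. Rates and sizes are assumptions
# for illustration only -- plug in your own region, volume type, and usage.
ebs_gb      = 2_000   # provisioned gp2 storage (GB) -- placeholder
ebs_rate    = 0.10    # $/GB-month, roughly the gp2 list price
egress_gb   = 5_000   # monthly data transfer out (GB) -- placeholder
egress_rate = 0.09    # $/GB, roughly the first-tier internet egress price

monthly_extra = ebs_gb * ebs_rate + egress_gb * egress_rate
print(f"Often-forgotten monthly extras: ~${monthly_extra:,.0f}")  # ~$650/month
print(f"Over a year: ~${monthly_extra * 12:,.0f}")                # ~$7,800/year
```

Even at modest sizes, those line items add up to a meaningful fraction of the yearly bill.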
You'll have to trust me that this example's hardware spec and requirements are for a basic/base site.
You can thin the profile and increase the # of chassis, compromise on redundancy, etc., but experience has shown that this arrangement is most cost effective. A kinetic-event impact modeling system with RT data delivery -- that should answer your conjectures.
No large vendors used in this example -- Thinkmate or Aberdeen Supermicro re-brands for due diligence and warranty.
> You can thin the profile and increase the # of chassis, compromise on redundancy, etc
No, I wouldn't suggest more chassis, as that's almost always more expensive (it's tough to break even on that $1k minimum buy-in on a server).
I believe your workload needs the resources you say. It just happens to be a remarkably rare ratio, hence my remark.
> No large vendors used in this example -- Thinkmate or Aberdeen Supermicro re-brands for due diligence and warranty.
The vendor doesn't have to be large to jack up the price. Any re-brand is super suspicious. To me, a large part of the point of a commodity server product is that the reliability is predictable (and therefore easy enough to engineer for/around). Paying extra for "diligence", warranty, or hardware support is just flushing money down the toilet.
A fee for custom assembly and/or a basic smoke test is fine, but it had better be a flat rate per server and on the order of $100. Technician labor isn't that expensive.
Larger or "enterprise" vendors are merely the extreme version of this, with upwards of a 10x premium on something like storage arrays, especially once support contracts are included.
You seem to be an absolute type of planner. I used to approach IT management and provisioning that way some years ago, before being confronted with the realities of small and large business. One size obviously does not fit all, and sometimes you take shortcuts... usually you pay for them later.
I agree with your cautions around Supermicro resale, but the warranty support and build diligence are absolutely necessary for a small business. Having a good business relationship with a trusted hardware provider whose gear always performs the first time is priceless.
I don't know what an "absolute type of planner" is, but I consider myself an engineer and a pragmatist. I'm well versed in the realities. In reality, with business, there's no such thing as "priceless", only risk, and risk is, generally, quantifiable. With enough data, it's easily quantifiable.
I admit that, having an affinity for startups rather than more traditional small businesses, I have a greater appetite for risk. Ironically, perhaps, I'm usually the voice of risk-aversion with respect to IT infrastructure, so I don't believe it affects my overall understanding.
I recently pointed out to an interviewer, who was trying to convince me that it was worth spending half a megabuck on a petabyte from NetApp because it was "business critical" instead of a tenth of that amount for DIY, that NetApp, just like the DIY solution, does not indemnify the business against loss. One isn't buying insurance, only a bunch of technology.
Sure, "works the first time" is worth something. Is it worth the cost of a whole, complete, extra server on an order of qty 6? If the infant-mortality rate on servers is anywhere approaching 1-in-6, and they're being shipped somewhere where the replacement time and/or cost would be prohibitive, I'd still probably rather just order 7 servers instead.
That's my main problem with paying a vendor for "reliability": it's a very fuzzy, hand-wavy assurance. Paying for reliability with more hardware has data and statistics behind it, which is an engineering solution.
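To make "paying for reliability with more hardware" concrete, here's the kind of arithmetic I mean, using the deliberately pessimistic 1-in-6 infant-mortality figure from above (real DOA rates on commodity servers are far lower):

```python
# Probability that enough servers arrive working, assuming independent
# per-server failures. The 1-in-6 failure rate is this thread's worst-case
# illustration, not a real-world figure.
from math import comb

def prob_enough_working(needed, ordered, p_fail):
    """Chance that at least `needed` of `ordered` servers arrive working."""
    p_ok = 1 - p_fail
    return sum(
        comb(ordered, k) * p_ok**k * p_fail**(ordered - k)
        for k in range(needed, ordered + 1)
    )

p_fail = 1 / 6
print(f"Order 6, need 6: {prob_enough_working(6, 6, p_fail):.0%}")  # ~33%
print(f"Order 7, need 6: {prob_enough_working(6, 7, p_fail):.0%}")  # ~67%
```

One spare roughly doubles the odds of a clean deployment even in that worst case, at a known, fixed cost -- which is exactly the quantifiable kind of assurance I mean.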
That is some serious setup. I can't see how you get close to this only spending $25K a year on AWS -- maybe the price I was quoted for my needs was some sort of sucker's price.
AWS is a feature factory, and they are breaking their own back. Today I had an issue where an AMI created as a golden image for an application feature set (with a very modest price tag at t1.supersmall or whatever) does not scale up into high-end compute instances due to lack of support for ENA. That was never the case before.
Rolled it back into KVM/QEMU in colo with a glue layer REST interface over virsh and will never look back.
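For anyone curious what a "glue layer REST interface over virsh" can look like, here is a minimal illustrative sketch -- not our actual code. Flask and the endpoint names are arbitrary choices; only the virsh sub-commands (list, start, shutdown) are standard:

```python
# Minimal REST shim over the virsh CLI -- an illustration of the idea,
# not a production implementation (no auth, no error mapping).
import subprocess
from flask import Flask, jsonify

app = Flask(__name__)

def virsh(*args):
    """Run a virsh command and return its stdout, raising on failure."""
    result = subprocess.run(
        ["virsh", *args], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

@app.get("/domains")
def list_domains():
    # 'virsh list --all --name' prints one domain name per line
    names = [n for n in virsh("list", "--all", "--name").splitlines() if n]
    return jsonify(names)

@app.post("/domains/<name>/start")
def start_domain(name):
    return jsonify({"output": virsh("start", name)})

@app.post("/domains/<name>/shutdown")
def shutdown_domain(name):
    return jsonify({"output": virsh("shutdown", name)})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080)
```

The point is that the layer stays thin: the hypervisor does the work, and the REST surface just makes it scriptable from the rest of the stack.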
Of course we don't use containers... they don't offer an overt benefit in HPC, and I don't think they ever will.
I think devops was defunct once the asphalt hit the hardpan.
Approaching 'generic' secure operational environments as programmable, iterable and contained is one of the great IT lies of the last 20 years.
God, I can empathize, even if only from a much less prominent vantage point. The skills, will, and effort involved are at a level far beyond what someone 50+ can reasonably sustain into their early 70s (if that is the prospect).
I don't know. If this is really how software is written in large-scale web service environments, you will always have problems. It just seems like sh*t to me.
Create a user and env to run a one-off build + application. DJB cracks me up. He may have the right thing in mind, but this type of prophylactic approach is no longer proof against anything.