Show HN: Redeploying Debian-Administration.org, as a cluster (debian-administration.org)
17 points by stevekemp on Jan 21, 2013 | hide | past | favorite | 10 comments


Does Varnish run on all four machines on the off chance they will claim the IP, or does it come up when needed?

When a machine becomes the Varnish server, does it stop being an Apache server? What port do you run Apache on?


Apache runs on all hosts, on port 8080. It cannot run on the loopback adapter, because it has to be externally visible to the varnish instance which is running and listening on the floating IP. (However it is firewalled such that it is only accessible within the VLAN.)
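A minimal sketch of that firewall rule; the interface name and VLAN subnet below are assumptions for illustration, not taken from the thread:

```shell
# Allow Apache's port 8080 only from the internal VLAN; drop everything else.
# eth1 and 10.0.0.0/24 are hypothetical -- substitute the real VLAN interface/subnet.
iptables -A INPUT -i eth1 -s 10.0.0.0/24 -p tcp --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP
```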

The floating IP is implemented via ucarp, and when a host becomes the master, pound and varnish are both started automatically. (They should be stopped when a machine loses the master-IP, but this is not currently done.)
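A hedged sketch of what that ucarp arrangement might look like; all addresses, the password, and the script paths here are invented for illustration:

```shell
# Run on each node; whichever node holds VHID 1 owns the floating IP.
# All IPs, the password, and the paths are hypothetical.
ucarp --interface=eth0 --srcip=10.0.0.11 --vhid=1 --pass=secret \
      --addr=10.0.0.10 \
      --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh

# /etc/ucarp/vip-up.sh -- runs when this node becomes master:
#   ip addr add 10.0.0.10/24 dev eth0
#   service varnish start
#   service pound start
# /etc/ucarp/vip-down.sh would do the reverse; as noted above,
# the stop side isn't currently implemented.
```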


Hi Steve, with much of your content static (or close thereto), would you be able to comment on whether you evaluated serving through a CDN such as Amazon CloudFront or CloudFlare? Thanks!


I'm very familiar with load-balancing, apache, varnish, mysql, etc. I think because of that, using a CDN didn't even cross my mind (although I think one might be a good fit for this type of site).


Thanks for the reply!


Interesting read, as ever. Would be interesting to hear other approaches to the cache invalidation problem.


There are a couple of different ways to go that I considered:

Reinstate the use of memcached, at the object level. Rather than storing things on the localhost, each server would talk to the central memcache instance. That would keep each of the webservers from holding a local, and potentially stale, cache.
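As a rough sketch of that object-level approach, using memcached's text protocol over nc; the hostname, the key, and the `fetch_from_db` helper are all hypothetical:

```shell
KEY="article:123"                        # hypothetical object key
HOST=memcache.internal                   # hypothetical central memcached host

# Try the shared cache first; line 2 of a "get" response carries the value.
VALUE=$(printf 'get %s\r\n' "$KEY" | nc -q1 "$HOST" 11211 | sed -n 2p)

if [ -z "$VALUE" ]; then
    VALUE=$(fetch_from_db "$KEY")        # hypothetical database lookup
    # Store with a 300-second TTL so every webserver sees the same copy.
    printf 'set %s 0 300 %s\r\n%s\r\n' "$KEY" "${#VALUE}" "$VALUE" |
        nc -q1 "$HOST" 11211
fi
```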

The other way to go would be more fine-grained caching with varnish. Right now we cache in-RAM as much as possible, and on any event which can change the state of the site (new comment posted, new article posted, etc.) we throw away the _whole_ cache. That's quite a heavyweight solution, but it was fast to implement and provably never leaves stale content around.
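That heavyweight flush can be sketched as a ban that matches every cached URL, assuming varnish 3's management interface; the event hooks that would trigger it are left implicit:

```shell
# Invalidate the entire varnish cache: every cached object's URL matches ".".
# Run this from each state-changing event (new article, new comment, ...).
varnishadm "ban req.url ~ ."
```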

(I once set up a site for a friend where we did the same thing; the whole site was cached, and the cache was blown away every time a POST request came in. That worked perfectly, and inspired me.)

Using varnish PURGE requests, though, we could uncache only the things that have changed. The reason I didn't go down that road is that there are non-obvious dependencies. For example, if there is a poll vote the front page must be thrown away, because it has a running total of the votes applied. Keeping track of all the things that should be expired on a single event would be a headache.
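For comparison, per-object invalidation might look like the sketch below; the URLs are hypothetical, and a real setup also needs a vcl_recv handler that accepts (and ACL-restricts) the PURGE method:

```shell
# Evict just one page from the cache (hypothetical URL).
curl -X PURGE http://localhost/article/123

# The dependency headache described above: a single poll vote would also
# need at least
#   curl -X PURGE http://localhost/
# plus every other cached page that embeds the running vote total.
```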


Steve: do you still work for Bytemark?

If so, tell them to hurry up and let us sign up for bigv.io :-)


Yes I do, and if you'd like to be invited please do drop a mail to the support address.


Already gone through the beta with the vkey, etc. (I no longer use it). I'm just finding it hard to justify to some clients with the word "beta" stuck on it (even though it's pretty damn stable).



