I can't believe nobody has mentioned naive Bayesian text classification yet. It sounds like it could work wonders for Twitter. I'm much more likely to be interested in tweets with words like "hylomorphism" than tweets with words like "omglol", and a text classification algorithm could learn that if you trained it up some. It doesn't have to be perfect; it just has to improve the signal-to-noise ratio significantly.
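To make the idea concrete, here's a minimal naive Bayes sketch along those lines. The training tweets and the two classes ("signal"/"noise") are made up for illustration; a real filter would train on tweets the reader actually rated:

```python
import math
from collections import Counter

# Hypothetical training data: tweets this reader liked vs. tuned out.
signal = ["fascinating paper on hylomorphism and recursion schemes",
          "new blog post on type-level programming"]
noise = ["omglol just had the best breakfast",
         "omglol beer time with friends"]

def train(docs):
    """Aggregate word counts over a list of tweets."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

sig_counts, noise_counts = train(signal), train(noise)
vocab = set(sig_counts) | set(noise_counts)

def log_prob(tweet, counts, prior):
    """Log P(class) * P(words | class), with Laplace (add-one) smoothing."""
    total = sum(counts.values())
    lp = math.log(prior)
    for word in tweet.lower().split():
        lp += math.log((counts[word] + 1) / (total + len(vocab)))
    return lp

def classify(tweet, prior_signal=0.5):
    """Pick whichever class gives the tweet higher posterior probability."""
    if log_prob(tweet, sig_counts, prior_signal) > \
       log_prob(tweet, noise_counts, 1 - prior_signal):
        return "signal"
    return "noise"
```

With this toy data, `classify("a talk about hylomorphism")` comes out "signal" and `classify("omglol breakfast")` comes out "noise", which is exactly the ranking behavior you'd want: it doesn't have to be perfect, just better than chance.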
I've looked into this a bit, albeit more in a spam-filtering context; tweets give naive Bayes very little text to latch onto. 140 characters is 20-30 words, tops. That is so few words that it's hard to move the prior very much, unless there are blockbuster words that almost always indicate a bad tweet, such as the "breakfast" and "beer" the article suggested.
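You can see the scale of the problem with some back-of-the-envelope odds arithmetic. The per-word likelihood ratios below are invented purely to illustrate magnitudes, not measured from any corpus:

```python
import math

def posterior(prior, likelihood_ratios):
    """Update prior odds of 'bad' with per-word ratios P(word|bad)/P(word|good)."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# A 25-word tweet of mildly suspicious words (each 5% likelier in bad tweets)
tweet = posterior(0.5, [1.05] * 25)    # ~0.77: still quite uncertain

# A 300-word email with the same mild per-word evidence
email = posterior(0.5, [1.05] * 300)   # ~1.0: essentially certain

# One blockbuster word, 50x likelier in bad tweets
blockbuster = posterior(0.5, [50.0])   # ~0.98: decisive on its own
```

The same weak per-word evidence that nails a 300-word email leaves a 25-word tweet in the uncertain zone, which is why you end up depending on blockbuster words.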
The article's whole premise was that tweet quality does not correlate well within an account; e.g., some marvelous Twitter streams include breakfast tweets.
I think statistical learning classifiers could work, even with the brevity of tweets. However, as the author says, one person's gold is another's garbage. You'd need a platform that supports a personal classifier for each tweet consumer.
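A sketch of what such per-consumer classifiers might look like, with the users, labels, and training tweets all hypothetical. Each reader trains only on their own ratings, so the same tweet scores differently for different people:

```python
import math
from collections import Counter, defaultdict

class PersonalFilter:
    """A naive-Bayes-style filter owned by one reader, trained on their own labels."""
    def __init__(self):
        self.counts = {"signal": Counter(), "noise": Counter()}

    def label(self, tweet, cls):
        """Record this reader's verdict ('signal' or 'noise') on a tweet."""
        self.counts[cls].update(tweet.lower().split())

    def score(self, tweet):
        """Log-odds that THIS reader would call the tweet signal (positive = signal)."""
        vocab = set(self.counts["signal"]) | set(self.counts["noise"])
        if not vocab:
            return 0.0  # untrained filter has no opinion
        score = 0.0
        for word in tweet.lower().split():
            p = {cls: (c[word] + 1) / (sum(c.values()) + len(vocab))
                 for cls, c in self.counts.items()}
            score += math.log(p["signal"] / p["noise"])
        return score

# The platform keeps one filter per consumer.
filters = defaultdict(PersonalFilter)
filters["alice"].label("hylomorphism talk was great", "signal")
filters["alice"].label("beer omglol", "noise")
filters["bob"].label("beer tasting notes new saison", "signal")
filters["bob"].label("hylomorphism lecture", "noise")
```

Here `filters["alice"].score("beer")` comes out negative while `filters["bob"].score("beer")` comes out positive: one person's garbage, the other's gold.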