An interesting way to implement this would be to just analyze tweets by @newsyc20 and resolve the domains behind the Bitly links. The advantage here is that you don't have to scrape the page (or even interact with Hacker News at all) to determine what gets ranked highly; the disadvantage is that it's the top 20 instead of the top 30 and it relies on the Twitter feed.
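Something along these lines would do the domain counting (rough Python sketch; how you collect the @newsyc20 timeline is left out, and the `bitly_links` list below is just a placeholder):

    from collections import Counter
    from urllib.parse import urlparse

    import requests

    def resolve_domain(short_url):
        """Follow the Bitly redirect chain and return the final domain."""
        # A HEAD request is usually enough to follow redirects without
        # downloading the article body.
        resp = requests.head(short_url, allow_redirects=True, timeout=10)
        return urlparse(resp.url).netloc.removeprefix("www.")

    # Hypothetical input: the Bitly links pulled from the @newsyc20 feed.
    bitly_links = ["https://bit.ly/xxxxxxx", "https://bit.ly/yyyyyyy"]

    domain_counts = Counter(resolve_domain(url) for url in bitly_links)
    print(domain_counts.most_common(10))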
Even more interesting would be if there were a Twitter statistics site that this could be fed into. (i.e., no code.)
Awesome (although I'm trusting you that the query properly gets posts that made the top 30). This Bitly link will let you see the pretty-printed version without having to manually mangle the URL.
Top ten for the impatient: github.com, medium.com, youtube.com, techcrunch.com, nytimes.com, bbc.co.uk, wired.com, en.wikipedia.org, arstechnica.com, theguardian.com.
You're onto me ;) Hopefully it gets enough votes to get noticed. Then come Monday we will see the following headline: "I Built a HN Leaderboard for Sources over the Long Weekend."