requinot59's comments | Hacker News

> basically everything is done in French, and much of it is required to be in French.

« La langue de la République est le français. » ("The language of the Republic is French"), first sentence of the second article of the French Republic Constitution (see http://www.legifrance.gouv.fr/affichTexte.do?cidTexte=LEGITE...).

Since universities are public (and therefore republican) schools, it makes sense to require them to apply and respect the Constitution.

The idea is that any French citizen should be able to access, read, and understand any published university thesis (PhD), since they paid for it (via taxes). In the same vein, students of the École Polytechnique are required to work five years for the State after their studies, since the State paid for such a good education.

The French can always go to a private school (or emigrate) if they don't agree with that.


"Since universities are public (and therefore republican) schools, it makes sense to require them to apply and respect the Constitution."

Many constitutions have a clause like that. Encouraging international visibility, which requires English language skills, is not contrary to that.

"The French can always go to a private school (or emigrate) if they don't agree with that."

The point is that they're cutting off their nose to spite their face. French researchers and students, and citizens more generally, are isolated in their own little world because their strange sense of 'pride' is causing them to be so. I don't care, really; but I do sometimes feel sympathy for those I talk to who realize this and (rightfully) blame the system for their lowered chances at international success.


> Encouraging international visibility, which requires English language skills, is not contrary to that.

No, it's not, and international visibility is obviously a good thing! But there is a problem if a PhD is published only in English, because then there might be French citizens who will not be able to read a work they paid for (remember, the public education system is funded by the taxpaying citizens). Since most people complaining about the "language restriction" problem actually imply that they should be able to publish in English only, I pointed out that this would be unconstitutional in France. Additionally, publishing in English has never been forbidden or discouraged!

> I do sometimes feel sympathy for those that I talk to who realize this and (rightfully) blame the system for their lowered chances at international success.

This system is the application of the law. French law is the consensus of all French citizens on the way they want to live together. If you, individually, don't like the law, you can campaign for a change (possible, since France is a democracy), or leave the country if you really can't live with this general consensus. Dura lex, sed lex.

In this specific case, you can also just go to a private school which will let you write in English or whatever. Sure, you'll have to pay there; sorry, you can't have free public education with no duties in return in France.

Ideologically speaking, the French education system is very egalitarian and very meritocratic. Lots of people don't agree with it (the majority still does). That's why private schools were allowed (after a heated public debate). But it's a chosen, working, and interesting system. Yes, you can "rightfully blame the system", in the sense that you have the right to disagree, but otherwise it's no more "rightful" to blame it than to blame any other working system (e.g., the US one). It's just a different conception of education and its place in society.


That seems reasonable; I didn't actually mean it as a criticism, just an explanation. Publishing only in French has downsides for readability outside France, but as you say, publishing only in English has downsides for readability within France. Most CS researchers and students I've met would prefer the English solution, mostly because they feel that, unlike in fields like philosophy or political science, the average French person isn't interested in reading their work anyway--- the only person who's going to read a theoretical CS thesis is another theoretical CS researcher. And in that case, they'd prefer the international community of theoretical CS researchers to be able to read their work.

Denmark is an interesting example of the opposite case: there are now graduate degree programs where you cannot study in Danish, because all courses and coursework are English-only. That's controversial to some extent, for obvious reasons. It does have some upsides from an international perspective, since Denmark can now hire researchers who don't speak Danish (which is why I'm in Denmark currently), and can also accept PhD students from other countries without requiring them to learn Danish first. But the situation differs from France because Danish has many fewer fluent speakers (about 6 million), so works published in Danish reach a much smaller audience than works published in French do.


> But there is a problem if a PhD is published only in English, because then there might be French citizens who will not be able to read a work they paid for (remember, the public education system is funded by the taxpaying citizens).

Maybe those citizens should learn English.

But one of the main reasons why so many people in France don't speak English is precisely their broken educational system. (This is not exclusive to France, Spain for example is very similar.)

> Ideologically speaking, the French education system is very egalitarian and very meritocratic.

I don't see how a system can be both egalitarian and meritocratic.

> But it's a chosen, working and interesting system.

It is a system that fails the majority of the population and only benefits the extremely small elite that gets to go to the Grandes Écoles.


"I don't see how a system can be both egalitarian and meritocratic."

Egalitarian: education is free for all, and rich and poor (normally) go to the same school, the one in their town, until at least 16. Equality of opportunity.

Meritocratic: but after 16, you go to a university or a grande école according to your abilities/rank. The best can go to Polytechnique, the good can go to a grande école or a good university, and the rest go to the other universities (and it's still free for all).

It doesn't matter if your parents are poor or if you were born in the countryside and not in Paris: if you're great, you can go to Polytechnique. And no matter how rich your parents are, if you suck, you won't go to Polytechnique.


Thanks for sharing a link only Facebook users can access. In the pure hacker spirit: open, transparent access for anyone.


> Sometimes I just want to create some static pages, but I was forced to use either CMS or things like wordpress.

Wtf?! To create static pages, I open Emacs and type. No need for a CMS or WordPress to create an HTML page!

Then to host it:

  scp page.html me@srv.com:~/public_html/
and you're good to go.

Hosting a website on S3 is nice, but it's not simpler than what is already possible if you own a server. If you don't already own one, I'm not sure setting up an AWS account + client code is easier than creating a new VPS account. I would certainly prefer to set up a Linux VPS, which is an environment I'm comfortable with.

Regarding the costs, I rent one private (not virtual) server for 20 euros/month, and host several (static and dynamic) sites on it, so it's actually cheaper than going the AWS route. Uptime last time I checked: 560 days (i.e. reliable enough for me).


You misunderestimate the ineptitude of unwitting sysadmins such as myself. It takes me days to set up a VPS. It took me a good few weeks to figure out how to get Django working. I screwed up my latest VPS so badly that I had to scrap the thing and start again.

Some people really suck at these things.

> Regarding the costs, I rent one private (not virtual) server for 20 euros/month, and host several (static and dynamic) sites on it, so it's actually cheaper than going the AWS route. Uptime last time I checked: 560 days (i.e. reliable enough for me).

S3 will cost literally pennies for hosting a simple static website. A VPS is orders of magnitude more expensive.


The cost of S3 is around 0.1 euro per gigabyte. For the amount that you are currently paying per month, you could serve 200 gigabytes through S3. Of course, that doesn't include storage costs, but those should be negligible for static sites.


It might be better to use proper rlimits, and not let a process gone wild trigger the OOM killer. Swap just delays the problem.
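As a sketch of the rlimit approach (the limit value and program name here are hypothetical), a per-process address-space cap can be set right from the shell, so a runaway allocation fails with ENOMEM instead of dragging the whole box into swap:

```shell
# Hypothetical sketch: cap one process's address space with ulimit.
# The subshell keeps the limit from affecting the rest of the session.
(
  ulimit -v 524288            # virtual memory cap, in KiB (here 512 MiB)
  exec ./possibly-runaway-app # allocations past the cap fail with ENOMEM
)
```

A real deployment would more likely use setrlimit(2) in a service wrapper or limits.conf, but the effect is the same: the misbehaving process dies alone.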


The problem with fixed numeric limits is that changes in software and hardware require changes to the limits; it's easier to just watch swap and get notified.


> I often just turn swap off entirely.

And you're smart to do so. Swap is useless for 99% of end-user systems. ChromeOS doesn't use any swap partition, for instance.

As for hibernation, it's not available if you use encrypted swap, which is wise to use (at least on a laptop).

As a result, I, too, disabled swap completely.
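For reference, the encrypted swap mentioned above is commonly set up with a throwaway random key regenerated on each boot. A sketch of the Debian-style config (the device and mapping names are placeholders):

```
# /etc/crypttab -- encrypt /dev/sda2 with a fresh random key every boot
cswap  /dev/sda2  /dev/urandom  swap,cipher=aes-xts-plain64

# /etc/fstab -- use the resulting mapping as swap
/dev/mapper/cswap  none  swap  sw  0  0
```

The random key is exactly why hibernation breaks: the image written at suspend can't be decrypted after reboot, because the key is gone.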


"And you're smart to do so."

Finally, that's the first comment I get, instead of someone extolling the virtues of swap and babbling warmed-over 1990s rules of thumb about "twice the RAM is the recommended size of your swap file" as if I'm going to wait for even 200MB of swap to fill up before flipping out and killing the offending process, let alone 8GB.


>Finally, that's the first comment I get, instead of someone extolling the virtues of swap and babbling warmed-over 1990s rules of thumb about "twice the RAM is the recommended size of your swap file" as if I'm going to wait for even 200MB of swap to fill up before flipping out and killing the offending process, let alone 8GB.

Those people have doubtless misunderstood the point of swap. You should have a swapfile/partition because it allows allocated but currently unused memory (from an application which keeps data hanging around that is not needed for most of its working life, or an application which simply leaks) to be dumped to long-term storage, thus freeing memory for its real use: page cache. Sweet, sweet page cache.

I'm always happy to see a few tens, even a couple of hundred MB of swap in use, because it means that some application had some unused data hanging around for so long that to leave it there would mean my machine having to read from disk more frequently, which would be Bad.


> "twice the RAM is the recommended size of your swap file"

Don't listen to the parrots that repeat something that ceased to be true at least 10 years ago. Swap used to be a useful hack; it isn't anymore.

> as if I'm going to wait for even 200MB of swap to fill up before flipping out and killing the offending process

Well said ;-)


Hibernation requires a large swap space, doesn't it? Last I knew it did, perhaps you can compress it now.


Yes, and it may seem a merely semantic difference, but there's still a difference between "swap space" and "hibernation backing". One I'm willing to wait for while it fills, the other, I am not. The kernel may not distinguish, but I do. When I don't care about hibernation I just remove it.


Right, swapoff is totally acceptable imo if you're having trouble with swap. I was commenting on how "double the size of RAM" is a bad or outdated guideline. There's been a few times where I've regretted not making my swap space big enough, sometimes when getting a RAM upgrade and wanting to do hibernation, etc. My disk isn't super pressed for space so in my mind there's no real reason to be stingy, and then you don't have to grow/shrink partitions if an upgrade occurs after the initial disk setup.


Swap still provides one important function: it allows large, inactive, long-running processes to be moved from RAM to make room for more caches and buffers.

That said, if you never fill your RAM with buffers/cache, then of course swap makes no sense.


On the other hand, one could say that moving a large process from RAM to swap certainly makes it nearly inactive and long-running...

Memory is cheap. I'd rather pay a little more and have the long-running processes stay in memory than worry about my more active processes ending up in swap accidentally. If I wish to reserve 1GB for these long-running processes in RAM instead of swap, I can still win by buying 2GB more RAM.

Besides there's always some disk-backing you can't generally avoid: pages containing read-only executables can be purged from memory when unused and re-read from the original .so or binary when needed. This is something that would never go to swap anyway.


Care to explain why this is so?


Stop measuring your swap space in terms of disk space, start imagining it in terms of "amount of time it takes to fill up". At a nice round 20MB/sec write, it would take a solid 400 seconds to use that much swap at full speed. In reality, you can't fill it that fast either, because it's seeking, also yanking other stuff out to run your other processes, and then sticking the stuff back in to run those other processes, so it's really a mess; the real amount of time it could take just to make initial real use of 8GB of swap could be days, no joke. (In other words, I don't care if you have a 80MB/s drive, it really doesn't matter much, plus swap usage tends to be full of seeks anyhow so it's not like you're going to get 20MB/s either.) Given that a modern system is, relatively speaking, brought to a near-complete halt by being in swap, what possible process are you going to run where you are willing to put up with your system being brought to a halt for even tens of minutes at a time, let alone the hours or days it'll take to fully utilize 8GB?

Clearly, you don't need and basically can't use an 8GB swap partition. So, how long are you willing to put up with? That will vary, but let's say 10 seconds before you're "flipping out and killing processes". That's a 200MB swap partition. But... that's only 5% of the size of your RAM! If that's the difference between a process completing or not, you've probably already lost. Or you should just kill Firefox.

On a 4GB system, the most likely reason a process is pushing you into swap is that it is in an infinite allocation loop, and all having 4GB of swap does is make your system crawl that much more before the process dies.

This is a result of RAM sizes increasing far faster than hard drive write speed has. When I had 32MB of RAM, it made sense to have some swap. I could swap out, say, 16MB of unused executable pages (bits of windows, bits of drivers I'm not using, bits of the massive Office suite I'm not using, etc) and get that much more working set, and this could happen in a reasonable time; the system choked for a couple of seconds but recovered in a stable manner. Now swapping out 16MB of executable is a joke. SSDs may change the balance, but these balances have been out of whack for a long time, I rather suspect that even with an SSD it won't be worth swap. Especially since by the time SSDs are truly common 8GBs of RAM may be entry-level because, well, why not? (Poking at Best Buy really quickly, at the $500 line you get 4GB for laptops, a little over $300 for desktops, coming down fast. I'm not sure they have anything less than 2GB now and even that is really into "don't use swap" for the average user.)


Please note that this doesn't apply all the same to servers. I have servers running with 16 GB of RAM and 16 GB of swap (because I simply can't even stop them to add RAM), and even then about twice a year they run out of memory and the OOM killer does its dirty job. However without swap they used to simply crash and burn, so swap is fine in this case.


The only thing I can possibly imagine that a server can do with 16GB of swap is leaking garbage like a sieve and then getting the garbage swapped out. On an incredibly local basis, yes, this might be a better idea than not having swap, but as a generalized reason for servers to have swap it's terrible. If that's not what's happening I am all ears as to what the situation really is, but I'm sure it'll be something very unusual; in general if your server has even the slightest need for performance it can't use swap.


Not a memory expert myself, but a friend of mine dismisses overly large swap as consuming more RAM to handle its addressing; that is, the more swap you have, the less of your RAM you can use.

I'm not sure what the consumption rate is, but he won't create general servers with more than 2GB swap (if that), no matter how much RAM the system has.


You're probably never going to hit the swap for a good reason. You might hit it if you make a mistake (resizing an image by 5000% and not 500%), in which case you will suffer if you let your gigabytes of swap get filled, or if you have specific needs (video editing, and even then...).

Keeping a swap partition on a 2011 computer is 1. a waste of disk space, and 2. an unnecessary source of worry (2-1. it may leak some information, even if your other partitions are encrypted, if your computer is stolen; 2-2. it's a source of potential bugs).


As a side note, news.ycombinator.com should really have HTTPS access.

Passwords and cookies in clear HTTP are no good. Everyone here knows (or should know) it. Firesheep proved it. GMail and Zuckerberg suffered from it.

Just buy or get a free SSL certificate, and let nginx or stunnel handle SSL and proxy HTTP to/from Arc. Total cost, being pessimistic: $150 for the certificate verification, and 2 hours to set up the certs & nginx.
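A minimal sketch of that nginx setup, assuming the Arc server listens on localhost:8080 and the certificate files already exist (all paths and the port here are assumptions):

```nginx
# Terminate SSL in nginx and proxy plain HTTP to the Arc app
server {
    listen 443 ssl;
    server_name news.ycombinator.com;

    ssl_certificate     /etc/ssl/hn.crt;
    ssl_certificate_key /etc/ssl/hn.key;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the Arc web server
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The Arc code itself never has to know SSL exists; it keeps speaking plain HTTP on the loopback interface.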

I know, it's awesome, it's a custom Arc webserver and all, and good practices are for PHBs only, but still. For a "hacker" website, news.ycombinator.com is a shame with regard to privacy/security (see also: passwords stored as unsalted SHA sums, funny things like <img src="http://news.ycombinator.com/logout">, outdated versions of software in use [http://news.ycombinator.com/item?id=516122], etc.)


I'm sure the audience of this site is technically savvy enough to all be running modern browsers that recognize StartCom as a valid CA (if not, consider it a valid barrier to entry), so it would be free and take just a few hours to get an SSL certificate for this site.

http://www.startssl.com/?app=1


Although I agree that having free SSL certificates is nice, I question whether it's actually a viable way to certify the authenticity of a site. Seriously, if you make it free, spammers will overrun it. Why should we trust free SSL certs? I think having a cost provides a certain barrier to entry that is good overall.


We don't need SSL certificates for authentication. I know that when I go to news.ycombinator.com, I'm getting Hacker News.

We need SSL certificates for encryption. With the certificate you get a private key that is used for secure communication between your browser and HN (both ways).

If it didn't cause every browser to show a big, scary, your-computer-will-instantly-explode-and-your-children's-social-security-numbers-will-be-stolen-if-you-continue warning, using self-signed certificates (i.e. certificates that anyone can just generate) wouldn't be that big of a deal. It could open you up to a man-in-the-middle attack, but it's still way better than sending everything in the clear.
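Generating such a self-signed certificate really is a one-liner (the filenames and subject here are placeholders), which is what makes the "anyone can just generate" point literal:

```shell
# Hypothetical sketch: one self-signed cert, valid a year, no passphrase
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout hn.key -out hn.crt -days 365 \
  -subj "/CN=news.ycombinator.com"
```

This buys encryption against passive sniffing only; nothing stops an active attacker from generating an identical-looking cert of their own, which is the MITM caveat above.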


> I know that when I go to news.ycombinator.com, I'm getting Hacker News.

How do you know that? That's the whole point of SSL - knowing that you've traded private keys with the right party.

SSL for "encryption only" only works to defend against attackers that can listen to your network, but cannot write to it. So, sure, it defends against some passive collection system, and perhaps against some tools that are designed to just listen.

But, if browsers stopped displaying warnings, so that using a "bad" certificate worked just fine, then I'd bet the tools would just switch to allow cert injection and we'd all be worse off.


There was a story I read a while back about a support ticket filed with Mozilla for Firefox, complaining about all of these "security warnings" that would pop up at every HTTPS site the user visited.

She was apparently someone who should have known better, but instead was willing to believe that Firefox was just warning her spuriously about valid HTTPS certs. Yes, someone had hacked her computer and was collecting every bank, credit card, and online shopping password as she fell for an MITM attack over and over.



In that case, Mallory was a fool. Mallory should have installed the MITM cert in the browser's certificate store, to prevent warnings. How many people routinely audit their browser's SSL cert list?


No, the point of SSL is encryption. SSH seems to handle key exchange just fine.

(Hint: https should have been implemented the same way. CAs are fundamentally broken.)


No, SSH does not. Have you ever actually verified a host fingerprint? Of course not, no one does.

That's the way it's supposed to work. You know the first time you logon to a server and it asks if you trust it? You're supposed to call up the server admin and get them to read off the fingerprint, or have them email it to you, or get it from some other out-of-band channel.
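The out-of-band check described above amounts to comparing two `ssh-keygen` outputs (the paths shown are the usual defaults; a sketch):

```shell
# On the server, the admin reads the host key fingerprint aloud:
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub

# On the client, list the fingerprints you actually trusted, and compare:
ssh-keygen -lf ~/.ssh/known_hosts
```

If those fingerprints match what ssh printed on first connect, the key exchange really was with the right machine.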

And no one, nowhere, actually verifies host fingerprints. Even security-conscious people. And what do people do when they get that warning about a modified fingerprint? Just delete the entry from known_hosts and re-connect.

So ssh actually does a really shitty job handling key exchange.

Anyway, the closest thing to a real alternative to HTTPS and CAs is Monkeysphere (OpenPGP WoT for servers), but no one uses it.


If I got an error about a modified fingerprint I wouldn't "just delete the entry" and re-connect... unless I knew why it was complaining. If there's a reasonable explanation for why the keys are different then I might do that.

While 'security conscious people' might not verify the fingerprint out-of-band when adding it the first time, I'm sure most of them wouldn't just remove the known_hosts entry...


Yes, I often see this and it's almost always that a VIP has moved physical hosts for whatever reason (e.g. planned maintenance on the original box). Occasionally it's that someone's re-JumpStart'd the box. That's sufficient to create a false sense of security, if it ever happened "for real" I would likely dismiss it.


But that is the case in which you _already have_ the fingerprint. Parent^2 is talking about the first connection, which is when you validate the fingerprint for the first time.


Why don't people validate?

That doesn't make any sense to me. There are even free services that can perform the validation for you based on a "crowdsourced" approach to verification, like Perspectives:

http://www.techrepublic.com/blog/security/perspectives-provi...


Several ssh implementations also support using certificates as hostkeys. Of course the ssh client will still need to be configured to trust the issuer but it can help with the 'first-connection-hostkey-fingerprint-verification' problem. In my experience most users will never verify the fingerprint.
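A sketch of the certificate-as-hostkey approach mentioned above, using OpenSSH's own CA support (all filenames and the hostname are made up):

```shell
# A CA key, a host key, and a host certificate signed by the CA
ssh-keygen -t ed25519 -N "" -f ca_key     # the trusted issuer's key
ssh-keygen -t ed25519 -N "" -f host_key   # the server's own key
ssh-keygen -s ca_key -I myhost -h -n host.example.com host_key.pub
# -> produces host_key-cert.pub, which sshd serves as its host key.
# Clients trust the CA once, via a @cert-authority line in known_hosts,
# instead of verifying each host's fingerprint individually.
```

This moves the trust decision from per-host fingerprints to a single issuer key, which is exactly the 'first-connection' problem it is meant to solve.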


How does some corporation that will disclaim liability at the first sign of a light breeze telling you a site is "authentic" trump your own personal judgment? CAs are scams.

Use something like Perspectives instead of CAs:

http://www.techrepublic.com/blog/security/perspectives-bette...


StartSSL requires you to respond to an email sent to the address listed in the domain registration. That at least shows you have control of the domain. It also has certificates with greater levels of verification.


Being able to pay isn't a very good barrier. Being broke doesn't mean having no meaningful content, and most attackers who can make serious MitM attacks can pay. CAs are supposed to have real barriers (and I think most of them do).

In this case, though, we don't need a CA. PG could publish the key in an essay and we'd just carry it through manually.


The point of collecting payment for certificates is not that attackers can't afford it, but that it enables the CA to do some cursory verification, and creates a trail of evidence if the certificate is used for a scam later.


Here is a link to the relevant feature request (although it is a sin to call this a feature) in the feature requests thread:

http://news.ycombinator.com/item?id=499851


All HN needs is a note above the password field saying "don't use an important password". Nobody should care.


Given, however, that many founders and tech journalists use the site, a compromised account could be used to severely damage a startup's credibility. All it would take would be a few posts on HN before a funding round that called into question the founder's ethics, skill, or common sense, and someone from TechCrunch to pick up on it. It could cause sufficient uncertainty, if properly timed, to make potential investors stay away. That, in turn, could spell big trouble for a company.

Granted, that scenario may seem far-fetched, but it's not unreasonable to suppose that some unscrupulous person might have motive to do something of the sort. Rather than deal with the fallout if it does occur, why not simply allow people the option of having a secure login? If they choose not to use it, that's their prerogative.


Exactly. Take a tour of the SF and Mountain View coffee shops which offer free wifi, with a laptop sniffing traffic. Isn't there a non-negligible chance you might collect some HN cookies from "interesting" accounts? Once you have them, it's just a matter of imagination before causing some harm.

HN is no longer the small, obscure news site it was 2 years ago.


And not just interesting like a high-profile person, but interesting like a YC founder who is a moderator. It's possible that PG has instructed mods not to log in over public connections, but I bet they occasionally do it.


And how much damage could a hacked moderator account do to the site? This whole conversation seems like a symptom of taking this site way too seriously. The community is very valuable and even important. The site is just an artifact of it.

As evidence for my point of view (and you can say "you're welcome" if my brinkmanship with this sentence pays off by Graham promptly enabling SSL, which he could easily do in the process of fixing the far-more-important bug of this site not being served through a front-end proxy), note that next week HN will in all likelihood still not have SSL enabled. That request --- provide SSL --- has been outstanding forever. Does Graham also share my cavalier attitude towards the site?


That's true.

But remember that this is also the YC application system. A lot of alumni help read apps, probably just by getting a permission added to their account. So a lucky firesheep-er can probably read every application to YC. And mess up people's applications (if they get the account of an applicant before the deadline). And may reject people/delete apps if they were to get, say, pg's or harj's account.

And possibly other stuff. I don't know what all YC uses it for, but I get the impression that they continue to use it for various things (signing up for office hours?), some of which may be sensitive, once teams are accepted.


I addressed this point in another comment. Briefly: my advice regarding that fact would not be to improve HN's security; it would be to get the YC functionality off HN, stat. HN is way more a target than YC's stuff ever will be. Most of the people who will take a run at this site don't even know what YC is.


Ok, that would work too. But I'd guess that there are significant barriers to doing that (i.e. it would take a lot of work to make it happen).

Plus it's never optimal, even for a bs written-in-a-weekend app, to send passwords in the clear, given how many people use the same password on multiple sites. And even though HN isn't that important, we'd certainly prefer to avoid the headache that would result from someone getting a mod's account, banning a bunch of high-karma people, deleting a ton of stuff, etc.

So SSL is a good solution because a) It could be deployed today. b) It's preferable anyway. But I agree that if they decoupled HN from all the other YC stuff, I'd be a lot less concerned.


That doesn't protect from cookie/password theft (for instance if you use a public wifi hotspot).

I do care about impersonation.


You shouldn't. There are more important things to care about.


Like what, in the HN context?


Declining quality of comments? Creeping influence of politics?

SSL is a giant waste of time for Hacker News, modulo the fact that people might be crazy enough to use a shared password here.


> Declining quality of comments? Creeping influence of politics?

It's a fallacious argument in my book. Like comparing apples and oranges.

Say I run a bakery. What I care most about is the quality of my bread. So much that I spend all my time working on that and only that. So much that I never bothered to put a lock on the door. But it's not even a big deal if someone comes in and poisons one of the loaves, as long as the overall quality is increasing!

> SSL is a giant waste of time for Hacker News

Yes, if by "giant" you mean that it takes about 2 hours to set up, plus a small overhead for each negotiation. But concerning the overhead, Arc is not especially fast, so there is room for improvement there to compensate, if needed.

> modulo the fact that people might be crazy enough to use a shared password here.

Not the point, the point is HTTP sniffing.

And anyway, people could use a shared password, making it easier for them (don't overestimate human memory), if HN used (HTTPS and) a "real" password hashing scheme (bcrypt or the like). Why put the burden on the user when you can put it on the computer?


No, that is an extremely bad idea. Even if they use bcrypt. Bcrypt exists to protect the site owner from calamity, like, "thousands of user passwords posted to Rapidshare". It does very little to protect individual users against the attacker who busts into your server; whether you use bcrypt or not, they still get the contents of every input type=PASSWORD that hits the site.


SSL is a giant waste of time for Hacker News,

Waste of time in what sense? The time it takes to set up SSL?


Yep.

If this was a real product, this would clearly not be my advice. But it's not. It's just HN. The worst case to an attack here is not all that bad.

There's some goofy YC stuff that happens through this site. If asked, my advice regarding security and YC would not be "make HN more secure so the YC stuff is safer". It would be "get the YC stuff the hell off HN."


<really, really dumb question> Hi Thomas, I have checked your profile because I am confused by this whole conversation (I mean the social dynamic of it where you are mostly being downvoted into oblivion -- I have no hope of following the technical points). I can't find the info I want. For the unwashed masses (like myself), can you clarify: Aren't you some kind of security professional?

</really really dumb question>

Thanks in advance.


Yes, tptacek runs a security consultancy. Why are you surprised? He's not wrong that the worst-case scenario isn't that bad, and he's a lot more "practical", for want of a better word, than either e.g. cperciva or me. (cperciva picks his server-side crypto algorithms for side-channel resilience; tptacek points out that not having buffer overflows is asking too much of most software.)

This is not to say that I agree with him - the worst-case scenario isn't that bad, but setting up SSL is easy and the right thing - but he's not babbling nonsense or anything.


Why are you surprised?

Not surprised. Just trying to verify if he had the subject matter expertise I thought he had or not so I can better understand the discussion. Since I am a member here, security of the site does matter to me as it potentially directly impacts me. But I lack your depth of knowledge of the subject. So the credentials of different speakers matters to my understanding. For someone like me, whether he is being downvoted because he has no clue what he is talking about or for some other reason entirely makes a significant impact on my understanding of the situation.

Thank you for your helpful reply.


I am being downvoted for two direct reasons and one indirect one: (1) people universally think it's trivial to enable SSL for HN --- and it is, in the grand scheme of things, for non-hobby non-side projects, and (2) people care about the security of their HN account, even though virtually nobody else does, and so they have little to worry about. Meta-reason: people assume I'm being argumentative for the sake of it; I'm not. SSL is a waste of time for HN.


Thanks.


For what it's worth, I cofounded it, and I'm a principal, but Dave Goldsmith runs it. Working with me is a hazard of joining us, but working for me isn't, so much.


Oh come on. How long would it take someone who knew what they were doing to set up SSL? Run Apache on the same machine, listen on 443, and reverse proxy to the arc app. It would take less than 30 minutes to set up.

Fifty bucks' worth of work, once, which pays a dividend each and every time a security-conscious user visits the site. That's not a waste of time; that's a no-brainer.
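For what it's worth, what's being described is just a standard SSL-terminating reverse proxy. A minimal Apache vhost sketch might look like the following; the certificate paths and the backend port are illustrative assumptions (the arc app's actual port may differ), and mod_ssl, mod_proxy, and mod_proxy_http would need to be enabled:

```apache
<VirtualHost *:443>
    ServerName news.ycombinator.com

    # Terminate SSL here; certificate paths are placeholders
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/hn.crt
    SSLCertificateKeyFile /etc/ssl/private/hn.key

    # Hand every request off to the arc app, assumed to be
    # listening on localhost:8080
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```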


FWIW, Paul Graham made a fuss about putting in a simple link to the searchyc page for searching the archives. His reason was that he didn't want to spend time on anything that wasn't focused on the important issues, like comment quality.

He took a lot of flak for what was surely just a two-minute job of editing some HTML template, but I can kind of see that logic now.

When you add the link, it signals that you deem "Searching Archives" as an important feature of the site and then it's suddenly no longer just a simple href= entry in a text file somewhere.

Dealing with SSL could be in the same boat. By adding it, you're implicitly saying that 'this site is serious enough to warrant proper security measures' and then that's another rabbit hole that's difficult to get out of.


This.


> He especially liked Lua

Well, not as much as Haskell: "Haskell is pretty ideal but I'm not smart enough to hack the GHC. Lua is less ideal but a lovely language [...]" (at the beginning of the second answer).

> He regrets using the CommonJS module system

Yes, and this is pretty interesting. Also, he regrets using WAF as the build system ("it introduces more WTFs than necessary".)

I have a great respect for Ryan Dahl. He is not orthodox. He is not a blind parrot.


The good news is someone else did 'hack' GHC and gave it a nice new IO back-end. http://johantibell.com/files/hask17ape-sullivan.pdf


Heh, I got the impression that he tried harder to get Lua working, partly because he felt it was a more pragmatic choice, but I take your point.


And I do take yours. The fact that Node actually uses JavaScript, which is more similar to Lua than to Haskell, makes your impression very sensible ;-)


Seriously, is the specific and rather odd layout of the periodic table really suited here? Why not a simple table-based layout, with no pointless holes?

Here on FF 3.6 at 1280x1024, some of the labels are truncated...


Why are you sperging about this so much? It's just a cutesy geeky reference. I don't think anyone is under the impression that it actually leverages the layout of the periodic table in a useful way. What a curmudgeon.


Because it's cargo culting. The periodic table looks the way it does because it's about the relationships between the different elements organized spatially on the page. Because of that, it also lets us predict elements we haven't discovered yet! Amazing!

There's really no reason the relationships between typography, desserts, vegetables, or Google APIs should look anything like the relationships between the elements. If they did, we'd really be on to something!

But! If the relationships between Google APIs aren't at all like those between the elements, what would they look like? And even more interesting: if there are missing spaces, that means there are Google APIs not yet written that we can look forward to!

http://www.ozonehouse.com/mark/periodic/

That's an example where someone made a periodic table that tried to use space to convey information about the relationships between the Perl operators. Notice how it looks nothing like the PTofE. It has its own structure, because the relationships between Perl operators are different from the relationships between the elements.

Those of us who are sticklers about this probably feel that way because we find the actual meaning behind the structure of the periodic table much, much more interesting and beautiful than any joke you can make from it.

Sometimes, jokes are funny because they're the truth that no one wants to say. Like when Chris Rock says, "[When listening to your woman], you've always got to throw in 'told you that bitch crazy', because every woman has another woman at their work, that they can't stand"

Other times, jokes are funny because one doesn't know any better. Like when Chris Rock says, "If they can send a space shuttle to the moon, why can't they make an El Dorado with a bumper that doesn't fall off?"


> If knowledge is power, then sharing your knowledge empowers others.

Yeah, and from my Nietzschean perspective I don't especially want others to be empowered.

Either you have to work hard to beat them, or your competitors are weak enough. Ideally: both.


This is a vast oversimplification, but some markets are ruthless zero-sum games and others are not. Mature markets that are fixed in size, growing very little, or shrinking are usually zero-sum games, with all competitors trying to screw each other and their customers out of every cent.

Other markets, especially those with potential for large growth, behave differently. Growing the market is more important than trying to maximize your share of the market. You don't want to have the smallest piece of the pie, but sacrificing a little bit of your share in exchange for a larger pie can be a win.
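To make that concrete with some purely made-up numbers: a smaller slice of a bigger pie can beat a bigger slice of a small one.

```python
# Purely illustrative numbers: trading a few points of market share
# for market growth can still leave you ahead in absolute terms.
def slice_value(market_size, share_pct):
    # Integer arithmetic keeps the toy numbers exact.
    return market_size * share_pct // 100

before = slice_value(10_000_000, 30)  # 30% of a $10M pie -> $3M
after = slice_value(20_000_000, 25)   # 25% after the pie doubles -> $5M
assert after > before
```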

In knowledge businesses like consulting, educating customers often means educating competitors simultaneously. However, it can still be a win to share knowledge.

Finally, one must consider brand. Sometimes you trade knowledge for brand. If you write about programming, you may make your competitors better programmers. Some clients may feel empowered to write their own code instead of hiring a consultant. But your brand can now open opportunities for you that didn't exist before.

I guess what I'm saying is that knowledge is power, but sharing it can be an investment under certain circumstances.


Chrome should stick to its true nature, do one thing well, and leave the DNS stuff (the refetch-before-expiry, the weird anti-nasty system explained in the post) to a DNS cache daemon. Google could include one in Chrome OS instead of turning Chrome (the browser) into a big pile of bloat.


Anti-nasty system, maybe. The prefetching thing can't possibly be implemented by the DNS cache daemon because it doesn't know what the user is currently typing.
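As a toy illustration of why an external daemon can't do this alone: the browser fires speculative lookups from the user's keystrokes, something a separate cache process never sees. A minimal sketch (not Chrome's actual code; the hostnames are illustrative):

```python
# Browser-style DNS prefetching sketch: resolve likely hostnames in the
# background so the resolver cache is warm when the user hits Enter.
import socket
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=8)

def prefetch(hostname):
    """Fire-and-forget resolution; resolution errors are swallowed
    because speculative lookups are allowed to fail silently."""
    def _resolve():
        try:
            socket.getaddrinfo(hostname, 80)
        except socket.gaierror:
            pass
    return _pool.submit(_resolve)

# e.g. called on each keystroke with the current autocomplete guesses:
for guess in ("news.ycombinator.com", "example.com"):
    prefetch(guess)
```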


I was speaking about the fact that Chrome also re-fetches some of the most-used DNS entries in the background when they are about to expire (much like Google does on its "8.8.8.8" DNS servers).

I agree, however, that prefetching can't be done by a DNS cache program.


So instead of shipping one executable with code X + Y, they should ship two executables: one with code X and one with Y + Z, increasing the overall complexity of what they ship and adding the inevitable "bloat" that would come from extracting the DNS code into a daemon, plus the additional code to communicate between Chrome and that daemon (the Z part).

That doesn't sound like sound engineering practice.


No. It would give them a competitive advantage over other OSes by offering a faster Internet experience on Chrome OS. Chrome the browser would stick with classic DNS resolution, whatever the platform.

The complexity and bloat come from Chrome, Firefox, and Opera each copying DNS cache features instead of pooling their knowledge in this area.

