Hacker News | new | past | comments | ask | show | jobs | submit | ma_mazmaz's comments

In Scala, there are both mutable and immutable data structures. Immutable data structures are preferred; however, there are some cases in which a dash of mutability can simplify the code, especially when Java interop is a must.


In Scala, most variables should be val (i.e. constant) from a design perspective. That is, it is better, in Scala, to write code that does not have changing variables. Thus, using val instead of var is simply a check on the code, much in the same way that static typing provides a benefit over dynamic typing.
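The val-over-var idea can be sketched outside Scala, too. A rough Python analogue (illustrative only: Scala rejects reassignment of a val at compile time, while a frozen dataclass only rejects mutation at runtime, and the `Config` class here is made up for the example):

```python
from dataclasses import dataclass, FrozenInstanceError

# Immutability-by-default as a mechanical check on the code: fields can
# be set once at construction and never reassigned afterwards.
@dataclass(frozen=True)
class Config:
    host: str
    port: int

cfg = Config("localhost", 8080)
try:
    cfg.port = 9090  # the var-like mutation we want the language to forbid
except FrozenInstanceError:
    print("mutation rejected")
```

The benefit is the same in spirit: code that was never supposed to mutate a value gets that intent checked by the machine rather than by convention.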


> much in the same way that static typing provides a benefit over dynamic typing.

A non-trivial number of people would dispute that ;-)


I don't think so, actually: in all the heated discussions I've had with people on the subject of static / dynamic typing, everybody agreed that static typing had significant benefits. What people don't agree on is whether these benefits are worth the cost.

It's hard to argue in good faith that having the compiler catch mistakes, rather than finding out about them at runtime, is a bad thing. It's perfectly possible to argue that it's not worth the perceived slowdown in development speed.


Keep in mind that all of this is just my opinion. I'm not going to append "IMHO" to each sentence, so as to save the reader the tedium of reading it. I'm not saying that I'm correct or that people should agree with me.

I would go so far as to say that I think overly simplistic static type systems don't have much benefit. C's static typing drives me crazy, as there's almost nothing of use that I can express with it. It's the same with Go; I almost never pass an Int when I meant to pass a Bool. In exchange for thoroughly unhelpful type errors I now have to jump through flaming hoops to parse JSON.

It ends up being a bit like JavaScript or Python, where both languages lack the ability to specify that something is truly private (though in JS you can use closures to hide things). You generally just use a naming convention to mark a thing as private, and hopefully people have the decency to respect that. It's like that with types in dynamic languages; I can express pretty complex relationships with types and keep the whole thing in my head without many problems.

That said, languages with powerful type systems like Scala and Haskell are thoroughly worth the effort. I can express almost anything with these type systems, usually with a minimum of fuss. They can protect me from the dreaded NPE, and that's a bug I encounter quite often. They can help me write simpler code that deals with complex shapes of data with their support for pattern matching and TCO. This one is more Haskell related, but the guarantee that everything is immutable and lazy makes it possible for the compiler to do some insanely impressive optimizations.

Scala, Haskell, and Rust have taken a dyed-in-the-wool lover of dynamic languages and made a convert of me. They finally followed through on the promises of safety and productivity that other languages failed to deliver on.

In closing, I'll repeat one last time that all of these are merely the opinions of an insufferable neck beard (me). Even if we disagree, I'm sure you're a very nice person, and I approve of you using whatever languages and tools make you happy and productive.


I can't help but wonder whether you're that circumspect with everyone, or if I come off as crazy-kill-you-you-philistine and need to work on my communication skills?

Aside from the fact that I've never felt scarier, I agree entirely with every single point you just made and thank you for qualifying my broad generalisation.


No, not in the slightest, though you've made me laugh like a maniac in front of my co-workers. So there's that, you fiend.

It was more of a general butt-covering sort of thing. People like to take offense to things on the internet.


I agree with this. I also sometimes think that I use the type system the most when I am refactoring/redesigning code, and at that time I might be sending a bool instead of an int, as someone wrote, which gets caught by any good type system. Regarding the cost, I think an optional type system, like in Dart, is interesting. You can do some prototyping or quick coding and then add types when you have some working code, in order to develop fast, or you can use types all the time in order to be correct.


However, this is still susceptible to a man-in-the-middle attack. A malicious man-in-the-middle could simply replay an old timestamp which was already validated. The best alternative would be to require the time server to return a signed timestamp plus a challenge, to prove that it was sent by the actual server. Unfortunately, this would incur computational cost on the part of the time server, which may make such a scheme impractical.
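One way to sketch the signed-timestamp-plus-challenge idea (a hypothetical illustration: a real time server would use a public-key signature so clients hold no secret; here an HMAC over a shared demo key stands in for the signature, and all names are made up):

```python
import hashlib
import hmac
import os
import time

SERVER_KEY = b"demo-signing-key"  # stand-in for the server's real signing key

def server_sign_timestamp(nonce: bytes) -> tuple[int, bytes]:
    """Server side: bind the current time to the client's fresh nonce."""
    ts = int(time.time())
    msg = nonce + ts.to_bytes(8, "big")
    return ts, hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()

def client_verify(nonce: bytes, ts: int, sig: bytes) -> bool:
    """Client side: accept only a signature over *this* request's nonce."""
    msg = nonce + ts.to_bytes(8, "big")
    expected = hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

nonce = os.urandom(16)                # fresh challenge per request
ts, sig = server_sign_timestamp(nonce)
assert client_verify(nonce, ts, sig)  # genuine response accepted
# A man-in-the-middle replaying an old (ts, sig) pair fails, because it
# was signed over a different nonce:
assert not client_verify(os.urandom(16), ts, sig)
```

The challenge makes each response one-shot, which is exactly what defeats replay; the cost the comment mentions is the signature the server must compute per request.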


Some TSAs (GeoTrust) support TLS which solves this problem. Surprisingly (given they sell the product to do that) it doesn't seem many others do...


When looking to improve your programming ability, it's important to ask to what end you are improving your skills. If your goal is to become a professional computer scientist, then you'll need to choose an area of focus. The field of computing simply has too many facets for anyone to master them all. If your goal is to learn, simply for its own sake, then all that matters is that you are enjoying the learning. It won't matter what technologies you use, or whether what you're making actually works, so long as you enjoy it. In either case, you should choose something to learn that interests you. There are many highly sought-after computer scientists, specializing in computer vision algorithms for example, who get recruited by companies all the time despite knowing little to nothing about software engineering and data structures. TL;DR: Learn anything, it will get you places.


This is blatantly wrong: "red means bad and green means good. This association is cross-cultural, probably universal, and probably as old as the hills: red as blood, green as grass. "

http://youtu.be/z2exxj4COhU?t=16m6s


Since the siding is no longer metal, does that mean that it no longer acts as a Faraday cage, and will be susceptible to damage by lightning?


According to the white paper, "all users receive all messages." How, then, is the system scalable to a large network?


The whitepaper proposes to handle this by having nodes join separate clusters once their databases reach a certain size.


Please consider this a security review by a tenured P2P professor:

The whitepaper describes a simple and focused system relying on partitioning in an attempt to preserve scalability.

Bitmessage has many architectural similarities to Usenet, and likewise offers no valid response to spam. Using a proof-of-work system to combat spam is proposed, but to date science has not seen a working approach anywhere. Details are missing on this vital element, and defenses against the Sybil attack are also missing from this design.

Mechanisms such as the "averageProofOfWorkNonceTrialsPerByte" in this system only slow down attacks; they do not stop them. Check the impossibility proof from Harvard to see that systems like Bitmessage, which react to any message, cannot build an effective Sybil defense: http://dash.harvard.edu/handle/1/4907301 So this is a known, hard, unsolved problem.

Further diving into the scalability issue, there is this project thread on their forum: https://bitmessage.org/forum/index.php?PHPSESSID=8cl6qeafitk... It would be great if the partitioning concept and algorithms could be explained in detail. This is again a hard problem; even group size estimation in a hostile environment is non-trivial, so how group consensus is formed to do a break-up is difficult and prone to attacks.

This design is not incentive compatible. Tor carries over 50% Bittorrent traffic; it's difficult to stop users from using (abusing?) Tor like that. Systems like Bittorrent and Bitcoin have some incentives, but Bitmessage, with broadcasts and proof-of-work, might even have a negative incentive for participation. I have seen no mechanism to prevent its users broadcasting Blu-ray rips. This would bring down the system, one cluster at a time. Please check this work, which shows how to bring this type of P2P network down: www.christian-rossow.de/publications/p2pwned-ieee2013.pdf
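For context, the scheme under discussion is hashcash-style proof-of-work: the sender brute-forces a nonce whose hash falls below a target. A minimal sketch (not Bitmessage's exact algorithm or parameters) makes the review's point concrete, because the cost is purely per-hash and symmetric between honest senders and attackers:

```python
import hashlib

def find_pow_nonce(payload: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so sha256(nonce || payload) has the
    required number of leading zero bits (~2**difficulty_bits tries)."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(nonce.to_bytes(8, "big") + payload).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check_pow(payload: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification costs a single hash, regardless of difficulty."""
    digest = hashlib.sha256(nonce.to_bytes(8, "big") + payload).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = find_pow_nonce(b"hello, network", 16)  # ~2^16 hashes on average
assert check_pow(b"hello, network", nonce, 16)
```

Doubling the difficulty doubles the cost for the honest sender and the attacker alike; an attacker with GPUs or FPGAs simply enjoys a cheaper hash rate, which is why this kind of mechanism slows floods but cannot stop them.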

Publicity like "Bitmessage Sends Secure, Encrypted, P2P Instant Messages" might be nice, but it creates a false sense of safety. If you want to protect against NSA snooping, you're up against a real army of crypto experts with decades of experience each.

Nice to see that this project has such an active Github community: 480 closed issues and 1159 commits. But, in my opinion, it's back to the drawing board... Sorry.

Disclaimer: working for 8 years on Tribler, a streaming Bittorrent client.


I appreciate the write up, it's why I popped into the thread.

That said, at least these folks are trying to protect against the NSA. What do you propose we all do? Lie down and accept that they watch everything we do? Fuck that. Let's continue to build tools as a community. They may have a lot of people, but our community is bigger. So, fuck them.

People should continue to experiment, and try new things until we come up with various way to protect against the god damn NSA.


Indeed, we should not roll over and declare privacy an illusion.

A lot of people are experimenting with designs that will never work. It's just wasting programming resources, while projects like Tor starve for volunteers. My research team is currently merging Tor and Bittorrent (http://forum.tribler.org/viewtopic.php?f=2&t=5128&p=8585#p85...).

Clear designs (and lots of them) are more important than experimental code, I believe.


Experimental code increases the understanding of these complex systems across a greater population of developers.

If we are going to come out on top, then we need to play the long game. In that case, experimental code has a lot of value.


Although I agree with your premise that clear designs are essential, I'd say that it's important nonetheless to implement them. Having an implementation is a marker of whether or not the design works. A whitepaper can show this theoretically, but an application is always (in my experience) more effective.

For example, when Satoshi first published the whitepaper for Bitcoin, there was talk on the crypto mailing list that it wouldn't scale due to its gossip-based protocol. Satoshi's design showed it did, and laid the foundations for future cryptocurrencies (Namecoin, Peercoin).

P.S. I've been following your work with Tribler, excellent stuff!


Doesn't I2P fit better, considering the design requirements? Tor would require very significant changes to scale better for high volume traffic, but I2P already handles it reasonably well.


I wonder why I2P is getting so little love. I've read somewhere that, for some reason, it's popular only in Russia. It has two working email systems, working Bittorrent, and much more. Maybe it lacks a native (C or C++) implementation?


I2P is slow, it has no incentive system like T4T.

Try installing it, search for files and watch download speeds. Onion routing and tunnels cost bandwidth. The crowd that is used to Bittorrent speeds (in combo with VPN safety) is not interested in 2 Kbit/sec speeds. Plus the user interface is hard to understand without having attended crypto courses.


@synctext: you can't have traffic anonymity without routing. And I2P is mainly slow because the shared bandwidth is low.

Also, it isn't as hard as you claim. It is mainly a matter of setup.


My understanding of Tribler is that it is going to use onion-style routing and an internal cryptocurrency to incentivize users to provide bandwidth. They hope that, with proper incentives, people will provide enough bandwidth to enable speeds high enough to stream high-def video anonymously. They hope to prevent spam by enabling anonymous wiki editing of torrent channels and voting on torrent quality.


It hasn't been studied as extensively as Tor.


Which is why we need to get it studied more.


Sure, but that's still the reason it isn't as widely used. And because research resources in the anonymous-networking field are so limited the marginalization of I2P tends to be self-reinforcing.


What do you think of Bote mail in I2P, which uses DHT for relaying mail? It is set to hold mail in the DHT for 100 days or until fetched by the recipient. It also uses public keys as addresses (ECDSA and NTRU are the options), and all mail is encrypted. I2P provides traffic anonymization, and Bote mail supports letting mails be relayed with random delays to further anonymize the sender by removing time correlation.

https://www.i2p2.de and http://i2pbote.i2p.us (remove .us if you have I2P installed).


I would love to see someone give i2p-bote a good analysis. I tried using it a couple of times but was never able to receive messages successfully. They are advertising some pretty awesome features though: no content or metadata leakage; secure against a global passive adversary if you use delays between relay hops; parties don't have to be online at the same time to communicate.


I've never had problems with receiving messages with it. Had I2P been running for at least 20 minutes or so to be able to establish enough connections? Bote can tell you how many Bote nodes it is connected to.


Bitmessage's POW concept is not only proposed but implemented and being used in practice.


my2c: The common meme in the bitmessage community is that the POW helps mitigate flooding, but it's not quite there for spam prevention


There are a bunch of different proposals for scaling bitmessage (with and without "streams")

https://bitmessage.org/forum/index.php?topic=2550.msg5271


They have some vague ideas for scalability that they do not know how to implement.

Also they have some major security issues that I pointed out to them, but they simply ignored. I am sure bitmessage will never be a success because it is fundamentally broken.


I've read some criticism of bitmessage sometimes, but it's been quite scarce (as with all interesting feedback) as of now...

care to link to the bug report/forum post/blog post where you wrote the security issues you mentioned?


Hello, I'm not sure what questions you have asked in the past but I would be happy to answer them here.

-Atheros / Jonathan (creator of Bitmessage)


Have you seen this post (https://news.ycombinator.com/item?id=6866972), and what do you think of it?

I'm interested in bitmessage but it being unvetted / not heavily reviewed gives me pause. Do you have any doubts about the design (that can't be easily solved)?

Cheers, wc


Regarding the link: the proof-of-work requirement exists to keep the network from being flooded too easily. It has the side benefit that it may make sending spam uneconomic. That said, any attacker with a good GPU and without a financial incentive could send a very inconvenient number of messages through the network, as has happened before.

About the paper "On the Sybil-Proofness of Accounting Mechanisms": I'm not sure of its relevance, as Bitmessage uses neither accounting nor reputation.

The stream-branching algorithm will indeed require a good group-size estimation algorithm. My current best thought is to use child streams whenever a certain number of messages are already going through each of one's current streams per unit time. "So how group consensus is formed to do a break-up is difficult and prone to attacks." Luckily, using child streams doesn't require consensus; one can decide for one's self. To join a child stream, all one does is say that they are a member of that stream in version messages, create Bitmessage addresses with that stream number embedded therein, and advertise the node's existence in the parent stream from time to time. But malicious attackers could cause problems by flooding a stream and getting others to make a bad decision about when to start using a child stream.

"I have seen no mechanism to prevent its users broadcasting Blu-ray rips. This would bring down the system, one cluster at a time." The proof-of-work mechanism is supposed to prevent that. Broadcasting torrent files in Bitmessage broadcasts would require far less computing power from the sender.

"Please check this work, it shows how to bring this type of P2P network down." Which attack specifically? And why hasn't anyone used it to take down Bitcoin?

Regarding your last question: if someone throws an FPGA at the PoW algorithm, they could flood the network with a lot of data, and that concerns me. And, as mentioned above, deciding when to use child streams in the context of a hostile environment remains an open question.

-Atheros


I wrote the libertymail proposal. You said you'd read it but you never commented on it. It is mainly a summation of attacks that are possible on bitmessage, and provides solutions on how to prevent such attacks. I also propose a solution for scaling, one that could actually be implemented.


Bitmessage's solution for scaling can be implemented.

I found your paper here: https://anonfiles.com/file/849506ebab91aa0ab90e98fc539446a2

It lacks a "summation of attacks that are possible on bitmessage" or "solutions on how to prevent such attacks."

I like that you tried to add a feature where users could choose their own anonymity/usability balance. "Users should be able to choose to remain anonymous or to disclose (partial) address information and be a ’light’ client." 200MB a day just for headers is a little bit much for a mobile 'light' client. If the protocol supports sending only headers based on a filter then why bother supporting headers? The "seeding" node could just supply a list of body messages to download that pass a filter. This would also mean that no one ever has to sync headers.


Seriously,

To just take two examples:

-Every bitmessage user can be mitm'd by their ISP. (yes, I know about tor).

-Every bitmessage user could have only bad peers connecting to them when peers aggressively try to connect to their client.

These are two examples of attacks that work on bitmessage, that are addressed in the libertymail proposal, and for which a possible solution is given.


Is it forward secure yet? Why is this better than Pond?


(I'm just a guy interested in BM, not involved with development, yet)

forward secrecy is helped by the fact that you cannot know who the recipient of a message is: an attacker who wants to store messages in order to decrypt them in the future, once they have obtained a private key, would have to store all of the network's messages

If you're worried by such an attacker, you can just create a new identity for each message, just like you can create a new bitcoin address for each transaction

Pond seems interesting, but quite different from bitmessage. I like bitmessage especially for its user-friendliness (the UI needs lots of improvement, but you just download it, create an identity, and off you go... I doubt the feasibility of getting the whole world to use TOR, especially people in China/Iran or "my parents")


That's not forward secrecy. Usenet with encryption is not forward secret either.

> If you're worried by such an attacker [...]

Uh, shouldn't everyone be at this point?

> you can just create a new identity for each message

Key exchange and management is hard. That's why you try not to do it often. You could claim PGP e-mail was forward secret: All you need to do is use a new private key every time.


I know; that's why I wrote "forward secrecy is helped". It's not something that you get out of the box with bitmessage.

Moreover, since the keypair and the BM identity are one and the same, key exchange and management come for free once you've got the first message sent to your recipient... changing identity is much easier than creating a new gpg keypair and sending it to the other guy, and on top of that you get some added anonymity.


It isn't yet. It may never be, as it requires a round-trip to establish a session. The Pond design appears to be good.


If you have FiOS, enabling WPA2 breaks their router, and tech support won't help, claiming that WEP is "just as secure as a wired connection."


It looks like the keyboard and mouse shown are made by Apple.


This is certainly not an issue with iPads, specifically. Students probably spend more time using computers for entertainment and social networking than they do for school, but that doesn't stop teachers from taking their students to computer labs to type essays. Just because something can be used for fun, doesn't mean that it has a place in schooling. Moreover, students very commonly get around the very weak security procedures in place, which, more often than not, prevent students from doing legitimate school work, rather than preventing abuse.

