
Fifteen years into my career, and I'm finally realizing that "expressive" languages are practically unreadable.


unfamiliar languages are practically unreadable


I don't see how that's any better for a Haskell shop. I've got some empathy, but you chose a rough life.


The trick is to hire people who are familiar with the language.


As a Haskell programmer, all of that was easily readable.


As a non-Haskell programmer, all of that was easily readable. Straightforward words for functions make that pretty easy.


As a C programmer, that's the worst FizzBuzz implementation ever. You're not supposed to special-case % 15, just

    bool m3 = !(i % 3);
    bool m5 = !(i % 5);
    if (m3) printf("Fizz");
    if (m5) printf("Buzz");
    if (m3 || m5) printf("\n");
You can turn in your visitor badge at the front desk, and they'll call you an Uber.


They have

   n `mod` 3 == 0 && n `mod` 5 == 0
And you have

   if (m3 || m5)
I really don't see what point you're trying to make here...


The article has a special string that says "fizz buzz".

They're saying it's unnecessary: if you go in order, you first print "fizz", then print "buzz", which always produces "fizz buzz" for the equivalent of "mod 15", so you don't need a special string like that.

The "if (m3 || m5)" is just printing a newline, because under that condition you printed something earlier.


It’s still an extra condition. So it really doesn’t matter whether you’re printing the “Fizz Buzz” string or not; it’s the same amount of branching.


Add another bool that records whether to write a newline whenever either fizz or buzz happens, and you don't need the third branch.


I agree. But you're also not supposed to have separate booleans for each special print, because when you have many special prints it gets annoying to extend.

I've always liked this solution, which avoids that: https://archive.is/KJ39B

    import Control.Monad (guard)
    import Data.Foldable (for_)
    import Data.Maybe (fromMaybe)

    fizzbuzz i =
      fromMaybe (show i) . mconcat $
        [ "fizz" <$ guard (i `rem` 3 == 0)
        , "buzz" <$ guard (i `rem` 5 == 0)
        ]

    main =
      for_ [1..100] $
        putStrLn . fizzbuzz
This allows you to add special prints by adding just the one line of code, changing nothing else.
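
For example (my own extension, not from the linked article; the "bazz" rule for multiples of 7 is made up), adding a special print is one more list entry:

    fizzbuzz i =
      fromMaybe (show i) . mconcat $
        [ "fizz" <$ guard (i `rem` 3 == 0)
        , "buzz" <$ guard (i `rem` 5 == 0)
        , "bazz" <$ guard (i `rem` 7 == 0)
        ]
The mconcat over Maybe String then handles every overlap (15, 21, 35, 105) for free.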


Nice. C looks like a safe language, with the casual use of an int as a bool.


Depends on which of the hundreds of C compilers you use: some represent "bool" as uint8_t, others as unsigned char, and others bit-pack it under the optimizer.

With C, any claim one makes about repeatability is always wrong at some point, depending on the version compliance.

I like C, but Haskell is a happy, optimistic syntax... Julia is probably the language I'd wager on becoming more relevant as Moore's law's corpse begins to stink. =3


I can't tell if you're being serious or making a joke.


That's an extra system call for printing the newline.


Well, duh, yeah, if you call setbuf(stdio, NULL) first.


Uh, setbuf(stdout, NULL)

I'll call my own Uber, thanks


I think this person wants a space between Fizz and Buzz.


Exactly, which means the interviewer didn't even state the problem correctly. The train had already jumped the rails by the time the candidate started writing. Hopefully HR will agree that they deserve each other.


And yet it's funny how many times you see the supposed "correct" solution miss that 3x5=15. I wonder how AI would answer FizzBuzz; is it part of any standard benchmark?


I mean, all trolling aside, that's kind of the idea behind FizzBuzz. If you don't notice that 15 is divisible by 3 and 5 and take advantage of that somehow in your logic, or at least acknowledge it, you really cannot be said to have aced the problem, even if your program's output is technically correct.

Phrasing the question in a way that doesn't leave room for that insight is also a pretty big goof.

As for AI, yes, FizzBuzz is trivial for any model because it's so well-represented in the training data. The common benchmarks involve things like "Render a physically-correct bouncing ball inside a rotating hexagon," or something else that is too complex to simply regurgitate.


And people say Haskell is hard to get.


There were a couple of places that took me a couple of reads to figure out, like the fact that `(x:)` was "prepend". But overall I followed the code pretty well, and that's as someone who wrote a small amount of Haskell a decade ago.


The : operator is the linked list data constructor. It takes an element and a list and creates a new linked list by linking the element to the existing list. It does the opposite when used in a pattern match: separates out the first element in a linked list.

It is also an operator, meaning it can be used with infix notation, as in (x : xs). Haskell has something called operator sections, where if one supplies only one of the arguments to an operator it will return a function expecting the other argument. In other words

    (x:) == \xs -> (x:xs)
and

    (:xs) == \x -> (x:xs)
This can be used as in this article, to create a function that prepends x to any list. Another common example is (1+) which increments any number it is given, or (:[]) which turns any value into a one-element list.

It can also be used much more cleverly -- especially considering that any two-argument function can be turned into an operator with backticks -- but then (in my opinion) readability starts to suffer.
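
To make that concrete, here's a minimal sketch (my own examples, with made-up names, not from the article):

    -- Each section is equivalent to the lambda in the comment beside it.
    prependZero :: [Int] -> [Int]
    prependZero = (0 :)        -- \xs -> 0 : xs

    appendZero :: [Int] -> [Int]
    appendZero = (++ [0])      -- \xs -> xs ++ [0]

    increment :: Int -> Int
    increment = (1 +)          -- \n -> 1 + n

    singleton :: Int -> [Int]
    singleton = (: [])         -- \x -> x : []

    main :: IO ()
    main = print (prependZero [1, 2], increment 41, singleton 7)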


So it's the equivalent of Lisp's cons then?


Yeah so prepend is just currying cons with the head


Yes, : is pronounced cons.


It’s partial application of cons via the operator, admittedly a poor choice from Haskell, a language which likes operators a bit too much. I think eta-expansion makes the whole thing clearer: (\xs -> x:xs) but most Haskellers would disagree.

The article also features examples of point-free style, another unfortunate trend for readability.

As long as you use operators sparingly, don’t abuse partial application and prefer explicit lambdas to composition, Haskell is fairly readable. The issue is that approximately no Haskeller writes Haskell this way.
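
For illustration (my sketch, with a made-up function name), the same thing in the styles being discussed:

    shout :: [String] -> [String]
    shout = map (++ "!")                  -- point-free, with an operator section

    shout' :: [String] -> [String]
    shout' = map (\s -> s ++ "!")         -- explicit lambda, still partially applied

    shout'' :: [String] -> [String]
    shout'' xs = map (\s -> s ++ "!") xs  -- fully spelled out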


I don't use Haskell nearly enough to call myself a Haskeller but I will still disagree. Yes, operator sections are yet another thing to learn but I find them very intuitive, and actually easier to read than the equivalent lambda expression because I don't have to match up the bound variable.

(For example, (\x -> x ++ y) and (\y -> x ++ y) look pretty similar to me at first glance, but (++y) and (x++) are immediately distinguishable.)

Of course, this is reliant on knowing the operators but that seems like a mostly orthogonal issue to me: You still need to know the operator in the lambda expression. That said, the niceness of sections gives people yet another reason to introduce operators for their stuff when arguably they already are too prevalent.


It’s not that those language features are hard to understand. They’re all syntactic and don’t bring a ton of theory with them. It’s just that the tower of understanding for basic programs is very tall, and the tendency to introduce abstraction essentially never ends. I spent ten years with Haskell as my go-to language and there are still things I don’t understand and haven’t explored. It’s not like that with Python, or Go, or even Rust.


I honestly just skimmed the code and assumed it probably does what I would guess it does. It seemed to make sense and be straightforward... assuming my guesses were right, I guess?


Not all programming languages are obvious derivatives of C. Haskell is pretty readable once you spend some time getting to know the syntax.


The semantics are very different. With its lazy evaluation, Haskell is much closer to a mathematical proof than to a C program with different syntax.


I prefer to think of Haskell-like lazy evaluation as constructing a dataflow graph. The expression `map f (sort xs)` constructs a dataflow graph that streams each output of the sort function to `f`, and then printing the result begins running that job. Through that lens, the Haskell program is more like constructing a Spark pipeline. But you can also think of it as just sorting a list then transforming each element with a function. It only makes a difference in resource costs or when there's potential nontermination involved, unless you use unsafe effects (e.g.: unsafePerformIO).
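
A tiny sketch of that demand-driven reading (my example, not the parent's):

    import Data.List (sort)

    -- Builds a description of the computation; nothing runs yet.
    pipeline :: [Int] -> [Int]
    pipeline xs = map (* 10) (sort xs)

    main :: IO ()
    main = print (take 2 (pipeline [5, 1, 4, 2, 3]))
    -- print demands two elements, and that demand is what pulls
    -- values through map and sort.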

Is there a way to think of proofs as being lazy? Yes, but it's not what you think. It's an idea in proof theory called polarization. Some parts of a proof can be called positive or negative. Positive parts roughly correspond to strict, and negative roughly correspond to lazy.

To explain a bit more: Suppose you want to prove all chess games terminate. You start by proving "There is no move in chess that increases the number of pieces on the board." This is a lemma with type `forall m: ChessMove, forall b: BoardState, numPieces b >= numPieces (applyMove m b)`. Suppose you now want to prove that, throughout a game of chess, the amount of material is decreasing. You would do this by inducting over the first lemma, which is essentially the same as using it in a recursive function that takes in a board state and a series of moves, and outputs a proof that the final state does not have more material than the initial state. This is compact, but intrinsically computational. But now you can imagine unrolling that recursive function and getting a different proof that the amount of material is always decreasing: simply write out every possible chess game and check. This is called "cut elimination."

So you can see there's a sense in which every component of a proof is "executable," and you can see whether it executes in a strict or lazy manner. Implications ("If A, then B") are lazy. Conjunctions ("A and B") can be either strict or lazy, depending on how they're used. I'm at the edge of my depth here and can't explain more -- in honesty, I never truly grokked proof polarization.

Conversely, in programming languages, it's not strictly accurate to say that the C program is strict and the Haskell program is lazy. In C, function definitions and macro expansions are lazy. You can have the BAR() macro create a #error, and yet FOO(BAR()) need not create a compile error. In Haskell, bang patterns, primitives like Int#, and the `seq` operator are all strict.

So it's not the case that proofs are lazy and C is strict and Haskell is lazy so it's more like a proof. It's not even accurate to say that C is strict and Haskell is lazy. Within a proof, and within a C and a Haskell program, you can find lazy parts and strict parts.
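
To illustrate the strict escape hatches on the Haskell side (my sketch; BangPatterns is a standard GHC extension):

    {-# LANGUAGE BangPatterns #-}

    -- The bangs force the accumulator on each call, so no chain of
    -- thunks builds up; the `seq` operator can express the same thing.
    strictSum :: [Int] -> Int
    strictSum = go 0
      where
        go !acc []       = acc
        go !acc (x : xs) = go (acc + x) xs

    main :: IO ()
    main = print (strictSum [1 .. 1000000])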


How much time have you spent with functional programming languages?


We once hired an F# developer to write Clojure, and within a week he was writing some of the clearest Clojure I've read.

Learn paradigms, not languages...



