
> In my practice of FP, it tends to force me to make state explicit and available.

Ha, I think I get what you mean. The state that you do have is generally called out more visibly. I do think that's a product of defaulting to state avoidance, since the thing you're avoiding becomes more visible whenever it is present.

I think FP does a good job of pushing as much state as possible into the "state in the continuation" category, but there seems to be a hard kernel of mutable state that doesn't dissolve so easily. On the bright side, that makes for a bright, shining target to aim research at (LVars, CRDTs, logic programming).

> there’s something that feels very distinct between static and runtime state

For what it's worth, I'm only thinking about runtime state here. Can you call out where it seems like I'm referring to static state?



> The state that you do have is generally called out more visibly.

You see, this is the exact opposite of what I was discovering in this codebase. Sure, in Haskell, you have an IO monad that effectively puts bright blinking lights and klaxons over all the state, and a compiler that forces you to use it. But the vast majority of functional-style code is not written in a language like that. It's written in a language like Scala or Clojure or JavaScript that allows you to sneak a side effect into any portion of the call tree. And, by being several layers deep in the call tree, it's not visible. It's hidden, and, when it's being done in the context of a generally functional idiom, it's downright pernicious.
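
To make that concrete, here's a contrived Scala sketch (all names hypothetical) of the shape I kept finding: a side effect buried a couple of layers down in an otherwise functional-looking pipeline.

    import scala.collection.mutable

    case class Order(id: Int, address: String, amount: Double)

    // Shared mutable cache, standing in for Redis or a global var.
    object RegionCache {
      private val cells = mutable.Map.empty[Int, String]
      def put(id: Int, region: String): Unit = cells(id) = region
    }

    object Pipeline {
      // From the call site, this reads as a pure functional pipeline...
      def summarize(orders: List[Order]): Double =
        orders.map(enrich).filter(_._2.nonEmpty).map(_._1.amount).sum

      private def enrich(o: Order): (Order, String) = (o, lookupRegion(o))

      // ...but two layers down, a write to shared state sneaks in.
      // Nothing in summarize's signature hints at the mutation.
      private def lookupRegion(o: Order): String = {
        val region = if (o.address.contains("DE")) "eu" else "us"
        RegionCache.put(o.id, region) // the hidden side effect
        region
      }
    }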

You hit the nail on the head in your comment further up. I did find my mutable variables. My epiphany after seeing this code is that, in a language where the only thing that can enforce true purity (as opposed to simply trying to avoid mutable variables as a general rule) is programmer discipline, the natural equilibrium point is code that is still every bit as riddled with state. It just delegates the management of that state out to Redis or whatever.

Discipline is a good enforcement strategy for small individual projects. It's possibly also a good one for open source projects. I don't believe it can work for corporate team projects, because social factors at play generally won't allow it to work. There will always be some business constraint that prompts people to take shortcuts.


That definitely gets at the important distinction between FP as defined and FP as practiced. That said, I'd argue that proper state management requires discipline in all cases. I've seen OO codebases that ran on lots of tricky shared mutable Redis vars as well!


Whilst trying to avoid the "No True Scotsman" fallacy, I'd argue that this system is FP only in name, but not in spirit. Even in Haskell, you can spend all your time in the IO monad and use IORefs as shared mutable cells, but you'd have a hard time arguing that such code is "functional".

> It's hidden, and, when it's being done in the context of a generally functional idiom, it's downright pernicious.

I think what we're seeing is a distinction between _syntactically FP_ and _semantically FP_ qualities. It's easy to apply _syntactic_ idioms obtained from FP, as they allow you to avoid and reduce state wherever possible. However, in a language where mutable state is assumed, and it's your responsibility not to use it, you don't get the _semantic_ guarantees about the behavior of your program.
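
To illustrate (hypothetical Scala, since it's one of the languages in question): both functions below wear the same syntactic FP clothes, but only the first carries the semantic guarantee.

    object SyntaxVsSemantics {
      // Semantically FP: same input, same output, every time.
      def label(xs: List[String]): List[String] =
        xs.zipWithIndex.map { case (s, i) => s"${i + 1}: $s" }

      // Syntactically FP (it's still just a map), but the lambda closes
      // over a mutable cell, so two identical calls give different results.
      var seen = 0
      def labelSneaky(xs: List[String]): List[String] =
        xs.map { s =>
          seen += 1 // mutation smuggled into a "functional" idiom
          s"$seen: $s"
        }
    }

The second version passes any purely syntactic review (no loops, no assignment visible at the call site), yet referential transparency is gone.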

I don't like having to exercise discipline, because no matter how good I am at it, I'm only a temporary part of any software system. IMHO, the fundamental goal of software architecture is to institute bias directly into a codebase to support the problem domain. The way in which you work with a codebase is informed by how that codebase wants you to work with it: you'll naturally avoid things that are made difficult to do, and prefer things that are made easier to do.

Programming languages are essentially the basement level of any given architecture, because it is nearly impossible to override the decisions your language makes for you. It is almost always going to be easier to use what the language provides you, and if the language provides global mutable state, it will always be tempting to couple two otherwise separate regions of your codebase by a mutable cell. Some languages especially make FP idioms difficult (hi, Java), so you end up fighting an uphill battle -- unwinnable if you're not extraordinarily careful.
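
For instance (hypothetical names), the path of least resistance such a language offers looks like this:

    // The cheapest way to get data from one region of a codebase to
    // another is often the global cell the language hands you for free.
    object Shared { var currentTenant: String = "" }

    object Requests { def handle(tenant: String): Unit = Shared.currentTenant = tenant }
    object Billing  { def invoice(amount: Double): String = s"bill ${Shared.currentTenant}: $amount" }
    // Requests and Billing are now invisibly coupled through Shared.

Nothing in either module's interface records the dependency; you only discover it when the ordering of calls starts to matter.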

> There will always be some business constraint that prompts people to take shortcuts.

To borrow a phrase, I don't think FP can "win" until we deal with the forces that make mutable cells such an attractive choice. There are multiple facets to the problem; it's not enough to just pick languages that make FP the easier option (or mutable shared state the harder option). IMO, we need to have an industrial expectation of domain modeling, and architect our systems specifically with our problem domain in mind, so that problems in that domain -- and expected evolution in that domain -- can be handled not only easily, but intuitively within the set architecture. (Lispers go wild over defining their own language within Lisp for exactly this reason -- but all things in moderation.)


> Whilst trying to avoid the "No True Scotsman" fallacy, I'd argue that this system is FP only in name,

I think that, at least for the purposes of the way of thinking that I am moving toward, it isn't exactly that, so much as that we seem to have hit a point of very vehement agreement, except perhaps for some slightly different coloring on the pessimism/optimism scale.

I agree with you that well-done functional code is much nicer to work with, and that the spot where this code I was working with went off the rails is that it kept departing from functional style whenever the original authors thought it convenient to do so. It's more that I'm discovering that FP has an Achilles heel, and it turns out that it was exactly the same Achilles heel that produced my ultimate frustration with SOLID: in typical usage, it's an unstable equilibrium point. I suspect, in retrospect, that one movement failed and the other is doomed to fail because, as you allude to in that last paragraph, they're both trying to solve the wrong problem.

Other background information here is that I've lately been learning Smalltalk and Rust, and, as a result, seeing how eliminating state is far from the only way to tame it. And I've been noticing that, from a readability and maintainability perspective, many of the most compelling open source codebases I'm aware of tend to be neither functional nor object-oriented.


> as a result, seeing how eliminating state is far from the only way to tame it.

I agree! Elimination is but an extreme form of control :)

I have strong hopes that logic programming will provide the next generation of formal tools for controlling (rather than eliminating) state. My ideal paradigm would be "logical shell, functional core" (swapping out "imperative shell"). But logical methods are (a) unknown, (b) niche, and (c) overshadowed by the barest treatment students receive of Prolog, so there's still a long way to go here.

(FWIW, I think of things like LVars, CRDTs, and distributed protocols more broadly as having a fundamentally logical flavor. See the recent CALM Theorem for more on that.)

(EDIT: Here's a very recent comment I made where I dig a bit more into those logical items. https://news.ycombinator.com/item?id=25567740)
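
To give a flavor of what I mean by "logical", here's a minimal Scala sketch of a grow-only counter CRDT. The merge is a monotone join in a semilattice -- commutative, associative, idempotent -- which is exactly the territory CALM stakes out.

    // G-Counter CRDT: one count per replica, merged by pointwise max.
    case class GCounter(counts: Map[String, Long]) {
      def increment(replica: String): GCounter =
        GCounter(counts.updated(replica, counts.getOrElse(replica, 0L) + 1))

      // Monotone join: replicas converge regardless of delivery order.
      def merge(other: GCounter): GCounter =
        GCounter((counts.keySet ++ other.counts.keySet).map { r =>
          r -> math.max(counts.getOrElse(r, 0L), other.counts.getOrElse(r, 0L))
        }.toMap)

      def value: Long = counts.values.sum
    }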


I was thinking of “static” state as the idea of state existing “in the program text” and “in the continuation”. Perhaps I don’t yet fully grasp what you’re thinking of with those ideas of state, especially given the correspondence between continuation-passing style and ANF.

I had seen it as “remaining reduction”, which in some sense is embodied in the syntax tree being reduced. So in that sense, it’s kind of static from the written text of the program (though duplication can force that to be a runtime concern).


Oh, I see! Yes, the "continuation" I'm referring to is the "remaining reduction". I like to think of FP programs proceeding by reduction, so there's no need to introduce any auxiliary state just to describe the computation -- it's always present in the program itself.

In a traditional imperative program, the "continuation" is the call stack + heap, both of which need to be introduced and managed separately from the "remaining reduction". In formal semantics, you usually have to introduce a reduction on "environments", which is a tuple of the remaining reduction and the separately-managed forms of state. This, specifically, is why I think of state in imperative languages as "explicit" -- it's a separate entity in the operational semantics.

I think the confusion may be that I'm thinking very much in terms of semantics, and almost not at all in terms of syntax. If you're an interpreter in the middle of executing a program, what information do you need to do your job? (We execute code mentally to understand and grasp it, after all.) In the imperative case, the program text (reduced up to this point) is "not enough" to tell what work remains. In the functional case, you need nothing else. State is a real, additional, explicit thing in the one case, and an illusory, implicit thing in the other.
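
A toy Scala sketch of that difference (toy languages, obviously): the functional step function consumes and produces nothing but the remaining program, while the imperative one has to thread a store alongside it.

    // Functional: one small-step reduction on terms. The whole "state of
    // the computation" is the remaining term itself.
    sealed trait Expr
    case class Lit(n: Int)           extends Expr
    case class Add(l: Expr, r: Expr) extends Expr

    def step(e: Expr): Expr = e match {
      case Add(Lit(a), Lit(b)) => Lit(a + b)
      case Add(l: Lit, r)      => Add(l, step(r))
      case Add(l, r)           => Add(step(l), r)
      case lit                 => lit
    }

    // Imperative: the step function must carry a store (the heap/stack
    // analogue) as a separate, explicit part of the configuration.
    sealed trait Stmt
    case class Assign(name: String, value: Int) extends Stmt
    case class Block(stmts: List[Stmt])         extends Stmt

    type Store = Map[String, Int]

    def stepImp(s: Stmt, store: Store): (Option[Stmt], Store) = s match {
      case Assign(x, v)    => (None, store.updated(x, v))
      case Block(Nil)      => (None, store)
      case Block(hd :: tl) =>
        val (rest, store2) = stepImp(hd, store)
        (Some(Block(rest.toList ++ tl)), store2)
    }

That `(Stmt, Store)` pair is the "environment" I mean: a separate entity in the operational semantics, whereas `step` simply has no slot for extra state to live in.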



