Recently, I ripped the usage examples out of a Rust project's README.md and put them in doc comments. Almost all of them were broken due to small changes over time, and I never remembered to update the readme. `cargo test` runs the code examples in doc comments as mini integration tests, so now the examples never rot. I wish more languages and tools had this feature.
It means having to go to the linked docs (which are automatically pushed to the repo's github pages) to see examples, but I think this is a reasonable tradeoff.
That’s not better because it implies these are all destructive function calls.
Mutating your inputs is not functional programming. And pipes are effectively compact list comprehensions. Comprehensions without FP is Frankensteinian.
That is a big enough DX problem that I would veto using this on a project.
You’ve implied what I’ll state clearly:
Pipes are for composing transformations, one per line, so that reading comprehension doesn't nosedive as subsequent operations accumulate.
Chaining on the same line is shit for reading and worse for git merges and PR reviews.
It is not a surprise that overriding the implementation of an operator's type coercion works; by definition, that overrides the behavior of the operator's type coercion.
I actually don't think you are wrong, but I'm not backing that up with any actual data.
I happened to know it because of how the hyperHTML micro-library works; the author went into great detail about it and a ton of other topics. But my gut would say that the average js dev doesn't know about it.
But then... it's useful for creating component frameworks which... most js devs use. Which doesn't mean they know how they work under the hood. But... a lot of devs I've met specifically choose a framework because of how it works under the hood.
... so... I really have no idea how many people know this. I'm still betting it's less than average.
Well, Proxy objects do allow you to override the behavior of any property, including Symbol properties. Symbol.iterator is pretty widely used to create custom iterable objects, so I would expect curious devs to have taken a look at what else can be done through the use of Symbol properties.
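A minimal sketch of what I mean, with toy names of my own (nothing from the featured library):

// A Proxy can intercept any property access, including Symbol-keyed ones.
const traced = new Proxy({}, {
  get(target, prop) {
    console.log('accessed:', typeof prop === 'symbol' ? prop.toString() : prop);
    return Reflect.get(target, prop);
  }
});

traced.foo;   // logs: accessed: foo
`${traced}`;  // logs: accessed: Symbol(Symbol.toPrimitive), then accessed: toString

// Symbol.iterator is the same mechanism, used to make a plain object iterable.
const range = {
  from: 1,
  to: 3,
  *[Symbol.iterator]() {
    for (let i = this.from; i <= this.to; i++) yield i;
  }
};
console.log([...range]);  // [1, 2, 3]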
I think that depends on the person, the language, and how familiar they are with the language. Someone's "what the fuck" is another's "obviously it can do that".
Swift, IMHO. Grew up on ObjC and the absolutely crazy things you could pull off dynamically at runtime. You can definitely feel they did not want that in Swift. There's operator overloading, but idk if I'd count that as contorting in surprising ways *shrugs*
No way dude, this does a disservice to the insanity that is C++'s syntax. Wake me up when you have 6 different initialization syntaxes or fun things like 4[array]
It sounds like using fewer tokens (or less output, due to a more compact syntax) is a micro-optimization; code should be written for readability, not for compactness. That said, there are some really compact programming languages out there if that is what you need to optimize for.
IMO it's more likely to get confused because there are fewer unique tokens to differentiate between syntax (e.g. a pipe when we want bitwise-OR, or vice-versa)
I’ve heard it said before on HN that this is not true in general because more tokens in familiar patterns helps the model understand what it’s doing (vs. very terse and novel syntax).
Otherwise LLMs would excel at writing APL and similar languages, but seems like that’s not the case.
In a similar way to the featured project, Chute also uses proxies to work like a pipeline operator. But like in your reply, Chute uses a dot-notation style to chain and send data through a mix of functions and methods.
You might like to see how Chute uses proxies, as it requires no `chainWith` or similar setup step before use. Without setup, Chute can send data through global or local, top-level or nested, native or custom, unary, curried or non-unary functions and methods. It gives non-unary functions the current data at a specific argument position by using a custom-nameable placeholder variable.
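For the curious, the general shape of that trick looks something like this. To be clear, this is a hypothetical sketch with made-up names (`chain`, `to`, `value`), not Chute's actual implementation:

function chain(value) {
  return new Proxy(Object.create(null), {
    get(_, prop) {
      if (prop === 'value') return value;                // unwrap at the end
      if (prop === 'to') return fn => chain(fn(value));  // pipe through any plain function
      return (...args) => chain(value[prop](...args));   // forward to a method on the value
    }
  });
}

// Hypothetical usage:
chain([3, 1, 2])
  .map(x => x * 2)
  .to(xs => xs.slice(0, 2))
  .value                     // [6, 2]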
Since this library leverages Symbol.toPrimitive, you may also use operators besides bitwise-OR. Additionally, the library does not seem to dispatch on the `hint` parameter[0]. Now I want to open a JS REPL, try placing this library's pipe object into string template literals, and see what happens.
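For reference, the hint is right there if a library wants to dispatch on it; a standalone illustration of Symbol.toPrimitive (not this library's code):

const box = {
  n: 42,
  [Symbol.toPrimitive](hint) {
    if (hint === 'number') return this.n;         // arithmetic, unary +, bitwise ops
    if (hint === 'string') return `<${this.n}>`;  // template literals, String()
    return `box(${this.n})`;                      // 'default' hint, e.g. == or binary +
  }
};

console.log(box | 0);    // 42          ('number' hint from the bitwise OR)
console.log(`${box}`);   // "<42>"      ('string' hint from the template literal)
console.log(box + '');   // "box(42)"   ('default' hint)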
In case it might interest anyone, I wrote a similar vanilla JS function last year called Chute. Chute chains methods and function calls using dot-notation.
I am all for clean syntax but I feel like JS has already reached a nice middle ground between expressiveness (especially w/ map/reduce/filter) and readability. I'd personally rather not have another syntax that everyone will have to learn unless we're already moving to a new language.
I think JS's map/reduce/filter design is one of the worst ones out there actually - map has footguns with its extra arguments and everything gets converted to an array at the drop of a hat. Still, pipeline syntax probably won't help fix any of that.
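The classic demonstration of the extra-arguments footgun:

['1', '2', '3'].map(parseInt);              // [1, NaN, NaN] (the index gets passed as the radix)
['1', '2', '3'].map(s => parseInt(s, 10));  // [1, 2, 3]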
I always thought JS map/filter/reduce felt quite nice, especially when playing around with data in the REPL. Java maps with all the conversions back and forth to streams are clumsy.
> everything gets converted to an array at the drop of a hat
Can you name an example? IME the opposite is a more common complaint: needing to explicitly convert values to arrays from many common APIs which return e.g. iterables/iterators.
Right, but I’m not clear on what gets converted to an array. Do you mean more or less what I said in my previous comment? That it requires you (your code, or calling code in general) to perform that conversion excessively?
I think what confused me is the passive language: "everything gets converted" sounds (to me) like the runtime or some aspect of language semantics is converting everything, rather than developers. Whereas this is the same complaint I mentioned.
One gripe I have is that the result of map/filter is always an array. As a result, doing `foo.map(...).filter(...).slice(0, 3)` will run the map and the filter on the entire array even if it has hundreds of entries and I only need the first 10 to find the 3 that match the filter.
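One way around that today is to go lazy by hand. A rough sketch with generators (newer runtimes also ship iterator helpers with a built-in .take(), but I'm not relying on those here):

function* lazyMap(it, f)    { for (const x of it) yield f(x); }
function* lazyFilter(it, p) { for (const x of it) if (p(x)) yield x; }
function* lazyTake(it, n)   { if (n <= 0) return; for (const x of it) { yield x; if (--n <= 0) return; } }

const foo = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
// Stops pulling from foo as soon as three elements have passed the filter.
const firstThree = [...lazyTake(lazyFilter(lazyMap(foo, x => x * 2), x => x > 4), 3)];
// [6, 8, 10], and only the first five source elements were ever mapped/filtered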
I agree, but to steelman it, what about custom functions? I think just doing it naively is perfectly fine. Or, if you want, use some pipe utility. Or wrap the array, string, etc. with your own custom methods.
If you're interested in the Ruby language too, check out this PoC gem for an "operator-less" syntax for pipe operations using regular blocks/expressions like every other Ruby DSL.
C++ is the reason people have that reaction. The quintessential example in introductory texts for operator overloading is using bit-shift operators to output text. I mean, come on - if that’s your example, don’t complain when people follow suit and get it wrong.
C++ has std::format these days, which does a far saner thing; people are too quick to throw out the baby with the bathwater when it comes to the bad parts.
Some OO is fine, just don't make your architecture or language entirely dependent on it. Same with operator overloading.
When it comes to math-heavy workloads, you really want a language that supports operator overloading (or a language full of heavyweight vector primitives); doing it all without them just becomes painful for other reasons.
Yes, the early C++ _STDLIB_ was shit due to boneheaded architectural and syntactic decisions (and memory safety issues are another whole chapter), but that doesn't take away from the fact that the language is a damn powerful and useful one.
std::format in C++20 is just for the string manipulation half but you still left shift cout by the resulting string to output text in canonical C++.
C++23 introduced std::print(), which is more or less the modernized printf() C++ probably should have started with and also includes the functionality of std::format(). Unfortunately, it'll be another 10 years before I can actually use it outside of home projects... but at least it's there now!
While that operator is also used for bit-shift, it is not the bit-shift operator. It's not that the bit-shift operator is used for stream direction; it's that the same operator is used for both stream direction and bit-shifts. And what code is operating on both high-level abstract streams and bit-shifts at the same time?
Oh wait, looked at the source again, so it's some weird stateful collection thing triggered by the type coercion? By now I'm wishing that it was operator overloading.
Pipes are great in environments where "everything is a string" (bash, etc), but do we really need them in javascript? I have yet to see a compelling example.
Pipes are great where you want to chain several operations together. Piping is very common in statically typed functional languages, where there are lots of different types in play.
Sequences are a common example.
So this:
xs.map(x => x * 2).filter(x => x > 4).sorted().take(5)
In pipes this might look like:
xs |> map(x => x * 2) |> filter(x => x > 4) |> sorted() |> take(5)
In functional languages (of the ML variety), convention is to put each operation on its own line:
xs
|> map(x => x * 2)
|> filter(x => x > 4)
|> sorted()
|> take(5)
Note this makes for really nice diffs with the standard Git diff tool!
But why is this better?
Well, suppose the operation you want is not implemented as a method on `xs`. For a long time JavaScript did not offer `flatMap` on arrays.
You'll need to add it somehow, such as on the prototype (nasty) or by wrapping `xs` in another type (overhead, verbose).
With the pipe operator, each operation is just a plain-ol function.
This:
xs |> f
Is syntactic sugar for:
f(xs)
This allows us to "extend" `xs` in a manner that can be compiled with zero run-time overhead.
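For example, using the F#-style |> from the snippet above (which is proposal syntax, not shipping JavaScript), with flatMap as nothing more than a local function:

// A plain function; nothing gets patched onto Array.prototype.
const flatMap = f => xs => xs.reduce((acc, x) => acc.concat(f(x)), []);

// Without the operator:
flatMap(x => [x, x * 10])(xs)

// With it, the same call reads left to right:
xs |> flatMap(x => [x, x * 10])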
If the language or std lib already allows for chaining then pipes aren't as attractive. They're a much nicer alternative when the other answer is nested function calls.
e.g.
So this:
take(sorted(filter(map(xs, x => x * 2), x => x > 4)), 5)
To your example:
xs |> map(x => x * 2) |> filter(x => x > 4) |> sorted() |> take(5)
is a marked improvement to me. Much easier to read the order of operations and which args belong to which call.
First of all, with the actual proposal, wouldn't it actually be like this, with the %?
xs
|> map(%, x => x * 2)
|> filter(%, x => x > 4)
|> sorted(%)
|> take(%, 5);
Anything that can currently just chain functions seems like a terrible example because this is perfectly fine:
xs.map(x => x * 2)
.filter(x => x > 4)
.sorted()
.take(5)
Not just fine but much better. No new operators required and less verbose. Just strictly better. This ignores the fact that sorted and take are not actually array methods, but there are equivalents.
But besides that, I think the better steelman would use methods that don't already exist on the prototype. You can still make it work by adding it to the prototype but... meh. Not that I even like the proposal in that case.
This is just different syntax for nesting function calls (i.e. c(b(a(value))) becomes value | a | b | c), right? Definitely would make code more readable if this was just something in JS or a compiler where it’s the same as normally calling functions.
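For comparison, the un-sugared version is a one-line helper (a sketch, not the featured library's approach):

// pipe(value, a, b, c) is just c(b(a(value)))
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

pipe(' 42 ', s => s.trim(), Number, n => n + 1);   // 43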
I would actually love extension of TS with operator overloading for vector maths (games, other linear algebra, ML use cases). I wouldn’t want libraries to rely on it, but in my own application code, it can sometimes be really helpful.
Overengineered in my view. What's wrong with `x | f` being `f(x)`? Then `x | f | g` can be read as `g(f(x))` and you're done. I don't see any reason to make it more complicated than that.
Piping syntax is nice for reading, but it's hard to debug. There's no clear way to "step through" each stage of the pipe to see the intermediate results.
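A tap-style helper recovers some of that, though it's still clunkier than stepping through plain statements in a debugger. A sketch, assuming a simple pipe() helper like the ones discussed above:

const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);
const tap  = label => value => (console.log(label, value), value);   // log and pass through

pipe([1, 2, 3, 4],
  xs => xs.map(x => x * 2),
  tap('after map'),         // logs: after map [ 2, 4, 6, 8 ]
  xs => xs.filter(x => x > 4),
  tap('after filter'));     // logs: after filter [ 6, 8 ]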