> If the value of human labor is going to zero, which some say ai will induce
These "some" are founders of AI companies and investors who put a lot of money into such companies. Of course, the statements that these people "excrete" serve an agenda ...
Yes, they are similar and in both cases what we know about them was passed down by their unreliable students with an agenda. I have studied both over the past few years and I find myself disagreeing with Plato's Socrates often whereas I find the Jesus of the Gospels much harder to argue against.
I am not a fan of function chaining in the style advocated in the article. In my experience, functional abstractions always add function call indirection (which may or may not be optimized away by the compiler).
You don't need a library implementation of fold (which can be used to implement map/flatmap/etc). Instead, it can be inlined as a tail recursive function (trf). This is better, in my opinion, because there is no function call indirection, and the trf will have a name that is clearer than fold, reducing the need for inline comments or inference on the part of the programmer.
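A minimal Rust sketch of the tradeoff (all names hypothetical; note that Rust does not guarantee tail-call elimination, so this shows the shape of the idea, not a guaranteed optimization):

```rust
// Library fold: generic and concise, but the name says nothing about intent.
fn total_len_fold(words: &[&str]) -> usize {
    words.iter().fold(0, |acc, w| acc + w.len())
}

// The same logic inlined as a named recursive function: the name carries
// the intent, and there is no closure indirection to see through.
fn total_len(words: &[&str], acc: usize) -> usize {
    match words {
        [] => acc,
        [w, rest @ ..] => total_len(rest, acc + w.len()),
    }
}

fn main() {
    let ws = ["zig", "rust"];
    assert_eq!(total_len_fold(&ws), total_len(&ws, 0));
}
```

Either version works; the argument is about what a reader at the call site can infer from the name alone.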
I also am not a fan of a globally shared Result class. Ideally, a language has lightweight support for defining sum/union types and pattern matching on them. With Result, you are limited to one happy path and one error path. For many problems, there are multiple successful outputs or multiple failure modes and using Result forces unnecessary nesting which bloats both the code for unpacking and the runtime objects.
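A hedged sketch of the "flat sum type instead of nested Result" point, in Rust (all type and variant names are made up for illustration):

```rust
// One flat sum type naming every outcome, instead of something like
// Result<Result<i64, FloatErr>, EmptyErr> with nested unpacking.
enum Parsed {
    Int(i64),
    Float(f64),
    Empty,
    Malformed(String),
}

fn parse_number(s: &str) -> Parsed {
    let t = s.trim();
    if t.is_empty() {
        Parsed::Empty
    } else if let Ok(n) = t.parse::<i64>() {
        Parsed::Int(n)
    } else if let Ok(x) = t.parse::<f64>() {
        Parsed::Float(x)
    } else {
        Parsed::Malformed(t.to_string())
    }
}

fn describe(p: Parsed) -> String {
    // A single flat match handles every case; no nesting in the data
    // and no nesting in the unpacking code.
    match p {
        Parsed::Int(n) => format!("int {n}"),
        Parsed::Float(x) => format!("float {x}"),
        Parsed::Empty => "empty".into(),
        Parsed::Malformed(s) => format!("bad: {s}"),
    }
}

fn main() {
    assert_eq!(describe(parse_number("42")), "int 42");
    assert_eq!(describe(parse_number("  ")), "empty");
}
```

With a shared Result, the Float/Empty/Malformed distinctions would have to be encoded as nested Results or an extra error enum anyway.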
Functional abstractions are great for writing code. They let you nicely and concisely express ideas that would otherwise take a lot of boilerplate. Now, for trying to debug said code... gl hf.
I find the direction of zig confusing. Is it supposed to be a simple language or a complex one? Low level or high level? This feature is to me a strange mix of high and low level functionality and quite complex.
The io interface looks like OO but violates the Liskov substitution principle. For me, this does not solve the function color problem, but instead hides it. Every function with an IO interface cannot be reasoned about locally because of unexpected interactions with the io parameter it receives. This is particularly nasty when IO objects are shared across library boundaries: I now need to understand how the library internally manages io if I share that object with my internal code. Code that worked in one context may surprisingly not work in another. As a library author, how do I handle an io object that doesn't behave as I expect?
Trying to solve this problem at the language level fundamentally feels like a mistake to me because you can't anticipate in advance all of the potential use cases for something as broad as io. That's not to say that this direction shouldn't be explored, but if it were my project, I would separate this into another package that I would not call standard.
i think you are missing that a proper io interface should encapsulate all abstractions that care about asynchrony and patterns thereof. is that possible? we will find out. It's not unreasonable to be skeptical but can you come up with a concrete example?
> As a library author, how do I handle an io object that doesn't behave as I expect
you ship with tests against the four or five default patterns in the stdlib, and if anyone wants to do anything substantially crazier to the point that it doesn't work, that's on them; they can submit a PR and you can curbstomp it if you want.
> function coloring
i recommend reading the function coloring article. there are five criteria that describe what makes up the function coloring problem; it's not just that there are "more than one class of function calling conventions"
An interface is a library decision, not a language decision. The level of abstraction possible is part of a language decision. GP is saying that this adds "too much" possible abstraction, and therefore qualifies as "too high level". Another benchmark about "too high level" would be that it requires precisely the "guess the internal plumbing" tests that you describe.
Not really advocating anything, just connecting the two a little better.
What exactly makes it unpredictable? The functions in the interface have a fairly well defined meaning: take this input, run the I/O operation, and return results. Some implementations will suspend your code via user-space context switching; some will just directly run the syscall. This is no different from approaches like the virtual thread API in Java, where you use the same APIs for I/O no matter the context. In the Python world, before async/await, this was solved in gevent by monkey patching all the I/O functions in the standard library. This interface just abstracts that part out.
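As a rough illustration (not Zig's actual API; a hypothetical Rust analogue with made-up names): the caller is written once against the interface, and each implementation decides how the I/O actually runs:

```rust
// Hypothetical "io interface": one trait, multiple execution strategies.
trait Io {
    fn read_line(&mut self) -> String;
}

// "Direct" implementation: just performs the blocking syscall path.
#[allow(dead_code)]
struct BlockingIo;
impl Io for BlockingIo {
    fn read_line(&mut self) -> String {
        let mut s = String::new();
        std::io::stdin().read_line(&mut s).ok();
        s
    }
}

// An event-loop implementation could suspend the caller instead;
// here a scripted version just replays canned input for testing.
struct ScriptedIo {
    lines: Vec<String>,
}
impl Io for ScriptedIo {
    fn read_line(&mut self) -> String {
        self.lines.pop().unwrap_or_default()
    }
}

// The caller is written once, "colorlessly", against the interface.
fn greet(io: &mut dyn Io) -> String {
    format!("hello, {}", io.read_line().trim())
}

fn main() {
    let mut io = ScriptedIo { lines: vec!["world\n".to_string()] };
    assert_eq!(greet(&mut io), "hello, world");
}
```

The debate upthread is about whether this kind of substitution is a feature (one library fits all) or a hazard (behavior depends on which implementation was injected).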
I like Zig a lot, but something about this has been bothering me since it was announced. I can't put my finger on why, I honestly don't have a technical reason, but it just feels like the wrong direction to go.
Hopefully I'm wrong and it's wildly successful. Time will tell I guess.
It's funny how this makes the Haskell IO type so clearly valuable. It is inherently async and the RTS makes it Just Work. Ofc there are dragons afoot always but mostly you just program and benefit.
> Every function with an IO interface cannot be reasoned about locally because of unexpected interactions with the io parameter input. This is particularly nasty when IO objects are shared across library boundaries.
Isn't this just as true of any function using io in any other language?
> As a library author, how do I handle an io object that doesn't behave as I expect?
But isn't that the point of having an interface? To specify how the io object can and can't behave.
It's more about allowing a-library-fits-all than forcing it. You don't have to ask for io, you just should, if you are writing a library. You can even do it the Rust way and write different libraries for example for users who want or don't want async if you really want to.
Yeah, these kinds of "orthogonal" things that you want to set up "on the outside" and then have affect the "inner" code (like allocators, "io" in this case, and maybe also presence/absence of GC, etc.) all seem to cry out for something like Lisp dynamic variables.
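A minimal sketch of the dynamic-variable idea in Rust, using a thread-local with a scoped override (names hypothetical; Lisp's special variables do this natively):

```rust
use std::cell::Cell;

thread_local! {
    // "Dynamic variable": set on the outside, visible to inner code
    // without being threaded through every function signature.
    static VERBOSE: Cell<bool> = Cell::new(false);
}

// Scoped override: the new value is visible for the duration of `f`,
// then the previous value is restored (like Lisp's `let` on a special).
fn with_verbose<R>(on: bool, f: impl FnOnce() -> R) -> R {
    let prev = VERBOSE.with(|v| v.replace(on));
    let out = f();
    VERBOSE.with(|v| v.set(prev));
    out
}

fn inner() -> &'static str {
    // Inner code reads the ambient value without taking a parameter.
    if VERBOSE.with(|v| v.get()) { "loud" } else { "quiet" }
}

fn main() {
    assert_eq!(inner(), "quiet");
    assert_eq!(with_verbose(true, inner), "loud");
    assert_eq!(inner(), "quiet"); // restored after the scope ends
}
```

The same shape works for an ambient allocator or io handle, at the cost of exactly the non-local reasoning the thread is arguing about.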
It depends on how you do it. XSLT 2.0 had <xsl:tunnel>, where you still had to declare them explicitly as function (well, template) parameters, just with a flag. No explicit control over levels, you just get the most recent one that someone has passed with <xsl:with-param tunnel="yes"> with the matching qualified name.
For something like Zig, it would make sense to go one step further and require them to be declared to be passed, i.e. no "tunneling" through interleaving non-Io functions. But it could still automatically match them e.g. by types, so that any argument of type Io, if marked with some keyword to indicate explicit propagation, would be automatically passed to a call that requires one.
fwiw i thought the previous async based on whole-program analysis and transformation to stackless coroutines was pretty sweet, and similar sorts of features ship in rust and C++ as well
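A toy Rust sketch of what "stackless coroutine" means in practice: the `async fn` below is compiled into a state machine whose entire "stack" lives inside the future value, and even a trivial hand-rolled executor can poll it to completion (assumes nothing beyond std; a real executor would park on the waker instead of busy-polling):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// No-op waker plumbing: this toy executor never sleeps, so wakeups do nothing.
fn rw_clone(_: *const ()) -> RawWaker {
    noop_raw_waker()
}
fn rw_noop(_: *const ()) {}
const NOOP_VTABLE: RawWakerVTable = RawWakerVTable::new(rw_clone, rw_noop, rw_noop, rw_noop);
fn noop_raw_waker() -> RawWaker {
    RawWaker::new(std::ptr::null(), &NOOP_VTABLE)
}

// Minimal busy-poll executor: drives any future to completion.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safe because `fut` is never moved after being pinned here.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// The compiler turns this into a stackless state machine: no per-task
// stack is allocated, the suspended state is just data in the future.
async fn double(x: u32) -> u32 {
    x * 2
}

fn main() {
    assert_eq!(block_on(double(21)), 42);
}
```

Zig's old design did a whole-program transform to the same kind of state machine; Rust does it per-function at the type level, which is why the caller needs `.await` and an executor.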
If you are going to attack the sacred text of two billion people, it would be better to avoid a lazy comparison to Hitler. Have you read the Quran? Do you understand the historical roots from which it emerged? Do you know how it has been used and abused? What is the relationship between modern science and islam? How has it been used to justify violence? How has it been used to argue for peace? Have the people who have used it to justify violence understood the original meaning? How does the violence/body count compare to other dogmatic religions, especially christianity?
There is violence in every ideology. To deny this is to deny reality. Singling out one group as uniquely prone to violence is both uncivil and dangerous in my view. That does not mean that one cannot point out the shadow side, but one should look in the mirror of one's own preferred ideology, whether that is christianity, atheism, scientism, nationalism, rationalism, etc., before casting blanket aspersions at others.
> Do you understand the historical roots from which it emerged?
Justification of one of the biggest, fastest, and most brutal conquests in history? Because everybody who wasn't a Muslim was fair game for killing or slavery? Because all non-Muslim land really belongs to the Muslims?
That's what it actually says.
> Singling out one group as uniquely prone to violence is both uncivil and dangerous in my view.
Something that I very clearly didn't do. And there was nothing lazy about my comparison.
Chalk it up to youth perhaps, but this piece would benefit, in my view, from more "for me..." or "I found..." and less "You should/must/are..."
I have reached a point in my life in which I recognize that I do not generally appreciate direct advice, especially not the unsolicited variety. Even the bits that I agree with in this piece are tainted by the many cases where I did the exact opposite of what he advises and excelled academically.
I cannot express how liberating it feels to opt out of "advanced" editor tools like lsp. I program in neovim with no plugins, no syntax highlighting and no autocomplete of any kind. There is a discipline that this imposes that I believe leads to better quality programs. It's not for everyone I suppose, but I really recommend trying it.
To each their own. I quit using syntax highlighting about 10 years ago and won't ever go back (been programming for 25 years, vim/neovim user for 24 years). I just like it better, it works for me. It definitely does not make things "difficult for the sake of it" (for me). There are dozens of us! :)
(As to the rest: I use a pretty minimal set of plugins and I use the built in nvim C-o/C-p or C-x C-o/p "dumb" autocomplete. At least I think it's built in...)
(To address sibling comment: If I were colorblind, I would lead with that in any conversation about syntax highlighting; I am not colorblind.)
To answer the question: it's a feeling, like lots of things in software development. I tried "no syntax highlighting", found that I liked it, and I no longer use syntax highlighting. To say "specifically" how it's "better"... I'm not even saying it's better. "I like no-syntax-highlighting" is the statement I'm making (which, when it comes to syntax highlighting, is a statement a lot of people have issues with). So, from my personal experience, I take issue with the statement that no-syntax-highlighting is making things "difficult for the sake of it".
Try this out for analogy: I ate Red Baron pizzas every Friday night for 15 years, then I heard about homemade pizza 10 years ago. I tried making homemade pizza. It was good! ("I tried it and liked it") Now I only eat homemade pizza on Fridays. How is homemade pizza specifically better? It's better because I like it more. That's all there is to it. It's a preference.
(For the analogy to work, you have to like or at some point have liked Red Baron frozen pizzas. I happen to like them... the analogy is flawed though, I admit!)
(Let me preempt criticism that I'm comparing Red Baron frozen pizzas to syntax highlighting. I am not. It's only about the preference, not the object of the preference.)
Not op, but in my case a lifetime of colourblindness has desensitised me to colour as an indicator.
I have my editor configured with zero highlighting for keywords and syntactic elements. Admittedly, I have compilation/lint/syntax/type check errors set to invert the erroneous block, black background white text.
Syntax and keyword highlighting is just noise given I’ve been trained by decades of colourblind unfriendly interfaces
Syntax highlighting doesn't necessarily mean color, though. Using boldface to highlight keywords is another option that is traditional in some circles (e.g. Delphi has been doing that for 30 years now).
that's a very good reason to not use syntax highlighting. If that is what the other guys are dealing with, I withdraw my critique but I don't get the impression that is the case
I recently found out that Mitchell Hashimoto has a setup like this, which blew apart my belief that you need modern tooling to be productive. Do you not find that you fatigue more quickly as a result of having to actively recall everything though? I can't understand how doing things like this would actually result in better code.
The trick is to avoid idle browsing of the code. Be intentional about what you need to do.
I use tools like grep/ripgrep to get a more focused view of the code. Then, with the line number, I can jump directly to where I need to be. Same with tools like linters and compilers.
In the same file, I often use search instead of line/character movement. You search for a symbol and just cycle through its locations.
I don't think it leads to better code. But badly organized code will make this harder, so you tend to think about organization.
I haven’t made the jump yet but I believe you. My autocomplete has been broken on Emacs for a year, and I rarely miss it. I never use code actions, nor goto definition. I do enjoy real time errors, but they often pop up with such a lag that I’ve already run the compiler and it has shown me the error itself. So I dream of just turning off LSP — it’s not like it makes me such a better programmer than 20 years ago.
Re: syntax highlighting, I don’t know how people can work with their harlequin-on-LSD themes that are a constant distraction with no semantic benefit. I have gravitated toward mostly mono- or duochromatic themes, while 99% are a vomit of colours. I still don’t get why variables or function names need a distinct colour.
I agree and don’t use any of that stuff—-except syntax highlighting. Why wouldn’t you? Color is a whole extra dimension it adds to the code that lets the eye notice errors more quickly and jump around faster.
I do this when working on personal projects, though I don't go this far. I still like having syntax highlighting, and I have an LSP on to get in-editor feedback on syntax errors, but I don't use autocomplete or in-editor documentation.
I had a few periods of doing the same in sublime text, I did use syntax highlighting though. It’s a really great feeling and very liberating, especially in a greenfield project.
Can’t really justify it at work though, projects are too big and gnarly to keep in my head.
Wasting your time hunting down a missing string delimiter forces you to read the whole file line by line maybe, so they write better software having read it.
what do you think about ctags? i think when trying to understand a codebase for the first time, a tool to quickly look up definitions, etc. helps immensely