I'm afraid this article kinda fails at its job. It starts out with a very bold claim ("Zig is not only a new programming language, but it’s a totally new way to write programs"), but ends up listing a bunch of features that are not unique to Zig or even introduced by Zig: type inference (invented in the late 60s, first practically implemented in the 80s), anonymous structs (C#, Go, TypeScript, many ML-style languages), labeled breaks, functions that are not globally public by default...
It seems like this is written from the perspective of C/C++ and Java and perhaps a couple of traditional (dynamically typed) languages.
On the other hand, the concept that makes Zig really unique (comptime) is not touched upon at all. I would argue compile-time evaluation is not entirely new (you can look at Lisp macros back in the 60s), but the way Zig implements this feature and how it is used instead of generics is interesting enough to make Zig unique. I still feel like the claim is a bit hyperbolic, but there is a story that you can sell about Zig being unique. I wanted to read this story, but I feel like this is not it.
But I would not put comptime down as some sort of magical invention. It's still just a newish take on metaprogramming; we've had that since forever. From my minimal time with Zig, I think of comptime as a better version of C++ templates.
That said, Zig is possibly a better alternative to C++, but not that exciting for me. I kind of don't get why so many think it's the holy grail; first it was Rust, and now Zig.
What you’re seeing is just a repeat of the same old thing. It used to be Ruby, Clojure, Scala, Go, Rust; now it’s Zig.
When the author mentioned manually wiring up a PATH variable with gushing excitement I finally knew what I was up against. Holy fuck. Somebody please introduce the poor soul to UNIX because I’d rather someone pitches a tent over a cool NetBSD function or something. That would be a nerd article worth getting a box of Kleenex for.
You can switch one out for any of the others and the article would be the same.
I’m holding out for that one fucking moron who writes the next “Why I’m still using Lua in 2025” and we find out the punch line is he’s making 10M ARR off of something so dumb it shouldn’t even be possible.
In my opinion the biggest issue with Zig is that it doesn't allow attaching data to errors. Error data can only be passed via a side channel, which is inconvenient and ENCOURAGES TOOL DEVELOPERS TO NOT PASS ERROR DATA, which greatly increases debugging difficulty.
Sometimes there are 100 things that could possibly go wrong. With error data you can easily know which exact thing went wrong. But with an error code alone you just know "something is wrong", not which thing exactly.
> I just spent way longer than I should have debugging an issue of my project's build not working on Windows, given that all I had to work with from the zig compiler was an error: AccessDenied and the build command that failed. When I finally gave up and switched to rewriting and then debugging things through Node, the error that it returned was EBUSY and the specific path in question that Windows considered to be busy, which made the problem actually tractable ... I think the fact that even the compiler can't consistently implement this pattern points to it perhaps being too manual/tedious/unergonomic/difficult to expect the Zig ecosystem at large to do the same
Interestingly, I just read an article from matklad (who works a lot with Zig) talking about the benefits of splitting up error codes and error diagnostics, and the pattern of using a diagnostic sink to provide human-readable diagnostic information:
Honestly I was quite convinced by that, because it kind of matches my own experiences that, even when using complex `Error` objects in languages with exceptions, it's still often useful to create a separate diagnostics channel to feed information back to the user. Even for application errors for servers and things, that diagnostics channel is often just logging information out when it happens, then returning an error.
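To make that concrete, here's a minimal sketch of the pattern in Zig (my own illustration with hypothetical names like `Diagnostics` and `openConfig`, not matklad's code): the error set stays a bare enum, and the human-readable context flows through an optional out-parameter.

    const std = @import("std");

    const Diagnostics = struct {
        path: []const u8 = "",
        message: []const u8 = "",
    };

    const OpenError = error{AccessDenied};

    // Hypothetical: callers that care pass a sink; callers that don't pass null.
    fn openConfig(path: []const u8, diag: ?*Diagnostics) OpenError!void {
        if (diag) |d| {
            d.path = path;
            d.message = "permission check failed";
        }
        return error.AccessDenied;
    }

    pub fn main() void {
        var diag: Diagnostics = .{};
        openConfig("app.conf", &diag) catch |err| {
            std.debug.print("{s}: {s} ({s})\n", .{ @errorName(err), diag.path, diag.message });
        };
    }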
Your and GP's two statements are not mutually exclusive. This paradigm can have significant benefits, and at the same time be too cumbersome for people to want to use consistently.
The "correct" way is highly context dependent with the added proviso that Zig assumes a low-level systems context.
In this context, adding data to an error may be expedient but 1) it has a non-trivial overhead on average and 2) may be inadvisable in some circumstances due to system state. I haven't written any systems in Zig yet but in low-level high-performance C++20 code bases we basically do the same thing when it comes to error handling. The conditional late binding of error context lets you choose when and where to do it when it makes sense and is likely to be safe.
A fundamental caveat of systems languages is that expediency takes a back seat to precision, performance, and determinism. That's the nature of the thing.
For quite a long time, I have been wondering why I like to code in Raku so much … in a roundabout way you set me thinking. Perhaps it’s because, in Raku, precision, performance and determinism take a back seat to expediency. (Sorry for the tangent).
If the error rarely happens, then passing error data shouldn't affect performance in any visible way. If the error occurs on the common path, then it's designed wrongly.
I agree that in special states like OOM, passing error data that requires allocation is not OK.
People are working on this. std.zon is generally considered to be a good example of how to handle errors and diagnostics, though it's an area of active exploration. The plan is to eventually collect all the good patterns people have come up with and (1) publish them in a collection, and (2) update std to actually use them.
I know that Zig doesn't allow attaching data to errors for valid reasons. If error data contains an interior pointer then it can easily cause memory safety problems. Zig doesn't have a borrow checker or ownership system to prevent that.
This seems kinda contrived. In practice that "ERROR DATA" tends not to exist. Unexpected errors almost never originate within the code in question. In basically all cases that "ERROR DATA" is just recapitulating the result of a system call, and the OS doesn't have any data to pass.
And even if it did, interpreting the error generally doesn't work by putting a microscope over attached data. You got an error from a write. What does the data contain? The file descriptor? Not great, since you really want to know the path to the file. But even then, it turns out it doesn't really matter, because what really happened was the storage filled up due to a misbehaving process somewhere else.
"Error data" is one of those conceits that sounds like a good idea but in practice is mostly just busy work. Architect your systems to fail gracefully, don't fool yourself into pretending you can "handle" errors in clever ways.
That's really cool actually. Now that AI is a little more commonly available for developer tooling, I feel like it's easier than ever to learn any programming language, since you can braindrain the model.
The standard models are pretty bad at Zig right now since the language is so new and changes so fast. The entire language spec is available in one html file though so you can have a little better success feeding that for context.
> The entire language spec is available in one html file though so you can have a little better success feeding that for context.
This is what I've started doing for every library I use. I go to their GitHub, download their docs, and drop the whole thing into my project. Then whenever the AI gets confused, I say "consult docs/somelib/"
I totally vibe with the intro, but then the rest of the article goes on to be a showcase of bits of Zig.
I feel what is missing is an explanation of why each feature is so cool compared to other languages.
As a language nerd, I find Zig's syntax just so cool. It doesn’t feel the need to adhere to any conventions and seems to solve the problems in the most direct and simple way.
An example of this is declaring a label versus referring to a label. By moving the colon from one end to the other, it's instantly clear which form you're looking at.
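A tiny sketch of what I mean (my example, not the article's):

    pub fn main() void {
        const grid = [_][3]u8{ .{ 1, 2, 3 }, .{ 4, 5, 6 } };
        outer: for (grid) |row| { // declaring: colon after the name
            for (row) |cell| {
                if (cell == 5) break :outer; // referring: colon before the name
            }
        }
    }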
And then there are the runtime promises, such as no hidden control flow. There are no magical @decorators or destructors. Instead we have explicit control flow like defer.
Finally there is comptime. No need to learn another macro syntax. It’s just more Zig, run during compilation.
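As a quick hedged sketch of that last point (my own toy example): a "generic" in Zig is just an ordinary function, evaluated at compile time, that takes a type and returns a type.

    const std = @import("std");

    fn Pair(comptime T: type) type {
        return struct {
            first: T,
            second: T,

            fn swap(self: *@This()) void {
                const tmp = self.first;
                self.first = self.second;
                self.second = tmp;
            }
        };
    }

    test "Pair is ordinary Zig evaluated at comptime" {
        var p = Pair(u32){ .first = 1, .second = 2 };
        p.swap();
        try std.testing.expectEqual(@as(u32, 2), p.first);
    }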
I was also curious what direction the article was going to take. The showcase is cool, and the features you mentioned are cool. But for me, Zig is cool because all the pieces simply fit together with essentially no redundancy or overloading. You learn the constructs and they just compose as you expect. There's one feature I'd personally like added, but there's nothing actually _missing_. Coding in it quickly felt like using a tool I'd used for years, and that's special.
Zig's big feature imo is just the relative absence of warts in the core language. I really don't know how to communicate that in an article. You kind of just have to build something in it.
> Coding in it quickly felt like using a tool I'd used for years, and that's special.
That's been my exact experience too. I was surprised how fast I felt confident in writing Zig code. I only started using it a month ago, and already I've made it to 5000 lines in a custom Tcl interpreter. It just gets out of the way of me expressing the code I want to write, which is an incredible feeling. Want to focus on fitting data structures in L1 cache? Go ahead. Want to automatically generate lookup tables from an enum? 20 lines of understandable comptime. Want to use tagged pointers? Using "align(128)" ensures your pointers are aligned so you can pack enough bits in.
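Not the parent's actual code, but a hedged sketch of the enum-to-lookup-table trick, assuming a toy `Command` enum:

    const std = @import("std");

    const Command = enum { help, version, run };

    // Built entirely at compile time; indexed by @intFromEnum(cmd).
    const usage = blk: {
        const fields = std.meta.fields(Command);
        var table: [fields.len][]const u8 = undefined;
        for (fields, 0..) |field, i| {
            table[i] = "usage: prog " ++ field.name;
        }
        break :blk table;
    };

    test "table is filled at compile time" {
        try std.testing.expectEqualStrings("usage: prog run", usage[@intFromEnum(Command.run)]);
    }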
The feature I want is multimethods -- function overloading based on the runtime (not compile time) type of all the arguments.
Programming with it is magical, and it's a huge drag to go back to languages without it. Just so much better than common OOP dispatch that depends only on the type of one special argument (self, this, etc.).
Common Lisp has had it forever, and Dylan transferred that to a language with more conventional syntax -- but is very near to dead now, certainly hasn't snowballed.
On the other hand Julia does it very well and seems to be gaining a lot of traction as a very high performance but very expressive and safe language.
I think this is a major mistake for Zig's target adoption market - low level programmers trying to use a better C.
Julia is phenomenally great for solo/small projects, but as soon as you have complex dependencies that _you_ can't update - all the overloading makes it an absolute nightmare to debug.
It’s incredibly silly, but I dislike Zig's identifier policy. Mixing snake case and camel case for functions is cursed.
That said, amazing effort, progress and results from the ecosystem.
Bursting on the scene with amazing compilation dx, good allocator (and now io) hygiene/explicitness, and a great build system (though somewhat difficult to ramp on). I’m pretty committed to Rust but I am basically permanently zig curious at this point.
[EDIT] “hate” > “dislike”. Hate is a strong word and surely I just need to spend some time writing zig and I’d get used to it.
I don't think Zig--which certainly is innovative in a number of ways--benefits from this sort of thing. Up front is a claim that it's "totally new way to write programs", but zero support is offered, and almost nothing else "meta" said about the language, other than a couple of sentences in the conclusion that are likewise inaccurate hype. I've programmed in many languages including Zig and it definitely is not a new way of programming. It imposes disciplines that are different from those of other languages, but the same is true of other languages.
The final paragraph says "This is all quite surprising" -- why so?
"and let one think that many advantages previously found only in interpreted languages are gradually migrating to compiled languages in order to offer more performance" -- sure, but Zig is hardly the first ... D and Nim both have interpreters built into the compiler that allow extensive comptime computation--both of those languages have far more metalanguage facilities than Zig, in addition to many other language features that Zig lacks--which is not necessarily a fault, as it aims for a certain kind of simplicity and close-to-the-metal performance ... although both D and Nim are highly performant (both have optional garbage collection, though Nim is more advanced in making GC-free programming approachable). One thing you can say about Zig though--it compiles like a bat out of hell.
P.S. Another thing about Zig worth mentioning that came up in some comments is cross compilation. I don't think people understand how Zig is different and what an engineering feat it is (Andrew has a writeup somewhere of how it's done--it's shocking).
If you install Zig, you can now generate executables for virtually any target with just a command line argument specifying the target, regardless of what machine you installed it on. Nothing else does that--cross compilation generally requires recompiling the compiler and library to target a different architecture. Zig comes with precompiled libraries for a huge number of targets.
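For example, assuming files named hello.zig and hello.c, from any host OS:

    zig build-exe hello.zig -target aarch64-linux-gnu
    zig cc -target x86_64-windows-gnu hello.c -o hello.exe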
I noticed a comment where someone said they love Zig but they've never programmed in it--they use it to cross-compile their Nim programs. (The Nim compiler has a C code backend, and Zig has a C compiler built in, so Nim inherits instant arbitrary cross-compilation to any target via Zig).
I've tried writing a similar post, but I think it's a bit difficult to sound convincing when talking about why Zig is so pleasant. it's really not any one thing. it's a lot of well made, pragmatic decisions that don't sound significant on their own. they just culminate in a development experience that feels pleasantly unique.
a few of those decisions seem radical, and I often disagreed with them.. but quite reliably, as I learned more about the decision making, and got deeper into the language, I found myself agreeing with them after all. I had many moments of enlightenment as I dug deeper.
so anyways, if you're curious, give it an honest chance. I think it's a language and community that rewards curiosity. if you find it fits for you, awesome! luckily, if it doesn't, there's plenty of options these days (I still would like to spend some quality time with Odin)
One of the things I like about Zig is that it pretty explicitly recognizes all the weird edge cases that exist in low-level systems code. A rather large cross-section of languages kind of pretend these cases don’t exist because addressing it would violate the aesthetic they are trying to achieve with the language. Nonetheless, these are real cases because low-level hardware and system behavior doesn’t care about aesthetics as might be expressed in a programming language.
Even C++ didn’t fully repent from this sin until around C++17. I appreciate the non-begrudging acceptance of this reality in Zig.
I would highlight `std::launder` as an example. It was added in C++17. Famously, most people have no idea what it is used for or why it exists. For low-level systems it was a godsend because there wasn’t an official way to express the intent, though compilers left backdoors open because some things require it.
It generates no code, it is a compiler barrier related to constant folding and lifetime analysis that is particularly useful when operating on objects in DMA memory. As far as a compiler is concerned DMA doesn’t exist, it is a Deus Ex Machina. This is an annotation to the compiler that everything it thinks it understands about the contents and lifetime of a bit of memory is now voided and it has to start over. This case is endemic in high-end database engines.
It should be noted that `std::launder` only works for different instances of the same type. If you want to dynamically re-type memory there is a different set of APIs for informing the compiler that DMA dropped a completely different type in the same memory address.
All of this is compiled down to nothing. It annotates for the compiler things it can’t understand just by inspecting the code.
The article doesn't answer the question, it's all just about "the basics of Zig" (there is nothing cool about manually editing environment variables on Windows with 8 labeled steps (and 5 preliminary steps missing))
and the actual cool stuff is missing:
> with its concept of compile time execution, unfortunately not stressed enough in this article.
Zig is not cool. It's a mediocre new language, missing key features needed for industrial development, like destructors or overall memory safety. But for some reason it's overhyped.
Zig being able to (cross)compile C and C++ feels very similar to how uv functions as a drop-in replacement for pip/pip-tools. Seems like a fantastic way to gain traction in already established projects.
I like the idea of the `defer` keyword - you can have automatic cleanup at the end of the scope, but you have to make it obvious you are doing so, no hidden execution of anything (unlike C++ destructors).
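A minimal sketch of what that looks like (file name is mine; exact std API names shift a bit between pre-1.0 releases):

    const std = @import("std");

    pub fn main() !void {
        const file = try std.fs.cwd().createFile("scratch.txt", .{});
        defer file.close(); // explicit, visible, runs at scope exit

        try file.writeAll("hello\n");
    }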
There's at least one thing where Zig is better than Rust: the Zig compiler for Windows can be downloaded, unzipped, then used without admin rights. Rust needs MSVC, which cannot be installed without admin rights. It is said that Rust on Windows can use Cygwin but I cannot make it work even with AI help.
> I can’t think of any other language in my 45 years long career that surprised more than Zig.
I can say the same (although my career spans only 30 years), or, more accurately, that it's one of the few languages that surprised me most.
Coming to it from a language design perspective, what surprised me is just how far partial evaluation can be taken. While strictly weaker than AST macros in expressive power (macros are "referentially opaque" and therefore more powerful than a referentially transparent partial evaluation - e.g. partial evaluation has no access to an argument's name), it turns out that it's powerful enough to replace not only most "reasonable" uses of macros, but also generics and interfaces. What gives Zig's partial evaluation (comptime) this power is its access to reflection.
Even when combined with reflection, partial evaluation is more pleasurable to work with than macros. In fact, to understand the program's semantics, partial evaluation can be ignored altogether (as it doesn't affect the meaning of computations). I.e. the semantics of a Zig program are the same as if it were interpreted by some language Zig' that is able to run all of Zig's partial-evaluation code (comptime) at runtime rather than at compile time.
Since it also removes the need for other specialised features (generics, interfaces) - even at the cost of an aesthetic that may not appeal to fans of those specialised features - it ends up creating a very expressive, yet surprisingly simple and easy-to-understand language (Lisps are also simple and expressive, but the use of macros makes understanding a Lisp program less easy).
Being simple and easy to understand makes code reviews easier, which may have a positive impact on correctness. The simplicity can also reduce compilation time, which may also have a positive impact on correctness.
Zig's insistence on explicitness - no overloading, no hidden control flow - which also assists reviews, may not be appropriate for a high-level language, but it's a great fit for an unabashedly low-level language, where being able to see every operation as explicit code "on the page" is important. While its designer may or may not admit this, I think Zig abandons C++'s belief that programs of all sizes and kinds will be written in the same language (hence its "zero-cost abstractions", made to give the illusion of a high-level language without its actual high-level abstraction). Developers writing low-level code lose the explicitness they need for review, while those writing high-level programs don't actually gain the level of abstraction required for a smooth program evolution that they need. That belief may have been reasonable in the eighties, but I think it has since been convincingly disproved.
Some Zig decisions surprised me in a way that made me go more "huh" than "wow", such as it having little encapsulation to speak of. In a high-level language I wouldn't have that (after years of experience with Java's wide ecosystem of libraries, we learned that we need even more and stronger encapsulation than we originally had to keep compatibility while evolving code). But perhaps this is the right choice for a low-level language where programs are expected to be smaller and with fewer dependencies (certainly shallower dependency graphs). I'm curious to see how this pans out.
Zig's terrific support for arenas also makes one of the most powerful low-level memory management techniques (that, like a tracing garbage collector, gives the developer a knob to trade off RAM usage for CPU) very accessible.
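A quick sketch of how accessible that is (std.heap.ArenaAllocator; my own toy example):

    const std = @import("std");

    pub fn main() !void {
        var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
        defer arena.deinit(); // frees everything allocated below in one shot

        const a = arena.allocator();
        const buf = try a.alloc(u8, 1024);
        _ = buf; // ... scratch work; no individual frees needed ...
    }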
I have no idea or prediction on whether Zig will become popular, but it's certainly fascinating. And, being so remarkably easy to learn (especially if you're familiar with low-level programming), it costs little effort to give it a try.
This is the real answer (amongst other goodness) - this one is well executed and differentiated
Every language at scale needs a preprocessor (look at the “use server” and “use gpu” silliness happening in TS) - why is it not the same as the language you use?
Great comment! I agree about comptime, as a Rust programmer I consider it one of the areas where Zig is clearly better than Rust with its two macro systems and the declarative generics language. It's probably the biggest "killer feature" of the language.
> as a Rust programmer I consider it one of the areas where Zig is clearly better than Rust with its two macro systems and the declarative generics language
IMHO "clearly better" might be a matter of perspective; my impression is that this is one of those things where the different approaches buy you different tradeoffs. For example, by my understanding Rust's generics allows generic functions to be completely typechecked in isolation at the definition site, whereas Zig's comptime is more like C++ templates in that type checking can only be completed upon instantiation. I believe the capabilities of Rust's macros aren't quite the same as those for Zig's comptime - Rust's macros operate on syntax, so they can pull off transformations (e.g., #[derive], completely different syntax, etc.) that Zig's comptime can't (though that's not to say that Zig doesn't have its own solutions).
Of course, different people can and will disagree on which tradeoff is more worth it. There's certainly appeal on both sides here.
I look forward to a future high-level language that uses something like comptime for metaprogramming/interfaces/etc, is strongly typed, but lets you write scripts as easily as python or javascript.
Try out Nim: it has powerful comptime/metaprogramming, is statically typed, has automatic memory management, and is as easy to program as Python or JavaScript while still allowing low level stuff.
For me it'd be hard to go back to languages that don't have all that. Only Swift comes close.
D comes close ... it too has a full-language comptime interpreter and other metaprogramming features (though not as rich as Nim's), is statically typed, has optional garbage collection, and you can write
#!/usr/bin/env rdmd
[D code]
and run it as if it were an executable. (The compilation is cached so it runs just as fast on subsequent runs.)
Thing is, having a good JIT gives you the performance of partial evaluation pretty much automatically (at the cost of less predictability), as compilation occurs at runtime, so the distinction between compile-time and runtime largely disappears. E.g., in Java, a reflective call will eventually be compiled by the JIT into a direct call; virtual dispatch will also be compiled into direct dispatch or even inlined (when appropriate) etc..
> Probably the most incredible virtue of Zig compiler is its ability to compile C code. This associated with the ability to cross-compile code to be run in another architecture, different than the machine where it is was originally compiled, is already something quite different and unique.
Isn't cross compilation very, very ordinary? Inline C is cool, like C has inline ASM (for the target arch). But cross-compiling? If you built a phone app on your computer you did that as a matter of course, and there are many other common use cases.
If you install Zig, you can now generate executables for virtually any target with just a CLI argument specifying the target, regardless of what machine you installed it on. Nothing else does that--cross compilation generally requires compiling the compiler to target a different architecture.
Yes, very rare and there is a strong cartel of companies ensuring it doesn't happen in more mainstream langs through multiple avenues to protect their interests!
From helicoptering folks onto steering committees to indoctrinating young CS majors.
If I had the ability to downvote a comment yet, I'd downvote you. If you're going to spout conspiracy-theory-sounding stuff, at least provide some evidence for your claims!
It doesn't sound like a conspiracy theory; you just have an incredibly poorly calibrated sense of judgement as to the tone of a statement.
Not uncommon in this space though, especially as you get closer to the metal (close as cross-compilation is relative to something like React frontends, at least)
Sometimes if a joke doesn't land, it's because the joke wasn't funny. (Also, yes, a lot of folks here are autistic, maybe cool it with the veiled insults.)
Sure sometimes... other times you get deadpan replies unironically demanding citations and proof of claims.
There's nothing veiled about it and it's not an insult: what I mentioned is a real factor in why people would read that statement and jump to demanding proof.
-
If I told a room full of plumbers that Sharkbites are actually sponsored by big Water trying to encourage water wastage, it definitely might not land... but none of them are going to demand a citation!
I've heard good things about Zig. I want to pick it up and experiment with it but at ~2% market share I find it hard to justify spending the time to learn and master it right now. It's usually much easier to find the time to learn a new language if there is a project (work or open source) that is also using it.
Is dvui something you want to see? Although the backends are still C based, the core part of the GUI seems to be written fully in Zig rather than being a binding to a C library.
Is the inline testing good in practice? I do like the clear proximity and scope of the code being tested, but I can also imagine it getting crowded trying to cram in all the unit tests and mocking and logging and such.
Does the feature end up feeling unused, dominating app code with test code, or do people end up finding a happy medium?
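For context, inline tests are just `test` blocks sitting next to the code they exercise, run with `zig test` (a minimal sketch of my own, not from the article):

    const std = @import("std");

    fn clamp(x: i32, lo: i32, hi: i32) i32 {
        return @max(lo, @min(x, hi));
    }

    test "clamp stays in range" {
        try std.testing.expectEqual(@as(i32, 5), clamp(9, 0, 5));
        try std.testing.expectEqual(@as(i32, 0), clamp(-3, 0, 5));
    }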
Zig defaults to statically linking musl when targeting Linux, so the output will not be very interesting unless you target dynamic musl, or glibc, or FreeBSD/NetBSD.
It's more of an in-between C and Rust than Go as it is a systems language with no built-in garbage collector for memory management. It has a lot of memory safety features, but it's not as memory safe as Rust. However, it avoids a lot of the complexity of Rust like implicit macro expansion, managing lifetimes, generics and complex trait system, etc. It also compiles much more compactly than Rust, in my experience.
In my mind, it's an accessible systems language. Very readable. Minimal footprint.
Well, it's insanely simple, insanely fast, often more performant than Rust with lower resource usage, with first class C-interop and cross-compiling out of the box. It's easily my favorite language now, with Go being a close second. Both are opinionated and have a standard formatter that makes Zig code instantly readable when you see it, similar to Go. Rust was once interesting, but it's firmly in macro hell territory now, just like Swift, with concealed execution paths aplenty and neither cross-compiling out of the box.
>often more performant than Rust with lower resource usage
[citation needed]
If we are to trust this page [0] Rust beats Zig on most benchmarks. In the Techempower benchmarks [1] Rust submissions dominate the TOP, while Zig is... quite far.
Several posts which I've seen in the past about Zig beating Rust by 3x or such all turned out to be based on low quality Rust code with some performance pitfalls, like measuring performance of writing to stdout (which Rust locks by default and Zig does not) or iterating over ..= ranges, which are known to be problematic from a performance perspective.
I would say that in most submission-based benchmarks, among languages that should perform similarly, this mostly reflects the size and enthusiasm of the community.
"This associated with the ability to cross-compile code to be run in another architecture, different than the machine where it is was originally compiled, is already something quite different and unique."
Perhaps I'm missing something but this is utterly routine. It even has the name used here: Cross-compiling.
If you install Zig, you can now generate executables for virtually any target with just a CLI argument specifying the target, regardless of what machine you installed it on. Nothing else does that--cross compilation generally requires compiling the compiler to target a different architecture.
> I can’t think of any other language in my 45 years long career that surprised more than Zig. I can easily say that Zig is not only a new programming language, but it’s a totally new way to write programs, in my opinion. To say it’s merely a language to replace C or C++, it’s a huge understatement.
I don't understand how the things presented in this article are surprising. Zig has several nice features shared by many modern programming languages?
> One may wonder how the compiler discovers the variable type. The type in this case is *inferred* by the initialization.
That the author feels the need to emphasize this means either that they haven't paid attention to modern languages for a very long time, or this article is for people who haven't paid attention to modern languages for a very long time.
Type inference has left academia and proliferated into mainstream languages so many years ago that I almost forgot it's a feature worth mentioning.
> One is Zig’s robustness. In the case of the shift operation no wrong behavior is allowed and the situation is caught at execution time, as has been shown.
Panicking at runtime is better than just silently overflowing, but I don't know if it's the best example to show the 'robustness' of a language...
And it's not caught in ReleaseFast builds ... which is not at all unique to Zig (although Zig does do many innovative things to catch errors in debug builds).
> Type inference has left academia and proliferated into mainstream languages so many years ago that I almost forgot it's a feature worth mentioning.
I'm not even sure I'd call this type inference (other people definitely do call it type inference) given that it's only working in one direction. Even Java (var) and C23 (auto), the two languages the author calls out, have that. It's much less convenient than something like Hindley-Milner.
> Type inference has left academia and proliferated into mainstream languages so many years ago that I almost forgot it's a feature worth mentioning.
It’s not common in lower level languages without garbage collectors or languages focused on compilation speed.
The only popular language I can think of is C (prior to C23). If you want to include Fortran and Ada, that would be three, but these are all very old languages. All modern system languages have type deduction for variable declarations.
I meant for "focused on compilation speed" to apply only to lower level languages. And when I say lower level I don't really include D because it has a garbage collector (I know it's optional, but much of the standard library uses it, I believe).
That a language has a garbage collector is completely orthogonal to whether it has type inference ... what the heck does it matter what "much of the standard library uses" to this issue? It's pure sophism. Even C now has type inference. The plain fact is that the claim is wrong.
I feel like the article didn't really hit on the big ones: comptime functions, no hidden control flow, elegant defaults, safe buffers, etc.
What Zig really does is make systems programming more accessible. Rust is great, but its guarantees of memory safety come with a learning curve that demands mastering lifetimes and generics and macros and a complex trait system. Zig is in that class of programming languages like C, C++, and Rust, and unlike Golang, C#, Java, Python, JS, etc that have built-in garbage collection.
The explicit control flow allows you as a developer to avoid some optimizations done in Rust (or common in 3rd party libraries) that can bloat binary sizes. This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
The built-in C/C++ compiler and language features for interacting with C code easily also ensures that devs have access to a mature ecosystem despite the language being young.
My experience with Zig so far has been pleasurable. The main downside to the language has been the churn between minor versions (language is still pre-1.0 so makes perfect sense, but still). That being said, I like Zig's new approach to explicit async I/O that parallels how the language treats Allocators. It feels like the correct way to do it and allows developers again the flexibility to control how async and concurrency is handled (can choose single-threaded event loop or multi-threaded pool quite easily).
Zig's generics cause bloat just like any other language with generics--explicit flow control has nothing to do with it.
Zig is a good language. So are Rust, D, Nim, and a bunch of others. People tend to think that the ones they know about are better than all the rest because they don't know about the rest and are implicitly or explicitly comparing their language to C.
> This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
I don't think there is any significant difference here between Zig, C and Rust for bare-metal code size. I can get the compiler to generate the same tiny machine code in any of these languages.
That's not been my experience with Rust. On average it produces binaries at least 4x bigger than the Zig I've compiled (and yes, I've set all the build optimization flags for binary size). I know it's probably theoretically possible to achieve similar results with Rust, it's just that you have to be much more careful about things like monomorphization of generics, inlining, macro expansion, implicit memory allocation, etc that happen under the hood. Even Rust's standard library is quite hefty.
C, yes, you can compile C quite small very easily. Zig is like a simpler C, in my mind.
The Rust standard library in its default config should not be used if you care about code size (std is compiled with panic/fmt and backtrace machinery on by default). no_std has no visible deps besides memcpy/memset, and is comparable to bare metal C.
This. Is Zig an interesting language? Yes sure. But “a totally new way to write programs”? No, I don’t see a single feature that is not found in any other programming languages.
Nothing against (or for) Zig, but the article author seems unfamiliar with other modern languages in common use... imagine if they saw Swift or Rust. Their mind would be utterly, utterly blown.
For a language that’s so low level and performance focused, I’m surprised that it has those extra io and allocator arguments to functions. Isn’t that creating code bloat and runtime overhead?
Every class method in other languages receives a hidden argument. Odin passes a hidden context argument that contains the allocator. The alternative is global variables--which you can also use in Zig if you're so inclined. The extra arguments aren't something the Zig language imposes, it's a convention.
the answer I've seen when it has been brought up before is that (for allocators) there is not a practical impact on performance -- allocating takes way more time than the virtual dispatch does, so it ends up being negligible. for code bloat, I'm not sure what you mean exactly; the allocator interface is implemented via a VTable, and the impact on binary size is pretty minimal. you're also not really creating more than a couple of allocators in an application (typically a general purpose allocator, and maybe an arena allocator that wraps it in specific scenarios).
for IO, which is new and I have not actually used yet, here are some relevant paragraphs:
> The new Io interface is non-generic and uses a vtable for dispatching function calls to a concrete implementation. This has the upside of reducing code bloat, but virtual calls do have a performance penalty at runtime. In release builds the optimizer can de-virtualize function calls but it’s not guaranteed.
...
> A side effect of proposal #23367, which is needed for determining upper bound stack size, is guaranteed de-virtualization when there is only one Io implementation being used (also in debug builds!).
He's talking about passing the pointers to the allocators and Io objects as parameters throughout the program, not how allocator vtables for calling the allocator's virtual functions are implemented. But context pointers are a requirement in any program. Consider that a context pointer (`this`) is passed to every single method call ... it's no more "code bloat" than having to save and restore registers on every call.
Regarding runtime overhead, I'd assume you would still need an io implementation; Zig is just showing it to you explicitly instead of hiding it behind the std lib.
For simple projects where you don't want to pass it around in function parameters, you can create a global object with one implementation and use it from everywhere.
You still have to pass arguments to library functions that need to allocate or do I/O ... but the alternative is worse. This is really a bogus issue ... no one is crying over having to pass a `this` pointer to every single call of a method in other languages. Context pointers are a requirement in any sizeable or multi-threaded program, and Zig gives the user full control over what the context object looks like.
Yeah, the thing is it's usually better to have the allocator in particular defined as a parameter, so that you can use the testing allocator in your tests to detect memory leaks, double frees, etc. And then you use more optimal allocators for release mode.
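A small sketch of that payoff (hypothetical `repeatByte` helper; std.testing.allocator fails the test on any leak or double free):

    const std = @import("std");

    fn repeatByte(allocator: std.mem.Allocator, byte: u8, n: usize) ![]u8 {
        const buf = try allocator.alloc(u8, n);
        @memset(buf, byte);
        return buf;
    }

    test "no leaks" {
        const out = try repeatByte(std.testing.allocator, 'x', 16);
        defer std.testing.allocator.free(out);
        try std.testing.expectEqualStrings("x" ** 16, out);
    }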