
Indeed, and if someone wants to help work on C, this is very much possible both on the compiler side and on the standards side.

As someone who still thinks one should write C (so, a completely uncool person), what I like about Zig is that it is a no-nonsense language that just makes everything work as it is supposed to, without unnecessary complications. D is similar, except that it fell into the trap of adding too many features.

So, no, I do not really see anything fundamentally new either. But to me this is the appealing part. Syntax is ok (at least compared to Rust or C++).

Having said this, I am still skeptical about comptime for various reasons.


Yes, it is the same thing, but since the objects are in free fall and there is no traditional force to cause the acceleration, the better viewpoint is that this is accelerated expansion of the universe. In flat spacetime, a forward light-cone can be identified with an expanding (non-accelerating) universe where objects just fly away from a single point with constant but different speeds, i.e. an explosion. But in this model, a spatial slice of constant local time after the explosion is not flat. Also, the data seem to indicate that space is flat while space-time is curved on large scales, so this picture is too simple.
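To make the identification concrete, here is the standard way to write it down (the Milne model; my own sketch of textbook material, with τ and χ as my notation). Re-coordinatize the interior of the forward light cone of Minkowski space by a proper time τ and a rapidity χ:

  t = \tau \cosh\chi, \qquad r = \tau \sinh\chi,

  ds^2 = -dt^2 + dr^2 + r^2\, d\Omega^2 = -d\tau^2 + \tau^2 \left( d\chi^2 + \sinh^2\!\chi\, d\Omega^2 \right).

This is an FLRW metric with scale factor a(\tau) = \tau, so the expansion does not accelerate, but the slices of constant \tau have constant negative curvature, i.e. they are not flat.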

I would say that in most submission-based benchmarks, among languages that should perform similarly, the results mostly reflect the size and enthusiasm of the community.

The memory model always had segmented memory in mind, and safe C approaches are not new. The provenance model makes this more precise, but the need for it was to deal with corner cases such as pointer-to-integer round trips or access to the representation bytes of a pointer. Of course, neither GCC nor clang get this right, to the extent that those compilers are internally inconsistent and miscompile even code that did not need any clarification to be considered correct.

It has been called a "gift to the world". https://www.nytimes.com/2014/09/14/science/earth/sun-and-win...

But since then there has been an endless stream of negative press, especially in English-speaking countries, against German energy policies, so not many of these positive comments are still remembered.


I wonder how you like my vector: https://godbolt.org/z/97YGrbP9s (note, experimental library, but I see no fundamental issue).

Oh, this is very good.

Some of the stuff you're doing in the library I've also been doing recently, and it has been working well, like using the string of the type name to allow for run-time checking of what I would call "mixed" data (your variadic type). I've also done the same basic thing as your option type, in a way that's closer to your sum type than the maybe type.

But I'd had enough problems trying to get something like your vector actually working that I'd given up; now I think I'll build something at some point over the holidays. As I come up to speed on the history of changes to _Generic, I think those problems were partially due to attempts made before I had a C23 compiler, but even then, your code there is impressive-- both clear and clever.

I also have enough stuff passed via pointer that the option type for me needed to handle pointers differently-- I basically just have run-time code that does the null check at run time when set. At that point, there doesn't really need to be an 'is_set' type flag.
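A minimal sketch of what I mean, with made-up names (not my actual code): for pointer payloads the null pointer itself encodes "empty", so no separate flag is needed, just the run-time null check:

  #include <stddef.h>

  /* hypothetical names; the "option" for a pointer is just the pointer */
  #define opt_ptr(T) T *

  static int *find_first_even(int *a, size_t n)
  {
      for (size_t i = 0; i < n; i++)
          if (a[i] % 2 == 0)
              return &a[i];   /* "some" */
      return NULL;            /* "none": no separate is_set flag needed */
  }

  static int value_or(opt_ptr(int) p, int def)
  {
      return p ? *p : def;    /* the run-time null check */
  }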


By the way, the run-time type checking is one of the use cases where I really would like to have a tractable interface for hashing at compile time, preventing unbounded strcmps, and turning them into an integer compare (without having to manage caching such things at runtime).

The same would apply to making it much easier to statically lay out fixed caches implemented via hashing, instead of inserting into them at start-up, etc.
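For the record, a rough sketch of the run-time workaround I have in mind (hypothetical names; FNV-1a is just an arbitrary choice of hash). What I'd like is a guaranteed way to fold hash_str(#T) at compile time, so the check below becomes a plain integer compare without any caching logic:

  #include <stdint.h>

  static uint64_t hash_str(const char *s)   /* FNV-1a, 64-bit */
  {
      uint64_t h = 0xcbf29ce484222325u;
      for (; *s; s++)
          h = (h ^ (unsigned char)*s) * 0x100000001b3u;
      return h;
  }

  struct tagged { uint64_t tag; void *data; };

  /* computed at run time today; the wish is a guaranteed compile-time version */
  #define TYPE_TAG(T) hash_str(#T)

  static _Bool is_int(const struct tagged *v)
  {
      return v->tag == TYPE_TAG(int);   /* integer compare, no unbounded strcmp */
  }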


that is amazing... I don't write C so I didn't dig too deep, but kudos for getting anything to work. Now you just need great documentation so we can figure out how to use it and what it supports without digging into the macros themselves. (Tests would be good too, but maybe they are there and I didn't see them.)

yes, unfortunately I do not have a lot of time... maybe I can get some funding, but for C work this is difficult.

X11 has had the distinction between trusted and untrusted X11 clients basically forever. But nobody bothered to even spend the minimal amount of work to make this usable in practice¹. This had two reasons: 1.) It is irrelevant when you run the programs as the same user, so nobody bothered (and no, Wayland does not help: https://github.com/Aishou/wayland-keylogger). 2.) It is more fun to simply pretend it is unfixably broken and write something new (something any good engineering manager should have stopped immediately).

¹. I used to use this and also fixed some bugs in some programs. The main problem when I last checked a decade ago was that some important extensions such as Composite would need to be exposed to untrusted clients.


That Wayland keylogger is not the same thing. X11 has several mechanisms (XTest, XRecord, XI raw inputs) to receive a global raw key input stream, accessible to anyone who connects to the X server, without even making a visible window surface. It even bypasses grabs, meaning that your lock screen password entry can be snooped on.

The Wayland keylogger acts like an application; all Wayland compositors will only send key events to the focused surface, so the user has to focus an active surface in order to get key events. Even in the scenario where you've LD_PRELOAD-hooked all applications, you still will never get the lock screen password, as the compositor never sends it out across the wire.

LD_PRELOAD is problematic from a security perspective, but it's not Wayland-specific: the same issue is true for CLI applications and X11 applications, and any attacker with the ability to write files could also just replace your binaries with malicious ones (stuff them somewhere into ~/.hidden, and then add a $PATH entry to the start).


I think you did not understand my point. X11 has several such mechanisms, yes, but it also has the concept of untrusted clients that are disallowed from using these mechanisms, and this could be used to provide safety similar to Wayland. The point is that this mechanism of untrusted X clients was neglected, and I gave an explanation why.

Yes, and your response in the whole thread, reading top to bottom, was the first one that really taught an old dog a new trick. I've been using GNU on X11 since 1991, was annoyed by fellow students' audio streams on my workstation back then, and I've never heard about trusted vs untrusted X11 apps.

I wonder how mainstream this debate was. Did some gamers try to squeeze out 3 extra percent by taking the protocol out of local stacks? There must have been better ways to do that, without throwing out all the X11 benefits.

to this day I'm annoyed I can't have a decent window manager integration on gWSL because the compositor doesn't implement the full window manager protocol.


See the ssh manpage for an explanation of untrusted/trusted clients. This debate was mainstream. Basically, some people presumably paid to work on Linux graphics decided to implement something new instead of doing their job, and gave talks about why X is fundamentally broken. I believe the driving force might have been the hope to support Linux on mobile or embedded devices, where X seems just unnecessary (although I think network transparency would be super useful on mobile devices). Some gamers certainly believed nonsense such as "all X programs are forced to use ancient drawing primitives and so programs will be much faster with Wayland". Wayland developers certainly did not do anything to stop such misconceptions. Later there was disappointment because obviously it was not faster (the drawing model for modern clients is essentially the same), but other myths such as the "fundamental security issue" prevailed.

It's like if Wayland is not just a graphical system, but a full business plan.

Control upstream, then companies wanting solutions will go to you first. Because why go to someone else in the FOSS market, when there is no certainty the code or standard (extension, protocol, etc.) will get accepted, forcing you to maintain a fork? With what IBM/Red Hat and Ubuntu have been doing, e.g., it's hard to say FOSS is immune to vendor lock-in.


> It's like if Wayland is not just a graphical system, but a full business plan. Control upstream, then companies wanting solutions will go to you first.

Wayland is open source with a fixed core protocol that's extremely stable, which anyone can build on. New protocols are constantly proposed. The core is minimal and defines how applications interact with the compositor to render and produce the final output. Control by a single entity is virtually impossible. Wayland ensures everyone has a voice because it's an open protocol which means discussion and development are done in the public.


in _reality_ it gives stack owners full proprietary control.

Specifically, the WSLg stack does not enable Linux GUI apps to integrate smoothly with the Windows window manager, because some bits are missing in the Windows Wayland stack: clipboard, window decorations, thumbnails, maximizing into a part of the monitor? Nope. And no patches are taken, supposing you could even figure out where to offer them and how.


It's unfair to claim Wayland is inherently different from X11 in this regard. Both are just specifications, and there are also proprietary implementations of the X11 protocol, primarily for Windows and enterprise settings.

The point is: the X11 spec is much more complete.

> Some gamers certainly believed nonsense such as "all X programs are forced to use ancient drawing primitives and so programs will be much faster with Wayland".

This is incorrect. Kristian Høgsberg has explicitly stated that a primary motivation for Wayland is the reduced need for a central X server, as many programs already bypass it.


A Wayland compositor is even more centralized as it combines compositor, Window manager, and server while in X these could be separate components. I also do not know any program that bypasses the X server. Are you talking about programs that you can start from the text console and which then do graphics directly? Those are very rare.

There are reasons for those architecture differences.

Wayland is an evolution of the previous design. X11's architecture had clients sending drawing commands to the X server, a method that became limited and required extensions over time. Wayland's approach is: applications perform their own rendering into their own separate buffers, then tell the compositor when they are ready. The compositor takes those buffers to produce the final image.

Because those buffers are separate, enhanced security is a direct side effect. Wayland is the result of decades of experience and represents the current way of doing things.


Sorry, no modern X client sends drawing commands. This is the nonsense I am talking about.

Posting from another account.

I'm aware that extensions exist now, like Present, which make it possible to send buffers, similar to how Wayland operates, so you don't have to do things the primitive way.

However, to claim to speak the X protocol, you still need to support the older functionality, that's what I mean by a tremendous amount of functionality to support. The moment you get rid of that old functionality, you've essentially created a new protocol, which is what Wayland is.

How is that point nonsense? I don't want to see X go, but I don't think it's reasonable to prevent progress.


If you know these extensions exist (and have for a long time), why spread the misinformation about "drawing commands" in the first place? A client does not need to support old functionality. A server does, for backwards compatibility, and this is a good thing! In fact, breaking decades of compatibility is the worst blunder of Wayland. The idea that this is a "tremendous amount of functionality" or a huge burden to maintain is also misleading, first because some drawing commands from the 80s are not a lot of functionality to support from a modern point of view, and also because all of this is still being maintained anyway, just much worse because the resources were redirected to Wayland. And even if one had deprecated some stuff eventually, this would not have broken compatibility and many other features at the same time as Wayland did.

It's not misinformation, that's how X still works. Clients do all kinds of things. New programs aren't like 80s ones but your X server still must support every operation clients expect.

Wayland doesn't break anything, it's a completely new protocol. Claiming Wayland breaks your use case is like saying systemd broke old init scripts. It did because it's a different system.

Wayland isn't trying to be Xorg 2. It's a protocol. At its core it's only a compositor protocol. Everything built on top is up to the implementation developers.


> Everything built on top is up to the implementation developers.

And that's exactly what creates the problem: window management, for example, is left as an exercise to the reader. Thus (my point above) the WSLg interop for graphical applications _sucks_ compared to where X servers already were. And if MS doesn't implement what's needed, it won't come. There is no way to fix it on the Linux or on the Windows side; the MS Wayland thingie in between tightly controls what is possible.


The logical problem with your argument is that as long as we want to support old clients, we now must support the X server in parallel to Wayland. So nothing is gained. And the moment we can stop supporting them, we could do the same in X. And yes, Wayland being new and incomplete creates a huge number of problems which nobody needs.

Wayland gives us a lot. What you don't realize is that Wayland _is_ X12.

So far Wayland has given me only headaches, and I do not see what it offers that X does not already provide. The fact that Wayland advocates make their case by lying (the drawing commands BS, the claim that network transparency does not work (a feature we use every day), etc.), the fact that important use cases such as accessibility are now treated as an afterthought, that there are diverging implementations with inconsistent support for important functionality, ... all of this does not build confidence that the developers even remotely know what they are doing outside of their narrow view on the graphics pipeline itself. And this after decades of effort. Maybe it is too late now to save X, but Wayland was a terrible idea: not the idea of developing Wayland itself as an experiment with an open outcome, but declaring X dead and Wayland its successor long before it was ready and before it was clear that it is actually a better replacement (so far, it isn't).

Those are not lies. I don't think you know what you are talking about. If you knew, you would know that waypipe + xwayland-satellite works even for forwarding X11 clients over waypipe. I use it myself every day, but it's pointless to discuss it with someone who isn't interested in listening, only in spreading the same lies as everyone else.

Sorry, how was your comment "Wayland is an evolution of the previous design. X11's architecture had clients sending drawing commands to the X server, a method that became limited and required extensions over time. Wayland's approach is: applications perform their own rendering into their own separate buffers, then tell the compositor when they are ready. The compositor takes those buffers to produce the final image." not highly misleading, if X had the Composite extension in 2004 and the Wayland project was started in 2008? Last time I tried waypipe it did not work, and its design seems flawed, as it has to have hard-coded knowledge about each protocol used on the wire.

I apologize for my previous misleading comments. You're right, Wayland causes many problems. As a long time Linux user, I miss how capable X was and don't want to see it go. Wayland compositors feel like toys in comparison, and its advocates sometimes seem to be coping. However, with major DEs and toolkits dropping X11 support, what options do we truly have?

I dropped Gnome a long time ago and I have never used KDE, so I don't think this is an immediate problem for me. As long as there are enough people using it, X will live on. I think the main thing one can do is to not accept the argument that whatever the industry wants is inevitable. Free software would not exist if this were the case.

You're aware of the guideline about throwaway accounts? This isn't good for the community (or the discussion).

Thanks for pointing that out. I just learned about the policy, my bad.

As far as I can tell, enabling the trusted/untrusted security feature breaks a lot of basic features, including clipboards, XRandR, GPU acceleration, XKB keyboard layouts, and whatnot. It's theoretically available but practically useless.

Xnamespaces looks to be more promising, but as far as I can tell that's still a WIP with little documentation, and from the documentation I can find it looks like it still breaks things like clipboard functionality.


Well, the threat is shared access to resources like clipboards, so securing against this of course breaks the clipboard. Pick your poison.

> Pick your poison.

Permission system? X11 with a permission system is the next logical step for Unix graphics. There is absolutely no need to break 40 years of graphic software, or throw away 40 years of accumulated features.


Yes, but even a permission system can only give a program access to the clipboard or not. You can't deny a program access to the clipboard and complain that the clipboard doesn't work.

> something any good engineering manager should have stopped immediately

Who exactly should and can control the horde of OSS developers?


They were paid by Red Hat.

C has such types and can guarantee that there is no out-of-bounds access at run-time in the scenarios you describe: https://godbolt.org/z/f7Tz7EvfE This is one reason why I think that C - despite all the naysayers - is actually perfectly positioned to address bounds-safe programming.

Often in dependently-typed languages one also tries to prove at compile time that the dynamic index is inside the dynamic bound at run time, but this depends.
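To sketch the idea (my own minimal example, not necessarily identical to the code behind the godbolt link): the length becomes part of the pointed-to array type, so a bounds sanitizer can check each access against it at run time.

  /* compile with: gcc -O2 -fsanitize=bounds */
  int get(int n, int (*buf)[n], int i)
  {
      return (*buf)[i];        /* checked against n, because n is part of
                                  the type of *buf                        */
  }

  int main(void)
  {
      int a[4] = { 1, 2, 3, 4 };
      return get(4, &a, 5);    /* out of bounds: caught at run time */
  }

The design choice is to pass a pointer to the whole array instead of letting the parameter decay to a plain int *, so the bound is not lost.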


-fsanitize=bounds uses a runtime sanitizer check, surely? The program compiles fine. In a (strongly) dependently typed language, something like the following would refuse to typecheck:

  int foo(int i) {
      int bar[4] = { 1, 2, 3, 4 };
      return bar[i];
  }
The type checker would demand a proof that i is in bounds, for example

  int foo(int i) {
      int bar[4] = { 1, 2, 3, 4 };
      if (0 <= i && i < 4)
          return bar[i];
      else
          return 0;
  }
In languages with an Option type this could of course be written without dependent types in a way that's still correct by construction, for example Rust:

  fn foo(i: usize) -> i32 {
      let bar = [1, 2, 3, 4];
      bar.get(i)        // returns Option<&i32>, not a raw i32
         .copied()      // Option<&i32> -> Option<i32>
         .unwrap_or(0)  // provide a default, now we always have an i32
  }
But ultimately, memory safety here is only guaranteed by the library, not by the type system.

You seem to be repeating what I said, except that you mix up strong and static typing. In a statically dependently-typed language you might expect this not to compile, but this also depends. Run-time checking is certainly something you can combine with strong dependent typing.

Implementing bounds checking that returns option types (which one can also implement in C) is not exactly the same thing. But dependent typing can be more elegant, as it is here.


Update: For the interested, here are three ways to write this in C, the first using dependent type, the second using a span type, and the third using an option type. All three versions prevent out-of-bounds accesses: https://godbolt.org/z/nKTfhenoY
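For illustration, rough sketches of the span and option variants (my own simplified versions; the godbolt link has the real ones):

  #include <stdbool.h>
  #include <stddef.h>

  typedef struct { size_t len; int *data; } int_span;   /* span: pointer + length */
  typedef struct { bool ok; int value; } int_opt;       /* option: flag + value   */

  static int_opt span_get(int_span s, size_t i)
  {
      if (i < s.len)
          return (int_opt){ .ok = true, .value = s.data[i] };
      return (int_opt){ .ok = false };       /* never an out-of-bounds access */
  }

  int foo(int_span s, size_t i)
  {
      int_opt r = span_get(s, i);
      return r.ok ? r.value : 0;             /* caller chooses the default */
  }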

I thought that this was a GCC extension (you need to use #define n 10 instead of int n = 10). Is this not the case anymore?

This is in ISO C99. In C11 it was made an optional feature but in C23 we made the type part mandatory again.
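A small example of my own to illustrate the distinction between the object part and the type part:

  void f(int n)
  {
      int a[n];            /* VLA object: C99; optional since C11 and
                              still optional in C23                    */
      int (*p)[n] = &a;    /* variably-modified type: optional in C11,
                              mandatory again in C23                   */
      (void)p;
  }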

It is definitely not fine. The argument seems to be that since you need to trust somebody, curl | bash is fine because you just trust whoever controls the webserver. I think this is missing the point.

s/webserver/DNS/

HTTPS is there, so you go down to that level only if you want to distrust some element of the public key infrastructure. Which, to be fair, there are plenty of reasons to do if you are paranoid -- they do tell you who's doing what in a shady way as they revoke, so there's a huge list of transgressions.

It is not only that directly; the domain name might be reassigned to someone else, resulting in a valid certificate that is different from the one you wanted. (If you have the hash of the file and have verified it independently, then it is more secure (assuming the hash algorithm is secure enough), although HTTPS is not needed in that case; it can still be used if you wish to prevent spies from knowing which file you accessed. You can also use the server's public key if you know what it should be, although this has different issues, such as someone compromising the server (or the key) and modifying the script.) (There is also the question of whether the script is what you intended anyway (or whether something unexpected happens due to the configuration of your computer); if that is your concern, you can read it (perhaps also verifying the character encoding) before executing it, whether or not you trust the server operator and the author of the script.)

> the domain name might be reassigned to someone else

If that happens, it's game over. As the article I linked noted, the attackers can change the installation instructions to anything they want - even for packages that are available in Linux distros.


It's missing which point?

That you should be very careful about what you install. Cut&pasting some line from a website is the exact opposite of that. This is mostly about psychology, not technology. But there are also other issues with this, e.g. many independent failure points at different levels, no transparency, no audit chain, etc. The counter-model we tried to teach people in the past is that people select a Linux distribution, independently verify fingerprints of the installation media, and then only install packages from a curated list of packages. A lot of effort went into making this safe and closing the remaining issues.

None of that has anything to do with curl|bash.

Be careful who you trust when installing software is a fine thing to teach. But that doesn't mean the only people you can trust are Linux distro packagers.


I think it has a lot to do with "curl|bash". Cut&pasting a curl|bash command line bypasses all the inherent mechanisms and stumbling blocks that would otherwise ensure trust is properly established. It was basically invented to make it easy to install software by circumventing all the protection a Linux distribution would traditionally provide. It also eliminates any possibility of independent verification of what was installed or done on the machine.

Downloading and installing a `.deb` or `.rpm` is going to be no more secure. They can run arbitrary scripts too.

Downloading a deb via a package manager is more secure. Downloading a deb, comparing the hash (or at least noting down the hash) would also already be more secure.

But yes, that they run arbitrary scripts is also a known issue, but this is not the main point, as most code you download will be run at some point (and ideally this needs sandboxing of applications to fix).


> Downloading a deb via a package manager is more secure.

Not what I meant. Getting software into 5 different distros and waiting years for it to be available to users is not really viable for most software authors.


I think it would be quite viable if there were any willingness to work with the distributions in the interest of security.

Well, distros haven't really put any effort into making it viable as far as I know. They really should! Why isn't there a standard Linux package format that all distros support? Flatpak is fine for user GUI apps but I don't think it would be feasible to e.g. distribute Rust via a Flatpak.

(And when I say fine, I haven't actually used it successfully yet.)

I think distros don't want this though. They all want everyone to use their format, and spend time uploading software into their repo. Which just means that people don't.

