
The problem is the demand for dynamic content in AAA games: large exterior and interior worlds with dynamic lights, day and night cycles, glass and translucent objects, mirrors, water, fog and smoke. Everything should be interactable and destructible. And everything should be easy for artists to set up.

I would say the closest we can get is with workarounds like radiance cascades. But anything other than raytracing is an ugly workaround that falls apart in dynamic scenarios. And don't forget that baking times, and having to store the baked results (leading to massive game sizes), are a huge negative.

Funnily enough, raytracing is also just an approximation of the real world, but at least artists and devs can expect it to work everywhere without hacks (in theory).


Maybe I'm misinformed, but doesn't Starlink have to comply with rules made by local authorities? Afaik, when the internet goes down, like it did in Thailand, Starlink can't be used either, because it always routes traffic to a ground station near the source.

If that weren't the case, I would agree that Starlink is the future. But as it stands, I don't see the point unless you live in or travel to remote places.


I think lots more is routed through the satellite network now[0].

[0] https://hackaday.com/2024/02/05/starlinks-inter-satellite-la...


Depends if they want to operate legitimately


I also thought about this a lot. Some things about slow thinking are great. I truly believe that it helped me thrive as a software developer.

But social interactions are awkward. I can't easily come up with things to say, and a lot of the time I can't respond in ways that keep the conversation going. Only after the fact do I get lots of ideas about what I could have said. I'm truly impressed by others who can just come up with interesting or funny things to say on the spot.

I'm a tad older, so I stopped caring about it and just accepted my slow thinking. But I'm sure I also missed out on a lot of opportunities regarding friendships or work. I still think that others perceive me as awkward or just not fun, and it's hard to ignore that.

Funnily enough, my wife is the complete opposite of me, and we have the greatest time.


> I'm truly impressed by others who can just come up with interesting or funny things to say on the spot.

As Winston Churchill once said when asked “What are you doing?”: “Oh just preparing my off-the-cuff remarks for tomorrow.”

I’m one of those weirdos who does public speaking sometimes. Even 8 hour workshops. You cannot prepare for an 8 hour speaking engagement. Not really. But you can accumulate a plethora of anecdotes, metaphors, and remarks that you weave into the narrative or in response to questions.

You can build frameworks that are similar to code. Prepared functions/coroutines/objects that you run in appropriate situations. Works pretty well especially in mentoring/teaching/consulting situations. This is also how comedians prep their sets.

The key is that things you say are new to the audience, but not to you. It can be the same metaphor you’ve fine-tuned over dozens of interactions. And the person you’re talking to thinks “Wow that guy is so quick on his feet, how did he come up with that so fast!?”

You can also spot this if you watch talks by popular presenters (Simon Sinek is a good example). You’ll notice the same 2 or 3 core stories getting polished and fine-tuned over years of talks and interviews.


When I was super young, I used to think my dad (who everyone I met seemed to find extremely funny) had a huge repertoire of hilarious stories, but after a few years I noticed him repeating them and realized he just had a few specific ones that he would re-use with new people, like you mentioned. As someone who tends to be pretty slow to learn how to navigate new social situations, it was eye-opening to recognize this was something I could do too. What's amusing to me at this point is that I'm still not sure he fully realizes he does this, because he'll still sometimes whip out one of the stories when talking to me and then be genuinely surprised when I remind him of some minor detail he forgot to mention this time around.


My dad recently told me a funny story about something that happened to him. Except it actually had happened to me and I had told him about it years before.


I recognize this in myself, so I usually preface the story with something like "this may or may not have been me, but I'm telling it in the first person". First-person stories are funnier. This is part of the "telling a yarn" or "tall tale" tradition I grew up with (in Texas).


Better your Dad with a funny story than your boss with a victory tale!


Heh, my wife was recently telling a story about how she taught our son how to run into a hug. She did not, I actually did, but I let her have it since she was so excited about it.


There’s a Father Brown story which has the extreme of that: https://standardebooks.org/ebooks/g-k-chesterton/the-secret-...

But in general yes - have a few or tens of stories you’ve rehearsed through long use, and practice ways to segue to one of them.

Once you get a handle for the few topics of small talk, it’s not terribly hard - and is a skill that can be taught and learned.


Don't watch Big Fish unless you're ready to cry.


Haha, this is my dad too. He doesn’t seem to realize it. It’s me as well. These are stories that tend to strongly relate to our identity. Like, if you get to know me I will talk about meditation or another story like me training with Wim Hof for an experiment at the Radboud Universiteit.

My username checks out :P

We relate to our own stories I guess.


Jeremy Vine once wrote a story about Boris Johnson which I thought was the pinnacle of this. It was published on his Facebook page and I've since lost the link, so you're going to have to read it on Reddit where someone has posted the whole thing again.

It is uproariously funny and very relevant.

https://www.reddit.com/r/ukpolitics/comments/c1korj/jeremy_v...

It tells the tale of how the man who was going to become Prime Minister of the United Kingdom used to play the improviser and ex tempore comedian, in a practiced and automatic way.



  > “Oh just preparing my off-the-cuff remarks for tomorrow”
That actually reminds me of something I heard from Corridor Crew. Wren was referencing Brandon Laatsch (timestamped link to clip [0])

  | "Shower thoughts only work, when you put in the work." You have to actually spend time trying to brainstorm something, trying to think of something, and at the end of it you'll've made no progress and be discouraged. But then randomly, later, in the shower, doing something completely unrelated, the idea hits. 
It's weird, but I think it works. My best guess is that, at least in part, spending time thinking about the solution space lets your brain subconsciously start to generalize. Which would mean this is, to some extent, a trainable skill. I'm sure Churchill was also a master of moving conversations to topics or subtopics where he could more quickly make off-the-cuff remarks. But I think even the "slowest" person will recognize that they are much quicker in certain categories. Slowness might not mean they haven't thought about those topics; it might just mean they've thought less about quick remarks in that domain (or about quick remarks in general).

I think we would be naive to assume quick responses are a good measure of one's intelligence[1]. I know this is a common assumption, but I think it misses the same thing that quick responses themselves tend to miss: depth. You can be fast and deep, but more often people are fast and wrong[2]. The more complex the topic, the easier it is to be unaware of how wrong.

[0] https://youtu.be/9FL7IZavt1I?t=93

[1] https://news.ycombinator.com/item?id=45242293

[2] https://0x0.st/KcAU.png

[Edit] I wanted to add that I found this method highly effective during my PhD. It requires a balance of churning the wheels and walking away. Progress is invisible until the finish line is in sight, so you need to spend time pushing even if it looks like you are getting nowhere. But at the same time, you need to walk away. If you keep pushing, you'll never have the time for those random thoughts. There's a laundry list of famous physicists[3] who would "only work" for a few hours a day and then do things like go on long walks or play tennis. I think that fits into this model. It seems to be a critical aspect of any creative work. Honestly, the most common mistake I would make was sitting at my desk for too long. It results in a narrowing of focus. There are a lot of times we want that narrowing, but there are also plenty of times we want to think more broadly. I think this is very true for programming in general.

I can sympathize with managers who look at people doing these things and interpret them as being unproductive. But I think the reality is that productivity is just a really hard thing to measure when you're not a machine stamping out well-defined widgets. I think this ends up with us making fewer "widgets", and of lower quality. I mean, it isn't like you can measure quality by anything as simple as the number of lines of code or the number of Jira tickets knocked off. Hell, if your focus is too narrow, your solutions are probably creating more tickets than you're knocking off! But that's completely invisible, only measurable post hoc, and even then quite difficult to measure (if not impossible).

We often talk about current "titans", and all of them boast about their long hours and "dedication": people like Elon suggesting 120 hrs, or the growing 996 paradigm. But I'm unconvinced this really checks out. If anything, it appears much more common that Nobel laureates worked fewer hours, not more. Not all of us are working on Nobel-level work, but it does raise the question of what the most effective strategy actually is. We certainly can't conclude that longer hours at the desk yield better output. Nor can we counterfactually conclude that Dirac would have been even greater had he spent 16 hrs a day working rather than a handful. "More hours" just seems to be a naive oversimplification, closely related to these "shower thoughts".

[3] Dirac is a famous example; colleagues jokingly used the unit "Dirac" to mean "one word per hour". A notoriously "slow" thinker, but a surefire candidate for one of the smartest humans to ever exist. Poincaré famously worked 10am till noon and then 5pm till 7pm. Darwin followed a similar model.


People like Elon suggest 120 hrs or 996 for the employees who work under them implementing their ideas -- the people rolling up their sleeves and putting hammers to nails. Most of the people in an org do not need to be involved in deep-level thinking.


  > nobody ever changed the world on 40 hours a week
  - Elon[0] 
Yet... I listed a few... He goes on to suggest 80 is good with spikes into 100. I mean Elon is notorious for putting in those long hours himself[1], but he's definitely wrong in the quote.

So... who do you think those demands are for? He seems pretty clearly to be demanding it from engineers to execs. That also matches the experience of everyone I've known to have worked at SpaceX, including both programmers and aerospace engineers. Same with Tesla.

Also, thought I'd drop a link to this 996 HN post from the other week[2].

Honestly, I'm not sure who you're referring to, because when not taken literally that would seem to cover literally every employee.

[0] https://x.com/elonmusk/status/1067173497909141504

[1] https://www.financialexpress.com/trending/my-workload-went-f...

[2] https://news.ycombinator.com/item?id=45149049


Elon does this because being able to order all those people around makes him feel special and important, not because it actually works.

Study after study proves that productivity drops off rapidly when people are tired and stressed - which should be common sense, but apparently it's too common for some of our notables to understand.

So instead of actual work output you get productivity theatre. Everything is dramatic and shouting happens, but - for example - Tesla still doesn't have anything resembling FSD while more modest companies are much further along.

It's juvenile machismo, not adult management.


Musk's use of social media and his involvement in recreational activities are way too much for an 80h/week schedule. He's clearly more of a 4-day-workweek type, though shy about it.


I once had the opportunity at a comedy festival to see John Mulaney’s same set twice, and it’s pretty wild to see with your own eyes.


In a way it shows what a craft it is to not only have jokes but also be able to present them again and again in a way that appears fresh.

Being a comedian is hard not because writing jokes is hard, but because it's actually an intersection of a number of hard skills that require a LOT of practice to hone.


Why would it be wild? Very few live performances of any type are pure improvisation.


Because of how formulaic it is?


Do you have any advice for accumulating and sharing relevant anecdotes? I struggle with sharing anecdotes that have an "Aesop", or directly relatable point, even if I've lived such experiences.


> Do you have any advice for accumulating and sharing relevant anecdotes?

For me blogging for the past 15 or so years has been the secret ingredient. I regularly sit down and distill things into an approachable form then send off to my audience to see if it lands. If yes, I mentally add to my reference list. If not, I engage in some clarifying back-n-forth and try again next time.

These days in a more leadership position I get a lot of reps of this synchronously as I work with younger or less experienced folk.


I also often don’t have timely responses.

There are sometimes long pauses before my response, or even mid-speech, during which I'm thinking about what's been said. But the delay is often interpreted as a cue for someone else to respond or change the subject, which often means I never get to say the thing I've spent so much glutamate processing.

I used to say “one moment” every 5 seconds while I think, but that was distracting.

Sometimes I do this thing with my eyes, jumping them around as if I'm reading a book; it gives people something to look at while they wait, like a spinner indicator.


As an over-thinker myself, something I didn't appreciate until too late in life is the necessity of practice.

If you want to be able to hit a ball, it doesn't matter how much thought you put into it: the learning is all about programming your lower, instinctive brain, and its only input device is repetition. That level of the brain can work at much lower latency, which is critical for reactive physical tasks.

I suspect it is the same here. You can certainly learn to speak using different levels of your brain as well. Case in point: public speaking. The reason it's hard is that you generally have to trust your mouth in automatic mode to follow along, while using the thinking part of your brain to plan (or remember) ahead and build a narrative path.


There are body language cues that show you are thinking. Try looking up (like you're looking into your brain).


Yeah. That does work. I do that sometimes, though my eyes start to hurt when I roll them up for that long.

I also find it easier to do something with my eyes than to do nothing while thinking. That’s probably just me.

In the future, I might want an LED embedded in one of my temples that blinks like a network switch port or HDD light, indicating brain activity.


Filler text while you think is also good to practice. “That’s actually a well-informed question; what I’ve seen is…” buys you 5 seconds if you can say it on autopilot and think while your mouth is moving.


I worked with a guy who would always start with “to a first approximation…”

After a while it started to grate on me because it came across as “here comes my poorly formed first thought”


lol, yeah, the trick is to use them as sparingly as possible and have a dozen or so to rotate. I'm not sure if I qualify as a slow thinker the way OP and some others here are discussing it, but I do frequently need more time to answer complex problems. I've also started feeling OK about telling people I'll follow up with them on certain questions when something is too complex to answer off the top of my head. As with anything, it's a balance: I find that if you can't answer simple questions, people will lose confidence in your answers.


That is a hilarious bodymod. And considering that it is possible to get rough brain-activity indicators in a non-invasive manner, something like it could actually be made.


To an extent it’s a skill you can practice if not learn.

By nature I’m a slow thinker, but I can mode-switch if I need to, though it’s exhausting after a while in a weird way. I put it down to working in the trades before switching to programming full time; some of the fastest, funniest people you’ll ever meet are tradesmen on job sites (introversion doesn’t mean poor social skills, after all, though the two get conflated).

If you are generally happy as you are, don’t sweat it; it would be a boring world if we were all the same.


I am also the slow-thinking dev married to a quick thinker, and it's a good pairing. I know couples where they're both quick thinkers and things are so mercurial it's hard to believe they're still together, but maybe the excitement keeps it going.

I enjoy watching Harry Mack videos on YouTube where he freestyles and can work in something that happens like someone walking into the frame into literally the next line of his raps. This capability is so absolutely outside of the realm of possibility for my brain I almost feel like he's a different species.


I'm very similar. I noticed that people who are very easy to talk to share one trait: they have no shame about telling you the same story multiple times. It bores the hell out of me every time. If I try to do it, I get bored as well.


My boxing coach once described fighting as a conversation. I am inclined to agree.

In boxing you don't have the luxury of taking your time to think otherwise you get punched in the face.

Improving at conversation is like boxing - it can be reduced into structures and scenarios. Combinations and responses can be drilled in. Ultimately once the foundations are bedded in there is plenty of room for self expression and creativity.

The funny thing about social interaction is that we all talk to each other, but there are people who live, breathe and hone the art, whether formally or informally, while plenty of us just stumble along doing just well enough...


>> I still think that others perceive me as awkward or just not fun, and it's hard to ignore that.

Most likely they don’t care as much as you think. They are probably thinking about what they should say next


Due to this, career success is going to be highly dependent on your chosen profession. Some professions are inherently more social and require heavy networking. Software, and engineering as a whole, is a good field for slow thinkers to excel in. It can also be meritocratic, so any personality quirks or pedigrees get normalized away when ranking by performance.


For me it has been quite the opposite. I'm a top-performing senior/staff engineer, but I always struggle to get through coding interviews.


I'm a self-taught programmer. I don't code professionally, but I feel confident that I can build any software-related widget I've ever dreamed up. That said, the coding interview stuff I've seen is like gibberish, and I don't see how it correlates with the day-to-day job of building software. Maybe for some types of software (lower-level systems stuff comes to mind). It seems like everyone discussing it online feels about the same, too. So honestly not sure if that's signal or noise for you, lol.


I could have written these exact words. Marriage needs a certain balance you know ;)


I think this is not really a bad trait. If you think about it from the other person’s perspective, they really don’t expect you to make jokes or entertain them


> they really don’t expect you to make jokes or entertain them

Oh, but they do, if you want to have future conversations with them. As a slow thinker with the same social issues as OP, trust me, they do. Nobody wants to keep talking to someone they consider boring, and first impressions are still the most important impressions.


As someone who believed both versions of this in different parts of my life, I’m pretty convinced that it is a choice to believe in either one


You're pretty much hosed for any FAANG leetcode interviews, though. Unless you're a superstar performer otherwise.


You can try getting into improv comedy to develop this sort of skill. I'm also generally a slow thinker, but I don't actually think we think slower; I think we have too high a barrier for what we allow ourselves to say. We're afraid of making a mistake or saying something stupid, but most people just blurt out the first thought that percolates up from their subconscious.


It's a shitty system if one side just needs to succeed once while the other side needs to succeed over and over again.

What really should be done is to disallow proposals which are essentially the same. Once a mass surveillance proposal like this is defeated, it shouldn't be allowed to be constantly rebranded and reintroduced. We need a firewall in our legislative process that automatically rejects any future attempts at scanning private communications.


> What really should be done is to disallow proposals which are essentially the same.

This very much exists in a lot of parliamentary rules authorities, but it's usually limited to once per "session." They just need to make rules that span sessions that raise the bar for introducing substantially similar legislation.

It can easily be argued that passing something that failed to pass before, multiple times, should require supermajorities. Or at least to create a type of vote where you can move that something "should not" be passed without a supermajority in the future.

It is difficult in most systems to make negative motions. At the least it would have to be tailored as an explicit prohibition on passing anything substantially similar to the motion in future sessions (without suspending the rules with a supermajority.)

I don't know as much about the French Parlement's procedure as I would like to, though.


Is there no way to codify a negative right, like “The right of the European people to privacy in their communications and security in their records through encryption shall not be infringed?” Negative rights reserved to the people should be more important than positive laws granting power to the government.


Yes; they could amend the definitely-not-a-constitution (for branding/eurosceptic-appeasement reasons, the EU constitution was rebranded as the Lisbon treaty before adoption). Arguably such a right may exist already and this legislation might find itself on a collision course with the ECJ if it passed (notably the ECJ nuked _another_ intrusive law, back in the day: https://en.wikipedia.org/wiki/Data_Retention_Directive).


In some ways yes but we've already seen with covid that governments are happy to behave unconstitutionally even when it's clear they will eventually lose in court - by then their targets have already been dragged through the mud.


This rule can really hurt. e.g. Theresa May tried passing a deal to keep the UK in the Customs Union. The speaker wouldn't allow it because the same deal had previously been rejected, even though she now had the support for it in the house.


I wonder if it'd be possible to fix a lot of these issues by having a constitution with damn near impossibly strict standards for changing it that rely on the entire population agreeing (or close to it)?

So there might be a right to privacy or freedom of speech enshrined in law, and the only way to change it would be for 90+% of the population to agree to change it. That way, it'd only take a minority disagreeing with a bad law to make it impossible to pass said law. Reactionaries and extremists would basically be defanged entirely, since they'd have to get most of their opponents to agree with any changes they propose, not just their own followers.


It exists. Except these mfs will not put the proposal to vote if they know it will not pass. Instead they try again and again to gather the votes.


Same here! Restarted my router and Pi-hole twice. Now I feel stupid.


I know this behavior all too well. I definitely think it can be used to your advantage: you learn a lot in the struggle and get a deep understanding of the problem.

But you really need to step back once in a while and contemplate whether the thing you're doing is really worthwhile.

One of the most precious resources is time. I didn't appreciate this insight much while I was young, but as I grew older I needed to be more careful about how I was spending it. In this regard I like the saying "youth is wasted on the young". But this also enables you to be more focused in your approach. Failing fast is a lot better than spending years on a problem with no end in sight.


I’m already old.


Wouldn't it solve a whole lot of problems if we could just add optional type declarations to JSON? It seems so simple and obvious that I'm kinda dumbfounded it's not a thing yet. Most of the time you would not need it, but it would prevent the parser from making a wrong guess in all those edge cases.

There are probably types that not every parser/language can accept, but at least the parser could throw a meaningful error instead of guessing or even truncating the value.
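To illustrate the kind of wrong guess I mean, here's a minimal Rust sketch (assuming serde_json's default behavior of falling back to f64 for integers that don't fit in i64/u64, which is my recollection rather than something I've re-checked):

  fn main() {
      // 18446744073709551617 is 2^64 + 1, one past u64::MAX.
      let v: serde_json::Value =
          serde_json::from_str("18446744073709551617").unwrap();
      // No integer representation fits, so the parser quietly guesses "float".
      println!("{:?}", v.as_u64()); // None
      println!("{:?}", v.as_f64()); // Some(1.8446744073709552e19), silently rounded
  }

An optional type annotation in the document itself could turn that silent rounding into a loud, meaningful error.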


I doubt that would fix the issue. The real cause is that programmers mostly deal in fixed-size integers, and that’s how they think of integer values, since those are the concepts their languages provide. If you’re going to write a JSON library for your favourite programming language, you’re going to reach for whatever ints are the default, regardless of what the specs or type hints suggest.

Haskell’s Aeson library is one of the few exceptions I’ve seen, since it only parses numbers to ‘Scientific’s (essentially a kind of bigint for rationals.) This makes the API very safe, but also incredibly annoying to use if you want to just munge some integers, since you’re forced to handle the error case of the unbounded values not fitting in your fixed-size integer values.

Most programmers likely simply either don’t consider that case, or don’t want to have to deal with it, so bad JSON libraries are the default.


This is actually a deliberate design choice, which the breathtakingly short JSON standard explains quite well [0]. The designers deliberately didn't introduce any semantics and pushed all of that to the implementors. I think this is a defensible design goal: if you introduce semantics, you're sure to annoy someone.

There's an element of "worse is better" here [1]. JSON overtook XML exactly because it's so simple and solves for the social element of communication between disparate projects with wildly different philosophies, much like UNIX byte-oriented I/O streams or the C calling convention do.

---

[0] https://ecma-international.org/publications-and-standards/st...

[1] https://en.wikipedia.org/wiki/Worse_is_better


It's an interesting discussion. There's always a divide when you slowly migrate from one thing to another.

What makes this interesting is that the difference between C code and Rust code is not something you can just ignore. You will lose developers who simply don't want to, or can't, spend the time to get into the intricacies of a new language. And you will temporarily have a codebase where two worlds collide.

I wonder how in retrospect they will think about the decisions they made today.


Most likely Rust will stay strictly on the driver side for several years still. It's a very natural Schelling fence for now, and the benefits are considerable, both in improving driver quality and making it less intimidating to contribute to driver code. It will also indirectly improve the quality of core code and documentation by forcing the many, many underspecified and byzantine API contracts to be made more rigorous (and hopefully simplified). This is precisely one of the primary things that have caused friction between RfL and the old guard: there are lots and lots of things you just "need to know" in order to soundly call many kernel APIs, and that doesn't square well with trying to write safe(r) Rust abstractions over them.


An example of the latter: drm_sched

https://vt.social/@lina/113051677686279824


What's the link for the lkml drama?


I'm not sure but I'm guessing it's this one https://lore.kernel.org/lkml/20230714-drm-sched-fixes-v1-0-c...


> and that doesn't square well with trying to write safe(r) Rust abstractions over them.

Or just using those kernel APIs, period.


I don't think changing completely to Rust is attainable. I guess some older or closer-to-the-metal parts will stay in C, but parts seeing more traffic and evolution will become more rusty over time, and both will have their uses and their islands inside the codebase.

gccrs will allow the whole thing to be built with the GCC toolchain in a single swoop.

If banks are still using COBOL and FORTRAN here and there, this will be the most probable possibility in my eyes.


> I guess some older or closer-to-the-metal parts will stay in C

I suppose the biggest reason is that C programmers are more likely than not trained to kinda know what the assembly will look like in many cases, or to have a very good idea of how an optimizing compiler will optimize things.

This reminds me I need to do some non-trivial embedded project with Rust to see how it behaves in that regard. I'm not sure if the abstraction gets in the way.


After writing some non-trivial, performance-sensitive C/C++ code, you get a feeling for how that code behaves on the real metal. I have that kind of intuition, for example. I never had to dive down to the level of the generated ASM, but I can get ~80% of theoretical IPC just by minding what I'm doing in C++ (minimal branching, biasing branches towards a certain side, etc.).

So, I think if you do the same thing with Rust, you'll have that intuition, as well.

I have a friend who writes embedded Rust, and he said it's not as smooth as C yet. I think Rust has finished the first 90% of its maturing and still has the other 90% to go.


I write embedded Rust full-time and can say there's nothing I can do in C that I can't do in Rust. Sure, the C tools/frameworks are a lot more mature, but a combination of the PAC for register access (maybe with a bit of community-maintained HAL) and a framework like RTIC is pretty much all I need.
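To make that concrete, here's a minimal sketch of roughly what a PAC boils down to underneath; the register address and pin layout are invented for illustration, and a real PAC generates typed, field-level wrappers around accesses like these:

  // Hypothetical GPIO output data register (address is made up).
  const GPIO_ODR: *mut u32 = 0x4002_0014 as *mut u32;

  /// Set one output pin high. Assumes pin < 32.
  fn set_pin_high(pin: u8) {
      unsafe {
          // Volatile read-modify-write, so the compiler neither elides,
          // reorders, nor merges the hardware accesses.
          let current = core::ptr::read_volatile(GPIO_ODR);
          core::ptr::write_volatile(GPIO_ODR, current | (1u32 << pin));
      }
  }

The PAC's job is mostly to wrap that unsafe volatile access in safe, named methods per register field.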


I am not convinced, given the amount of heavy lifting that the Rust type system does, that rusty Rust is nearly as brain-compilable as C. However, you can write the equivalent of C in many languages, and Rust is one of them. That kind of code is easy to compile in your head.


It's not brain-compilability; it's getting used to what that specific compiler does with the code you brain-compile.

So, I have a model for my code in my brain, and this code also has a real-world behavior after it's compiled by your favorite toolchain. You do both enough times, and you'll have a feeling for both your code and the compiler's behavior for your code.

This feeling breaks when you change languages. I can brain-compile Go, for example, but the compiler adds other things, like GC and moving local variables to the heap when they would otherwise dangle after a function returns (escape analysis). Getting used to this takes time. Same for Rust.


> I suppose the biggest reason is that C programmers are more likely than not trained to kinda know what the assembly will look like in many cases, or have a very good idea of how an optimizer compiler will optimize things

This is the only way Hellwig's objection makes any kind of sense to me. Obviously, intra-kernel module boundaries are not REST APIs, where providers and clients would be completely separated from each other. Here I imagine that both the DMA module and its API consumers are compiled together into a monolithic binary, so if assumptions about the API consumers change, this could affect how the module itself is compiled.


I've done a non-trivial embedded project in C. (Quadcopter firmware). The language doesn't get in the way, but I had to write my own tooling in many areas.


Is there a layer where C is the sweet spot? Something too high-level for ASM, and too low-level for Rust? (not my area, so genuine question).


Many people still have the mistaken belief that C is trivial to map to assembly instructions and thus has an advantage over C++ and Rust in areas where understanding that mapping is important - but in practice the importance is overstated, and modern C compilers are so capable at high optimisation levels that many C developers would be surprised at what is produced if they looked much further than small snippets.

Like half the point of high-level systems languages is to be able to express the _effects_ of a program and let a compiler work out how to implement that efficiently (C++ famously calls this the as-if rule, where the compiler can do just about anything to optimise so long as it behaves in terms of observable effects as-if the optimisation hadn't been performed - C works the same). I don't think there's really any areas left from a language perspective where C is more capable than C++ or Rust at that. If the produced code must work in a very specific way then in all cases you'll need to drop into assembly.

The thing Rust really still lacks is maturity from being used in an embedded setting, and by that I mostly mean either toolchains for embedded targets being fiddly to use (or nonexistent) and some useful abstractions not existing for safe rust in those settings (but it's not like those exist in C to begin with).


Often the strong type system of C++ means that if you take C code and compile it with a C++ compiler, it will run faster. Part of the reason it is faster is that C++ allows the compiler to make assumptions that might be false, so there is a (very small, IMHO) chance that your code will be wrong after those optimizations. C++ also often has better abstractions that, if you use them, allow C++ to be faster than C can be.

If Rust doesn't also compile to faster code than C because of its better abstractions, that should be considered just a sign that the compilers need more work on optimization, not that Rust can't be faster. Writing optimizers is hard and takes a long time, so I'd expect Rust to be behind.

Note that the above is about real-world benchmarks, and the difference is unlikely to amount to more than something like a 0.03% difference in speed - it takes very special setups to measure these differences, while simple code changes can easily make several-hundred-percent differences. Common microbenchmarks generally are not large enough for the type system to make a difference, and so they often show C as #1 even though in real-world problems it isn't.


Rust is a systems programming language by design; bit-banging is totally within its remit, and I can't think of anything in the kernel that Rust can't do but that C could. If you want really, really tight control of exactly which machine instructions get generated, you would still have to go to assembler anyway, in either Rust or C.


That is the exact reason why it was created in the first place: a portable macro assembler for UNIX. It should have stayed there, leaving room in userspace for other stuff, like Perl/Tcl/... on UNIX, or Limbo on Inferno. As the UNIX authors revised their ideas of what UNIX v3 should look like, there was already a first attempt with Alef on "UNIX v2", aka Plan 9.

Or even C++, which many forget was also born at Bell Labs in the UNIX group. The main reason was that Bjarne Stroustrup never wanted to repeat his Simula-to-BCPL downgrade ever again; thus C with Classes was originally designed for a distributed computing research project at Bell Labs on UNIX, where Stroustrup certainly wasn't going to repeat the previous experience, this time with C instead of BCPL.


I'm not sure what you mean by "leaving place for". There was a place for Perl and Tcl on Unix. That's how we wound up with Perl and Tcl.

If you mean that C should have ceded all of user-space programming to Perl and Tcl, I disagree strongly. First, that position is self-contradictory; Perl was a user-space program, and it was written in C. Second, C was much more maintainable than Perl for anything longer than, say, 100 lines.

More fundamentally: There was a free market in developer languages on Unix, with C, Perl, Awk, Sed, and probably several others, all freely available (free both as in speech and as in beer). Of them, C won as the language that the bulk of the serious development got done in. Why "should" anything else have happened? If developers felt that C was better than Perl for what they were trying to write, why should they not use C?


This is kind of what I mean,

"Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization...The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue.... Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels? Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve. By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are ... basically not taught much anymore in the colleges and universities."

-- Fran Allen interview, Excerpted from: Peter Seibel. Coders at Work: Reflections on the Craft of Programming

C's victory is more related to there not being any other compiled language in the box than to any marvelous technical capabilities; so, worse-is-better approach: use C.

Even more so when Sun started the trend of charging extra for UNIX development tooling, which only contained C and C++ compilers; additional compilers like Fortran and Ada, or an IDE, cost even a bit more on top.

Which other UNIX vendors were quite fast to follow suit.


Thanks for the explanation.

But I've seen that quote before (I think from you, even). I didn't believe it then, and I don't believe it now.

There is nothing about the existence of C that prevents people from doing research on the kind of problem that Fran Allen is talking about. Nothing! Those other languages still exist. The ideas still exist. The people who care about that kind of problem still exist. Go do your research; nobody's stopping you.

What actually happened is that the people who wanted to do the research (and/or pay for the research) dried up. C won hearts and minds; Fran Allen (and you) are lamenting that the side you preferred lost.

It's worth asking: even if Ada or Algol or whatever cost extra, why weren't they worth the extra cost? Why didn't everybody buy them and use them anyway, if they were that much better?

The fact is that people didn't think they were enough better to be worth it. Why not? People no longer thought that these automatic optimization research avenues were worth pursuing. Why not? Universities were teaching C, and C was free to them. But universities have enough money to pay for the other languages. But they didn't. Why not?

The answer can't be just that C was free and the other stuff cost. C won too thoroughly for that - especially if you claim that the other languages were better.


Worse is better, and most folks are cheap: if lemons are free and juicy sweet oranges have to be bought, they will drink bitter lemonade no matter what; eventually it will even taste great.

Universities are always fighting their budgets; some of them can't even afford to keep the library stocked with good, up-to-date books.


Well, why not? The price tag certainly is what did it for me.

I was twelve. C was free, the alternatives were not. By the time I could have paid for one, I’d been writing C for ten years…

Nowadays I wouldn’t touch it with a ten-foot stick.


> What actually happened is that the people who wanted to do the research (and/or pay for the research) dried up. C won hearts and minds; Fran Allen (and you) are lamenting that the side you preferred lost.

Eh, sort of. The rise of C is partially wrapped up in the rise of general-purpose hardware, which eviscerates the demand for optimizers to take advantage of the special capabilities of hardware. An autovectorizer isn't interesting if there's no vector hardware to run it on.

But it's also the case that when Java became an important language, there was a renaissance in many advanced optimization and analysis techniques. For example, alias analysis works out to be trivial in C--either you can obviously prove things don't alias based on quite local information, or your alias analysis (no matter how much you try to improve its sensitivity) gives up and conservatively puts them in the everything-must-alias pile; there isn't much of a middle ground.


Directly programming hardware with bit-banging, shifts, bitmasks and whatnot. Too cumbersome in ASM to do in large swaths, too low level for Rust or even for C++.

Plus for that kind of things you have "deterministic C" styles which guarantee things will be done your way, all day, every day.

For everyone answering: this is what I understood from chatting with people who write Rust in amateur and pro settings. It's not a "Rust is bad" bias or anything. The general consensus was that C is closer to the hardware and handles hardware quirks better, because you can do the "seemingly dangerous" things that hardware needs done in order to initialize successfully. Older hardware is finicky, just remember that. Also, for anyone wondering: I'll start learning Rust the day gccrs becomes usable. I'm not a fan of LLVM, and have no problem with Rust.


> too low level for Rust or even for C++.

I'd love to hear a justification for why this is a thing. Doing bit-banging is no more difficult in Rust or C++ than in C.


You probably mean "C compatible-ish subset of C++98"


Two reasons I can think of off the top of my head.

The assembly outputted from C compilers tends to be more predictable by virtue of C being a simpler language. This matters when writing drivers for exotic hardware.

Sometimes, to do things like make a performant ring buffer (without VecDeque), you need to use unsafe Rust anyway, which IMO is just taking on the complexity of the Rust language without any of the benefit.

I don’t really think there’s any benefit to using C++ over Rust except that it interfaces with C code more easily. IMO that’s not a deal-maker.


> The assembly outputted from C compilers tends to be more predictable by virtue of C being a simpler language.

The usual outcome of this assumption is that a user complains to the compiler that it doesn't produce the expected assembly code, which the compiler ignores because they never guaranteed any particular assembly output.

This is especially true for the kinds of implicit assembly guarantees people want when working with exotic hardware. Compilers will happily merge loads and stores into larger load/stores, for example, so if you need to issue two adjacent byte loads as two byte loads and not one 16-bit load, then you should use inline assembly and not C code.
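A small Rust-flavoured sketch of that distinction (the same idea applies to volatile in C):

  // The compiler may legally fuse these two plain byte loads into a single
  // 16-bit load, because the memory is assumed to behave like ordinary RAM.
  unsafe fn read_two_bytes(p: *const u8) -> (u8, u8) {
      (*p, *p.add(1))
  }

  // Volatile accesses must be issued exactly as written, one per call, which
  // is what memory-mapped hardware registers usually require.
  unsafe fn read_two_bytes_volatile(p: *const u8) -> (u8, u8) {
      (core::ptr::read_volatile(p), core::ptr::read_volatile(p.add(1)))
  }

Even the volatile version doesn't pin down the exact instructions, though; if the hardware needs a specific access pattern beyond "don't merge, elide, or reorder", inline assembly is still the only real guarantee.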


I’m not saying every C compiler is always perfectly predictable, but by virtue of it being a simpler language it should always be more predictable than Rust, barring arcane optimizations.

I do agree that if someone actually cares about the assembly they should be writing it by hand.


> I’m not saying every C compiler is always perfectly predictable

No C compiler is predictable. First, there is the compiler magic of optimization.

Then you have Undefined Behavior, which in C is almost a guarantee that you'll experience inconsistent behavior between compilers, targets, optimization levels and the phases of the moon.

In Rust, use .iter() a lot to avoid bounds checks, or, if you want auto-vectorization, use lots of fixed-length arrays and look at how LLVM auto-vectorizes them. It takes getting used to, but hey, so does literally every language if you care about the SOURCE -> ASSEMBLY translation.
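A tiny sketch of the .iter() point (illustrative only; LLVM can often elide the check in the indexed version too, so don't take the exact codegen as a given):

  // Indexed loop: each xs[i] is conceptually a bounds-checked access.
  fn sum_indexed(xs: &[u32]) -> u32 {
      let mut total = 0u32;
      for i in 0..xs.len() {
          total = total.wrapping_add(xs[i]);
      }
      total
  }

  // Iterator version: no index at all, so there is no bounds check to elide,
  // and the loop shape is usually easier for LLVM to auto-vectorize.
  fn sum_iter(xs: &[u32]) -> u32 {
      xs.iter().fold(0u32, |acc, &x| acc.wrapping_add(x))
  }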


> The assembly outputted from C compilers tends to be more predictable by virtue of C being a simpler language.

That doesn't seem to be true, not in the presence of UB, different platforms and optimization levels.

> Sometimes, to do things like make a performant ring buffer (without VecDeque), you need to use unsafe Rust anyway, which IMO is just taking on the complexity of the Rust language without any of the benefit.

If you write a data structure in Rust, you're expected to wrap the unsafe fiddly bits in a safer shell and provide unsafe access as needed. Sure, the inner workings of Vec, VecDeque, and ring buffers are unsafe, but the API used to modify them isn't (modulo any unsafe methods that state their preconditions for safe use).

The idea is to minimize the amount of unsafe, not completely eradicate it.
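As a rough sketch of that shape (not a tuned ring buffer, and capacity N is assumed to be non-zero), the public API can stay entirely safe while the internals are free to tighten things up behind the invariant:

  /// Fixed-capacity ring buffer with a safe public API. The internals use
  /// checked indexing here; swapping in get_unchecked would be a purely
  /// internal change, justified by the invariant (head + len) % N < N.
  pub struct Ring<const N: usize> {
      buf: [u32; N],
      head: usize, // index of the oldest element
      len: usize,  // number of valid elements, always <= N
  }

  impl<const N: usize> Ring<N> {
      pub fn new() -> Self {
          Ring { buf: [0; N], head: 0, len: 0 }
      }

      /// Push a value, overwriting the oldest element when full.
      pub fn push(&mut self, value: u32) {
          let idx = (self.head + self.len) % N;
          self.buf[idx] = value;
          if self.len < N {
              self.len += 1;
          } else {
              self.head = (self.head + 1) % N;
          }
      }

      pub fn len(&self) -> usize {
          self.len
      }
  }

Callers only ever see the safe methods, so however the inside ends up being written, any unsafety stays contained in one small, auditable spot.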


Autovectorized code is hardly predictable, nor is what happens when folks land on UB mines that optimizers gladly take advantage of.


Why exactly would it be too low-level for Rust?


> too low level for Rust or even for C++.

This doesn’t make any sense to me, could you explain why?


Rust does OK at this, but it typically works better with some tooling to make register and bit-flag manipulation look more like normal Rust functions. chiptool and svd2rust do this for microcontroller code using all Rust. The only asm needed is the boot-up code that sets up enough to run Rust (or C).


Honestly, if I need to do bit-banging, I'd go with Rust over C these days. Rust has a much richer set of bit primitives than C does.
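For instance (these are all stable methods on the integer types, to the best of my recollection):

  fn bit_tricks(x: u32) {
      let ones = x.count_ones();      // population count
      let lz = x.leading_zeros();     // count of leading zero bits
      let rot = x.rotate_left(7);     // bit rotation
      let be = x.to_be_bytes();       // explicit big-endian byte conversion
      let rev = x.reverse_bits();     // reverse the bit order
      println!("{ones} {lz} {rot} {be:?} {rev}");
  }

In C the equivalents are compiler builtins (__builtin_popcount and friends) or hand-rolled macros, so having them in the core language genuinely helps for bit-banging work.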


"C is closer to the hardware" [citation needed]

This might have been true 50 years ago, I am unconvinced it was true 25 years ago, it would take a lot to convince me it's true today.


maybe generic implementations of crypto primitives and math kernels.


It strikes me that the former would profit from a strong type system, whereas the latter could profit from (enforced) strict aliasing.


> I wonder how in retrospect they will think about the decisions they made today.

The decision was not made today; what happened today (or, rather, a few days ago) is Linus calling out a C maintainer for going out of his way to screw Rust devs. Rust devs have also been called out for shitty behaviour in the process.

The decision to run a Rust experiment is a thing that can be (and is) criticized, but if you allow people to willfully sabotage the process in order to sink the experiment, you will also lose plenty of developers.


Do the colors have any significance in those flame graphs? It's unfortunate that a post about them does not mention anything about colors. If you look at the examples, there are bars which have the same length, but the colors look random to me.


Yes, they are random. See the blog post by the inventor of the flame graph as we know it today:

> Neelakanth and Roch's visualizations used completely random colors to differentiate frames. I thought it looked nicer to narrow the color palette, and picked just warm colors initially as it explained why the CPUs were "hot" (busy). Since it resembled flames, it quickly became known as flame graphs.

https://www.brendangregg.com/flamegraphs.html


G'MIC is a great tool. I use the Qt plugin for Krita.

The only problem I have is that I don't use it very much and if I do, I'm just overwhelmed by all the effects and parameters. I'm sure there's great value there... But, if you don't know exactly what you're looking for? Well, be prepared to click around for ages finding something cool. I have no idea how to improve that, but I find it annoying.

It would also be great if the effects weren't destructive, like using it with filter layers. But unfortunately this doesn't seem to be possible.

