It's worse than that -- there is not a single piece of hardware that implements RVA23 available to be bought on the market today.
There are SoCs on the market that implement RVV (Vector extensions), and SoCs on the market that implement H (Hypervisor extensions).
There are no SoCs on the market that implement both at the same time. And both are mandatory for RVA23.
I'd love to be proven wrong on the hardware availability. If there's hardware to be bought in western countries that implements both RVV and H, please let me know.
> It's worse than that -- there is not a single piece of hardware that implements RVA23 available to be bought on the market today.
I think that's fine. As an outsider without any RISC-V board around, future alignment seems better than a board out today, given that performance is AFAIK still awfully subpar.
As a potential consumer, all I want is that by the time RISC-V really hits the market, people don't start hitting edge cases, like stubbing toes on furniture, with missing extensions that end up being critical to properly run the software they need. I don't want another shitshow like USB-C fast charging, where consumers can't easily tell whether a cable will work fine or end up in a slow-charge fallback.
I'd rather see RISC-V for the general public come out later than start off on the wrong foot.
Most of the low-level pieces are in Rust, the TUI is written in Python and most of the remaining pieces are getting lowered down to the Rust libraries over time.
(It was all Python up until ~6 months ago)
EDIT: Oh, and you can buy the Grayskull cards online now, without contacting anyone.
I figured I'd add a bit of clarification here. Tenstorrent has a couple of SW stacks. The lowest-level programming model (Metalium) is written in C++ and goes down to the register level. Higher-level entry points (ttlib, ttnn) are written on top of that in Python. I think it's the smi tooling and other systems software that might be written in Rust.
Firmware can sometimes get a bit complicated due to the third-party IP used and whether that can be open-sourced, but within those constraints this is the ambition.
I.e. things like memory controller setup and link training are often constrained by the IP vendors as to whether you can share them in source form.
Good news in general though -- there isn't a lot of firmware on these cards in the path of the workloads, most of it is around system management and thermals/power.
Terminology-wise, some of the pieces in the code base are called firmware, but they are loaded at runtime and built as part of the usual build process. They're more like libraries linked with the workload than resident FW. (There's some of both, though.)
Zero work? I guess memory fades quickly sometimes.
Apple rented out DTK systems for $500 for developers to get their hands on ARM hardware and port their applications. Hardly zero work. You couldn't even keep the hardware, you had to send it back (and originally you wouldn't even get $500 in credit to buy a production system with, luckily they fixed that after the easily predictable uproar).
Google is providing software emulators instead, which are much easier to work with and don't require complicated device logistics. Not only that, but the only audiences that really need to do work to get ready are those producing NDK applications and those working on hardware support for RISC-V.
Java/Kotlin applications would not need any porting. Sounds like zero work to me!
If you were initially fine with software emulation (i.e. Rosetta 2), as were many small and large software projects for macOS or Unix, you had no need whatsoever to get a DTK.
> If you were initially fine with software emulation (i.e. Rosetta 2)
You'd need a DTK to know if you're fine with emulation or not.
If you cared at all about your app running, you wouldn't just assume that it magically runs fine on an emulation layer you never touched, and that at speeds that are reasonable.
For a reference point, the first Rosetta was far from great, and while some apps could run against it, it was most of the time an ultra-painful experience. That pain helped devs put the effort into making native versions, but it also means you couldn't expect Rosetta 2 to deliver acceptable performance based on Apple's track record.
> Rosetta 2 was straightforward and surprisingly fast, requiring zero tweaking or user interaction. Most people never even noticed.
Yes, but if you were a developer who had to make sure your application worked on the upcoming M1 models, there was no way to know that without getting the DTK.
At the time when Apple announced their transition to their own silicon and that there would be a Rosetta 2, the only thing you had to go on other than the DTK was the precedent of the first Rosetta. It was a major feat back in its day (god I feel old just writing that...) but it was nowhere near Rosetta 2's compatibility and speed. Armed with nothing but the first Rosetta's precedent, it was a somewhat risky outlook.
They are saying that given Apple's track record with Rosetta 1, developers had no reason to trust Rosetta 2 would be sufficient to test their applications with, and would thus need to buy the expensive DTK.
Rosetta translates all x86_64 instructions, but it doesn't support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions.
I can imagine quite a number of users running into the above situation in multimedia related code.
But we're talking about Macs. That's a huge chunk of their userbase. The people who use Macs for actual work can broadly be classified into two groups: devs who need Xcode, and media. Yes, there are exceptions, but that's the majority. For one of those groups, AVX is pretty important.
And those apps had to be ready too. You don't build a reliable platform by randomly breaking "a small minority of apps". You yourself are certainly in a "small minority" for at least a few features you rely on.
The initial claim was, lxgr: "If you were initially fine with software emulation (i.e. Rosetta 2), as were many small and large software projects for macOS or Unix, you had no need whatsoever to get a DTK."
The subsequent claim was, xvector: "Rosetta 2 was straightforward and surprisingly fast, requiring zero tweaking or user interaction. Most people never even noticed."
Posters, myself included, are reacting against these claims, as they both put the cart before the horse, and the second gives only an end user perspective.
Devs had verified with a DTK that Rosetta 2 ran their programs acceptably. Keep in mind that patches had to be issued for programs which did not check for the presence of AVX, AVX2, or AVX512, or else they would crash. This invalidates the first claim. It shows why the second claim is only the second half of the story.
So the logic follows a line rather than a circle.
Also, nobody made the claim that "Apple made test hardware available for those people, but not enough for all apps to be tested".
The "ultra painful experience" part is about Rosetta 1.
For Rosetta 2, I agree Apple made a much better translation layer. There were still swaths of software that couldn't run on Rosetta 2, but the previous generation of Intel machines was still there, alive and kicking, so people who didn't feel like taking the risk didn't need to.
Luckily, many of the things which caused bugs on ARM were related to the weaker memory ordering and to code having (invalid) implicit assumptions about things like memory barriers.
Luckily, because this has mostly been fixed, and it is also the biggest stumbling block when going from x86 => RISC-V.
Though there was something about the specific LR/SC definition which I found quite problematic when it comes to implementing certain atomic patterns. But that was 2?? years or so ago and I don't remember the details at all.
Either way, theoretically, if you have a correct C++-standard-conformant program without UB, it should "just work" on RISC-V by now. But then, I don't think such a thing exists (/s/j).
> if you have a correct C++ standard compatible program without UB it should "just work" on RISC-V by now, but then I don't think such a thing exists
Fortunately, what does exist is programs that have been ported to work on arm64 (or are native there), which will Just Work on RISC-V which has a slightly stronger memory model than Arm.
It actually does: the devices might run ART, however the whole developer toolchain depends heavily on the JVM: Gradle, Android Studio/IntelliJ, the plethora of little CLI tools to transform those .class files into .dex, to desugar modern JVM bytecodes into JVM bytecodes that can actually be mapped to existing .dex ones (for pre-Android-12 devices), library calls that need to be polyfilled into something else, ...
Stupid question: why does android use ART (and Dalvik before that)? I guess it's more performant for mobile phones, but then, is there a reason why other java apps don't use it outside of android? Has anyone tried using ART/Dalvik outside of the android ecosystem?
It's a bit of a question of what you define the term "JVM" as. For many people it just means the thing which runs the bytecode Java was compiled to, in which case the Android Runtime (ART) is a JVM (it has AOT, JIT, and can run bytecode produced from Java if done in the right way).
But a more nit-picky/correct definition would expect the JVM to follow various specs (e.g. expect the bytecode to have exactly the same format, features, etc. as the one you find in a classical Java application) and to have various features which do not apply to ART at all.
Or, in other words, people who have nothing to do with Java might call ART a JVM in a generic way, but people who do might, for good reason, be very insistent that this isn't the case. (Also, Google's lawyers will be VERY insistent that it's not a JVM.)
Which is a kind of nit-picking by Google's lawyers, as the approach taken by Dalvik/ART has been quite common in the embedded space; see PTC, Aicas, Aonix, microEJ, WebSphere Real Time, ...
Not all of them are still available, however all of them do support AOT compilation and their own bytecode format, more optimized for their use cases than regular .class files.
The big difference between them and Google, is that they always played by the Sun/Oracle rules regarding Java licensing.
They probably mean the Sun (Oracle) JVM on ARM; as far as I remember there were tons of issues with that one.
And in general there were tons of issues with multi-threaded code during the time Java moved to the "current" memory model (was that in Java 1.5 or Java 1.6? I don't remember). The Sun (Oracle) JVM having issues on ARM was still a thing after a huge part of the ecosystem had moved to Java 1.6+, though I think it wasn't much of an issue anymore by the time Java 1.8 was approaching. But that was many years ago and my memory is a bit vague.
"and in general there were tons of issues with multi threaded code during the time Java did move to the "current" memory model"
I wrote code or managed Java development teams from around 1996 to ~2010 and I don't remember any problems with multi threaded code or memory models. But of course you might be right if you did run Java on an ARM mobile like the Nokia.
The only challenge I remember with two high load (thousands of write transactions/sec - back then) websites were GC pauses, where we had to hire consultants to help with tuning the GCs on different machines for the high load.
GNU/Linux applications are typically quite portable across CPUs. Is Android really that different? I would expect that once you have ported an NDK application to at least two architectures, the third one should be really easy. And with Android, you already get two easily testable architectures (the ubiquitous aarch64, and x86-64 under virtualization).
(It's not that many Debian or Fedora community contributors have a mainframe in their basement, yet the software they package tends to build and run on s390x just fine. Okay, maybe there are some endianness issues, but you wouldn't hit those with RISC-V.)
Assuming you have the source code for all your native code, then yes it should mostly work. 3rd party libraries can contain native code which means that you may be dependent on someone else porting their code.
But "GNU/Linux applications are typically quite portable across CPUs" is only the case because a lot of people have a lot of interest and time invested in making that work.
For many commercial apps, though, there is little incentive to spend any time on making them work with anything but ARM (even older armv7 might not be supported, because depending on what your app does, the market share makes it not worth it).
Still, with both ARM and RISC-V having somewhat similar memory models, and with people in general today writing much more C/C++-standard-compatible code instead of "it's UB in theory but will work on this hardware" nonsense, I don't see too many technical issues.
One issue could be that a non-negligible number of Android apps which use the NDK use it only to "link" code from other toolchains into Android apps, e.g. Rust binaries. And while I'm not worried about Rust, I wouldn't be surprised if some of the toolchains in use don't support RISC-V by the time it starts to become relevant. Especially when it comes to the (mostly, but not fully) snake-oil nonsense stuff banking apps tend to use, I would be surprised if there weren't some issues.
Though in the end the biggest issue is a fully non-technical one:
- if you release an app with the NDK, you want to test it on real hardware for all archs you support (no matter whether there is some automatic cross-compilation or interpretation, like Rosetta)
- but to do so you need access to such hardware; emulators might be okay but don't always cut it
- but there aren't any RISC-V Android phones yet
- but to release such phones you want to have apps available, especially stuff like PayPal or banking apps
- but to have such things available, the providers must evaluate whether it's worth the money it costs to test, which is based on market share, which starts at 0 and grows slowly due to missing support for some apps and missing killer features
So I think RISC-V Android will be a thing, but as long as multiple big companies are not pushing it strongly, it will take a very long time until it becomes relevant in any way.
But my immediate conclusion isn't "Signals are dangerous and let's go deep on the complexity thereof". Instead, my primary conclusion really is "Someone forgot to write a testcase to make sure log rotation behavior is covered, so of course it might regress".
I would throw in a third conclusion: given that you're directly using signals (without a wrapper to rationalize the interface), your team should be composed of people who fully understand how signals work. Neither the person who wrote this code nor the person who reviewed it had internalized the fact that "the default SIGHUP handler kills the process", and therefore using signals was an inappropriate decision for this team.
But this is an exceptionally high bar; most interfaces do not require that your devs have memorized so many details. Signals expose a terrible interface and there is essentially no satisfying way to make use of them. We should expect better from the tools we use.
I dunno, I don't even program in C and when scrolling through the diff I immediately said to myself, "Hey, I bet that callback registration they removed for SIGHUP could be an issue; I would have no-op'ed the callback and left it in there." Of course, I had the privilege of knowing that there was something wrong with that specific code diff relating to signals.
My take is similar to yours: only use tech and features that the plurality of your team understands. If the only people who understand the fancy stuff leave, then you run into this type of issue. At my current job, the guy who set up the database put all kinds of fancy extensions, triggers, and procedures in there, and no one knows how to dissect it or what it all does, so now we're running old versions of code in a database container. I am sure someday it will be my job to replace that black box with something else, and probably just move the data from one to the other with ETL methods.
It's sad how the ND-100/500 (and 5000+) families have almost completely disappeared, including online material about them.
The IT department at my university was involved in NDIX development (BSD for ND-5000), I believe. This was a few years before my time so I didn't get first-hand exposure to that.
I do regret not holding on to one of the Compact ND-100/110s that we had around in the late 90s, or to any of the Tandberg terminals that we had huge numbers of.