SPARK is a formally defined computer programming language based on the Ada programming language,
intended for the development of high integrity software used in systems where predictable and
highly reliable operation is essential. It facilitates the development of applications that demand
safety, security, or business integrity.
People who don't want to read the first line of the article, rejoice!
It sounds like a good decision though. Formal verification tends to make more sense in highly concurrent situations where there is a high-value stable API to be supported. That describes graphics cards and surrounding environments really well so it makes sense that Nvidia would find use cases.
Something more from the article is also that they made the decision with a pilot project that went well. It'd be nice to have more details about it; that sort of thing is important to do well and always interesting. Lo and behold, we do have more details about it! https://blog.adacore.com/when-formal-verification-with-spark... - there is a video in there that looks to be worth a watch.
For those not aware, SPARK gives even more safeguards/guarantees than Rust (see below) ... and it does so at compile time (whereas not all of Rust's safeguards are compile-time checks).
SPARK is free for all to use and is open source. They chose to pay for AdaCore's pro support services and verified pro Ada compiler rather than use the FSF GCC/GNAT Ada compiler. SPARK is actually part of the GNAT compiler (for compatibility), but the slower analysis is done by gnatprove, which keeps compilation and iterative development fast. Nvidia can certainly afford to do so, of course.
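To give a flavour of what "at compile time" means here: gnatprove works on contracts written into the source. A minimal, made-up sketch (not from the article):

    package Counters with SPARK_Mode is

       Max : constant := 1_000;
       type Count is range 0 .. Max;

       --  The precondition rules out overflow; the postcondition is
       --  proved by gnatprove, not tested.
       procedure Increment (C : in out Count)
         with Pre  => C < Max,
              Post => C = C'Old + 1;

    end Counters;

    package body Counters with SPARK_Mode is

       procedure Increment (C : in out Count) is
       begin
          C := C + 1;
       end Increment;

    end Counters;

Run gnatprove on it (something like gnatprove -P demo.gpr) and the range, overflow, and postcondition checks are discharged statically; nothing is executed.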
AdaCore's pro support includes more recent releases and, in case of problems, wavefronts. That said, the free version is fairly recent, and you can get community (and sometimes vendor) support.
Seems like a great example of LLMs getting confused by language and lacking any actual understanding, although I haven't yet looked into this particular case.
I maintain conviction in my position that if AdaCore and/or other champions of Ada were to undertake a serious project (i.e. not an April Fools' Day joke blog post[1]) to ship a compiler that accepts a curly-braced dialect of the language[2][3], then there would be at least twice as many new, earnest users of the vulgar "skinned" form as there are programmers today using the existing language, and that this tripling would happen virtually overnight.
I'm not even talking about major revisions to the grammar. I'm saying fork the lexer to accept a different set of terminals, and leave the nonterminals alone.
Is this stupid and unreasonable and should programmers just get over themselves and use a language on the strength of its merits instead of being turned off by surface-level stuff? Yeah, it is, and yeah, they should. But people are fickle.
Not related to Ada, but Dafny is a "Verification aware programming Language" where both Formal Specification and Verification are combined together in the same source language - https://dafny.org It uses curly braces and can generate code for a number of different languages at the backend (eg. Java, C#, Go, Python, Javascript, C++).
I haven't played with it yet, but just from reading the documentation I really like this approach, since you need to learn only one language/toolchain for both formal specification/verification and implementation. And by giving you the flexibility to generate code for various mainstream languages, you can slowly add verified code module by module to existing codebases.
Derivatives of Pascal such as Modula-2, Oberon, Lua, and Ada do not require BEGIN on IF, FOR, WHILE, LOOP or CASE statements, making the syntax much cleaner.
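For anyone who hasn't seen it, a trivial sketch of the Ada side (just an illustration, not from the article): each control structure brackets its own body, so no extra begin/end (or braces) is needed around multiple statements.

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Syntax_Demo is
       Count : Integer := 3;
       Total : Integer := 0;
    begin
       if Count > 0 then
          Total := Total + Count;
          Count := 0;
       end if;

       for I in 1 .. Total loop
          Put_Line ("iteration" & Integer'Image (I));
       end loop;
    end Syntax_Demo;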
I'm also desperate to stop Rust by any means, but I don't think you're correct; I think the sticking point for language adoption is the presence or absence of a big npm clone.
It's not absurd, it's just out of fashion. Ada syntax comes from the same heritage as Pascal, which was as popular as C during the 80s (and sometimes even more so).
That may well be, but syntax was not one of them. Someone who came into programming more recently and has only used curly-brace languages might not immediately understand what they are looking at, but that doesn't make it absurd - it's just different.
Pascal object code with runtime range checks could be slower than C object code without any such checks. Pascal strings typically were either statically bounded or included bounds information, while C strings typically did not.
Unfortunately C's lack of bounds checking made it very easy to create buffer overflows, which turned out to be a huge problem.
Ada doubled down on memory safety, which turned out to be a very good idea, especially if it can be combined with concurrency safety, formal checking (as with SPARK), etc.
Fortunately clang/llvm seem to be making some progress on memory safety; hopefully features like -fbounds-safety [1] will eventually bring C (and maybe C++) solidly into the 1980s.
Derivatives of Pascal such as Modula-2, Oberon, Lua, and Ada do not require BEGIN on IF, FOR, WHILE, LOOP or CASE statements, making the syntax cleaner and more concise than Pascal's.
Article doesn't say what parts of Nvidia's stack use SPARK. Considering Nvidia is a huge software company with ~30,000 employees, "There are now over fifty developers trained and numerous components implemented in SPARK" doesn't inspire confidence.
IMO the realistic path towards formal verification is AI proof assistants that automate the tedious parts instead of forcing you to write your code in a weird way that's easier to prove
I'd also add that AI code generation in non-formally-verifiable languages, at places with concurrency requirements like Nvidia, might end up in the short-term creating more hard-to-spot concurrency bugs than before, as developers become incentivized to tab-complete code without fully thinking through the implications as they type.
But AI code generation for formally verifiable programs? And to assist in writing custom domain-specific verifiers? Now that's the sweet spot. The programming languages of the future, and the metaprogramming libraries on top of them, will look really, really cool.
Have you tried it for Rust, arguably a much lower bar than actually fully formally verifiable? Sorry to say, but AI can't figure out the borrow checker. I'd speculate it does poorly on a lot of things that are sort of implicit in the code / properties of the grand structure rather than incremental text things.
There is a podcast and blackhat? video about Nvidia choosing SPARK over Rust. Not because of formal verification at all but because it is a more developed prospect and offers better security even without any formal verification. This isn't mentioned but Ada is also far better at modelling registers or even network packets.
I might consider AI if it utilised SPARK's gnatprove, but I wouldn't use AI otherwise.
As an example of modelling packets, here's an example of modelling a complex packet which is a bit-packed tagged union. I don't think many other languages make such packets so easy to declare: https://gist.github.com/liampwll/abaaa1f84827a1d81bcdb2f5f17...
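The linked gist is far more involved, but for a flavour of why Ada is nice for this, here's a minimal sketch of a made-up bit-packed header using a record representation clause (field names and layout are invented for illustration):

    package Packet_Layout is

       type Version_Field is mod 2**4;   --  4 bits
       type Flags_Field   is mod 2**3;   --  3 bits
       type Length_Field  is mod 2**9;   --  9 bits

       --  A made-up 16-bit header: the compiler checks that the fields
       --  fit and generates the shifting/masking on access.
       --  Real code would also pin Bit_Order / byte order; omitted here.
       type Header is record
          Version : Version_Field;
          Flags   : Flags_Field;
          Length  : Length_Field;
       end record;

       for Header use record
          Version at 0 range 0 .. 3;
          Flags   at 0 range 4 .. 6;
          Length  at 0 range 7 .. 15;
       end record;
       for Header'Size use 16;

    end Packet_Layout;

The gist linked above layers a discriminated (tagged-union-style) record on top of the same machinery, and the compiler checks the whole layout for you.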
Maybe better performance can be achieved with specialized models. There are some that were able to solve mathematical olympiad problems, e.g. AlphaProof.
> Sorry to say, but AI can't figure out the borrow checker. I'd speculate it does poorly on a lot of things that are sort of implicit in the code / properties of the grand structure rather than incremental text things.
I don’t really use AI tools, but in the past few weeks I’ve tried it with Rust and while it had problems, borrow checking errors were never part of them.
I fully agree that LLMs don’t “understand,” but people also really oversell the amount of thinking needed to satisfy borrowing in a lot of cases.
I had a production bug (caught in canary) introduced into Google Play in just this manner. The AI code review assist suggested switching a method to a more modern alternative, and both the author and the code reviewer thought the AI suggestion was a good one. The new alternative was documented as not being safe to use in the onCreate() method of an Android Activity; the code in question was not in onCreate, but it was in a method transitively called by onCreate in a different Activity. Boom goes an unrelated Activity. It would have been caught trivially by an expressive type system, but the LLM doesn't know that.
I tested o3-mini yesterday, having it verify a bit-hack for "vectorizing" 8x8-bit integer addition using a single 64-bit int value[0]. Suffice it to say, I am not impressed. I asked it to give me a counterexample that would make the function fail, or to tell me that it works if it does. It mentioned problems regarding endianness that weren't present, and it mentioned carries that would spill over which couldn't happen. I gave it chances to produce counterexamples, but the counterexamples it gave didn't fail.
Only after telling it that I tested the code and that it works did it somewhat accept that the solution worked.
I think a deterministic, unambiguous process is a lot more valuable for formal verification.
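For reference, and without claiming it's the same code the parent tested, one common formulation of that kind of bit-hack looks like this (sketched in Ada, since that's the thread's topic): the low 7 bits of each lane are added normally, and the top bit of each lane is patched up with xor so no carry ever crosses a lane boundary.

    with Interfaces; use Interfaces;

    --  SWAR-style addition of eight 8-bit lanes packed in a 64-bit word.
    --  Each lane wraps mod 256; carries never cross lane boundaries.
    function Add_8x8 (X, Y : Unsigned_64) return Unsigned_64 is
       Low  : constant Unsigned_64 := 16#7F7F_7F7F_7F7F_7F7F#;
       High : constant Unsigned_64 := 16#8080_8080_8080_8080#;
    begin
       return ((X and Low) + (Y and Low)) xor ((X xor Y) and High);
    end Add_8x8;

Neither endianness nor spilled carries are an issue with this formulation, which is exactly the kind of property a prover can confirm and an LLM can only guess at.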
A limitation I see with AI for coding is that your problem must be mainstream to get decent results.
In my experience, if I ask it to do web things in PHP or Java, data things in Python, ... it gives a good enough result. If I ask it a postgis question, I get an answer with hallucinated APIs, bugs, and something that doesn't even do what I want if it works.
I suspect the Ada/SPARK world and the formal verification world are too small to decently train the AI.
> A limitation I see with AI for coding is that your problem must be mainstream to get decent results.
Or to phrase it another way, there must be examples of the technique that you want to "generate" in the training set, otherwise the horrifically overfitted model cannot work.
Probably a little bit too cynical but this is my experience asking LLMs "unusual" questions.
No, that's about where we are. Today's LLM coding is basically automated Stack Overflow copy/paste - not too bad for anything simple and mainstream, but unable to reason about code from first principles, and fond of making up random shit that looks good but doesn't work.
I tried asking an LLM for a full code snippet for the first time yesterday (until then I'd been using them as search engines that are better than Google, which isn't saying much).
It produced code that compiled but failed to do anything, because the third API call it generated returned an error instead of doing what the LLM said it would do.
Didn't spend time on seeing what was wrong because we already had a library that did the same thing; I was more testing what the LLM can do.
I thought OP was saying AI can generate the code that can then be formally verified based on your non-verifiable code, not that the AI itself is verifying it? I could be wrong, of course.
Perhaps I misread it then. I think that if you're at the point of using formal verification you're likely to be very meticulous in whatever you're writing, so an AI would be used quite carefully
> as developers become incentivized to tab-complete code without fully thinking through the implications as they type.
Is this true? There should always be a QA process and you should always carefully review when committing. The writing process doesn't influence that unless you're automating whole swaths you don't understand during reviews.
I'm not entirely sure they are writing any portion of their driver using SPARK. The presentation they gave 5 years ago seems to indicate they are limiting its usage to firmware for their embedded RISC-V co-processor. They may have expanded the usage since, but I still think it's predominantly firmware-related, with possibly some expansion to their automotive and robotics solutions.
It bears repeating that Nvidia is not, strictly speaking, a SW company. It is a semiconductor company. That 30k employees figure includes all Nvidia employees, not all of whom are in software.
I am rather skeptical of AI in this context. Until you have verifiably correct AI assistants, you still need a highly skilled human in the loop to catch subtle errors in the proofs or the whole result is moot anyway.
And there _are_ tools out there capable of verifying C code, but for a greenfield project implementation/verification in a higher-level, formal language + verified compilation might make more sense.
- image authentication and integrity checks for the overall GPU firmware image
- BootROM and secure monitor firmware
- formally verified components of an isolation kernel for an embedded operating system
- In general, their targets tend to be smaller code bases that would benefit the most from SPARK’s strong typing, absence of runtime errors, and in some cases, rigorous formal verification of functional properties
> NVRISCV is NVIDIA’s implementation of the RISC-V ISA and Peregrine subsystem includes NVRISCV and multiple peripherals. They show how fine-grain access controls, formally verified for correctness, allow following the principle of least privilege for each partition. NVRISCV provides secure boot that starts with an immutable HW, the chain of trust extends to the Secure Monitor in SW, where partition policies are set up and isolation enforced using HW controls.. Boot and Secure Monitor software is implemented in SPARK.
The article seems fairly clear that it is the security folks within Nvidia that are spearheading this. 50 engineers on the security team doesn't seem unreasonable for a company of that size.
I feel like it's going to go a step further and we won't write actual "code" but more like comprehensive tests, and it generates the code. Sort of like the movement from assembly to C.
Historically AI has been pretty bad at this. Machine learning is famous for finding solutions that pass all the test cases in out-of-the-box ways but don't do what you want.
Writing comprehensive tests is the part we're weakest at IMHO.
That's the same paradigm as outsourcing development to some cheap place and doing acceptance tests against the result. It saves money, but that's not how you'd build an airplane, for instance...
Don't they release custom tuning patches for every single AAA game that comes out? Geforce Experience is basically just a spoonful of sugar to make sure people install new drivers.
After briefly skimming through their talk they mention they're a third party consultancy company making recommendations on safety for NVIDIA, so "Nvidia Security Team" take it how you will. Also I think these drastic changes moving towards a completely different language always bug me, especially when the syntax is also completely different.
Then there's everything library-wise, and CUDA is all C/C++ if I'm not mistaken. In every large project I'm sure you're eventually going to find some exploit attack vector.
Especially since, as I understand it, the focus for GPU/CUDA until recently was on performance rather than security, and those are always trade-offs.
> After briefly skimming through their talk they mention they're a third party consultancy company making recommendations on safety for NVIDIA, so "Nvidia Security Team" take it how you will.
This is a marketing case study by the major Ada tools and training vendor, on years of engineering work done by their customer, Nvidia. It includes quotes from Nvidia employees and their blog references a Defcon video by the head of the Nvidia Offensive Security team, https://www.adacore.com/uploads/techPapers/222559-adacore-nv...
Principal Software Engineer
Manager, GPU Firmware Security
Senior Manager, GPU Software Security
VP, Software Security
At the conclusion of their initial POC at the end of 2018, NVIDIA had five developers trained in SPARK. By the second quarter of 2022, that number had grown to over fifty.. Several NVIDIA teams are now using SPARK for a wide range of applications that include image authentication and integrity checks for the overall GPU firmware image, BootROM and secure monitor firmware, and formally verified components of an isolation kernel for an embedded operating system.
The focus of their work seems to be low-level firmware, and not CUDA or any kind of compute. It makes sense a lot there because you won't need as many libraries.
The syntax is really a red herring, especially when you get contracts, proof of absence of runtime errors, and higher-level functional proof, mostly automated or assisted with annotations. If you're actually going to make this effort, the lack of curly braces and ampersands shouldn't be a main concern.
The problem is that my main program is C++, complete with the STL that makes it useful: vectors, unique_ptr, and all the other weird stuff that makes C++ better than C.
You still do the same thing that you would when calling a C library: you serialize your STL objects into plain C objects, pass them to the external library, and deserialize back. If you need callbacks, you make sure that they catch any exception and return an error code. It's really no different from virtually any other C++ interop.
There are a few languages that interop with C++ at a deeper level, essentially building a small part of a C++ compiler into their runtime. But they're relatively obscure - Clasp, an implementation of Common Lisp, is probably the only one that can really take an arbitrary C++ object (say, an std::vector<std::list<int>>) as an input to a Common Lisp function and work with it.
The bookkeeping overheads might be trivial though. If it boils down to a handful of instructions per call, that's probably not so bad, depending on context. If large buffers are being copied, that's a problem, but hopefully that can generally be avoided.
I don’t think your problem has anything to do with Ada itself. You’d have the same issues using Rust or Go or Swift or Zig or whatever - if you make heavy use of C++’s features, it can make interop with other languages harder
100%. It seems like any language other than C has poor interoperability with anything other than C. C++ doesn't know how to deal with the Rust borrow checker. Go will have issues with both.
D is an exception - D supports C++ (I'm not sure how or how well, but they at least have some C++ ABI constructors). There are others (some languages running on the JVM usually work with Java, not C).
> It seems like any language other than C has poor interoperability with anything other than C
Interop between C and classic 3rd generation languages such as Fortran, Pascal, Basic, COBOL, Algol, PL/I, tends to not be that hard [0] – because their feature set is at roughly the same level as C's: just functions/procedures without polymorphism or inheritance, relatively simple (non-OO) data types, etc.
Whereas, C++/Go/Rust/Swift/Ada/etc all add lots of complex features on top, and it is when you use all those complex features, interop with C and classic 3GLs gets harder. Sometimes you can define a subset which omits many of those more complex features and makes interop easier, but then you lose out on the benefits those features bring
[0] one complexity, is for historical reasons, sometimes these languages use different calling conventions than C – but on platforms on which that is an issue, it isn't uncommon for C compilers to support those additional calling conventions as an extension, and newer implementations of those languages for newer platforms are generally converging to C's calling conventions. Another is that some of these languages have data types for which C lacks native support – e.g. many COBOL implementations feature BCD arithmetic – but it is easy to write a routine in C to convert BCD to binary and back.
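On that last point, it really is only a few lines; a sketch in Ada rather than C (since Ada is the thread's topic), for one packed-BCD byte, assuming two valid decimal digits per byte:

    with Interfaces; use Interfaces;

    --  Packed BCD: high nibble is the tens digit, low nibble the units.
    --  Extend over a byte array by multiplying the accumulator by 100
    --  per additional byte.
    function BCD_Byte_To_Binary (Octet : Unsigned_8) return Natural is
    begin
       return Natural (Shift_Right (Octet, 4)) * 10
              + Natural (Octet and 16#0F#);
    end BCD_Byte_To_Binary;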
All the while I've been calling C and C++ code (hard stuff but C++ interop is ... hard) from Ada for 15+ years and calling Ada code from C too. python from Ada and Ada from python too. Ada from Java through JNI. Writing postgres extensions in Ada. It's not always a panacea but it's possible.
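In case anyone wants to see what the C side of that looks like in practice, a minimal sketch of importing a plain C function into Ada (the C function name and signature are made up):

    with Interfaces.C; use Interfaces.C;

    package C_Bindings is

       --  Binds to a hypothetical C function:
       --     int add_checksum(const unsigned char *buf, size_t len);
       function Add_Checksum
         (Buf : access constant unsigned_char;
          Len : size_t) return int
         with Import, Convention => C, External_Name => "add_checksum";

    end C_Bindings;

Exporting Ada to C is the mirror image with Export; C++ is the hard case because of name mangling, exceptions, and non-trivial object layouts, which is why most cross-language traffic ends up going through the C ABI.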
I forget sometimes that the papers and talks issued by teams at most companies are for PR and resume-padding. They hype the findings that justify that team's budget, and bury any negatives.
The other funny thing I noticed is the formal verification just means the program implements the standard - it doesn't mean the standard doesn't have security holes! And sometimes the security holes are implementation-specific, like timing attacks, which can require weird low-level tricks to solve.
> moving towards a completely different language always bug me, especially when the syntax is also completely different.
Why do syntax differences bug you? I could understand most concerns about switching ecosystems, but surely the difference between e.g. C++ and Java and JavaScript are orders of magnitude larger than that between C++ and Ada. Syntax is the smallest piece of it all, as far as I'm concerned.
> Then there's like everything libraries wise and CUDA is all C/C++ if I'm not mistaken.
Maybe it's naive of me but I also don't really perceive much of a security imperative for NVIDIA. They make graphics cards and slop generator cards for the most part. What exactly is the threat model here that requires switching to prioritizing memory safety? Is there a lot of graphics-card-related code being exploited in the wild?
NVIDIA GPU RISC-V root of trust is analogous to Apple T2 secure enclave, AMD PSP, or Intel ME, which all perform security-critical functions.
> What exactly is the threat model here
It probably varies by product, but one commercial possibility is protection of price premiums, e.g. enforce feature segmentation for different products or customers, while using common silicon. NVIDIA operating margin reached 50%, unusually high for a hardware company, https://www.macrotrends.net/stocks/charts/NVDA/nvidia/operat.... AMD margin is below 20%.
> ~1 Billion RISC-V cores shipping in 2024 NVIDIA chips
> Unified embedded HW and SW across all NVIDIA products
>   • Eliminates replication in basic primitives (isolation, crypto etc.)
>   • Maximizes SW/HW leverage across NVIDIA
> Configuration: pay only for what is needed
> Custom extensions: additional functionality, security, and performance
> Our HW and SW architecture enable differentiation
There are upcoming open hardware/firmware RoT building blocks like OpenTitan (RISC-V), OCP Caliptra and TockOS (Rust) that could be used by competing device and platform vendors.
> don't really perceive much of a security imperative for NVIDIA
When countries start budgeting hundreds of billions of dollars for national investment in LLM-based AI based on GPUs, they may introduce new security requirements for the underlying infrastructure.
Yeah, there's a whole bunch of places where NVIDIA kit is used outside of "graphics cards and slop generators". I work on a drone system based around the Orin AGX and we definitely have "leverage the root of trust functionality to lock down the code that can run on the Orin" on the roadmap before we ever let the hardware end up directly in customers' hands. Root of trust -> Signed Bootloader -> Signed Kernel -> Signed Binaries. There's too much IP that we've spent too much money on to risk having end-users getting access to the raw TensorRT models involved.
In general, NVIDIA never had proper bug-free support in C for well over a decade (hidden de-allocation errors etc.), and essentially everyone focused on the cuda compiler with the C++ API.
To be honest, it still bothers me that an awful GPU mailbox design is still the cutting-edge tech for modern computing. GPU rootkits are already a thing... Best of luck =3
"GPU rootkits" sounds like a misnomer unless they start getting rewritable persistent storage (maybe they do now and my knowledge is out of date).
If you've got malicious code in your GPU, shut it off, wait a few seconds, and turn it back on.
Actually looking at the definition, my understanding might be off or the definition has morphed over time. I used to think it wasn't a rootkit unless it survived reinstalling the OS.
The funny thing is that the "C++ API" is almost entirely C-like, foregoing almost everything beneficial and convenient about C++, while at the same time not being properly limited to C.
In general, a modern GPU must copy its workload into/out-of its own working area in vram regardless of the compute capability number, and thus is constrained by the same clock-domain-crossing performance bottleneck many times per transfer.
At least the C++ part of the systems were functional enough to build the current house of cards. Best of luck =3
NVIDIA began to ask themselves, “Can we change our fundamental approach? And if so, which of these tools can we actually use?” They put these questions to their software security team. In reply, the security team came back with what Daniel Rohrer characterized as “a fairly heretical” proposition: “What if we just stopped using C?” This led to other questions, like, “What alternative languages and tools are available to support the use of formal methods?” In trying to answer those questions, NVIDIA discovered SPARK.
The real problem is deeper than this. The actual question to ask is:
"What if we just stopped distributing and blindly executing untrusted binary blobs?"
A trusted compiler in the OS, and some set of intermediate representations for code distribution would solve a massive amount of security issues, increase compatibility, and allow for future performance increases and disallowing suspect code patterns (spectre, rowhammer, etc). Specializing programs at install time for the local hardware makes way more sense than being locked into hardware machine code compatibility.
That does nothing to fix the vast majority of security issues, which are caused by trusted but not memory safe programs running on untrusted input.
It's also an extremely unrealistic goal. First of all, you run into a massive problem with companies and copyright. Second of all, it will be very hard to convince users that it's normal for their Chrome installation to take half an hour or more while using their CPU at 100% the whole time.
"The Verified Software Toolchain project assures with machine-checked proofs that the assertions claimed at the top of the toolchain really hold in the machine-language program, running in the operating-system context."
Some of the same researchers worked on TAL (typed assembly language), which sounds like it could be one of the "intermediate representations" you mentioned.
For a while, Apple required apps to be submitted as bitcode (LLVM IR) to the App Store, where they would be converted to x86 or Arm machine code during device install. They stopped that a couple of years ago, after migration to Apple Silicon.
Thanks for the correction and link. Relevant to the comment above about binary blobs:
> The idea to have apps in a CPU neutral form available on the app store is not a bad one; that's why Android chose Java Byte Code (JBC). Yet JBC is a pretty stable representation that is well documented and understood, Bitcode isn't. Also on Android the device itself transforms JBC to CPU code (AOT nowadays).
The general idea is correct, but a rather significant detail is not.
Android did not choose Java bytecode, it chose Dalvik bytecode [1]. This was done for several reasons, but at a high level, Dalvik bytecode more closely matches how real computers work while Java bytecode targets a more abstract virtual machine. This result is that Dalvik bytecode is easier to transform into machine code, and more optimizations are done in the source code -> bytecode phase rather than the bytecode -> machine code phase, relative to Java bytecode.
> Programs for Android are commonly written in Java and compiled to bytecode for the Java Virtual Machine, which is then translated to Dalvik bytecode.. The successor of Dalvik is Android Runtime (ART), which uses the same bytecode and .dex files (but not .odex files), with the succession aiming at performance improvements.
Pretty huge promotional opportunity for AdaCore. You can tell because this was written by AdaCore.
Still, I am a bit convinced. The NVIDIA name does increase the chance that I will evaluate SPARK sometime soon. I have to admit that, other than knowing it exists, I am not super familiar with it. What I have seen before now has felt like them using the popularity of Rust as a platform for promotion. This, at least, appears to be a use case where they came in on their own merits.
Efficiency of Ada should be pretty close to C, but that strange thing Ada does where you define your own numeric types like
type Day_type is range 1 .. 31;
creeps me out a little; it makes me think I have to throw Hacker's Delight [1] in the trash if I want to use it, but I could be wrong. It also makes me think of the deep history of computing, when there were a few straggler architectures (like the PDP-10 [2]) that didn't use the 8-bit byte, and when Knuth wrote a set of programming books based on an instruction set that wasn't necessarily binary.
(Lately I was thinking about making a fantasy computer that could be hosted in Javascript which was going to be RISC but otherwise totally over the top, like there would be bitwise addressing like the PDP-10. First I wanted it to be 24 bit but then I figured I could pack 48 bits in a double so I might as well. It even would have a special instruction for unpacking UTF-8 characters and a video system intended for mixing latin and CJK characters. Still boils down to an 8-bit byte but like the PDP-10 it could cut out 11-bit slices or whatever you want. I was going to say screw C but then I figured out you could compile C for it)
Love that feature, along with derived types[1] and subtype predicates[2]. There's a special place in my heart partitioned for things that won't let you screw things up.
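For anyone who hasn't seen these features, a minimal, made-up sketch of all three:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Types_Demo is
       type Day_Type is range 1 .. 31;                     --  range type
       type Celsius  is new Integer range -273 .. 2_000;   --  derived type
       subtype Even_Day is Day_Type
         with Dynamic_Predicate => Even_Day mod 2 = 0;     --  subtype predicate

       D : Day_Type := 30;
       E : Even_Day := 2;
       T : Celsius  := 20;
    begin
       D := D + 1;        --  fine: 31 is still within 1 .. 31
       --  D := D + 1;    --  a second increment would raise Constraint_Error;
       --                     gnatprove can also reject it statically
       --  T := T + D;    --  would not even compile: Celsius and Day_Type
       --                     are distinct types
       --  E := D;        --  would fail the predicate check: 31 is not even
       Put_Line (Day_Type'Image (D) & Celsius'Image (T) & Even_Day'Image (E));
    end Types_Demo;

And for the Hacker's Delight worry upthread: modular types (e.g. type Word is mod 2**32;) give you plain wrap-around, bit-twiddling-friendly arithmetic when that's what you actually want.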
> The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced—at most a single data memory cycle—compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction.
I don't think what you're describing is a RISC processor at all.
Whether accelerated UTF-8 decoding breaks RISC-ness is an interesting question, and not one with an obvious answer IMO.
Supposing we loaded 4 bytes into a register with a load instruction (quite RISC), we could then have a "decode UTF-8" instruction which could set two output registers: one receiving the decoded code point, and the other receiving the number of bytes consumed (1-4). That's another perfectly RISCy operation: read one input register, operate on it, update two output registers. Most RISC architectures also allow base+offset addressing at least, so you can chain this to another load using the output of that second register; worst case, you'd need to add base+offset in a dedicated instruction. No violations of RISC here.
However, you'd start running into problems with alignment. Loading 4 bytes into a register typically requires 4-byte alignment, but UTF-8 is a variable-length encoding (hence the desire to accelerate it in the first place!). Is unaligned load RISCy? Many architectures now support it, but they usually didn't start off with it. Then again, 64-bit architectures can just ignore the problem entirely, since they can load 8 bytes at a time, which will always be enough to fit an arbitrary 4-byte value at any alignment. You'd just need to shift the value in the register by the amount of the misalignment, which is another valid RISC operation.
If you wanted to sidestep the alignment issue, then an alternate solution would be to decode the UTF-8 sequence straight from memory, but that definitely feels more like CISC to me.
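To make that concrete, here's a rough software model of such an instruction (my own sketch, not any real ISA): the input register holds the next four bytes with the first byte in the low bits, two outputs come back, and continuation-byte validation is left out for brevity.

    with Interfaces; use Interfaces;

    --  Software model of a hypothetical "decode UTF-8" instruction:
    --  Reg holds the next 4 bytes of the stream (first byte in bits 0..7).
    procedure Decode_UTF8
      (Reg        : Unsigned_32;
       Code_Point : out Unsigned_32;
       Consumed   : out Unsigned_32)
    is
       B0 : constant Unsigned_32 := Reg and 16#FF#;
       B1 : constant Unsigned_32 := Shift_Right (Reg, 8)  and 16#3F#;
       B2 : constant Unsigned_32 := Shift_Right (Reg, 16) and 16#3F#;
       B3 : constant Unsigned_32 := Shift_Right (Reg, 24) and 16#3F#;
    begin
       if B0 < 16#80# then                                  --  ASCII
          Code_Point := B0;
          Consumed   := 1;
       elsif (B0 and 16#E0#) = 16#C0# then                  --  2-byte sequence
          Code_Point := Shift_Left (B0 and 16#1F#, 6) or B1;
          Consumed   := 2;
       elsif (B0 and 16#F0#) = 16#E0# then                  --  3-byte sequence
          Code_Point := Shift_Left (B0 and 16#0F#, 12)
                        or Shift_Left (B1, 6) or B2;
          Consumed   := 3;
       else                                                 --  4-byte sequence
          Code_Point := Shift_Left (B0 and 16#07#, 18)
                        or Shift_Left (B1, 12)
                        or Shift_Left (B2, 6) or B3;
          Consumed   := 4;
       end if;
    end Decode_UTF8;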
And... So? It seems you are saying Day_type + Integer is Integer, and Integer (in general) cannot safely be coerced into Day_type and... that’s logically correct?
Pedagogically oriented computer science profs circa 1980 were aghast that BASIC had become the standard for teaching young people to program. It was difficult to fit Pascal into a microcomputer then, so we were stuck with the atrocious UCSD Pascal [1] which used a virtual machine to make up for the weak instruction sets of many micros, particularly the 6502. Since the compiler ran inside the VM, compiling a small Pascal program was like compiling a large C++ program today.
Not long after that I got to use Pascal on a VAX which was not so bad, but pretty obviously not up to doing systems programming, particularly compared to C, which was starting to show up on micros such as Z-80s running CP/M and the TRS-80 Color Computer running OS-9.
Then I got a 80286 computer and I really liked Turbo Pascal because it added the (non-standard) facilities you need to do systems work, but went back to C when I went to school because it was portable to the 68k and SPARC based Sun machines we had.
> compared to C, which was starting to show up on micros such as Z-80s running CP/M
I had some very limited access to CP/M z80 machines. Probably somewhere between 86-90. All I remember about the C compiler they had is that I had to swap 2 or 3 floppies to produce a linked executable, so it sounds similar to this UCSD Pascal.
My first real contact with Pascal was Turbo Pascal, and that ran loops around any other compiler I had access to back then...
I had a Coco with 2 floppies, and I could compile C without any swapping. OS-9 was a real multitasking operating system; I could log into it with the keyboard and mouse and two serial ports at the same time (one with a UART, another with the terrible bit banger). The latter wasn't too bad with the portable DEC printing terminal I got that was surplus at a credit agency.
UCSD Pascal was one of very few compiled languages you could get for the Apple ][ and other 6502 machines because the virtual machine was an answer to problem of code generation for that terrible instruction set with insufficient registers and addressing modes.
You wouldn't guess it from AdaCore's homepage, but SPARK tooling (including provers) and the GNAT Ada compiler are both made freely available, with the goal of attracting interest from FOSS projects.
ajxs is right to say that the way to get your hands on this tooling is to use the Alire package-manager. AdaCore no longer offer alternative means of distribution (except, presumably, to paying customers).
It failed to catch on outside the defense industry because that is where it started. It was long sneered at as a "language designed by committee".
In one programming class, in college, we used a text-book called "Oh my, Modula 3!". On the back they explained the title, saying "better 'Oh my, Modula 3!' than 'Oh no, Ada!'".
It's just too bad, because Ada looks like a language that should have gotten more popular. I was playing with some of the concurrency constructs built into it, and it was fairly pleasant, and it makes me think about the "what if?" universe where it caught on instead of C++ dominating the 90's and 2000's.
It would mean we need more performant RISC-V hardware, because the real and honest way out from C is assembly (plus super-high-level languages with interpreters written in assembly). And it means a worldwide royalty-free standard.
I guess we could get a middle ground with a simpler C dialect with the bits required for modern hardware architecture programming (no integer promotion, no implicit casts, only sized types, only one loop keyword, no switch...). The defining criterion for this language: it must be easy to get a real-life bootstrap compiler with one dev (reasonable amount of time, effort, etc.).
I am currently coding many of my applications in RISC-V assembly, with a small interpreter to run them on x86_64, dodging complex macro-preprocessors (I use the basic features of a C preprocessor, I don't even use the RISC-V pseudo-ops).
The only things keeping me on x86_64 are video games for Linux and the fact that x86_64 micro-archs are mature, performance-wise.
There's also Frama-C, but having used both Frama-C and SPARK I can say I'd pick SPARK any day. Having a rich type system and not having to work with pointers makes proving a program so much easier.
We are talking about firmware. I am just a web dev and don't know much about the embedded space, but my impression was that dynamic memory allocation is usually problematic in firmware in general.
Pascal strings used to have their actual length in a field at the beginning of the string. Imagine how many off by one errors and how many (even hidden) calls to strlen() that saves.
All they needed to modernize Pascal is some library code to resize the buffer if needed, just like curly brace languages. It may already be present in modern Pascal implementations, I haven't followed.
Pascal also has array bounds checking by default. Imagine how many out of bounds errors this could prevent.
But no, you have to have curly braces instead and build layers and layers of classes and languages over classes, all over the same impractical string representation.
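Ada, for what it's worth, inherited both properties from the Pascal family; a minimal sketch (Ada shown, since Pascal itself varies by dialect):

    with Ada.Text_IO; use Ada.Text_IO;

    --  Strings carry their bounds; array indexing is checked by default.
    procedure Bounds_Demo is
       S : constant String := "hello";
    begin
       Put_Line ("length =" & Natural'Image (S'Length));  --  no strlen walk
       for I in S'Range loop                              --  S'First .. S'Last
          Put (S (I));
       end loop;
       New_Line;
       --  Put (S (S'Last + 1));  --  would raise Constraint_Error rather
       --                             than read past the end of the buffer
    end Bounds_Demo;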
Object Pascal and Delphi dialects have had open arrays for ages.
And this was even fixed in ISO Extended Pascal by 1990.
Not to mention that Wirth fixed this in Modula-2 in 1978 already, which everyone keeps forgetting was actually designed for systems programming, while Pascal was originally designed in 1972 for teaching programming concepts.
But sure enough, let's complain about ISO Pascal from 1976.
In my talk I expounded on how people were concerned with safety even in the 80s, and how Ada provides tools to construct programs that were safe in many other ways besides just memory safety.
Millennials are getting old though. When is Gen Z going to build a language? They probably won't though, because it's hard to ask an AI to build a brand new Ada for you.
Yeah, I gave a talk on Ada a few months back at a local meetup. Calling it "Boomer Rust" at the start helped break the ice, everyone laughed, and it also helped them get the gist of Ada's particular value-add while I went into details.
"Testing security is pretty much impossible. It’s hard to know if you’re ever done,”
"We wanted to emphasize provability over testing as a preferred verification method"
Yet another developer seeking a panacea that will allow them to keep their job while diminishing their workload.
For typical workers, it's a fact that work is infinite, so you put in your 8 hours and then do it again the next day.
Whenever devs get put in this position they go for the nuclear option of going for the moon.
Man just keep finding bugs to the best of your ability, it will never be 100% secure, except insofar as you gerrymander what's a vulnerability and what is your responsibility.
Good luck. But if I have to put my chips on Nvidia's VP vs C, I put them on C. The list of C contenders has been long, and it seems more likely that the VP is at the peak of their life, à la Musk or Altman, and has their own delusions. Not sure whether going to Mars is crazier or whether finally replacing C is.
Details, slides and video are in this HN discussion, linked from sibling response.
vacuous: Devoid of substance or meaning; vapid or inane.
disingenuous: Pretending to be unaware or unsophisticated; faux-naïf.
guarded: Cautious; restrained.