The point of chaining exceptions is to add information to an error result. It saves you from having to pass context down the call stack just to produce an error message that makes sense to the user.
That really falls into "3) need to log some local context" which could be parameters, state, etc.
But generally, for the user, I posit that errors should still be handled at the highest level, and the message returned to the user on an error should be similarly high level.
I try to use exception chaining to create a message that is useful for the user to actually debug the problem (a solution-directed error message).
The classic toy case is either getting back "permission denied" or "can not open foo" but not both. Chaining of error messages gives a straightforward mechanism for this.
Then, the high-level text, along with the caused-by messages, is displayed without a stack trace to hopefully give the user enough information to address an issue.
Chaining can be done with explicitly returned errors as well as exceptions. The hard part is actually thinking about "is this of use to the end user?"
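To make the toy case above concrete, here is a minimal sketch in C++ using `std::throw_with_nested` (the function names `open_file` and `load_config` are invented for illustration); walking the chain prints both the high-level message and the low-level cause, with no stack trace:
```
#include <exception>
#include <iostream>
#include <stdexcept>
#include <string>

// Low level: fails with only the OS-level reason.
void open_file(const std::string& path) {
    throw std::runtime_error("permission denied");
}

// Mid level: wraps the low-level error, adding which file was involved.
void load_config(const std::string& path) {
    try {
        open_file(path);
    } catch (...) {
        std::throw_with_nested(std::runtime_error("can not open " + path));
    }
}

// Walk the chain and print each "caused by" message.
void print_chain(const std::exception& e, int depth = 0) {
    std::cerr << std::string(depth * 2, ' ') << e.what() << '\n';
    try {
        std::rethrow_if_nested(e);
    } catch (const std::exception& nested) {
        print_chain(nested, depth + 1);
    }
}

int main() {
    try {
        load_config("foo");
    } catch (const std::exception& e) {
        print_chain(e);  // "can not open foo" then "permission denied"
    }
}
```
The same shape works with explicitly returned errors: each layer wraps the error value it received and appends its own context.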
The lack of data checksumming is one of the main reasons not to trust APFS for valuable data. Maybe it is fine on Apple SSDs, but what about terabytes of data on external disks?
It's an interesting package, and I have some of the use cases it appears to address. However, the documentation is inadequate to quickly understand how to robustly build some of the more complex cases. In particular, how to build bash-style process substitution. Robust here means that the pipeline exits non-zero if any of the substituted processes fail, as demonstrated by this example:
#!/bin/bash
set -beEu -o pipefail
cat <(date) <(false)
echo did not exit non-zero
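# this line is reached and the script exits 0: the non-zero exit of <(false) is never seen by the pipeline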
If this is addressed, it would be worth more time to figure out Pipexec.
My view is that automated testing is not a substitute for QA, but an additional tool. It lets QA focus on harder-to-automate tasks.
For unit tests, a developer is going to try something out anyway, so capturing it in a unit test for the future should be just a little extra work. Also, writing a unit test means the developer has minimally used what they are writing.
Higher-level (system) testing, especially with GUIs, can be more work than the value added. It is a cost trade-off, but ultimately, a human adds value no matter how much is automated.
AI will help too, but these are also just tools. Drop QA and you are trading away quality for cost savings.
I worked at a place that did pretty strict TDD and had a dedicated QA person embedded on each team. Our high-level system tests served more as smoke tests and only ever covered the happy paths. Our integration and unit tests of course covered a lot more, but QA was essential in catching corner cases we never thought about as developers.
There is one programming language that fascinated me (maybe it was Ada) that tried to put some basic tests inline with the code by defining basic guidelines for the legitimate results of a function.
For example, you could make a function called `addLaunchThrusterAndBlastRadius` (I know it makes no sense, but bear with me), and then, right alongside declaring that it returned an integer, you could put a limit saying that every result this function returns must be greater than 5, less than 100, and not between 25 and 45. You could also do it when declaring variables - say, `blastRadius` may never be greater than 100 or less than 10, ever, without raising an exception.
I wish we could go further in that direction. That's pretty cool. Sure, you can get that manually by throwing exceptions, but it was just so elegant that there was no reason not to do it for every function possible.
Modern C++ supports this pretty extensively via the type system. You can define/construct integer types with almost arbitrary constraints and properties that otherwise look like normal integers, for example. The template / generics / metaprogramming / type inference facilities in C++ make it trivial. Some categories of unsafe type interactions can be detected at compile-time with minimal effort, it isn't just runtime asserts.
This is common in C++ for reliable systems. You infrequently see a naked 'int' or similar (usually at OS interfaces), almost all of the primitive types are constrained to the context. It is a very useful type of safety. You can go pretty far with a surprisingly small library of type templates if the constraint specification parameters are flexible.
(This is also a good exercise to learn elementary C++ template metaprogramming. A decent constrained integer implementation doesn't require understanding deep arcana, unlike some other template metaprogramming wizardry.)
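A minimal sketch of the idea (not a production library; `BlastRadius` and `ThrusterCount` are invented domain names): the legal range is part of the type, range violations throw at runtime, and a `constexpr` construction with a bad literal fails at compile time.
```
#include <stdexcept>

// An integer whose allowed range is encoded in its type.
template <int Min, int Max>
class BoundedInt {
public:
    constexpr explicit BoundedInt(int v) : value_(v) {
        if (v < Min || v > Max)
            throw std::out_of_range("value outside allowed range");
    }
    constexpr int get() const { return value_; }
private:
    int value_;
};

// Distinct ranges are distinct types, so they cannot be mixed up silently.
using BlastRadius   = BoundedInt<10, 100>;
using ThrusterCount = BoundedInt<1, 8>;

int main() {
    BlastRadius r{42};                 // checked at runtime
    constexpr ThrusterCount t{4};      // checked at compile time
    // constexpr ThrusterCount bad{9}; // would not compile: throws during constant evaluation
    // void ignite(ThrusterCount);  ignite(r);  // would not compile: wrong type
    try {
        BlastRadius oops{200};         // throws std::out_of_range
    } catch (const std::out_of_range&) {}
    (void)r; (void)t;
}
```
Real libraries go further (arithmetic operators, overflow policies, unit tags), but the mechanism is the same.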
You can do that in Swift (and, I suspect, lots of languages).
Swift has a fairly decent assertion/precondition facility, as well as reflection[0]. They also have a decent test framework[1] (and I know that some folks have extended it, to do some cool stuff).
Some of these add significant overhead, so they aren't practical for shipping runtime, but they can be quite useful for debug-mode validation.
Assertions are a very old technique. I think I first encountered them in Writing Solid Code, in the 1990s. Back then, we had to sort of "roll our own," but they have since become integrated into languages.
Of course, all the tools in the world are worthless if we don't use them.
> Of course, all the tools in the world are worthless if we don't use them.
True...
I wonder if there would be any way to simplify the syntax and require basic assertions on every function. There might be a super easy cop-out like `any`, but at least by being forced to type it, you become aware of what it means and that it exists.
Almost like:
`public any int addXandY (any int x, any int y) {`
I also wonder if there could be such a thing as an `assertion exception` (or whatever it would be called). Maybe it would just make things a mess, but I'm just thinking out loud. Basically, you could have a function that behaves a specific way 90% of the time, but for the 10% of the time where it doesn't, you could pass that assertion exception as an override. Maybe that would just be awful... or it would keep functions much cleaner?
Maybe you wouldn't even call it an exception. You'd just have multiple sets of assertions that could be applied to each function call.
I just had another thought. What if you could have a bank of assertions? Like this pseudocode:
```
assertion acceptableBlastNumber (int x) {
x < 25;
x > 5;
}
assertion acceptableBlastRadius (int x) {
x > 500;
x < 1000;
! (x > 750 && x < 800);
}
assertion acceptableBlastAddedNumber (int x) {
x < 1025;
x > 505;
}
public acceptableBlastAddedNumber int addBlastNumbers (acceptableBlastNumber int x, acceptableBlastRadius int y) {
return x + y;
}
addBlastNumbers (10, 720) => 730
addBlastNumbers (26, 750) => Exception
```
Though I suppose that this is getting really close to just... classes. It would just be a little more... inline? Less complicated because it would never hold state? Though I suppose this would also mean your class can just focus on being an object, and not on having all the definitions for the things inside it, because you can have an <assertion> <object> rather than just <object>.
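For what it's worth, something quite close to this "bank of assertions" can be sketched in present-day C++: each named assertion becomes a predicate type, and a thin wrapper reuses it as a parameter/return type. The names below are adapted from the pseudocode above; `Checked` is an invented helper, not a standard facility.
```
#include <iostream>
#include <stdexcept>

// Each named assertion from the pseudocode becomes a reusable predicate.
struct AcceptableBlastNumber {
    static constexpr bool ok(int x) { return x > 5 && x < 25; }
};
struct AcceptableBlastRadius {
    static constexpr bool ok(int x) {
        return x > 500 && x < 1000 && !(x > 750 && x < 800);
    }
};
struct AcceptableBlastAddedNumber {
    static constexpr bool ok(int x) { return x > 505 && x < 1025; }
};

// Stateless wrapper: holds an int and enforces the predicate on construction.
template <class Pred>
struct Checked {
    int value;
    explicit Checked(int v) : value(v) {
        if (!Pred::ok(v)) throw std::invalid_argument("assertion failed");
    }
};

Checked<AcceptableBlastAddedNumber>
addBlastNumbers(Checked<AcceptableBlastNumber> x,
                Checked<AcceptableBlastRadius> y) {
    return Checked<AcceptableBlastAddedNumber>{x.value + y.value};
}

int main() {
    std::cout << addBlastNumbers(Checked<AcceptableBlastNumber>{10},
                                 Checked<AcceptableBlastRadius>{720}).value
              << '\n';                                  // prints 730
    try {
        addBlastNumbers(Checked<AcceptableBlastNumber>{26},
                        Checked<AcceptableBlastRadius>{750});
    } catch (const std::invalid_argument&) {
        std::cout << "Exception\n";                     // 26 fails the x < 25 check
    }
}
```
As noted above, this is basically a class, just with no state beyond the value itself; the checks live in the predicate, not in the object.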
Preconditions and postconditions around procedures. I thought that it was an innovation from Eiffel, though Wikipedia lists Ada as an influence, so maybe it did originate there!
> My view is that automated testing is not a substitute for QA, but an additional tool. It lets QA focus on harder-to-automate tasks.
This is my take as well. Automation is great for a lot of the repetitive work, but humans are better at creatively breaking stuff, handling UX testing, and improving and enhancing the automation itself.
> Capture one is always used and only amateurs who watch YouTube for premade settings use LR.
I use C1 (in fact, I just purchased the yearly subscription today for 50% off) and my personal impression is that my images look better in C1 than with LR.
But I was always under the impression that many many many "pros" use LR simply because it has a lot of bells and whistles (including that somewhat recent "AI" content fill).
Also, C1's layer adjustment paradigm makes the most sense to me compared to everything else I've tried.
Those bells and whistles are for amateurs who can't take a good photo and are pushing it to make it look good. In photography, professionals often have worse equipment than amateurs.
Setting `backup-directory-alist` to `'((".*" . ".emacs.bak"))` keeps the backups local to the actual files (in a `.emacs.bak` subdirectory) without the cognitive overhead of having them sit right next to the edited files.
We really should say "a human genome". Reference genomes serve as a Rosetta Stone for genomics: we can take DNA/RNA sequences from other individuals and align (pattern match) them to the reference as a way of understanding and comparing individuals.
It is not perfect, as a reference can be missing regions or poorly represent regions of DNA with large variability. The goal of the Human Pangenome Reference Consortium (HPRC) https://humanpangenome.org/ is to sequence individuals from different populations to address this issue. We are also working to develop new computational models to support analysis of data across populations.
After years of BSD and Linux on my desktop, I finally figured out that the desktop is about applications. Many of us need to actually accomplish things in addition to developing Open Source software.
Had M$ been broken up, maybe it would be different. Until things change, I am happy that at least I can run applications on UNIX-based macOS.
It is amazing how much time projects seem to spend on rewriting history for the goal of displaying it in a pretty way. Leaving history intact and having better ways to display it seems far saner. Even after a merge, the history in the branch may be useful for bisect, etc.