> I assume some tests at least broke that meant they needed to be "fixed up"
OP said:
"However, we did not have any tests asserting the behavior remains consistent due to the ambiguous language in the RFC."
One could guess it's something like -- back when we wrote the tests, years ago, whoever did it missed that this was required, not helped by the fact that the spec preceded RFC 2119's standardization of the all-caps "MUST" / "SHOULD" etc. language, which would have helped us translate specs into tests more completely.
Oh, they explain, if I understand right, that they made the output change intentionally, for performance reasons, based on the inaccurate assumption that order did not matter in DNS responses -- because there are OTHER aspects of DNS responses in which, by spec, order does not matter, and because there were no tests saying order mattered for this component.
> "The order of RRs in a set is not significant, and need not be preserved by name servers, resolvers, or other parts of the DNS." [from RFC]
> However, RFC 1034 doesn’t clearly specify how message sections relate to RRsets.
The developer(s) assumed order didn't matter in general, because the RFC said it didn't for one aspect, and intentionally changed the order for performance reasons. But it turned out that order did matter.
Mistakes of this kind seem unavoidable; this one doesn't necessarily say to me that the developers made a mistake I never could have, or something.
I think the real conclusion is that they probably need tests using actual live network stacks with common components -- and why didn't they have those? Not just unit tests or tests with mocks, but tests that would have actually used the real getaddrinfo function in glibc and shown it failing.
Even if there weren't tests for the return order, I would have bet there were tests against backbone resolvers like getaddrinfo. Is it really possible that the first time anyone noticed that it crashed, or that Ciscos bootlooped, was on a live query?
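To make that concrete, here's roughly the kind of test I mean -- just a sketch, with a placeholder hostname, and leaning on the fact that on CRuby, Addrinfo.getaddrinfo is a thin wrapper over libc's getaddrinfo, so a check like this exercises the real system resolver path rather than a mock:

    require "socket"
    require "minitest/autorun"

    # Sketch of an end-to-end smoke test against the real system resolver.
    # "service.example.internal" is a placeholder for whatever name the
    # component under test is expected to serve.
    class SystemResolverSmokeTest < Minitest::Test
      def test_getaddrinfo_returns_usable_addresses
        results = Addrinfo.getaddrinfo("service.example.internal", 443, nil, :STREAM)
        refute_empty results, "expected at least one address from the system resolver"
        assert results.all?(&:ip?), "expected IP addresses back"
      end
    end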
solid_queue by default prefers that you use a different db than the app db, and will generate that out of the box (also by default with sqlite3, which is a separate discussion), but makes it possible, and fairly smooth, to configure it to use the same db.
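For anyone curious, this is roughly what the same-db setup looks like as I understand it (sketch only, adapted from Solid Queue's README; details may differ by Rails version):

    # config/environments/production.rb -- sketch, not a drop-in
    config.active_job.queue_adapter = :solid_queue
    # The generated default points Solid Queue at a separate database:
    #   config.solid_queue.connects_to = { database: { writing: :queue } }
    # To run against the app database instead, omit connects_to and load
    # db/queue_schema.rb into the primary database.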
Personally, I prefer the same db unless I'm at a traffic scale where splitting them is necessary for load.
One advantage of the same db is that you can use db transaction control over enqueuing jobs and app logic too, when they are dependent. But that's not the main advantage to me; I don't actually need that. I just prefer the simplicity, and as someone else said above, prefer not having to reconcile app db state with queue state if they are separate and only ONE goes down. Fewer moving parts are better in the apps I work on, which are relatively small-scale, often "enterprise", etc.
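A rough sketch of that first point (AccountImport and ProcessImportJob are made-up names, and newer Rails versions have an enqueue_after_transaction_commit setting that can change when perform_later actually writes the row):

    # With the queue tables in the app database, the job row commits or
    # rolls back together with the application data it depends on.
    ActiveRecord::Base.transaction do
      import = AccountImport.create!(status: "pending")
      ProcessImportJob.perform_later(import.id)
    end

So if the create! rolls back, there's no orphaned job pointing at a record that never existed.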
Can you be more specific about the issues you have run into that make you advise GoodJob over SolidQueue?
I am (and have been for a while, not in a hurry) considering them each as a move off resque.
The main blocker for me with GoodJob is that it uses certain pg-specific features in a way that makes it incompatible with transaction mode in pgbouncer -- that is, it requires persistent sessions. Which is annoying, and is done to get some upper-end performance improvements that I don't think matter at my scale, or most scales. Otherwise, I much prefer GoodJob's development model, trust the maintainer's judgement more, find the code more readable, etc. -- but that's a big But for me.
The first one that jumps out at me when I've evaluated it is batches (a Sidekiq Pro feature, though there are some Sidekiq plugins that support the same).
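For anyone unfamiliar, the batch pattern is roughly "enqueue N jobs, run a callback once they've all finished." In Sidekiq Pro terms it looks something like this (all the names here are illustrative, and GoodJob's own batch API differs in the details):

    # Requires Sidekiq Pro (commercial); names are made up for illustration.
    class ImportFinishedCallback
      def on_success(status, options)
        # notify someone, kick off the next stage, etc.
      end
    end

    batch = Sidekiq::Batch.new
    batch.on(:success, ImportFinishedCallback)  # fires once every job in the batch succeeds
    batch.jobs do
      pending_rows.each { |row| ImportRowWorker.perform_async(row) }
    end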
It's reasonable for Basecamp, but the GP's complaint is that Basecamp controls what is the Rails standard/default solution, intended to be useful across multiple RDBMSes, without being willing to put RDBMS-specific logic in RDBMS-specific adapters.
Nothing that has worked yet. To say it hasn't worked yet is very different from saying nobody is doing anything. All we can do is try to figure it out. If you aren't doing something, look for something to do. Everyone I know is doing something. How are you going to figure out what will work except by trying things and seeing what you learn from that?
"Blaming" people for not being successful yet is very different than blaming them for not doing anything at all.
Someone once said the reason we had alcohol before civilization is that we carry around a chemical testing laboratory in our faces.
It just so happens that everything in beer that can go wrong and hurt you (any sooner than cancer) creates a distinct aftertaste and you can learn to avoid it rather easily.
The only exception of course is if you use poisonous ingredients in the first place.
I don't understand how discontinuing the (eg) Honda Fit or Mitsubishi Mirage would have helped manufacturers meet CAFE standards. I think they in fact were not selling very well. I get how light trucks have different standards, so manufacturers like to produce them, but wasn't every (eg) Honda Fit or Mitsubishi Mirage sold also an aid to meeting CAFE standards? (The profit margin isn't as good, though, true.) I don't have them in front of me now, but I think I did see sales figures showing a downward curve for those models. What am I missing?
It also looks like, for better or worse, CAFE compliance penalties were eliminated in the "one big beautiful bill" act? So the changes you advocate have been made? (And it applies retroactively to model years 2022 and above.) So we'll see if small cars come back as a result, I guess? https://news.sustainability-directory.com/policy/congress-el...