Why is it silly? Is it reasonable to hold the opinion that DOGE should not have been given access to these systems (note: this doesn't mean that the opposite view isn't also reasonable)? If it's a reasonable position to hold, then getting access to these systems can be reasonably construed as an attack, can it not?
I don't really think this argument merits a comparison to "technology is the mark of the beast", or the claim that the only people who could oppose DOGE suffer from "personality derangement" or glorify "bureaucratic power".
> Is it reasonable to hold the opinion that DOGE should not have been given access to these systems
"We audited ourselves" typically doesn't fly, so no, I'd say it's not a reasonable position. Someone external has to do it, and DOGE is the one the president tasked with doing so.
Audits are already conducted by outsiders, so the objection to DOGE is less about the concept of auditing than about how they're doing it: bypassing all kinds of policies. Normally auditors are qualified, have passed background checks, and agree to follow the same security and privacy policies as everyone else.
The outrage is because they’re taking a lot of risk and clearly treating it as a political exercise when it shouldn’t be.
It's because people ended up with models that were thousands of lines and difficult to reason about. Out of curiosity, did you end up running into this issue and how did you deal with it?
I work on a few projects that have a model over a thousand lines long. A lot of the time, as the model gets more complex, you start moving associated logic into its own models, which helps shrink the problem space. I think that's fine, because the logic ends up cohesive and explicit. Services, by contrast, tend to scatter logic and make it hard to track down once they get very large.
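As a sketch of what "moving associated logic into its own models" can look like, here is a minimal plain-Ruby example (no Rails, and all names here are hypothetical, not from any real codebase): a growing Order model hands its pricing rules off to a small, cohesive Pricing object instead of growing a service layer.

```ruby
# Hypothetical example: rather than letting Order accumulate all pricing
# logic (or scattering it across services), the pricing rules live in
# their own small object that the model delegates to.

class Order
  attr_reader :line_items

  def initialize(line_items)
    @line_items = line_items
  end

  # The model keeps a thin, explicit entry point...
  def total
    Pricing.new(line_items).total
  end
end

# ...while the cohesive pricing logic stays together in one place.
class Pricing
  BULK_DISCOUNT = 0.9 # 10% off orders of 10+ items (a made-up rule)

  def initialize(line_items)
    @line_items = line_items
  end

  def subtotal
    @line_items.sum { |item| item[:price] * item[:quantity] }
  end

  def total
    bulk? ? subtotal * BULK_DISCOUNT : subtotal
  end

  private

  def bulk?
    @line_items.sum { |item| item[:quantity] } >= 10
  end
end
```

The point of the split is that anyone asking "how is an order priced?" has exactly one file to read, whereas with large service objects the answer is often spread across several of them.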
You are correct that the problem is an information bandwidth issue.
The number of variables that must be considered for any given case is extensive, and they are multi-layered. I don't have time to go into much depth on this, but off the top of my head:
- There can be errors in our lab tests
- The errors are test-dependent in degree and significance
- Less than ideal handling or sampling affects different analytes differently
- Errors in one analyte can affect others
- The absence of deviations from normal in some tests can itself be an abnormal finding
- Sometimes it is unknowable whether an abnormal finding is due to error or genuine
- The confidence of what constitutes "normal limits" is dependent on the underlying epidemiology of what is being tested for
- There is no realistic way to have a unified data model for everything we assess
- Much of what is interpreted/assessed gets reported as paragraphs of text
- Missing data can either be untested or unavailable and it is not always clear which it is
- What is communicated to you may be dishonest (e.g., drug-seeking behavior)
- As much as we have tried to standardize fundamentally subjective assessments (such as murmurs) they are still subjective
- The MO is to push providers to the limit of what is safe in terms of number of cases and aggregate complexity
- Putting humans in the loop of anything opens the door to emotional/social factors that can influence how various findings are interpreted and trusted
I could go on, but the point is that as a result of all of this, the most efficient route by far is passing information directly from one caregiver to the next since they are often operating on a platform of context (much from their training) that is difficult to replicate in software.
The solution, in my mind, is lifting providers off of the low-level data and activities. If we can close the loop on automating decisions in a generally agreed-upon and reliable manner, it removes the need to discuss or think about the information fed into that system. Clearly that is a big task, but it's the realistic path forward in my mind. Harping on matters of process at the same level of detail is a waste of time and effort.
Good code to me usually comes down to things like state management, code organization, ... Having code that reads like prose isn't a high priority to me, but I'm familiar with that style and I can see why people like it. I just wish they'd realize they're expressing an opinion on style instead of a fact.
I think every subreddit should have created a community on a Reddit alternative, like Lemmy or Kbin, and actively promoted it as a "temporary" replacement. That way, Reddit waiting out the blackout would risk losing market share to the alternative.
Right now, that risk is very low because the alternatives don't seem to have picked up enough critical mass, especially outside a few big topics like technology or news. Without an alternative gaining steam and stealing eyeballs, Reddit has no incentive to come to the table and can easily wait this out.
Interesting, I wonder if we're seeing an underlying Pareto distribution of "Redditors who engage with the community, moderate, or otherwise produce content" vs. the ~80% who just vote or lurk and nothing more.
I'm not affiliated with GIMP and don't know why they didn't support fat binaries in the end, but I did look into compiling GIMP as a Universal Binary on my own at one point. My experience matches that of another comment: not all dependencies supported compiling to fat binaries (i.e., you couldn't just add a bunch of flags and get a fat binary at the end). The only solution I could think of was to compile for both platforms separately and then lipo all the built files together. The main problem was that I couldn't figure out a way to do this without turning the build scripts into a gigantic mess.
Getting an Apple Silicon build of GIMP wasn't actually that difficult. I know at least one other person besides myself had gotten a build working from source and published how to do it in some form. The problem was that the CI system GIMP used for the Mac build did not yet have ARM runners. This meant that producing the production build required cross-compiling from an Intel Mac. While I'm sure it's possible to accomplish this, it was quite tedious and I gave up. As an example, one problem was that the GIMP build process builds tools that need to run on the system doing the build, and just splitting those parts out from the parts that need to be compiled for the target system was tedious.
> Honestly, if someone showed up with a gap in their resume and claimed that they were doing start-up, open source, etc. for an interview, I'd dig deep into that hard.
This is just as toxic as, if not more toxic than, the advice you're opposed to. I've had similar experiences with interviewers on non-technical questions, and it comes off as aggressive, antagonistic, and traumatizing, especially in your example where they left it off their resume as a gap. From the interviewee's perspective, they might simply have been saying that they spend some of their spare time keeping their skills sharp, and now you're hammering them to see whether a project they left off their resume clears some "high bar," while all they see is a negative and dismissive attitude.
Personally, I would much rather be programming my own projects or doing leetcode than play video games, but I wouldn't judge someone negatively if they told me they played games on their own time.