I get this, but I'm also genuinely interested in how to measure outputs. For me it's almost impossible to get objectively right.
Maybe this doesn't apply to your case, but how would you measure the outputs of, say, product development, or any data-related project? Lots of things don't have a good measure of output before the thing is done. Maybe your product / analysis improves profitability 10x, or maybe it was a flop and lost money.
Tangential, but I'm also seeing the quality of measures going down: with AI, it seems the sheer number of [emails|code|analyses] produced is once again treated as a good measure.
> I get this, but also genuinely interested to know how to measure outputs.
Measuring outputs or inputs (hard work) is always hard. Did someone get the thing they were asked for done both quickly and correctly? Do they do this consistently?
I also find inputs harder to measure because someone could be in the office 12 hours/day, but on Facebook the whole time. They could also just spin their wheels doing 'fake' work.
I spent some time going through what programmers wrote over the past few years, and many of them were rewarded for getting things done quickly, with no complaints. The more diligent ones probably didn't last, since they got things done correctly, which takes a lot more time and thought.
It's why I said quickly *and* correctly. I think it's a cop-out to say someone was slow because they were building it correctly. Famously, the old Space Shuttle software was developed very slowly because it had to be 100% correct at all times. Most software does not need that level of correctness. Part of an SE's job is to understand that.
I pay a lot of attention when someone claims to have solved a problem I suspect to be NP-hard. There are a lot of possible explanations: for example, they may have an incorrect measurement function, or they may have solved a simpler related problem that isn't really NP-hard, or both.
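A cheap first check, as a minimal sketch (the function and the instance here are invented for illustration): NP problems have polynomial-time verifiers, so even if you can't audit the solver, you can independently verify the answers it claims. If your checker itself is wrong, a bogus solver looks right.

    # Hypothetical 3-SAT checker; clauses are lists of ints, +v for a
    # variable, -v for its negation. An independent verifier like this
    # catches both failure modes: a broken "measurement function" and a
    # solver that quietly answers an easier problem.
    def check_3sat(clauses, assignment):
        return all(
            any((lit > 0) == assignment[abs(lit)] for lit in clause)
            for clause in clauses
        )

    clauses = [[1, -2, 3], [-1, 2, 3], [1, 2, -3]]
    claimed = {1: True, 2: True, 3: False}
    print(check_3sat(clauses, claimed))  # True: this particular claim checks out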
A burger flipper cannot flip 20x the burgers. There isn't really any way to produce more output flipping burgers. And even if you could, if there isn't a queue of people waiting to collect their orders, there's no point in producing more blindly.
The person responsible for designing the process that thousands of franchises use probably does make a lot of money.
I think the heart of what they're getting at is that while on paper they are bringing in less income, they have gotten off the hedonic treadmill, and as a result, quality of life per dollar has increased dramatically. They are less stressed about finances than they were before, even though their income is lower.
The thing is, some people don't view "maintaining networks" as work; it not only comes naturally to them, they do it automatically, without thinking about it.
These people have a real advantage.
It's like how I may have a real, durable advantage because I really enjoy reading about software, computers, etc., so I just consume a lot of information passively.
Or maybe how I get a lot of practice arguing with or convincing people on Reddit or spacebattles.com.
If someone viewed reading Hacker News as work, I'm not sure they'd EVER do it.
Managing complexity, modularity, and separation of concerns was already critical for ensuring humans could still hold enough of the system in their heads to do something useful.
People who do not understand that will continue to not understand that it also applies to AI right now. Maybe at some point in the future it won't; I'm not sure. But my impression is that, unless actively managed, systems grow in complexity far past the point where everything is gummed up and no one can do anything.
If a human can understand 10 units of complexity and their LLM can handle 20, they might build a system that's 30 units complex and not understand the failure modes until it's too late.
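To put some code behind that arithmetic, a minimal sketch (the class and numbers are invented, not anyone's real system): separation of concerns caps how many units a reader must hold in their head, because call sites only see the boundary, not the internals.

    # Invented example: a queue with a narrow boundary. Callers reason
    # about two operations; the eviction policy and storage behind them
    # can grow without growing what a caller must understand.
    from collections import deque

    class BoundedQueue:
        def __init__(self, capacity: int):
            self._items = deque(maxlen=capacity)  # internals: free to change

        def put(self, item) -> None:
            self._items.append(item)  # oldest item silently evicted at capacity

        def get(self):
            return self._items.popleft() if self._items else None

    q = BoundedQueue(capacity=2)
    q.put("a"); q.put("b"); q.put("c")  # "a" is evicted
    print(q.get(), q.get(), q.get())    # b c None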
> People who do not understand that will continue to not understand that it also applies to AI right now.
I think this is mostly a matter of expectation management. AIs are being positioned as being able to develop software independently, and that’s certainly the end goal.
So then people come in with the expectation that the AI is able to manage that, and it fails. Spectacularly.
Right now the LLM certainly cannot manage any non-local complexity, and it succeeds in increasing technical debt and complexity faster than ever before.
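"Non-local" in the sense of this invented miniature: each function reviews fine in isolation, and the defect lives only in the seam between them, which is exactly what a model editing one file at a time won't see.

    # Module A: returns a distance in miles.
    def sensor_distance():
        return 3.0

    # Module B: silently assumes kilometres.
    def within_braking_range(distance_km):
        return distance_km < 5.0

    # Each function is locally correct; the bug is the unstated unit
    # convention between them, visible only with the whole system in view.
    print(within_braking_range(sensor_distance()))  # True, for the wrong reason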
You only notice plastic surgery when it's bad, but that doesn't mean all plastic surgery looks bad...