When you’re a junior it’s natural to overvalue your code, because you’re still learning this incredibly complex thing called programming, and that code took a lot of effort to produce!
But the actual truth is that the code itself is nearly worthless. The important thing is who you became when you wrote it. You learned interface abstraction, API design, how to debug, how to execute code in your head, the tooling, and most importantly you got that spark of pure joy when it finally worked.
If others copy your code they get none of that; they only get the worthless little bit. They might win a battle (cheat on an assignment) but they will lose the war (they don't grow or experience joy), so don't worry about it. They will flake out, while you grow and reach for even greater things!
Me too. Once you've had a lot of practice with it (like anything) and know how to mitigate some of its weaknesses, it's a superpower.
I currently pay 200 USD a month for AI, and my company pays about 1,200 USD for all employees to use it essentially without limits - and I get AT LEAST 5x the return in value on that. I would happily multiply all those numbers by 5 and still pay it.
Domain knowledge, bug fixing, writing tests, fixing tests, spotting what's incomplete, helping visualise results, generating security reviews for human interpretation, writing boilerplate, and simpler refactors
It can’t do all of these things end to end itself, but with the right prompting and guidance holy smokes does it multiply my positive traits as a developer
> and I get AT LEAST 5x the return on value on that
You make $800 by paying OpenAI $200? Can you please explain how the value you put in is 5x and how I can start making $800 more a month?
> holy smokes does it multiply my positive traits as a developer
But it’s not you doing the work. And by your own admission, anyone can eventually figure it out. So if anything you’ve lost traits and handed them to the LLM. As an employee you’re less entrenched and more replaceable.
>You make $800 by paying OpenAI $200? Can you please explain how the value you put in is 5x and how I can start making $800 more a month?
I estimate that the additional work I can do is worth that much. It doesn't matter whether "I do it" or "the LLM does it" - it's both of us, but I'm responsible for the code (I always execute it, test it, and take responsibility for it). That's just my estimate. Also, what a ridiculous phrasing - the intent of what I'm saying is "I would pay a lot more for this because I personally see the value in it". That's a subjective judgement I'm making; I have no idea who you are, so why would you assume it's an objective measure that simply transfers to you? AI is a multiplier on the human that uses it, and the quality of the output is hugely dependent on the communication skill of the human. You using AI and me using AI will produce different results with 100% certainty - one will be better, it doesn't matter whose; I'm saying they will not be equal.
>But it’s not you doing the work. And by your own admission, anyone can eventually figure it out. So if anything you’ve lost traits and handed them to the LLM. As an employee you’re less entrenched and more replaceable.
So what? I'm results-driven - the important thing is that the task gets done. It's not "ME" doing it OR the "LLM" doing it, it's me AND the LLM. I'm still responsible if there are bugs in it, and I check it and make sure I understand it.
>As an employee you’re less entrenched and more replaceable.
I hate this attitude, it's the attitude of a very poor employee - it leads to gatekeeping, knowledge hoarding, and lots of other petty and defensive behaviour, and it's what people think when they view the world from a place of scarcity. I argue the other way - the additional productivity and the tasks I get done with the assistance of LLMs make me a more valuable employee, so the business has more incentive to keep me. There's always more to do; it's just that we are now using chainsaws and not axes.
> I estimate that the additional work I can do is worth that much. It doesn't matter whether "I do it" or "the LLM does it" - it's both of us, but I'm responsible for the code (I always execute it, test it, and take responsibility for it). That's just my estimate. Also, what a ridiculous phrasing - the intent of what I'm saying is "I would pay a lot more for this because I personally see the value in it". That's a subjective judgement I'm making; I have no idea who you are, so why would you assume it's an objective measure that simply transfers to you? AI is a multiplier on the human that uses it, and the quality of the output is hugely dependent on the communication skill of the human. You using AI and me using AI will produce different results with 100% certainty - one will be better, it doesn't matter whose; I'm saying they will not be equal.
I disagree. I brought all this up because it seems you are confusing perceived, marketed/advertised value with actual value. Again, you did not become 5 times more valuable in reality, either to your employer or by obtaining more money (literal value). You're comparing $200 of "value", which is 200 dollars, to... time savings and unmeasurable skill gains? This is the unclear part.
> I hate this attitude, it's the attitude of a very poor employee - it leads to gatekeeping, knowledge hoarding, and lots of other petty and defensive behaviour,
You may hate that attitude, but those people will still be employed long after the boss has sacked you for not taking enough responsibility for your LLM mistakes. This is because entrenching yourself is really the way it's always worked, and the people who entrenched themselves didn't do it by relying on a tool to help them do their job. That's the world, and sadly LLMs don't do anything to unentrench the people making money.
All I am saying is: enjoy your honeymoon period with your LLM. If that means creating apples-and-oranges definitions of "value" and then comparing them directly as benefits, then more power to you.
What do you mean, "No cheating by generating your results"? We cannot rely on honesty for this; it will have to be publicly visible, externally verifiable results.
AI is probably the worst offender at this (killing mental-model diversity), but there are plenty of other things that do it too. The internet enabled rapid sharing of ideas, but it also homogenised human mental models a great deal.
Of course the benefits of the internet far outweigh the negatives, but it's still worth being aware of the negatives so we might mitigate them.
I’m in favour of radical ideas at the edge, I love conspiracy theories, I love fringe scientists, I love the crazy ones, because we need them.
All ideals should be surrounded by a moat of tolerance; that way, the crazy ones can exist in the moats.
Email your state congressman and tell them what you think.
Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.
Since coordinating this with a bunch of strangers (i.e. the public) is difficult, the most effective way is to normalise speaking up in our culture. Of course, normalising it will increase the rate of incoming communications, which will slowly decrease the effectiveness, but even past that point it's better than where we are now, which is silent public apathy.
If that's the case, why do people in Congress keep voting for things their constituents don't like? When they get booed at town halls they just dismiss it as being the work of paid activists.
To be fair, it was in poor taste. And against the general HN policy. I'm proud of my bad pun (in the best French tradition of never letting a bon mot go to waste), but not proud of posting it.
Yes, because external storage is much larger, and there's nothing more annoying than being in the middle of doing some science with 30 other bits of complex equipment, and then the camera stops working with "storage full" errors and you're 7000 m underwater in a cramped sub trying to navigate a camera UI to find the setting.
Configure your systems to be in whichever state is less likely to cause random disruption in the field.
Which makes me wonder why they bother with the SD card at all. What was it meant to be storing? If it is not intended to be the real storage area, why not just have it record in a loop, constantly overwriting the oldest material?
The camera was an off-the-shelf part (from a very specialty shelf, I suppose). It had an SD card built in because some people might not have a thing to stream to; it's probably good for demos and cheap enough to be good for a bullet point. Given the rest of the components inside, they probably had enough margin that they weren't optimizing for costs. The value add was in the pressure vessel, and that seems to have mostly worked.
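For what it's worth, the loop-recording idea mentioned above is simple in principle. Here's a rough sketch in Python, purely illustrative: the file names, clip size, and storage cap are all made-up assumptions, and real camera firmware would do this at a much lower level than whole files on a filesystem.

    import os
    from collections import deque

    # All of these numbers are assumptions for the sake of the sketch.
    MAX_BYTES = 32 * 1024**3      # pretend the card holds 32 GB
    SEGMENT_BYTES = 1024          # dummy "clip" size so the sketch actually runs

    def record_segment(path):
        # Stand-in for "record one clip of video to path".
        with open(path, "wb") as f:
            f.write(b"\x00" * SEGMENT_BYTES)

    def loop_record(out_dir, max_bytes=MAX_BYTES):
        segments = deque()        # oldest clip sits at the left end
        used = 0
        n = 0
        while True:
            path = os.path.join(out_dir, f"clip_{n:06d}.mp4")
            record_segment(path)
            used += os.path.getsize(path)
            segments.append(path)
            # Overwrite-oldest behaviour: drop old clips until the new one fits.
            while used > max_bytes and len(segments) > 1:
                oldest = segments.popleft()
                used -= os.path.getsize(oldest)
                os.remove(oldest)
            n += 1

The trade-off is the one raised earlier in the thread: loop recording never fills up, but it also means the card is not a reliable archive, which is presumably why streaming to larger external storage was the intended path.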
If something that is true scares you, you should think about it and look at it, in little bits, until it doesn’t.
Accept your fragility, be grateful for what the universe gives you, be humble about your limits and faults, and spread happiness, joy and love to the other fragile, limited beings around you. There’s your cure for existential dread.
> When I stop the production line to say "wait, let me understand what's happening here," the implicit response is: "Why are you holding up progress? It mostly works. Just ship it. It's not your code anyway."
This is not a technical problem or an AI problem, it’s a cultural problem where you work
We have the opposite - I expect all of our devs to understand and be responsible for AI-generated code