llmssuck's comments

Why is this using Unity? That's insane. How do we know this is not malware?

Can't speak to the Unity part, but why would it be malware? If you're a dev with street cred, I'd imagine you wouldn't risk it by putting out malware.

download it and test it on VirusTotal

"no vibe coding" is different from "no ai". I'm not sure where the authors are going with this. No autocomplete? What level of autocomplete? No "deep learning"?

Just to offer a counter-example, using AI makes programming bearable again for me. Most of programming comes down to a short - edit: not quite so short, but you get the figure of speech - list of things which are repeated ad infinitum in myriad variations.

I don't have to slog through yet _another_ way to sort, split, or combine a list, open a file, show a UI component, handle events, do logging, make data flow through some type of "database", serialize and deserialize endless things, implement yet another protocol in $whateverishotnow, manage authn and authz; the list is endless.

The interesting part of programming, for me, is deciding on and capturing the domain in a tight, surprisingly simple yet powerful architecture. This is hard - for me - and actually has very little to do with "programming" per se, meaning it has nothing to do with wrangling the syntax/low-level semantics of whatever platform I'm on, or fighting package managers, to name just two highly depressing parts of my job. I don't like typing code. I have been doing it my entire life and I still don't like it.

I like coming up with invariants and ways of guarding them. To find simple decompositions that turn a hairy, ungodly blob of a problem into a manageable, almost trivial network of not-so-complicated things. The not-so-complicated things themselves... I don't care in the slightest about them. Opening files, managing database connections, forms, the mechanics of i18n, typing the word "class", you name it. I find it exceedingly boring.
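A minimal sketch of what I mean by guarding an invariant (Python, purely illustrative): push the check into a constructor once, so nothing downstream ever has to re-check it.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class NonEmptyList:
        # Invariant: at least one element. Checked once, at construction.
        items: tuple

        def __post_init__(self):
            if not self.items:
                raise ValueError("NonEmptyList needs at least one element")

        def head(self):
            return self.items[0]  # safe: the invariant guarantees an element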

Perhaps I am more of the architect type, but I find managing a bunch of AIs and making sure they don't stray from My Path easy on the mind. Programming finally works at my level of abstraction.


I understand your point, but a slightly more positive reading might be that the quantity of information consumed, while perhaps impossible to quantify precisely, can be related to the type of content being perceived.

Staring at a wall produces little information in and of itself, except perhaps through reflection, but staring at a TV produces a load of information, most of it useless: names of characters, their favorite dresses, what food is being eaten where, etc. You can learn a lot by just passively observing even "dumb" TV, especially if it contains foreign content or skills like cooking or sports. Again, not saying all of it is relevant to your life, but that's a different issue.


I dunno, I feel like brains are always going? It's not like my thoughts slow down if I'm staring at a wall versus watching a movie. If anything I'll be more "focused" on my thoughts, so maybe they are more intense than the "shut brain down" effect of mindlessly consuming media? And I gave the example of a wall, but what about scrolling TikToks vs walking in the woods? Am I really processing more information scrolling TikTok than walking in nature? Hard for me to believe!

Interesting example for sure. Walking in the woods seems more complex, but I still think there is a real difference between "this character Xenia in a TV show plays an actor inside a TV show inside the current one and she likes to eat brownies with yellow cream on top" versus "I see trees with many leaves".

TikTok I have no knowledge of, but for sure seeing something like "Arab dude wearing a suspicious-looking outfit, playing an instrument I now have a name for, playing a tune whose name I did not know but now do, saying a weird cultural thing that is highly specific to his or her locale but kind of makes sense because of clues inside the video" is still very high-load compared to "I see a bird there that I do not care about in any way, shape or form, but I do remember it is blue".


Yeah, I just disagree immediately. Even just having to mechanically traverse and move, with each step you think way more than with a "swipe". Plus all the things to look at all around you, being tired, etc.

Just want to add that this is my experience as well. Just solid coworkers. Of course they mess up sometimes, but that's easier to fix than with humans and their politics and egos. I find I can actually reason for once, instead of always fighting and deferring to whoever has The Biggest Opinion - and not rarely just the loudest voice.

I think many people here work at nice, large places with reasonable, knowledgeable colleagues who are cooperative, mostly rational, and try to do the right thing. In my experience that is not a common or widespread thing. Of course I only have small-to-medium-business experience, but that's still a pretty good chunk of the economy. LLMs are an absurd, ridiculous win in those kinds of environments.


I know it's unpopular to say (here), but I see it all the time. I myself sometimes cannot tell apart what I wrote and what the agent wrote. It's just that I often have a physical memory of typing it, but that's it. (I also saw a lot of garbage, to be fair.)

There is quite a bit of skill to it, however. You cannot just take an AI from blank to "good code" without doing work. Yes, it takes work, and quite a bit of it. By this I mean you have to write a good code style guide and a proper explanation of your architectural style(s), your preferences, your goals, plenty of examples, etc. Proper thought has to be put into this.
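As a sketch of the kind of guide I mean (the file name and rules here are made-up examples, not a prescription):

    # CODING_GUIDE.md (hypothetical excerpt)
    - Errors: return explicit result/error values; never throw across module boundaries.
    - Architecture: hexagonal; domain code must not import from adapters.
    - Tests: every bug fix ships with a regression test.
    - Examples: follow the service layout shown in examples/auth_service.py.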

If you come across bad code, you need to investigate, not castigate: why did this happen? How can we prevent it in the future? That sort of process needs to become second nature. It should be already, because it's not that much different from managing a bunch of humans.

Humans come with lots of implicit knowledge, and you also select them to match your company's style when you're hiring. By the time they sit down at their keyboards, you (and society) have already guided them toward a desirable path. (And even then they often still misfire.)

AI agents operate differently. Their range of expression is completely alien to us. We cannot be both von Neumann and a complete moron at once; LLMs have no problem there. It takes a good while to get used to that.


We might have different concepts of "hard", but if I were a construction worker I think I would agree. Hell, I'm a developer and I agree. Figuring out what to do definitely is the hard part. The rest is work, and it can be sweaty, but it's not hard in the sense of being full of impenetrable math or requiring undiscovered physics. It's just time-consuming and, in the case of construction work, physically tiring.

It might be that I have been doing this for too long and no longer see it.


> it was an extremely hard engineering problem

But that is not programming then? Doing voice recognition in the 90s, missile guidance systems, you name it, those are hard things, but it's not the "programming" that's hard. It's the figuring out how to do it. The algorithms, the strategy, etc.

I might be misunderstanding, but I cannot see how programming itself can be challenging in any way. It's not trivial per se, nor quickly done, but I fail to see how it can be anything but mechanical in and of itself. This feels like saying "writing" - as in grammar and typing - is the hard part of writing a book.


I count "figuring out how to do it" as part of the work of programming, personally.


Fair enough, but I think that never really worked all that well. What I mean is that the term "programming" would then essentially cover anything and everything that can be put into an algorithm of some sort. Neutrino detection, user management dashboards, CRUD APIs, basically everything is programming.

It would explain a lot of misunderstanding between "programmers" though.


It's true that we often split software engineering into two pieces: doing the design and implementing the code. In fact I often prefer to think of it as computer systems engineering, because the design phase often goes well beyond software. You have to think about what networks are used, what hardware form factor should be used, even how many check digits should be put on a barcode.
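As a concrete illustration of the check-digit point: the standard EAN-13 barcode reserves its last digit as a check digit. A quick sketch of the computation (Python, illustrative):

    def ean13_check_digit(digits12: str) -> int:
        # EAN-13: weight the first 12 digits 1, 3, 1, 3, ... and sum.
        assert len(digits12) == 12 and digits12.isdigit()
        total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(digits12))
        return (10 - total % 10) % 10

    print(ean13_check_digit("400638133393"))  # 1 -> full barcode 4006381333931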

But then you go on to say this:

> But that is not programming then? Doing voice recognition in the 90s, missile guidance systems, you name it, those are hard things, but it's not the "programming" that's hard. It's the figuring out how to do it. The algorithms, the strategy, etc.

That implies LLMs can't help with the design of something that needs voice recognition or missile guidance. If that's your claim, it's wrong. In fact they are far better as sounding boards and search engines than they are at coding. They make the same large number of mistakes in design tasks as they do in coding, but in design tasks the human is constantly in the loop, examining all of the LLM's output, so the errors don't matter so much. In vibe coding the human removes themselves from the loop to a large extent. The errors become shipped bugs.


They can help with those tasks because there are decades of published research on them. I don't think LLMs change anything here. Even before LLMs, it wouldn't have been efficient to ignore readily available lessons from those who solved similar problems before. As you put it, LLMs can be great search engines. It's not that they changed the task.

Whether you have to solve a problem that is hard today because there aren't many available resources, or something well discussed that you can research with Google or an LLM, I don't think it changes their argument: once you know what to do, actually turning it into working code is comparatively mundane and unchallenging, and always has been to some degree.


> They can help with those tasks because there are decades of published research on them.

No.

I suspect the underlying meme behind this is "LLMs can't think, they are just stochastic parrots". If you define an LLM to be the thing the model builders train by ingesting and compressing information on the internet, in a way that yields a very convenient if somewhat unreliable search engine, then that's true. But if you define them that way, modern models like Opus, Gemini, GLM and so on aren't pure LLMs, and haven't been for a while. They still have an LLM inside them, but have other things bolted on, like tool use, long context windows, and Chain of Thought reasoning. These things outperform humans at some tasks that require "thinking".

Glasswing is an example. Before Glasswing, human researchers could find a solution to a particular problem - the "let's find an exploit in this software" problem - at a particular rate. With Glasswing, it's orders of magnitude faster. The same thing has happened in other areas, like protein folding and weather prediction. These models aren't reproducing something they've seen before. They are producing things we consider new - like creating an exploit for a vulnerability they discovered.

With all the evidence around, if you are still hanging onto the idea that these models can't help with many tasks by doing what we call "thinking", you are in denial.


Well yes, those are all programming, just as there are many different types of engineering: civil, mechanical, nuclear, etc.


one example i come back to is multithreaded layout/rendering in browser engines. it was mentioned in the servo blog post back when mozilla worked on it: "we tried it in c++ but it was simply too hard and complicated" (the argument was that they were able to do it in rust). still, as long as rust wasn't available, the problem was considered "too hard" by the team that wrote firefox. in my opinion that's a solid argument for "a programming problem that was too hard".

but as simon said, i too consider coming up with "the algorithms, the strategy" as part of programming. saying "it's easy to do if you know exactly how to do it" is somewhat tautological.


I spend about 10 hours per day solving chemical engineering problems (dynamic simulation, model predictive control, etc.). The programming is hard, on top of hard science. Even after 25 years of experience, it is still hard to find the right abstraction to implement everything.

Still, one thing I really like with LLM/AI is that I can now allow myself to test different abstractions a bit faster. I can allow myself to "try" more complex refactorings on a feature branch, because if I describe the abstraction I want correctly, the LLM/AI tool will normally be good at producing it. But to describe my abstraction, I need to draw on all my years of programming and engineering experience.
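To make "the right abstraction" concrete, a toy sketch of the sort of separation I mean (Python; the names and the Euler integrator are hypothetical examples, not my actual code): keep the process model independent of the integration scheme, so either can be swapped in a refactoring.

    from typing import Protocol

    class DynamicModel(Protocol):
        # dx/dt = f(t, x, u): derivative of the state given time, state, input
        def derivative(self, t: float, x: list[float], u: list[float]) -> list[float]: ...

    def euler_step(model: DynamicModel, t: float, x: list[float],
                   u: list[float], dt: float) -> list[float]:
        # One explicit-Euler step; the simulator only touches the abstraction.
        dx = model.derivative(t, x, u)
        return [xi + dt * dxi for xi, dxi in zip(x, dx)]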

But at the end of the day, I always tell my wife that with these new tools, which I could not have imagined being so powerful 3 years ago, I live in the future :-)


I suppose the complexity of the domain is the main driver of the difficulty level. Perhaps that's the intuition I'm trying to pin down: programming itself - the typing of words for the compiler, the act of converting pure thought into code - seems mechanical at best. But if you include the act of abstraction itself, then I concede it changes the equation. I don't find it all that clear what is and isn't programming, to be honest.

Especially once you describe your abstractions in plain (or slightly technical) English instead of code, I find it hard to say "programming" is being performed - though in many ways the case could be made that it has stayed the same, and only the shape of the artifacts is different now.

