Hacker News | kixpanganiban's comments

> The article is confusing the architectural layers of AI coding agents. It's easy to add "cut/copy/paste" tools to the AI system if that shows improvement. This has nothing to do with LLM, it's in the layer on top.

I think we can't trivialize adding good cut/copy/paste tools though. It's not like we can just slap those tools on the topmost layer (e.g., on Claude Code, Codex, or Roo) and they'll just work.

I think the reinforcement learning that LLM providers do on their coding models barely (if at all) steers them towards that kind of tool use, so even if we implemented those tools on top of coding LLMs, they'd probably just flail and do nothing with them.

Adding cut/copy/paste probably requires a ton of very specific (and/or specialized) fine-tuning, with not a ton of data to train on -- think recordings of how humans use IDEs, keystrokes, commands issued, etc.
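To make the argument concrete: the "layer on top" part really is trivial. A cut/copy/paste tool surface for an agent could be as small as the sketch below (all names here are made up for illustration, not any real agent's API) -- the hard part is getting a model to actually reach for it.

```python
# Hypothetical sketch of line-oriented clipboard tools an agent
# framework could expose. Illustrative only -- not Claude Code's,
# Codex's, or Roo's actual tool API.

class ClipboardTools:
    """Cut/copy/paste over an in-memory file buffer, by line range."""

    def __init__(self, text: str):
        self.lines = text.splitlines()
        self.clipboard: list[str] = []

    def copy(self, start: int, end: int) -> None:
        # Copy lines start..end (1-indexed, inclusive) to the clipboard.
        self.clipboard = self.lines[start - 1:end]

    def cut(self, start: int, end: int) -> None:
        # Copy, then delete the same range from the buffer.
        self.copy(start, end)
        del self.lines[start - 1:end]

    def paste(self, after: int) -> None:
        # Insert the clipboard contents after the given line number.
        self.lines[after:after] = self.clipboard

    def text(self) -> str:
        return "\n".join(self.lines)


buf = ClipboardTools("a\nb\nc\nd")
buf.cut(2, 3)   # clipboard now holds ["b", "c"]; buffer is ["a", "d"]
buf.paste(2)    # buffer becomes ["a", "d", "b", "c"]
print(buf.text())
```

Wiring this up takes an afternoon; teaching a model to prefer `cut`/`paste` over rewriting whole blocks is the part that needs the training data.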

I'm guessing Cursor's Autocomplete model is the closest thing that can do something like this if they chose to, based on how they're training it.


In my defense, I wrote the blog post about quitting a good while after I'd already quit cold turkey -- but you're spot on. :)

Especially when surrounded by people who swear LLMs can be game-changing on certain tasks, it's really hard to just keep doing things by hand (especially if you have the gut feeling that an LLM can probably handle the rote work pretty well, based on past experience).

What kind of works for me now is what a colleague of mine calls "letting it write the leaf nodes in the code tree". So long as you take on the architecture, high-level planning, schemas, and all the important bits that require thinking, chances are it can write the code successfully by following your idiot-proof blueprint. It's still a lot of toil and tedium, but perhaps still beats mechanical labor.
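A toy illustration of that split (every name below is invented for the example): the human owns the structure, the contracts, and the control flow; the LLM only fills in small, tightly specced leaf functions that are easy to verify in isolation.

```python
# "Leaf nodes in the code tree": human-owned scaffolding, LLM-filled
# leaves. All names are hypothetical, purely to illustrate the split.

from dataclasses import dataclass


@dataclass
class Invoice:
    subtotal: float
    country: str


def process(invoice: Invoice) -> float:
    # Human-owned: schema, order of operations, rounding policy.
    taxed = apply_tax(invoice)
    return round(taxed, 2)


def apply_tax(invoice: Invoice) -> float:
    # Leaf node: small, specced, trivially testable -- the kind of
    # function an LLM can reliably write from a blueprint like
    # "DE is 19% VAT, US is untaxed, default to 10%".
    rate = {"DE": 0.19, "US": 0.0}.get(invoice.country, 0.10)
    return invoice.subtotal * (1 + rate)


print(process(Invoice(100.0, "DE")))  # 119.0
```

The point of the blueprint is that a wrong leaf fails an obvious check, instead of quietly bending the architecture.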


I clicked so fast expecting some sort of weird art experiment, maybe a hidden camera stuffed into a delicious cheeseburger as it goes through Munich.

Sadly, I was disappointed.


Hi! I wrote Doctor because I kept struggling with grounding on docs when working with agentic code editing (e.g., Roo, Claude Code).

Doctor uses crawl4ai to crawl websites, and then chunks and embeds them with langchain + litellm + openai, and finally stores all the vectors in duckdb. This allows your LLM to query the docs using semantic search over MCP, giving it grounded and up-to-date information for the things you're working on.
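If it helps to see the shape of the pipeline, here's a stripped-down offline sketch of the chunk → embed → semantic-search flow. (Doctor itself uses crawl4ai, OpenAI embeddings, and DuckDB; the bag-of-words "embedding" and in-memory index below are stand-ins so the example runs without an API key.)

```python
# Toy version of the chunk -> embed -> search flow. A word-count
# vector stands in for the OpenAI embedding call, and a Python list
# stands in for the DuckDB vector store.

import math
from collections import Counter


def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# "Crawled" doc chunks (in Doctor these come from crawl4ai).
chunks = [
    "configure the client with an api key",
    "rate limits and retry behaviour",
    "streaming responses over websockets",
]
index = [(chunk, embed(chunk)) for chunk in chunks]


def search(query: str, k: int = 1) -> list[str]:
    # Rank chunks by cosine similarity to the query embedding.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]


print(search("how do I set the api key"))
```

The MCP layer is then just a thin wrapper that exposes `search` as a tool the editor's LLM can call.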

It requires an OpenAI key for the embedding process, but I'm working on giving users options in the future (different providers, local embedding using something like DPR or other transformer libs, etc.)


Interesting! Do you have an invite to spare? My email is in my bio


We had this same question when IDEs and autocomplete became a thing. We're still around today, just doing work that's a level harder :)


Hehe 12345678-b00b-4123-b00b-b00b13551234


Curious - how many containers and machine images come with uv by default these days?


Right now it looks like Oasis is only trained on Minecraft. Imagine if it was trained on thousands of hours of other games as well, of different genres and styles.

In theory, a game designer could then just "prompt" a new game concept they want to experiment with, and Oasis could dream it into a playable game.

For example, "an isometric top-down shooter, with Maniac mechanics, and Valheim graphics and worldcrafting, set in an ancient Nordic country"

And then the game studio would start building the actual game based on some final iteration of the concept prompt. A similar workflow already exists for concept art being "seeded" through Midjourney/SD/Flux today.


Thanks! That’s such an ambitious endgame that it didn’t occur to me.


This is amazing! I've been meaning to do something similar for all the Show HN threads, granted it's a much bigger set, but I haven't had the chance to.

