adam_patarino's comments | Hacker News

Your FAQ says you don't store code, but this answer sounds like you do. Even if you're storing it as an embedding, that's still storage. Which is it?

We don't store your code or any proprietary local content on our servers. When we say "external context" we mean public or user-approved remote sources like docs, packages, or APIs. Those are indexed on our side. Your private project code stays local.
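For what it's worth, the split being described might look roughly like this (a purely hypothetical sketch with toy names, not the vendor's actual implementation): public sources get embedded and indexed server-side, while private files are only ever embedded and stored locally.

    # Hypothetical sketch of the split described above,
    # not the vendor's actual implementation.
    REMOTE_INDEX = {}  # stands in for the server-side index
    LOCAL_INDEX = {}   # lives only on the developer's machine

    def toy_embed(text):
        # Toy "embedding" for illustration; real systems use a model.
        return [text.count(c) for c in "etaoinshrd"]

    def index_external_context(url, text):
        # Public or user-approved sources: may be embedded server-side.
        REMOTE_INDEX[url] = toy_embed(text)

    def index_private_code(path, text):
        # Private project code: embedded and kept locally, never uploaded.
        LOCAL_INDEX[path] = toy_embed(text)

    index_external_context("https://docs.python.org/3/", "python standard library docs")
    index_private_code("src/main.py", "def main(): ...")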

I feel like people who write articles like this have never worked at big companies.

My wife works at Shutterstock, first as a SWE, now as a product manager. Most of their tasks involve small changes in 5 different systems. Sometimes in places like Salesforce. A simple ask can be profoundly complicated.

AI has certainly made grokking a codebase and making code changes easier. But the real cost of building hasn't been reduced by 90%. Not even close.


The author "teaches workshops on AI development for engineering teams". This is nothing but a selling post for companies. I don't know what to discuss here honestly, this is more primitive bait than an average video preview picture on YouTube.

These articles act like creating new demo apps or isolated tools is what software development is all about.

I keep using this Excel analogy: when Excel came out, everyone said accountants were dead. Now we have more accountants than ever.

The analogy carries over to what you're saying here: accountants, or anyone who knows Excel deeply, can get a lot more out of it than a novice can.

AI coding can be really helpful for an engineer. Keep at it!


I'm telling you, with all the cost and problems with cloud AI, local is where it's going to be.

I'm genuinely shocked people are driving around in Atlas right now, showing OpenAI how to click buttons and how to log in to their bank accounts.


This feels like an oversimplification of a difficult problem. But I agree local LLMs are the future!

We're seeing the same thing at many companies, even in the US. Exposing your entire codebase to an unreliable third party is not exactly SOC/ISO compliant. This is one of the core things that motivated us to develop cortex.build: put the model on the developer's machine and completely isolate the code, without complicated model deployments and maintenance.

It's convenience: it's far easier to call an API than to deploy a model to a VPC, configure networking, etc.

Given how often new models come out, it's also easier to update an API call than to constantly deploy model upgrades.

But in the long run, I hope open source wins out.
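To make the convenience point concrete, here's a minimal sketch (assuming the openai Python package; the model name and local endpoint are illustrative, e.g. an Ollama-style server). Swapping models, or even providers, is often a one-line change:

    # Hedged sketch of why the API path is low-friction. Assumes the
    # `openai` package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # a model upgrade is just a string change here
        messages=[{"role": "user", "content": "Summarize VPC peering in one line."}],
    )
    print(resp.choices[0].message.content)

    # The open-source path can reuse the same client code, pointed at a
    # local OpenAI-compatible server (e.g. Ollama's default port).
    local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")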


Since each chat is virtually independent, there's no switching cost. I've moved between Claude and ChatGPT without a second thought.

It's not like Facebook, where all my friends stay behind.


> Since each chat is virtually independent

That hasn't been true for a while though. Open a new chat tab in ChatGPT and ask it "What do you know about me" to see it in action.


You can turn that off. If you're using LLMs for technical or real-world questions, it's nicer for each chat to be a blank slate.

You can also use Temporary Chats for that.

We are working on a fully local coding assistant with autocomplete and agentic modes. We created a novel post-training pipeline to optimize an 80B-parameter model to run on a standard laptop (16 GB RAM), so we can offer truly unlimited, private AI coding.

Sign up for our beta at https://cortex.build


I would've considered signing up if scrolling on your website didn't make my modern flagship phone drop frames.

I was interested, but it looks like it's only available for macOS.


We developed a novel optimization pipeline for LLMs so large models can run on a standard laptop.

Our first prototype optimized an 80B model to run at the full 256k context at 40 tokens/s while taking up only 14 GB of RAM.

We are currently leveraging this tech to build https://cortex.build, a terminal AI coding assistant.
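Some back-of-envelope arithmetic on those numbers (mine, not a claim about their method): fitting 80B parameters in 14 GB implies roughly 1.5 bits per parameter, i.e. aggressive sub-2-bit quantization, and that's before any KV cache for the 256k context.

    # Rough sanity check on "80B params in 14 GB of RAM" (my arithmetic,
    # not cortex.build's published numbers or method).
    params = 80e9
    ram_bits = 14 * 1024**3 * 8                   # 14 GiB expressed in bits
    print(f"{ram_bits / params:.2f} bits/param")  # -> 1.50 bits/param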

