
This argument always sounds like two crowds shouting past each other.

Are you a solo developer, fully in control of your environment, focused on productivity and extremely tight feedback loops, with a high tolerance for risk? Then you should probably use CLIs. MCPs will just irritate you.

Are you trying to work together with multiple people at organizational scale where alignment is a problem; are you working in a range of environments which need controls and management; do you have a more defensive risk tolerance? Then by the time you wrap CLIs into a form that is suitable, you will have reinvented a version of the MCP protocol. You might as well just use MCP in the first place.

Aside - yes, MCP in its current iteration is fairly greedy in its context usage, but that's very obviously going to be fixed with various progressive-disclosure approaches as the spec develops.


Context usage is a client problem; progressive disclosure can be implemented without any spec changes (Claude Code has this built in, for example). That being said, the examples for creating a client could be massively expanded to show how to do this well.
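For illustration, client-side progressive disclosure can be as simple as showing the model one-line tool summaries up front and paying the token cost of a tool's full schema only on demand. A toy sketch (the tool records below are invented for the example, not taken from the MCP spec):

```python
# Toy client-side progressive disclosure: the model first sees only
# name + one-line summary per tool; the full JSON schema is fetched
# lazily once the model commits to using a specific tool.

TOOLS = {
    "create_page": {"summary": "Create a new page",
                    "schema": {"type": "object",
                               "properties": {"title": {"type": "string"},
                                              "parent_id": {"type": "string"}}}},
    "delete_page": {"summary": "Delete a page by id",
                    "schema": {"type": "object",
                               "properties": {"page_id": {"type": "string"}}}},
}

def initial_prompt_listing():
    """Cheap listing shown to the model up front: names and summaries only."""
    return [f"{name}: {t['summary']}" for name, t in TOOLS.items()]

def expand_tool(name):
    """Full schema, loaded into context only when actually needed."""
    return TOOLS[name]["schema"]
```

The context saving is simply the difference between shipping every schema on every turn and shipping two short strings plus one schema on demand.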


In an organisation we can’t limit MCP access. It’s all or nothing. Everything the user can touch, the MCP can touch.

We can trust humans not to do stupid things. They might accidentally delete maybe two items by fat-fingering the UI.

An Agent can delete a thousand items in a second while doing 30 other things.

With bespoke CLI tools we can configure them so that they cannot access anything except specific resources, limiting the possible blast radius considerably.
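A minimal sketch of what such a bespoke tool's gatekeeping might look like (the resource names and config shape are hypothetical): every call is checked against a static allowlist before anything touches the real service, and destructive verbs are simply not wired up.

```python
# Deterministic blast-radius limiting for a bespoke agent-facing tool:
# resources outside the allowlist and destructive actions are rejected
# before any request reaches the real service.

ALLOWED_RESOURCES = {"project-alpha", "project-beta"}  # loaded from a config file

def run_command(action, resource):
    if resource not in ALLOWED_RESOURCES:
        raise PermissionError(f"{resource!r} is not in the allowlist")
    if action == "delete":
        raise PermissionError("destructive actions are disabled for agents")
    # ...forward the request to the actual service here...
    return f"{action} on {resource}: ok"
```

The point is that the check is code, not a prompt: the agent cannot talk its way past it.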


> In an organisation we can’t limit MCP access.

Why not? I'd imagine that you could grant specific permissions upon MCP auth. Is the issue that the services you're using don't support those controls, or is it something else?


I haven’t seen a single major MCP provider that would let us limit access properly.

Miro, Linear, Notion, etc. They just casually let the MCP do anything the user can and access everything.

For example: Legal is never letting us connect to Notion MCP as is because it has stuff that must NEVER reach any LLM even if they pinky swear not to train with our stuff.

-> thus, hard deterministic limits are non-negotiable.


It's straightforward to spin up a custom MCP wrapper around any API with whatever access controls you want.

The only time I reach for an official MCP is when it offers features that are not available via the API, and this annoys me to no end (looking at you, Figma, Hex).
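The wrapper idea is SDK-agnostic: the core is just a registry that exposes a narrow set of tools and forwards to the vendor's plain API, so unexposed operations are unreachable by construction. A rough sketch with a stubbed upstream client (all names here are hypothetical, not any vendor's real API):

```python
# Custom MCP-style wrapper: only explicitly registered tools are callable.
# The upstream client may support destructive operations, but since no
# tool forwards to them, the agent has no path to reach them.

class UpstreamAPI:
    """Stand-in for the vendor's plain HTTP API client."""
    def get_issue(self, issue_id):
        return {"id": issue_id, "title": "example"}
    def delete_issue(self, issue_id):  # exists upstream, never exposed below
        raise AssertionError("wrapper must never call this")

EXPOSED_TOOLS = {}

def tool(fn):
    """Register fn as an agent-visible tool; everything else stays hidden."""
    EXPOSED_TOOLS[fn.__name__] = fn
    return fn

api = UpstreamAPI()

@tool
def read_issue(issue_id: str):
    """Read-only access to a single issue."""
    return api.get_issue(issue_id)

def call_tool(name, **kwargs):
    """Dispatch a tool call; unknown tool names are a hard failure."""
    if name not in EXPOSED_TOOLS:
        raise PermissionError(f"tool {name!r} is not exposed")
    return EXPOSED_TOOLS[name](**kwargs)
```

Hooking the `call_tool` dispatcher into whichever MCP server framework you use is the easy part; the allowlist-by-construction design is the part that matters.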


Indeed, ever since MCPs came out, I would always either wrap or simply write my own.

I needed to access Github CI logs. I needed to write Jira stories. I didn't even bother glancing at any of the several existing MCP servers for either one of them - official or otherwise. It was trivial to vibe code an MCP server with precisely the features I need, with the appropriate controls.

Using and auditing an existing 3rd party MCP server would have been more work.


That’s what we’re doing, but it’s annoying. Why can’t they just let us limit access for the official MCP easily?


Agreed. Sounds like a failure of the services, but not MCP. Can't believe in 2026 we don't have better permissions on systems like this.


“Communism can work, we just haven’t seen a good implementation of it.” If the majority of implementations fail at it, the protocol is defined incorrectly. With a security-first approach this would not be the case.


(everything I write about MCP means "remote MCP" by the way. Local MCP is completely pointless)

MCP provides you a clear abstracted structure around which you can impose arbitrary policy. "identity X is allowed access to MCP tool Y with reference to resource pool Z". It doesn't matter if the upstream MCP service provides that granularity or not, it's architecturally straightforward to do that mapping and control all your MCP transactions with policies you can reason about meaningfully.

CLI provides ... none of that. Yes, of course you can start building control frameworks around that and build whatever bespoke structures you want. But by the time you have done that you have re-invented exactly the same data and control structures that MCP gives you.

"Identity X can access tool Y with reference to resource pool Z". That literally is what MCP is structured to do - it's an API abstraction layer.
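That three-way mapping can be enforced with something as small as a policy table at a gateway sitting in front of the upstream MCP server. A minimal sketch (the identities, tool names, and resource pools are all hypothetical):

```python
# Gateway-side policy: (identity, tool) -> the resource pools that
# identity may touch through that tool. Anything not listed is denied,
# regardless of what the upstream MCP service would allow.

POLICY = {
    ("alice",  "read_page"): {"wiki-public", "wiki-eng"},
    ("ci-bot", "read_page"): {"wiki-public"},
}

def authorize(identity, tool, pool):
    """Default-deny check evaluated before forwarding any MCP call."""
    return pool in POLICY.get((identity, tool), set())
```

Because every MCP transaction names a tool and (via its arguments) a resource, this check can be applied uniformly, which is exactly the structural advantage being claimed here.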


CLI provides all of that.

I have a configuration file that defines the exact resources the CLI can access. It programmatically checks and blocks access to any resource that's not whitelisted. There's no way for the Agent to get around that without some major fuckery.

The problem with your MCP example is that Identity X has access to most of the data, because humans need that. But when an agent uses MCP with Identity X credentials we need to be able to deterministically block it from accessing anything but very specific resources.


Maybe make an MCP that has whatever limitations you need baked in?


But we pay ungodly amounts of money to these services; why can't they bake in limitations? I'm 100% sure we're not the only ones wondering why MCPs have to be all or nothing.


> We can trust humans not to do stupid things

Hold my beer.

I can definitely delete a thousand items with a typo in my bash for loop/pipe. You should always defend against stupid or evil users or agents. If your documents are important, set up workflows and access to prevent destructive actions in the first place. Not every employee needs full root access to the billing system; they need readonly access to their records at most.


These people aren’t doing bash loops, they’re regular non-technical people who just want to use an AI Agent to access services and aggregate data.

If people accidentally delete stuff, they tend to notice it and we can roll back. If an agent does a big whoops, it’s usually a BIG one and nobody notices because it’s just humming away processing stuff with little output.

An accountant might have access to 5 different clients accounts, they need to do their work. They can, with their brain, figure out which one they’re processing and keep them separate.

An AI with the same access via MCP might just decide to “quickly fix” the same issue in all 5 accounts to be helpful. Actually breaking 7 different laws in the process.

See the issue here?

(Yes the AI is approved for this use; that’s not the problem here)


> These people aren’t doing bash loops, they’re regular non-technical people who just want to use an AI Agent to access services and aggregate data.

Over the last few months, this pattern of discussion has become pervasive on HN.

Point.

Counterpoint.

(Not finding a flaw with the counterpoint) "Yeah, but most people aren't smart enough to do it right."

I see it in every OpenClaw thread. I see it here now.

I also saw it when agents became a thing ("Agents are bad because of the damage they can do!") - yet most of us have gotten over it and happily use them.

If your organization is letting "regular non-technical" people download/use 3rd party MCPs without understanding the consequences, the problem isn't with MCP. As others have pointed out in this thread, you can totally have as secure an MCP server/tool as a sandboxed CLI.

Having said that, I simply don't understand your (and most others') examples of how CLI is really any different. If the CLI tool is not properly sandboxed, it's as damaging as an unsecured MCP. Most regular non-technical people don't know how to sandbox. Even where I work, we're told to run certain agentic tools in a sandboxed environment, yet they haven't set it up to prevent us from running the tools without the sandbox. If my coworker massively screws up, does it make sense for me to say "No, CLI tools are bad!"?


My basic point is: why don't major multimillion dollar companies provide us with a way to limit MCP access? "With this ID, this specific MCP connection can only access database X in read-only mode" or "With this ID, this MCP connection can create new pages under this page, but cannot delete anything or modify pages it didn't create". Very very basic stuff.

I _can_ make a custom CLI, a custom MCP wrapper and whatever else to limit the things agents can access. But why do I need to? Am I the only one in the world who doesn't want to let ChatGPT run wild on our internal Notion without any hard limitations? We pay them ungodly amounts every month for the service and basic safeties aren't included unless we build them in.


Agreed, I don't get this discussion anyway. Those are two different things, and actually they work well together.


For language geeks: https://kpt.datamediate.com

KPT is a language app specifically targeted at explainable verb conjugation for highly inflected/agglutinative languages. Currently works for Finnish, Ukrainian, Welsh, Turkish and Tamil.

These are really hard languages to learn for most speakers of European languages, particularly English - we're not used to complex verb conjugations, they're hard to memorise and the rules often feel quite arbitrary. Every other conjugation practice app just tells you right/wrong with no explanation, which doesn't really help you learn when there are literally hundreds of rules to get right.

The interesting part was using an LLM to create a complete machine-executable set of conjugation rules, optimized for human explainability, and an engine to diagnose which rule is at fault when you get it wrong. There are several hundred rules needed for each language to cover all the exceptions.

NB as a bonus it also works fully offline because my best practice hours are when I'm travelling and have poor connectivity.
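The diagnosis idea can be illustrated with a toy rule chain. The rules below are invented for the example (not real KPT data): each rule is executable and carries a human explanation, and a wrong answer is matched against what you'd get by misapplying a specific rule, so the learner is told which rule they missed rather than just "wrong".

```python
# Toy explainable conjugation engine: an ordered chain of executable
# rules, each paired with a human-readable explanation.

RULES = [
    # (name, apply_function, explanation)
    ("drop_infinitive", lambda s: s[:-1],  "remove the final infinitive marker"),
    ("add_ending",      lambda s: s + "n", "attach the 1st-person singular ending"),
]

def conjugate(stem):
    """Apply every rule in order; return the final form and the step trace."""
    steps = []
    for name, fn, why in RULES:
        stem = fn(stem)
        steps.append((name, stem, why))
    return stem, steps

def diagnose(stem, learner_answer):
    """If skipping exactly one rule reproduces the learner's answer,
    blame that rule and return its explanation."""
    for skip in range(len(RULES)):
        form = stem
        for i, (name, fn, why) in enumerate(RULES):
            if i != skip:
                form = fn(form)
        if form == learner_answer:
            return f"missed rule {RULES[skip][0]!r}: {RULES[skip][2]}"
    return "no single-rule explanation found"
```

With "puhua" this chain produces "puhun", and a learner who writes "puhuan" is told they missed the drop-the-infinitive step. Real inflected languages need the several hundred conditional rules the comment describes, but the diagnose-by-counterfactual structure stays the same.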


Really cool stuff. I thought about launching something similar earlier this year; there's definitely a market there. I see a lot of AI-native startups coming up against compliance requirements way earlier than before, with much smaller teams, and most existing solutions just demand too much from you as you engage.

How do you see yourself against someone like delve.co?


Honestly, Delve is great. They and Compai are leading modern AI-assisted compliance right now. I'm chasing them.

What I'm trying to do differently is depth of context. Humadroid learns about your company first - how you operate, your stack, your processes. From there it generates control descriptions that are actually actionable for your setup, and policies that need minimal review rather than a full rewrite.

Whether that's enough differentiation? Ask me in a year.


How does the Teltonika work out for you - I nearly bought it earlier this year but it doesn't have support for external antennae. I'm just on the edge of 5G coverage and I'm not sure I want to splash out on something which I can't tune for decent reception.

Seems an odd omission for a ruggedised outside modem - the Unifi also seems to not support external antennae.

(I'd also prefer a unifi version just so it fits in the with rest of the networking infra I have in the mökki.)


OTD500 is antenna + router in a single box. There is nothing else needed. I just put it outdoors with a PoE cable. Originally, I used it as a backup, but now I have an unlimited SIM, so I use it as a second internet connection.

If you mean the standard routers (like the Rutx50), Teltonika itself sells external enclosures with antennas. https://www.teltonika-networks.com/products/accessories/ante...


Yeah, I know - but an antenna embedded within a small box is going to be much less effective than a big old directional Yagi antenna like https://www.satshop.fi/en/4g/4g-5g/4g-antennas.html

Seems weird to cripple the product by not allowing me to (optionally) disable the internal antenna and instead use and tune an external one. And I suspect that is likely to make a difference when you are on the edge of coverage but know exactly where the relevant cell tower is, a few km away.


Google, Meta, Microsoft and Amazon might get through easily as companies. I don't think all G/M/M/A staff will get through easily.


Microsoft is in a pickle. They put AI lipstick on top of decades of unfixed tech debt and their relationship with their userbase isn't great. Their engineering culture is clearly not healthy. For their size and financial resources, their position in the market right now is very delicate.


I think that's the impression you get if you focus on Microsoft as an OS vendor. It's not that anymore; that's why their OS has sucked for many years now. Their main business is B2B, cloud services, and Azure. I think they are pretty safe from OpenAI. Plus they have invested big in OpenAI as well.


Windows is hard to replace in large organizations. Are there actually any real AI competitors in the stack? Well, Google, maybe. The whole Windows+Office+AD+Exchange and now Azure stack is unlikely to go any time soon, however badly they screw it up.


True. Basically any medium to large scale business is reliant on Windows/Office/AD. While there are open source alternatives to Windows/Office, I can't think of a good open source alternative to AD/Group Policy/etc


M365 is arguably far worse than Office 97. OneDrive/SharePoint is confusing and Teams is especially broken.

Azure is a product all right, but there’s nothing particularly better there than anywhere else.


SharePoint has been a dog’s breakfast since forever.


M365 is inarguably worse than Office 97


I don't think so.

They are one of the few companies actually making money with AI, as they have intelligently leveraged the position of Office 365 in companies to sell Copilot. Their AI investment plans are, well, plans, which could be scaled down easily. The worst case scenario for them is their investment in OpenAI becoming worthless.

It would hurt but is hardly life-threatening. Their revenue driver is clearly their position at the heart of enterprise IT, and they are pretty much untouchable there.


> Worst case scenario for them is their investment in OpenAI becoming worthless.

And even then, if that happens when the bubble pops, they'll likely just acquire OpenAI on the cheap. Thanks to the current agreement, it already runs on Azure, they already have access to OpenAI's IP, and Microsoft has already developed all their Copilots on top of it. It would be near-zero cost for Microsoft at that point to just absorb them and continue on as they are today.

Microsoft isn't going anywhere, for better or for worse.

Despite them pissing off users with Windows, what HN forgets is that those users aren't Microsoft's customers. The individual user/consumer never was. We may not want what MS is selling, but their enterprise customers definitely do.


I disagree. They're the one place that can get away without investing in frontier model research and still win in the enterprise.

Google is the only place that serves the enterprise (Workspace for productivity, Cloud for IT, Devices for end users) AND conducts meaningful AI research.

AWS doesn't (they can sell cloud effectively, but don't have any meaningful in-house AI R&D). Meta doesn't (they don't cover enterprise and, frankly, nobody trusts Zuck... and they're flaky).

Oracle doesn't. They have grown their cloud business rapidly by 1) easy button for Oracle on-prem to move to OCI, and 2) acting like a big colo for bare metal "cloud" infra. No AI.

OpenAI has fundamental research and is starting to have products, but it's still niche. Same as Anthropic. They're not in the same ball game as the others, and they're going to continue to pay billions to the hyperscalers annually for infra, too.

This is Google's game to lose, imho, but the biggest loser will be AWS (not Azure/Microsoft).


I agree that AWS/Amazon seems to be uniquely badly positioned to benefit at all from AI, while also being potentially screwed by AI companies failing.


I cry for Elon, that precious jewel of a human being.

Tesla (P/E: 273, PEG: 16.3) the car maker, without robots and robotaxis, is less than 15% of the Tesla valuation at best. When the AI hype dies, the selloff starts and negative sentiment hits, we'll have a below-$200B market cap company.

It will hurt Elon mentally. He will need a hug.


He's gonna need a lot of ketamine in the aftermath that's for sure.


Never bet against TSLA. Elon will just start selling tickets to the Mars colony.


The fanboys obsessively buy any dip. It should have been back at a $200 billion market cap countless times, but it never gets there.


Then show us your puts, Mr. Buffett.


Buffett isn't a put buyer but did well investing in Tesla's rival BYD.


lol - yea…


On the plus side, maybe this means the endless churn of JS libraries will finally slow down and as someone who isn’t a JS developer but occasionally needs to dip their toe into the ecosystem, I can actually get stuff done without having to worry about 6-month old tutorials being wrong and getting caught in endless upgrade hell.


For what it’s worth - vanilla JS is pretty darn good and if you’re only dipping in for some small functionality I highly doubt a framework brings much benefit.


I find vanilla JS unusable for anything bigger, though. It was designed for quickie scripts on throwaway web pages, but it's not great for anything you'd call a web app.

Typescript, however, does scale pretty well. But now you've added a compiler and bundler, and might as well use some framework.


Right tool for the right job.

I’ve written some pretty complicated vanilla JS and it works fine. I’m not dealing with other people’s crappy code, however, so YMMV.


Has this actually been true, though? I admit I don’t write JavaScript much recently, but to me it feels like things have pretty much stabilized. React released hooks in early 2019, before Covid, and after that things don’t really change much at all.

At this point there are several large Rust UI libraries that try to replicate this pattern in web assembly, and they all had enough time to appear and mature without the underlying JSX+hooks model becoming outdated. To me it’s a clear sign that JS world slowed down.


> React released hooks in early 2019 before Covid, and after that things don’t really change much at all.

React Server Components became a thing, as well as the React compiler. And libraries in the React (and JS at large) ecosystem are pretty liberal with breaking changes; a few months is enough to have multiple libraries that are out of date and whose upgrades require handling different breaking changes.

React Native is its own pit of hell.

It did slow down a little since a few years ago, but it's still not great.


Yes. When I dipped my toes into the front end ecosystem in 2021 to build a portfolio site, the month-old tutorial video I followed was already out of date. React had released an update to routers and I could not find any documentation on it. Googling for the router brought me to pages that said to do what I had done, which disagreed with the error message that I was getting from React.

React had just updated and documentation hadn’t.

I then discovered that Meta owns React so I got frustrated as hell with their obfuscation and ripped out all of the React and turned what was left into vanilla html+js.


React Router is its own separate project, not affiliated with Meta. The React library doesn't ship a router.


Yet at the time it seemed to need one. Glad I never looked back at that fragmented mess.

I also don’t ‘KTH-Trust’ Meta of all corporations to have a compile step for a web technology.


Only if you're only talking about income from work. If you own property in country A which you rent out while you live & work in country B, then you probably still owe tax on that rental income in country A. (but it will depend on the exact wording of the relevant DTA if one exists)

And since you are now filling in two tax returns for different countries, with different tax allowances across rental income and work income which interact in decidedly non-linear fashion, you probably need to make sure both country A and B have no confusion about where your work income was earned.

Having spent the last 8 years obsessively counting days across the UK and Finland (and every other country I have visited) exactly to account for this scenario, I am very sympathetic to attempts to solve this problem space!
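The bookkeeping itself is trivial; the hard part is the rules (the UK's statutory residence test counts midnights rather than days present, for instance, and thresholds vary by country and treaty, so the 183 figure below is just the common headline number). A toy day-counter over inclusive stay ranges, with invented data:

```python
# Toy per-country day counter from a travel log of inclusive date ranges.
# Real residence tests differ (midnight counting, split years, ties), so
# this only shows the bookkeeping, not any country's actual rule.

from datetime import date, timedelta

def days_per_country(stays):
    """stays: list of (country, first_day, last_day) inclusive ranges."""
    totals = {}
    for country, start, end in stays:
        d = start
        while d <= end:
            totals[country] = totals.get(country, 0) + 1
            d += timedelta(days=1)
    return totals

stays = [
    ("UK", date(2024, 1, 1), date(2024, 3, 31)),
    ("FI", date(2024, 4, 1), date(2024, 12, 31)),
]
totals = days_per_country(stays)
over_183 = {c for c, n in totals.items() if n > 183}  # crude residence flag
```

Even this naive version is enough to see why one travel log must feed both tax returns: each country's count comes from the same underlying dates.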


> If you own property in country A [...]

But then, that's because you own property in country A, not because you're a citizen of country A! The same would happen if you were a citizen of country B, lived and worked in country B, but bought a house to rent out in country A.


Much harder to enforce against services.

Physical goods you can hold until tariffs are paid.

Services are paid for by invoices between two corporate entities whose legal domicile may have nothing to do with the real country of origin of the services.

Lots of European SaaS providers invoice US customers from their US subsidiary - impossible to distinguish the transaction in order to put a tariff on it.


Brilliant! Literally my first thought when I saw the original submission was “I wish there was a banjo version”!

Definitely will be using your app.


Otherwise pointless pedantry, but in line with the "nobody cares about quality" ...

"the hoi polloi" grates every time I read it. "hoi polloi" literally means "the many", so this is an awkward pleonasm, "the the many", amounting to a lack of quality in a piece of writing.

