If there's a standard, then instead of needing to download the Jira MCP server, you just visit their website and all the tools are described and usable from there.
Or, put differently: as a company / group / individual, instead of needing to build and distribute an MCP server and public API, you can just support WebMCP.
Another alternative is LLMs / agents operating Playwright or the equivalent, which will likely be less reliable and consume more tokens (by a fair margin).
It seems like a cleaner approach to declare a handful of tools that users can approve or ask for granularly, than to say "my website can run any wacky script, here is some bookmarklet, nerds" or to rely on the very generic permission model of browser extensions.
It provides permission granularity at the action level rather than the sandbox level. Your script might not be able to make external API calls, but there is no way to gate the ability to take destructive actions within the webpage.
With something like WebMCP you get elicitation and the ability to disable tools from the client.
WebMCP essentially turns your website into an MCP server, which means it is kind of like building a UI for the LLM that lives alongside the human UI.
It's also a contract for how LLMs interact with a website: they can do no more than the tools allow them to. When an agent is running arbitrary JavaScript on the page, the entire website is an attack surface.
Let's take Gmail, for example. There is no way to protect your webpage from an agent running a script that sends an email by triggering the send button. But with WebMCP, you can explicitly disable the "send_email" tool when the agent interacts with Gmail.
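The tool-contract idea being described could look roughly like this. To be clear, this is an illustrative sketch and not the actual WebMCP API: the `ToolRegistry` class, the method names (`registerTool`, `setToolEnabled`, `callTool`), and the `send_email` / `list_inbox` tools are all hypothetical stand-ins for whatever the real spec ends up defining.

```javascript
// Illustrative sketch of a per-page tool contract (NOT the real WebMCP API).
// A page declares a small set of named tools; the client (browser / agent UI)
// can disable any of them before letting an agent act on the page.

class ToolRegistry {
  constructor() {
    this.tools = new Map(); // name -> { description, handler, enabled }
  }

  // The page declares what an agent is allowed to do -- nothing else.
  registerTool(name, description, handler) {
    this.tools.set(name, { description, handler, enabled: true });
  }

  // The client-side kill switch: e.g. disable "send_email" on a mail page.
  setToolEnabled(name, enabled) {
    const tool = this.tools.get(name);
    if (tool) tool.enabled = enabled;
  }

  // Agents only go through this gate; there is no generic "run any script".
  callTool(name, args) {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    if (!tool.enabled) throw new Error(`tool disabled by user: ${name}`);
    return tool.handler(args);
  }
}

// A hypothetical mail page declaring two tools.
const registry = new ToolRegistry();
registry.registerTool("list_inbox", "List inbox subjects", () => ["Hello", "Invoice"]);
registry.registerTool("send_email", "Send an email", ({ to }) => `sent to ${to}`);

// The user disables the destructive tool before letting the agent in.
registry.setToolEnabled("send_email", false);
```

With this shape, the agent can still call `list_inbox`, but any attempt to call `send_email` fails at the contract boundary, rather than relying on the page to defend its own buttons from injected scripts.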
> What can MCP enable?
> 1) Agents can access your Google Calendar and Notion, acting as a more personalized AI assistant.
> 2) Claude Code can generate an entire web app using a Figma design.
> 3) Enterprise chatbots can connect to multiple databases across an organization, empowering users to analyze data using chat.
> 4) AI models can create 3D designs on Blender and print them out using a 3D printer.
Sure, 1 and 3 make sense if they mean "summarize" and not "analyze", and maybe 4, but 2... I don't know where to begin, other than to say that even really smart humans have a very hard time with that task given a Figma doc. Wouldn't it make more sense to generate the Figma doc, if they're already that awful to begin with?
I had to skim through this a couple times before I realized that I still need to run an MCP server locally. This is basically a proxy between an LLM and the proposed protocol.
It’s a nice proof of concept.
And it makes sense that the goal would be for LLM clients to adopt and support the standard natively; then the proxy won't be necessary.
That's not how I'd describe it. It's not meant to centralize servers; the idea is that maybe you don't need to build and distribute a separate downloadable thing for users to interact with your service/product/whatever via an agent, and instead they continue to use your website via an interface appropriate for agents.
The npm package is only there because the browser doesn't natively support the behavior (yet). Similarly, MCP clients don't have built-in support. So it's a bridge/proxy to demonstrate what could be done.
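The bridge/proxy role described above can be reduced to a small translation step. This is purely an illustrative sketch under assumptions: the function name, the in-memory `pageTools` table, and the exact message shape are made up for the example; a real bridge would speak full MCP to the LLM client on one side and reach the page (e.g. over a WebSocket or extension messaging) on the other.

```javascript
// Illustrative sketch of the local bridge: it receives an MCP-style
// "tools/call" request from an LLM client and forwards it to the tools
// a webpage has declared. Names and shapes here are hypothetical.

// Stand-in for the page-declared tools (normally reached over a
// WebSocket or extension messaging, simulated in-memory here).
const pageTools = {
  get_issue: ({ id }) => ({ id, title: "Fix login bug" }),
};

// The bridge: translate an MCP-style request into a page tool call
// and wrap the outcome in a JSON-RPC-style response object.
function handleMcpRequest(request) {
  if (request.method !== "tools/call") {
    return { error: { code: -32601, message: "method not found" } };
  }
  const { name, arguments: args } = request.params;
  const tool = pageTools[name];
  if (!tool) {
    return { error: { code: -32602, message: `no such tool: ${name}` } };
  }
  return { result: tool(args ?? {}) };
}
```

For example, a client request like `handleMcpRequest({ method: "tools/call", params: { name: "get_issue", arguments: { id: "JIRA-1" } } })` would be routed to the page's `get_issue` tool. Once browsers and MCP clients support the behavior natively, this whole layer disappears.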
I think the centralization aspect sounds potentially very useful, so I didn't mean it as a deal-breaker. I've been thinking it's a matter of time before someone figures out a good way to centralize MCP tools. This kind of thing could be huge, like the Google of MCP tools.
This seems like a security nightmare: a way to inject insecure content onto everyone's PC, which can then automate actions executed with full user/admin privileges?
I attempted to acknowledge the security implications and am not trying to push this as a product/service - this was just a proposal.
Despite it being just a proposal, I added token-based authentication to mitigate potential abuse, by forcing users to intentionally authenticate with a website before it can be used.
> Standardization effort: We're working to standardize all of these APIs for cross-browser compatibility.
> The Language Detector API and Translator API have been adopted by the W3C WebML Working Group. We've asked Mozilla and WebKit for their standards positions.
> The Summarizer API, Writer API, and Rewriter API have also been adopted by the W3C WebML Working Group. We've asked Mozilla and WebKit for their standards positions.
> We're launching today a public preview for the new Chrome DevTools Model Context Protocol (MCP) server, bringing the power of Chrome DevTools to AI coding assistants.
> Coding agents face a fundamental problem: they are not able to see what the code they generate actually does when it runs in the browser. They're effectively programming with a blindfold on.
> The Chrome DevTools MCP server changes this. AI coding assistants are able to debug web pages directly in Chrome, and benefit from DevTools debugging capabilities and performance insights. This improves their accuracy when identifying and fixing issues.
How could the Chrome DevTools MCP be integrated with the Gemini Computer Use model?
> Competency Story: The customer and product owner can write BDD tests in order to validate the app against the requirements
> Prompt: Write playwright tests for #token_reference, that run a named factored-out login sequence, and then test as human user would that: when you click on Home that it navigates to / (given browser MCP and recently the Gemini 2.5 Computer Operator model)
This explains all the new random GPO settings I had to go disable at the office this week! (A lot of users are reporting performance issues with browsers, seems like all the browsers are adding AI things... seems like a good place to start.)
This is as bad as, or worse than, agreeing to voice search with them.
Hadn't realized we've all been opted-in.
My voice assistant used to be able to create a reminder without siphoning everything out to "must be reviewed because it's AI" remote AI.
Is it possible to use non-AI voice search on YouTube (with GoogleTV) without signing one's life away?
Try voice searching for "weather in [city]" with YT on GTV: it launches another (Google) app instead of just adding text to the search field.
When they asked for suggestions for OpenAI's fork of Chromium, I suggested adding fuzzy and regex search in a drawer and sending it upstream; like vimgrep for Chromium. That would help solve for Search, like the original mission of the company.
A lot has happened since I proposed / built this.
WebMCP is being incubated in the W3C webmachinelearning group, so I highly recommend checking that out, as it's what will turn into WebMCP being in your browser.
https://github.com/webmachinelearning/webmcp