I really don't think this should be a registry-level issue. As in, the friction shouldn't be introduced into _publishing_ workflows, it should be introduced into _subscription_ workflows, where there is an easy fix. Just stop supporting auto-update (through wildcard patch or minor version ranges) by default... Make the default behaviour to install exactly the versions resolved at install time (like `npm ci` does)
No, it does need help on the publishing side. Most places I know are not auto-updating; everything has a lock file. But the nx attack a few months ago happened because the VS Code extension for nx always ran the @latest version to check for updates, or something like that.
So yeah… people will always have these workflows which are either stupid or don’t have an easy way to use a lock file. So I’d sure as hell like npm to also take some steps to secure things better.
As far as I know, using a lock file with npm install is the default behavior, and it doesn’t randomly update things unless you ask it to… though it’s definitely best practice to pin dependencies too
We’re heading to sleep as we’re on London time! Thanks to everyone who commented & for providing feedback. It’s been immensely useful to hear everyone’s perspectives.
Please reach out to us if you would like to at product@kenobi.ai
You shouldn't be! We set a very high bar for traffic to be classified as bot traffic. Sometimes we're at the mercy of the model providers, and there's way more latency + even timeouts under higher load.
We get asked this a fair amount, and our strategy is to give site owners more opportunities to define context as part of the broad site research that goes into creating the interpolations.
If I were to do this, I'd decide which audiences my site was targeting and make sure the landing page had pre-approved content for each of them. Then I'd only use the LLM to rearrange the pre-approved marketing content, so that it puts the content it thinks best targets the visitor above the fold. That way, the worst the LLM can do is order the content incorrectly, and the visitor would need to scroll to see the content that targets them.
Even better, the LLM can make up rules for matching traffic to target profiles (corp IPs show enterprise content, gov IPs show the gov offering, EU IPs show European hosting options, etc.). This way you don't use an LLM while rendering the page, reducing cost and speeding up page loads.
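A minimal sketch of the rule-based approach described above, assuming the visitor has already been classified by IP (a real system would need a GeoIP/ASN lookup for that step). All profile names and content-block identifiers here are hypothetical, not from any real site:

```python
# Pre-approved content blocks per audience profile; only these can ever be shown.
# The rules below are the kind an LLM could generate *offline*, so no model call
# happens on the request path.

PRE_APPROVED_BLOCKS = {
    "enterprise": ["enterprise-hero", "sso-and-audit-logs", "pricing"],
    "government": ["gov-hero", "compliance", "pricing"],
    "eu": ["eu-hosting-hero", "gdpr", "pricing"],
    "default": ["generic-hero", "features", "pricing"],
}

def classify_visitor(ip_org_type: str, country: str) -> str:
    """Map a pre-classified visitor (org type + country) to a content profile."""
    if ip_org_type == "corp":
        return "enterprise"
    if ip_org_type == "gov":
        return "government"
    if country in {"DE", "FR", "NL"}:  # illustrative EU subset
        return "eu"
    return "default"

def render_order(ip_org_type: str, country: str) -> list[str]:
    """Return the ordered list of pre-approved blocks to render for this visitor."""
    return PRE_APPROVED_BLOCKS[classify_visitor(ip_org_type, country)]
```

The key property is the one the comment identifies: the worst case is a wrong ordering, because nothing outside the pre-approved set can ever reach the page, and no LLM runs at render time.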
If you would be willing to give us another chance please email product@kenobi.ai with the site you used and we can get the research context that it generated fixed! (This has been a bit of an issue for some users, when the research gathering agent goes down the wrong track)
We think that in commercial buying there will still be a place for “discovery”, where B2B visitors benefit from being able to independently digest public-facing materials. And that this will mean adoption of agentic browsing is slower than people expect.
However, we have already started experimenting with agentic browsers like Atlas and Strawberry; I built a PoC for the former. But this is still very much experimental!
Edit:
Just wanted to add that your question is a prescient one: it's something we get asked a lot by investors, VCs, etc., but hardly ever by people who run businesses with websites, or by the people who visit them / do commercial buying.
Aah! Thanks for reporting! We’ve seen that in a couple of cases. Feel free to reach out to product@kenobi.ai with the site you’re using and we’ll look into it for you.
So right now (what we’re demoing), we do “on demand” personalisation, so there isn’t really an SEO angle there. However, we started with pre-rendering changes onto hardcoded URLs, and while that did affect content, we didn’t see any SEO issues come up since those URLs were only being used in campaigns.
Thank you. Accessibility came up in another comment and to be honest we've only thought in terms of _preserving_ accessibility so far, not _improving_ it (as you're suggesting) -- would love to see if we could explore something along these lines soon even though right now we're focussed on B2B...
> if the buyer enters their company and the copy just changes into what we think they want, are they going to lose trust that the copy is a true representation of our focus?
Great point. That's pertinent to how we've been configuring the research → computing "intent" pipelines. Our focus right now is mainly just to streamline content and show brands "in context" as much as possible without having too much of an "opinion".
Your idea about showing how specifically the company could be helped + a use-case is a lovely way of putting some of the more complex layout-generation ideas we've been working on!