The thing I have seen in performance work is people trying to shave milliseconds off page loads while they fetch several MBs and do complex operations in the FE, when in reality writing a BFF, improving the architecture, and building leaner APIs would be a more productive solution.

We tried to do that with GraphQL, HTTP/2, ... and arguably failed. Until we can properly evolve web standards we won't be able to fix the main issue. Novel frameworks won't do it either.


RSC, which is described at the end of this post, is essentially a BFF (with the API logic componentized). Here’s my long post on this topic: https://overreacted.io/jsx-over-the-wire/ (see BFF midway in the first section).


But with a considerable amount of added complexity and bulk, and operational drawbacks. A well-designed API (Go, ASP.NET, Java) and a fast SPA (say, Solid) without client-side global data management, just per-component data fetching, are simple and fast. You can use a CDN to cache not only the app but the data.
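To make "per-component data fetching" concrete, here's a minimal sketch in Solid; the endpoint and data shape are made up, not from any real app:

    import { createResource } from "solid-js";

    // Hypothetical endpoint and response shape, purely illustrative.
    const fetchUser = async (id: string): Promise<{ name: string }> =>
      (await fetch(`/api/users/${id}`)).json();

    function UserCard(props: { id: string }) {
      // Each component fetches only the data it renders; no global client store.
      const [user] = createResource(() => props.id, fetchUser);
      return <span>{user.loading ? "Loading…" : user()?.name}</span>;
    }

The component owns its own fetch, so there's nothing to keep in sync in a global store, and the JSON response itself can sit behind the CDN.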


Doesn't that depend on what you mean by "shave ms loading a page"?

If you're optimizing for time to first render, or time to visually complete, then you need to render the page using as little logic as possible - sending an empty skeleton that then gets hydrated with user data over APIs is fastest for a user's perception of loading speed.
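Roughly this pattern, as a sketch in plain React; the endpoint and response shape are invented for illustration:

    import { useEffect, useState } from "react";

    function Dashboard() {
      // Null until the user-specific API call resolves.
      const [data, setData] = useState<{ greeting: string } | null>(null);

      useEffect(() => {
        // Hypothetical endpoint; only this call carries user data.
        fetch("/api/dashboard")
          .then((res) => res.json())
          .then(setData);
      }, []);

      // The skeleton paints almost immediately; content fills in later.
      if (!data) return <div className="skeleton" />;
      return <h1>{data.greeting}</h1>;
    }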

If you want to speed up time to first input or time to interactive you need to actually build a working page using user data, and that's often fastest on the backend because you reduce network calls, which are the slowest part. I'd argue most users actually prefer that, but it depends on the app. Something like a CRUD SaaS app is probably best rendered server-side, but something like Figma is best off sending a much more static page and then fetching the user's design data from the frontend.

The idea that there's one solution that will work for everything is wrong, mainly because what you optimise for is a subjective choice.

And that's before you even get to Dev experience, team topology, Conway's law, etc that all have huge impacts on tech choices.


> sending an empty skeleton that then gets hydrated with user data over APIs is fastest for a user's perception of loading speed

This is often repeated, but my own experience is the opposite: when I see a bunch of skeleton loaders on a page, I generally expect to be in for a bad experience, because the site is probably going to be slow and janky and cause problems. And the more of the site is being skeleton-loaded, the more my spirits worsen.

My guess is that FCP has become a victim of Goodhart's Law — more sites are trying to optimise FCP (which means that _something_ needs to be on the screen ASAP, even if it's useless) without optimising the actual user experience. That means delaying rendering more and adding more round-trips so that content can be loaded later rather than up-front. It produces sites with worse experiences (more loading, more complexity), even though the metric says the experience should be improving.


It also breaks a bunch of optimizations that browsers have implemented over the years. Compare how back/forward history buttons work on reddit vs server side rendered pages.


It is possible to get those features back, in fairness... but it often requires more work than if you'd just let the browser handle things properly in the first place.


Seems like 95% of businesses are not willing to pay the web dev who created the problem in the first place to also fix the problem, and instead want more features released last week.

The number of websites needlessly forced into being SPAs, without working navigation like back and forward buttons, is appalling.


> the experience should be improving

I think it's more that the bounce rate is improving. People may recall a worse experience later, but more will stick around for that experience if they see something happen sooner.


> If you're optimizing for time to first render, or time to visually complete, then you need to render the page using as little logic as possible - sending an empty skeleton that then gets hydrated with user data over APIs is fastest for a user's perception of loading speed.

I think OP's point is that these optimization strategies completely miss the elephant in the room. Sending multi-MB payloads creates the problem in the first place; shaving a few ms here and there with added complexity, while not looking at the cost of handling those payloads, doesn't seem like an effective way to tackle it.


> speed up time to first input or time to interactive you need to actually build a working page using user data, and that's often fastest on the backend because you reduce network calls which are the slowest bit.

It’s only fastest to get the loading skeleton onto the page.

My personal experience with basically any site that has to go through this 2-stage loading exercise is that:

- content may or may not load properly.

- I will probably be waiting well over 30 seconds for the actually-useful-content.

- when it does all load, it _will_ be laggy and glitchy. Navigation won’t work properly. The site may self-initiate a reload, and button clicks are a 50/50 success rate for “did it register, or is it just heinously slow”.

I’d honestly give up a lot of fanciness just to have “sites that work _reasonably_” back.


30s is probably an exaggeration even for most bad websites, unless you are on a really poor connection. But I agree with the rest of it. Often it isn't even a 2-stage thing but an n-stage thing.


At least this post explains why when I load a Facebook page the only thing that really matters (the content) is what loads last


When I load a Facebook page the content that matters doesn't even load.


What's a BFF in this context? Writing an AI best friend isn't all that rare these days...


BFF (pun intended?) in this context means "backend for frontend".

The idea is that every frontend has a dedicated backend with exactly the API that that frontend needs.


It is a terrible idea organizationally. It puts backend devs at the whim of the often hype-driven and CV-driven development practices of frontend devs. What often happens is that complexity is moved from the frontend to the backend. But that complexity is not necessarily inherent; it is often self-inflicted accidental complexity caused by choices made in the frontend. The backend API should facilitate getting the required data to render pages and performing the required operations to interact with that data. Everything else is optimization that one may or may not need.


One huge point of RSC is that you can use your super heavyweight library on the backend and not send a single byte of it to the frontend; you just send its output. It's a huge win that shaves way more than milliseconds from your page.

One example a programmer might understand: rather than sending the grammar and code of a syntax highlighter to the frontend to render formatted code samples, you can keep that on the backend and just send the resulting HTML/CSS to the frontend, by making sure you use your syntax highlighter in a server component instead of a client component. All in the same language and idioms you would be using in the frontend, with almost zero boilerplate.

And if for some reason you decide you want to ship that to the frontend, maybe because you want users to be able to syntax highlight code they type into the browser, just make that component a client component instead of a server component, et voilà, you've achieved it with almost no code changes.
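Something like this sketch, assuming a Next.js-style RSC setup and a highlighter such as shiki (the component itself is invented):

    // CodeSample.tsx, a server component: the highlighter's code, grammars and
    // themes stay on the server; only the rendered HTML reaches the browser.
    import { codeToHtml } from "shiki";

    export default async function CodeSample({ code }: { code: string }) {
      const html = await codeToHtml(code, { lang: "ts", theme: "github-dark" });
      return <div dangerouslySetInnerHTML={{ __html: html }} />;
    }

    // To highlight in the browser instead (e.g. code a user types), mark the
    // component with "use client" and call the highlighter client-side, at the
    // cost of shipping its code and grammars in the bundle.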

Imagine what work that would take if your syntax highlighter was written in Go instead of JS.


Too many acronyms, what's FE, BFF?


I was asking the same questions.

- FE is short for the Front End (UI)

- BFF is short for Backend For Frontend


Front end, and a backend for a frontend, in which you generally design APIs specific to a page by aggregating multiple other APIs, caching, transforming, etc.
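As a sketch, an Express-style BFF endpoint; the upstream services and routes are made up:

    import express from "express";

    const app = express();

    // One endpoint shaped for one page: aggregate several upstream APIs,
    // trim the payload to what the page actually renders, and cache briefly.
    app.get("/bff/profile-page/:userId", async (req, res) => {
      const { userId } = req.params;
      const [user, orders] = await Promise.all([
        fetch(`https://users.internal/api/users/${userId}`).then((r) => r.json()),
        fetch(`https://orders.internal/api/orders?user=${userId}`).then((r) => r.json()),
      ]);

      res.set("Cache-Control", "private, max-age=60");
      res.json({
        name: user.name,
        recentOrders: orders.slice(0, 5).map((o: { id: string; total: number }) => ({
          id: o.id,
          total: o.total,
        })),
      });
    });

    app.listen(3000);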


that's what an LLM would say


> The return to office fad was a big part of this effort, often largely motivated by reacting to the show of worker power in the racial justice activism efforts of 2020.

Not against this point, but I don't get it, maybe because I don't live in the US. I see it as another way to "soft-fire" people, like this AI craze. What am I missing?


People have the notion that tech workers are overpaid man-children that have to be babysat by their companies

The reality is, we're constantly dealing with increasingly tighter deadlines and unreasonable demands to deliver products that, even if they make it to market, are likely to be shut down within five years. We're basically glorified sandcastle builders

With AI, things have gotten even worse; people now believe everything is easy just because they can build a to-do app with a prompt


From what I’ve seen, tech workers *used* to be what you describe; it’s just not like that anymore.

The FAANGs are not that appealing anymore.


Yes, and I already got those


Now, running with scissors


you can also offer your body as fuel


How will that work, practically though? Could we perhaps contain human bodies in efficient gel filled bio-pods, effectively turning them into biological batteries?

That will be very boring for the humans though. Perhaps a virtual reality simulation will keep them satisfied.

Let me know when you can start working on a neural jack interface to a brain.


Agreed, the US market seems bloated, not only in salaries but also in positions; you can find "seniors" with 2 years of experience, maybe a side effect of the pandemic boom.


I'm glad for AI. I was worried that future generations would overtake me; now I know they won't be able to learn anything.


Don't add Elon Musk quotes to any serious thing, please.


no rest for the wicked

