
I'll try to review the article with comments to make this a more critical discussion instead of just hate on Next.js. (I've been a Next.js developer for years now and am quite happy with it - but I do agree it requires some deeper understanding.)

> React is now using the words "server" and "client" to refer to very specific things, ignoring their existing definitions. This would be fine, except Client components can run on the backend too

There were hard discussions about the naming of these things in the beginning. Even calling them "backend" and "frontend" (as the article suggests) wouldn't have been clear about their behavior semantics. I understand the naming annoyances, but it's a complex issue that requires a lot more thought than just "ah, we should've called it like this".

> …This results in awkwardly small server components that only do data fetching and then have a client component that contains a mostly-static version of the page.

> // HydrationBoundary is a client component that passes JSON
> // data from the React server to the client component.
> return <HydrationBoundary state={dehydrate(queryClient)}>
>   <ClientPage />
> </HydrationBoundary>;

It seems they're combining Next's native hydration mechanism with TanStack Query (another framework's data layer) in order to more easily fetch in the browser?
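
For reference, the pattern they seem to be describing looks roughly like this minimal sketch, assuming TanStack Query v5's HydrationBoundary/dehydrate API (the getUser fetcher is hypothetical):

    // app/page.tsx (server component): prefetch on the server, then
    // hand the dehydrated cache to a client component to hydrate.
    import {
      dehydrate,
      HydrationBoundary,
      QueryClient,
    } from '@tanstack/react-query';
    import { ClientPage } from './ClientPage';
    import { getUser } from './api'; // hypothetical fetcher

    export default async function Page() {
      const queryClient = new QueryClient();
      await queryClient.prefetchQuery({ queryKey: ['user'], queryFn: getUser });
      return (
        <HydrationBoundary state={dehydrate(queryClient)}>
          <ClientPage />
        </HydrationBoundary>
      );
    }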

To follow up on their WebSocket example, where they need to update a user card's state when a WebSocket connection sends data: I don't see what the issue would be with just using a WebSocket library inside a client component. I imagine it's something you'd have to do in any other framework, so I don't understand what problem Next.js caused here.
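
Something like this minimal sketch is all a client component should need (the endpoint and message shape are assumptions on my part):

    'use client';

    import { useEffect, useState } from 'react';

    export function UserCard({ initialName }: { initialName: string }) {
      const [name, setName] = useState(initialName);

      useEffect(() => {
        // Hypothetical endpoint; any WebSocket library works the same way.
        const ws = new WebSocket('wss://example.com/user-updates');
        ws.onmessage = (event) => {
          const update = JSON.parse(event.data);
          if (update.name) setName(update.name);
        };
        return () => ws.close(); // clean up on unmount
      }, []);

      return <div>{name}</div>;
    }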

What they're doing screams hack, and it's probably the source of their issues in this section.

> Being logged in affects the homepage, which is infuriating because the client literally has everything needed to display the page instantly

I'm not sure I understand this part. They mention their app is not static but fully dynamic. How, then, would they avoid showing a loading state in between pages?

> One form of loading state that cannot be represented with the App Router is having a page like a git project's issue page, and clicking on a user name to navigate to their profile page. With loading.tsx, the entire page is a skeleton, but when modeling these queries with TanStack Query it is possible to show the username and avatar instantly while the user's bio and repositories are fetched in. Server components don't support this form of navigation because the data is only available in rendered components, so it must be re-fetched.

You can use third-party libs to achieve this idea of reusing information from one page to another. An example is Motion's AnimatePresence, which allows smooth transitions between two React states. Another possibility (reusing data from an earlier page) is to integrate directly with Next.js's new View Transitions API: https://view-transition-example.vercel.app/blog <- notice how clicking on a post shows the title immediately
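
For the AnimatePresence part, a minimal sketch (assuming framer-motion; the routeKey prop is a hypothetical per-page key):

    import { AnimatePresence, motion } from 'framer-motion';
    import type { ReactNode } from 'react';

    // Cross-fades between two page states: the exiting page stays
    // mounted (with its data) until its exit animation completes.
    export function PageTransition(props: { routeKey: string; children: ReactNode }) {
      return (
        <AnimatePresence mode="wait">
          <motion.div
            key={props.routeKey}
            initial={{ opacity: 0 }}
            animate={{ opacity: 1 }}
            exit={{ opacity: 0 }}
          >
            {props.children}
          </motion.div>
        </AnimatePresence>
      );
    }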

> At work, we just make our loading.tsx files contain the useQuery calls and show a skeleton. This is because when Next.js loads the actual Server Component, no matter what, the entire page re-mounts. No VDOM diffing here, meaning all hooks (useState) will reset slightly after the request completes. I tried to reproduce a simple case where I was begging Next.js to just update the existing DOM and preserve state, but it just doesn't. Thankfully, the time the blank RSC call takes is short enough.

This seems like an artefact of the first issue: trying to combine two different hydration systems that aren't really meant to work together?

> Fetching layouts in isolation is a cute idea, but it ends up being silly because it also means that any data fetching has to be re-done per layout. You can't share a QueryClient; instead, you must rely on their monkey-patched fetch to cache the same GET request like they promise.

Perhaps the author is missing how React's cache works (https://react.dev/reference/react/cache) and how it can be used within Next.js to cache fetches _PER TREE RENDER_, avoiding this problem entirely.
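
A minimal sketch of that API (the endpoint and getIssue fetcher are made up):

    import { cache } from 'react';

    // Deduplicated per server render: every layout and page that calls
    // getIssue(42) during the same tree render shares a single fetch.
    export const getIssue = cache(async (id: number) => {
      const res = await fetch(`https://api.example.com/issues/${id}`);
      return res.json();
    });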

> This solution doubles the size of the initial HTML payload. Except it's worse, because the RSC payload includes JSON quoted in JS string literals, which format is much less efficient than HTML. While it seems to compress fine with brotli and render fast in the browser, this is wasteful. With the hydration pattern, at least the data locally could be re-used for interactivity and other pages.

Yes, sending data twice is an architectural hurdle required for hydration to work. The idea of reusing that data on other pages was discussed above, via things like AnimatePresence.

What's important to note here is that the RSC payload sits at the bottom of the HTML. Since HTML is streamed by default, this won't impact time to first render. Again, other frameworks need to do this as well (in other ways, but it still needs to happen).

I totally understand the author's frustrations. Next.js isn't perfect, and I also have lots of issues with it. Namely, I dislike their intercepting/parallel routes mechanism, and setting up ISR/PPR is a nightmare. I just felt the need to address some of their comments, so maybe it can help them?

As a first step, I would get rid of TanStack Query, since it's fighting against Next.js's architecture.

Or yeah just move entirely elsewhere :)


> but I do agree it requires some deeper understanding

You are agreeing with who?

"large disagreements about fundamental design decisions" is not a lack of understanding. NextJS is the problem.


I mean, if you want to be an artist, you really can. I've lived out of a van for almost a year... just for adventure. I traveled close to beaches where I could shower. I spent literally close to nothing day-to-day. I could cover my weekly expenses just by working at a few restaurants on the weekend, and enjoy the beach the rest of the week surfing.

Now that I'm back at my normal office coding job, I feel like I'm actually saving less money, because I have rent and general city life to spend money on. It's all about the comforts one is used to.

The story of artists not having enough money is probably about people who are used to too many comforts. I've seen people complain they didn't have enough money to get by, whilst living in an apartment close to a densely populated city and having a car... get rid of those comforts if you want to make it!


> then why am I paying for a senior?

Because they know how to talk to the AI. That's literally the skill that differentiates seniors from juniors at this point. And it's a skill you gain only by knowing the problem space and having banged your head against it multiple times.


The actual skill is having knowledge and knowing when to not trust the AI, because it is hallucinating or bullshitting you. Having worked on enough projects to have a good idea about how things should be structured and what is extensible and what not. What is maintainable and what not. The list goes on and on. A junior rarely has these skills and especially not, when they rely on AI to figure things out.


>That's literally the skill that differentiates seniors from juniors at this point.

If your product has points where LLMs falter, this is a useless metric.

>and having banged your head at it multiple times.

And would someone who relied on an LLM be doing this?


Except most junior devs will be better than senior devs at wholehearted AI adoption.


Cool, but how does it compare to something like subreddits? There are still biased moderators behind the scenes, just like subreddits. It seems to lack the upvoting/downvoting side of it, which imo is crucial to democratizing the entire thing.

I think upvoting/downvoting is a crucial aspect of news/information/knowledge. But we've been doing it with plain numbers all along. Why not experiment with weights or more complex voting methods? Ex: my reputation is divided into categories - I'm more of an expert in history than politics, hence my votes on historical subjects carry more weight. Feels like that's the next big step for news, instead of just another centralized aggregator?
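
Roughly what I'm imagining, as a sketch (the categories and reputation model are made up):

    type Category = 'history' | 'politics' | 'technology';

    interface Voter {
      // Per-category reputation, e.g. earned from past contributions.
      reputation: Record<Category, number>;
    }

    // Weight each vote by the voter's reputation in the story's
    // category, instead of counting every vote as exactly 1.
    function weightedScore(
      votes: { voter: Voter; direction: 1 | -1 }[],
      category: Category,
    ): number {
      return votes.reduce(
        (sum, v) => sum + v.direction * v.voter.reputation[category],
        0,
      );
    }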

No offense to the cool system and website though


Next.js hydrates only client components - so effectively it's doing islands architecture. And it's React end to end. How is that different from Astro? Stating things like "Components without the complexity" doesn't really mean anything unless you do some comparisons.


Client vs server routing


I was inspired by Bret's articles at a young age. They made me think of software more from a visual perspective. Even re-reading this article now, after many years, inspires me to think of possible ways we can improve building visual systems - thinking more from a designer's perspective rather than an engineering one - and reminds me how far ahead his thinking was.

Even his imaginary "snapshot/example-driven design tool" (described at the end of the article) seems quite intriguing and thought-provoking. I wonder if, with AI being so easily accessible nowadays, a retake on this tool could provide something that is actually usable and useful to people?


Will this whole idea of writing more server-side logic pay off?

I'm not sure. It felt like we were moving towards dumb backends that sync automatically to frontends containing most of the logic. Things like https://localfirstweb.dev/ or https://electric-sql.com/ felt like the future.

Writing more server code (as quoting/react-server-components are suggesting) will increase the surface area where errors can occur. Limiting logic to just one side - server or client - feels like a much saner approach.


That's a good question! Maybe I'll post about that some time. I don't think sync engines are "the future", although they have their uses. But if you care about large-scale aggregation, private data with sophisticated permission models, aggregation across private data, feeds, and such, server/client is a much more powerful model. For example, I don't think you can implement something like Twitter in a pure peer-to-peer fashion without compromising a lot on functionality.


> I don't think sync engines are "the future" although they have their uses

I think the same argument applies to RSC. For many use cases, it doesn't make sense. Many organizations and projects don't need SEO or server code specifically for their FE. If the organization has committed to an API service in order to support a range of clients, then an RSC/React server framework is "pure overhead."

As someone who has been building with React for a decade, RSC was the moment where I felt the complexity vastly outweighed the benefit. I'm in a position where I can argue that SPAs are dramatically simpler to implement compared to RSC/nextjs, which I think would be surprising to outsiders who bemoan SPAs as complex.

I find the "preload then rehydrate" data synchronization model simpler to understand and can turn even the slowest APIs into an app that feels instant: https://starfx.bower.sh/learn#data-strategy-preload-then-ref...


I'm not trying to get you or anyone to adopt RSC, I'm just trying to explain that if you're bought into the idea that a dedicated backend for your frontend is good (and there are plenty good reasons for people to come to this conclusion), there exists a way to structure the code of that backend that has interesting compositional properties.

I think that's a slightly different kind of message than "servers are unnecessary, peer-to-peer is the future". You can keep servers dumb, but some kinds of product features demand the servers to do the job. And then if you want to squeeze the most out of the server as it relates to your client app, RSC is one way to look at it.


> You can keep servers dumb, but some kinds of product features demand the servers to do the job. And then if you want to squeeze the most out of the server as it relates to your client app, RSC is one way to look at it.

I feel like the design space of making the server as dumb as possible is not sufficiently explored yet. I'm imagining PWAs that work offline by default, hosted on static hosting, talking to CORS-unlocked PKCE-authenticated APIs, storing their state as dumb files in APIs like Dropbox, and doing all of the cross-client p2p syncing and merging client-side inside a service worker.
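
For instance, the "state as a dumb file" part could be as simple as this sketch against Dropbox's files/upload endpoint (the access token would come from the PKCE OAuth flow, elided here):

    // The server stays completely dumb: it's just file storage.
    // The whole app state is persisted as one JSON file.
    async function saveState(accessToken: string, state: unknown): Promise<void> {
      await fetch('https://content.dropboxapi.com/2/files/upload', {
        method: 'POST',
        headers: {
          Authorization: `Bearer ${accessToken}`,
          'Dropbox-API-Arg': JSON.stringify({ path: '/state.json', mode: 'overwrite' }),
          'Content-Type': 'application/octet-stream',
        },
        body: JSON.stringify(state),
      });
    }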

It wouldn't work for all categories of software, but so much productivity software ultimately reduces to a per-user file paradigm instead of a central database (outliners, notes, task managers, image editors, …) that I think a lot of complex web apps could be built this way. They wouldn't work well on low-end Android phones, but most of the products in those categories already don't work well there, even with half their logic still on the server.

And yes, I know, Apple does not play nice with PWAs, but I still think there’s something there that I wish more people would explore.


I've been a webdev for 20 years, and I didn't understand what any of that post meant.


How did they get a hold of the enron.com domain?


They probably now own the name as well as the domain.

It doesn't seem like someone hijacked the domain, but instead actually bought it. There's a FAQ which openly states their website is protected speech as "parody".


To me this is what all AI feels like. People want "hard-to-make" things because they feel special and unordinary. If anybody with a prompt can do it, it ain't gonna sell.

