For me it is difficult to write good code comments right when the code is written. The problem is solved, and the tricky parts, if any, are internalized. I don't mind reading code, so comments that just document what the code is doing seldom bring value. The important thing is to document why the code does things in a non-obvious way, plus unintuitive scenarios, edge cases, etc.
Revisiting code is the best time to add comments, because that's when you find out what is tricky and what is obvious.
Code reviews are also good for adding code comments. If the reviewers are doing their job and actually trying to understand the code, then it is a good time to get feedback on where to add comments.
There is a hack for that: write the comments before and/or as you write the code, while things are still unclear and weird.
Of course do a final pass on them to ensure that they are correct and useful in the end.
This is one example of documenting as you go, instead of doing it after "the work" is done. I find it generally leads to better outcomes. Documenting only at the end is in many ways the worst way to do it.
Here's my method, which is a bit similar to the siblings.
Your first "docs" are your initial sketch: the pieces of paper, whiteboard, or whatever you used to formulate your design. I then usually write code "3" times. The first is the hack phase. If in a scripting language like Python, test your functions in the interpreter, isolated. "Write" 2 is bringing them into the codebase, and it is a good idea to add comments here; you'll usually catch some small things. Write the docstrings now, which are your 2nd docs and your first "official" ones. While writing those I usually realize some ways I can make my code better. If in a rush, I write these down inside the docstring with a "TODO". When not rushing I'll do my 3rd "write" and make those improvements (realistically this usually means doing some and leaving TODOs).
This isn't full documentation, but at least what I'd call "developer docs". The reason I do things this way is that it helps me stay in the flow state, but allows me to move relatively fast while minimizing tech debt. It is always best to write docs while everything is fresh in your mind. What's obvious today isn't always obvious tomorrow. Hell, it isn't always obvious after lunch! This method also helps remind me to keep my code flexible and containerize functions.
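To make the "write 2" stage concrete, here is a toy sketch of what a docstring with a banked "TODO" might look like; the function itself is a hypothetical example, not from any real project:

```python
def normalize_scores(scores):
    """Scale a list of numeric scores into the range [0, 1].

    Non-obvious bit: an all-equal input would divide by zero,
    so it is special-cased here instead of left to the caller.

    TODO: accept any iterable, not just lists. (Improvement
    spotted while writing this docstring; left for "write 3".)
    """
    lo, hi = min(scores), max(scores)
    if hi == lo:  # all values equal: avoid division by zero
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]
```

The point isn't the function itself; it's that the "why" comment and the TODO get written while the tricky part is still fresh in your head.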
Then code reviews help you see other viewpoints and things you possibly missed. You can build a culture here where during review TODOs and other similar things can be added to internal docs so even if triaged the knowledge isn't completely lost.
The method isn't immutable though. You have to adapt to the situation at the time, but I think this is a good guideline. It probably sounds more cumbersome than it is, but I promise the second and third writes are very cheap[0]. It just sounds like a lot because I'm mentioning every step.[1]
[0] Even though I use vim, you can run code that's in the working file, like notebook cells. So "write 2" kinda disappears, but you still have to do the cleanup, so that counts as "write 2".
[1] Flossing your teeth also sounds like a lot of work if you break it down into all subtasks 1) find floss, 2) reach for floss, 3) open floss container, ...
Please don't. It will just become the usual Americanized, trope-filled insult to the original story. Stick with Marvel to produce your generic entertainment.
Oh boy, it's not just unentertaining, but a grotesque farce: what you get when you want to check all the bingo boxes for making something bankable, and in doing so completely destroy the original content.
In the first minutes it's already game over: they try to get you interested in the human characters, in a book where the main protagonist is an entire galaxy over centuries. All the named characters are going to die every 3 chapters. I get that you can't get ROI on your expensive hot actors' faces, but that makes no sense for the material you are working with!
Is it bad? I never finished the novels and was looking forward to the show, I just haven't had time to pick it up. Should I skip it entirely, or is it an "it isn't great, but watchable and you will understand the story" adaptation?
The story is split in two, and thus far spans some 30-40 years. One part has been able to focus on and develop the same characters and themes over that course, owing to a quirk in how those characters work into the plot, while the other part has been hurriedly shuffling characters, motivations, and mysteries on and off the stage to keep pace with that same timespan.
I wager it's harder for the writers to get the second partition down pat, as they're still getting used to adapting a story that has to move quickly owing to the limitations of TV, but it's clear from the first partition that they can write good material. I think the first season will be rough for them, but a second should be more promising, once they get a handle on working with a plot that spans space and time on a grander scale than Game of Thrones.
Probably nice if you haven't read the books, because the direction is spotless.
But if you are a fan of the original story, it's a spit in the face of a masterpiece.
The title, names and arcs are basically used for branding.
It was predictable: the Foundation cycle is mostly dialogue spanning hundreds of years. Little attention is given to action, and the characters, which are superficially developed and serve mostly as vehicles for the history, die of old age every few chapters or so.
Very hard to adapt, and pretty much impossible to make money from outside of the show itself, with the faces of your actors, by selling toys or pretty symbols. Also, good luck keeping the general public hooked: there is no focus on sexuality, little humor, hardly anyone relatable; they talk a lot, then goodbye.
So they made it a blockbuster. It sure can work, but it's not Foundation.
It is amazing, by far the best science fiction TV show in years. But it isn't really doing justice to the books; it's more using them as a loose focal point to spin its own story. They take things that are mentioned as a side note in a chapter and make a whole storyline out of them.
Which is actually what I hoped for, since a) the format of the books is impossible to translate closely to a TV show or movie and b) Asimov is definitely a bit high and low in his writing. The first two books of Foundation are amazing, afterwards it gets quite meandering.
My main criticism would be the mystification of psychohistory, which is a very serious flaw, but I'm willing to overlook it given the serious lack of good science fiction entertainment.
I've seen nothing in it that amazes me. The last thing that came close was the SFX in the intro of Raised by Wolves, where the ship suddenly pops out of hyperspace through a disc-like membrane that appears out of nowhere, stretching it until it bursts, then speeds down to the planet as the membrane vanishes. Just a few seconds. The rest? Utterly forgettable.
It's like having a Cheeseburger. Or a Banana. Not even an Apple.
This is a nice summary of the underlying pattern of the adaptation. If they actually understood the idea and followed the spirit, I think it wouldn't be too hard to add concrete details that substantiate the show. It is this inherent desire not to describe the science precisely, but to deliberately gut the details and slap liberal values onto it, that causes the whole detachment of form from spirit.
I cannot judge that, since I love the books so much that I stopped watching the TV series after episode 2.
I saw a review saying "it's nice if you don't consider its connection to the books", i.e., by itself its quality is good. How true that is for any given person, I guess, has to be discovered by watching it.
The README does not specify that "minimalist" refers only to the UI, though. If I read "minimalist software" I would assume a more holistic interpretation unless otherwise specified.
Maybe replace minimalist with modern for a more correct description of software with 100s of dependencies? (Joke!)
Nitpicking aside, the app looks good and I do like this TUI trend!
This is a rather pretentious take. Not sure I agree that a lowest-common-denominator (i.e. violence and sex) TV drama counts as common culture. The setting could be anything; it would still sell using the same tropes. I guess it is in a way similar to how McDonald's is European "common food culture".
I have been daydreaming about a program that takes a Factorio blueprint and runs it through a genetic algorithm to produce a compressed but still functional version. I can imagine it could produce some interesting spaghetti layouts! One way to ensure that the end result still works the same would be to only let the algorithm work with a selection of predefined refactoring steps that guarantee the factory does not break. Another, maybe more interesting, approach would be to include an actual simulation that could test the factory. With the simulation, the algorithm could work more freely with possibly destructive changes and apply them until the factory works as expected. This, I figure, would produce even more interesting spaghetti.
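A rough sketch of the simulation-based variant. Everything here is hypothetical: the "blueprint" is just a list of machine coordinates, the "simulation" is a stand-in overlap check, and the GA is degenerate with a population of one, i.e. a hill climb:

```python
import random

def bounding_area(layout):
    """Area of the layout's bounding box; smaller means more compressed."""
    xs = [x for x, _ in layout]
    ys = [y for _, y in layout]
    return (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)

def is_functional(layout):
    """Stand-in 'simulation': the factory works if no two machines overlap."""
    return len(set(layout)) == len(layout)

def mutate(layout, rng):
    """Nudge one random machine by one tile (a possibly destructive change)."""
    i = rng.randrange(len(layout))
    x, y = layout[i]
    dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    out = list(layout)
    out[i] = (x + dx, y + dy)
    return out

def compress(layout, generations=2000, seed=0):
    """Apply random mutations, keeping only those the 'simulation' approves."""
    rng = random.Random(seed)
    best = list(layout)
    for _ in range(generations):
        cand = mutate(best, rng)
        if is_functional(cand) and bounding_area(cand) <= bounding_area(best):
            best = cand
    return best
```

A real version would swap `is_functional` for an actual factory simulation (belt connectivity, throughput tests) and use a proper population with crossover rather than a single mutating candidate.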
A more natural way would be to arrange for people to work together in smaller groups on tasks as part of the normal work. If you provide this, then I agree with the parent poster: if there is social interaction to be had, it will come. If not, that's OK, as long as we can work together in reasonable harmony. I value professionalism; if I can make a friend through work, that is great, but it should not be the norm that we all have to be friends to get along.
I am not a front-end developer, but looking at it from a distance I really don't get modern web design. Sure, some sites might need fancy JavaScript single-page features, like if your webpage is an interactive map or a realtime game, but most sites are just text and some pictures. What's with all the JavaScript? Your site looks just like the next one anyway! It feels like an "Emperor's New Clothes" situation, or, maybe more likely, I just don't understand the allure as a clueless user. I am almost tempted to make a webpage to see what the big fuss is about!
As someone involved in hiring, this is what Bootcamps and Universities are teaching, and what companies are looking for: backend spits JSON, frontend consumes it using React.
Rendering HTML on the server is not really "the default" anymore as it was 10 years ago: it's more of an optimization for when your React site is slow, and it's a black box to most people. Even static websites are "strange tech" to new graduates outside the HN bubble.
Also, developers hate mixing tech. You mentioned an "interactive map" in your example. This can be made with React or something like that, right? The issue is that a lot of developers will want to use React for everything else on the page, because they think it's "icky" to use other kinds of tech in other parts of the website. They sorta have a point (the "microfrontends" discussion was a thing a couple years ago), but on the other hand they're not considering the tradeoffs.
Also, the frontend is officially the centre of the application at medium-sized companies (50+ devs). It's way easier to add new code to the frontend and spin up another microservice than to coordinate between multiple teams of backend engineers.
I'm not saying this is good or bad, btw. It's just how it is.
EDIT: One thing that really bothers me is that people fresh in the industry don't really believe that websites were faster 10 or 20 years ago, so I don't really see any light at the end of the tunnel. Sure, we can do new things on the web, but what was already possible before has been made slower by our collective refusal to "use the right tool for the job". Even the frontend tooling today is very heavy and slow, and I'm on a 2020 MBP. I don't think we progressed much. React is an amazing idea (and the implementation is great), but the community has become too dogmatic.
A few years ago when teaching at a previous coding bootcamp that started with FE JavaScript, I remember my surprise when well-performing students got through 3 months or so of it and were confused and very impressed when I showed them how an <a> tag worked, since they had only been aware of (jQuery) JavaScript powered pages. When you are stuck just doing JS powered SPAs, an <a> tag seems like advanced technology!
I ended up at a new school creating a new curriculum. The approach is to "recapitulate the evolution of the web", so we start with SSGs & server-side programming (Python/Django), then only at the end cover SPAs and React.JS -- since, as you mentioned, that's still the main skillset that companies are hiring new devs for.
Almost every JS app I have seen uses <a> tags though. Even if they are just links to # pages. With most router libraries you can even use real paths and it works entirely on the front end. Sounds like you found one anecdotal case that didn't know this.
Maybe I wasn't clear -- what I meant to say is that the curriculum barely covered <a> tags, and instead started with JS DOM manipulation, which meant students were using document.location on click events since they weren't taught anything else. This isn't a criticism of JS best practices; it's a criticism of how it's taught, specifically in this curriculum that's sold as a white-label curriculum and used by MANY bootcamps across the US. It might even be the most popular coding bootcamp curriculum in the country. It's possible they've improved it since I was teaching it, but it definitely was not the case a few years ago!
Exactly. And sometimes they're not even taught it! But since they also weren't taught the foundations, they won't know how to "ask" questions. So they end up searching for "how to go to another page using Javascript", which will obviously yield results with window.location.
It definitely happens. I've seen some internship/junior interview tests where candidates use javascript's window.open or window.location to link to some fixed destination rather than just using an anchor tag.
>The issue is that a lot of developers will want to use React for everything else on the page, because they think it's "icky" to use other kinds of tech in other parts of the website.
It is though. I work on an app that's 80% React and 20% Rails SSR, and when working on anything, seeing that an area that needs a change is written with SSR makes the job 10x harder, as you have to come up with alternative methods to get it working or just rewrite it in React. When everything is React, everything is quite easy: you can pass around data and update things without a refresh.
> seeing that an area that needs change is written with SSR makes the job 10x harder as you have to come up with alternative methods to get it working or just rewrite it in react
Without details it's hard to know what you mean. Do you have an example?
It sounds more like the SSR part was not the right tool for the job in this case (or maybe it was, from the speed point of view, if not development speed). Your point seems to be that React is better than SSR, not that mixing technologies is bad.
Thanks for the response, very informative, I feel it explains a lot of what I was confused about.
So maybe a simplified view for me is: massive adoption of JS + frameworks, cookie-cutter development, and no point learning anything else when you have the golden hammer.
I actually thought you still learned HTML and such before digging into the frameworks.
But maybe it is analogous to learning assembler before Python, i.e. out of fashion and not needed to get the job done.
In this context, what seems to me like the simpler and more suitable solution (static HTML + some minimal JS) might not be on the map for most modern web developers. What they do is overkill, but it is what they know and it gets the job done.
One thing I love about Svelte/Sapper is that it can be used for both the server-side rendering and the interactive map. And it doesn't feel like an afterthought - idiomatic usage leads to server-side rendered html with progressive enhancement of JS.
Oof. Yeah, I am now very glad I got out of that game; it does not sound like a nice place.
SPAs were for a specific, narrow, set of use cases where it really didn't make a lot of sense to even have a backend...
Companies don't really have the incentive to build good software, though. Fads and cargo-cult "let's use this popular BS" thinking tend to catch on more when the name of the game isn't software, but pandering to a job market. Outside of sudden and painful corrections, that market seems to promote bad engineering, probably because bad engineering costs more over time to maintain and thus creates more demand, in a 'negative' positive feedback loop...
Trying to legislate it back in, by pointing out that most sites are hot piles of garbage that are unusable by anyone with any sort of impediment (physical, technological), seems like a nice idea. I don't see much growth in that area anyway, so perhaps it is time for strict law enforcement...
The fact is, the Javascript ecosystem is unmatched when it comes to very quickly creating frontend applications. Maybe another set of tools would have been better, but that doesn't really matter. This set of tools is what everyone uses, and a lot of effort and creativity goes into making js frontend development as smooth and fast as possible.
I often need to very quickly make internal services at my job, and while I like working in other languages (and I do for longer-term projects), those always take longer to set up and get working. In JS with Next you're up and running in less than 60 seconds, and if you're doing CRUD stuff, most of the work is frankly done for you.
Rails was already faster for CRUD apps back in the mid-2000s. Batteries included, maybe a gem or two for an admin panel or for auth. Not that it matters, but rails new is probably faster than 60 seconds too. Django is pretty good too.
ASP.NET WebForms was made for CRUD, and it even provided a WYSIWYG designer back in the early 2000s. And it was just a matter of launching Visual Studio and creating a new project. Again, doesn't matter, but it was/is way less than 60 seconds, and a lot less fiddling. WebForms got an MVC version a few years later if you prefer that. And they haven't stopped: Blazor is new-ish and very productive, and runs both in the backend and in the frontend (using WASM).
Both Rails and Microsoft tech also give you a backend, which you're not putting into your equation. Sure, you can have a backend in JS, but the experience is nowhere near as ergonomic or as fast as using Rails or ASP.NET.
Sure, if you're comparing with building an app in Xcode or Android Studio then JS tooling is faster to use. But if you compare to what people were using to build CRUD for the last 20 years then JS is not really special. For interactive frontend apps? Then it's a different discussion. But for CRUD, modern JS is not that special.
I can't really speak to all the technologies you mentioned, but some of the other apps I maintain are in python using django. It is also "batteries included", and does give you more tools out of the box for backend work. It's actually really nice to work with and maintain in my opinion.
I would say though, that for very quickly creating new applications that get the job done, js has been the fastest for me. And while django and similar give you more database stuff to work with, honestly that feels like a solved problem, and it's often easier to not have to do any backend work at all, and instead use something like hasura. And next does include backend code too for functions that you need to write for the backend.
Obviously, this isn't ideal for a lot of problems. I think if you can get away with it, you can be really productive. I've made crud apps for work with this stack in very little time.
It's unbelievable how many things Django gets right out of the box. I am talking about things like i18n and l10n, which are usually an afterthought for most of these frontend frameworks, but Django already includes an amazing system for them.
You cannot compare those though. Rails is entirely different from plain Ruby. If you want to compare you need to take something like NestJS, which approaches Django in immediate useability.
Rails is an excellent choice for the backend, but on the frontend it's much less productive than React. Most Rails apps these days would use Rails as an API backend and React on the frontend.
> and if you're doing crud stuff, most of the work is frankly done for you.
React sucks for large complex forms unless you pair it with another technology or two.
Want to write a crud app fast? C# and Winforms.
A couple years back, I did a prototype of a web app in C# and WinForms connected to Firebase. Less than a day, done.
A month later, I had it working in React. Granted, the React website looked nice, but the difference in efficiency is huge.
Data binding 20 fields in C# is a matter of minutes. Data binding 20 fields in React, and with Redux, ugh. I really should have just learned Redux Forms.
And for any given forms library in React, you then get to practice learning how to beat it into shape and style it how you want.
I, no kidding, think writing CRUD apps was easier with VB6 and Microsoft Access.
That's really interesting, I should look into that pattern, maybe it can be applied elsewhere (I have to maintain a vb6 application at work, and I'm not terribly interested in doing that anymore).
Yeah working with redux sucks in general imo, I try to avoid it when I can. If I can get away with it, I try to avoid the problem entirely by using hasura as a backend. But sometimes that's not viable.
> That's really interesting, I should look into that pattern, maybe it can be applied elsewhere (I have to maintain a vb6 application at work, and I'm not terribly interested in doing that anymore).
I hadn't used C# for years; I was shocked that the WinForms designer, duct-taped to a community-contributed Firebase library, was able to get a prototype out in hours.
I had been using JS for 2 years at that point. I estimate a 10x-15x productivity boost in C# over JS.
The C# tooling is just that good.
Now if I'd been trying to learn XAML or something at the same time, eh, probably wouldn't have gone as well.
Of course the downside here is that a Winforms app is hard to distribute, and usable only on Windows desktops.
The react web app was, well, beautiful, and usable everywhere.
But I could have written the C# app once for every platform I care about and still been better off.
Ugh, I am sad that Winforms is only ever going to be Win32 based.
I'm interested in knowing how you get to do "crud stuff" so fast with Next.js
- How do you do validations?
- How do you handle database migrations?
- How do you handle background jobs?
- How do you handle authentication?
- How do you handle authorization?
- How do you handle file uploads (to filesystem or S3)?
- What ORM or database toolkit do you use for queries?
- How do you send emails?
- How do you do websockets/real time?
All of these of course have answers, but each one of these comes with a lot of "decision fatigue", discussions, maintenance work and I highly doubt the cost of reaching the same level of robustness of a full stack framework is any smaller.
Real-life production "CRUD stuff" is not just some HTTP handler doing SQL inserts into SQLite. There's a lot more involved. That's what Rails/Django/Symfony give you a solution for. I agree Next.js could have a faster startup for a landing page. But as soon as you need the most minimal backend, you are way, way further from getting something robust working if you're not using one of the full-stack frameworks.
This is just a personal preference. I would say the same about Python / Django: you get extensible login/signup/password-reset/etc. authentication & user-permissions management, and even an admin interface with user groups, right out of the box. I can put together a web app in "60 seconds" that would take weeks to assemble using the JavaScript ecosystem. Having taught classes on both, I'd also say it's easier for beginners. Again, might be my own bias or other factors, but it seemed to me that the Python/Django students' final projects have on average outshone the JavaScript SPA students' final projects in terms of features they had time to complete.
This isn't to hate on JS, this is just to acknowledge that other languages and frameworks have substantial advantages in many use-cases.
Django's user system is .. limited. You need a bunch of third party libraries to add very commonly used things such as auth APIs, oauth, 2fa, etc.
I've worked with Django for over a decade and I dislike it more and more. It hasn't evolved to match the environment around it and how best to use it. Typing is missing. Something like FastAPI is very promising, but Django's admin is still superb for prototyping and its ORM is a lot more intuitive, so I put off switching away...
Agreed, it definitely is limited. Also the admin panel is limited too, and although it's very extendable, the admin extension API is kind of clunky and inconsistent at times. Yeah FastAPI looks great! I've used Sanic in the past and FastAPI seems cool for some of the same reasons. I'm also following closely Django Hotwire - https://hotwire.dev/
I only brought up Django in a narrow context to challenge an assumption made by the post I was replying to, that seemed to imply that JavaScript ecosystem is indisputably the fastest for rapid prototyping. Hence my providing of a counter-example. I think this is especially true for new coders, as having certain things out of the way is super vital to keep motivated when building MVPs -- I've seen many students give up on ideas simply because implementing authentication with a Node.js-based stack was too challenging, when they would have been happily coding away had we been teaching a batteries-included framework (django / rails / etc).
Funny you should mention that, I'm watching that too, and I can't wait until that's to the point where it's just as easy and seamless to get something up and running in django or a similar framework, while also having all the advantages of a frontend application and a backend one. Hotwire does look very intriguing to me.
I think overall we need a new wave of backend frameworks with what we've learned in the past decade. I'm hoping it's in python or clojure, but my guess is it's going to be in javascript (see react server components for one potential contender).
I've been in the NextJS world lately and it's honestly gotten really good. Nowhere near perfect yet but they are clearly on the right track and it's already more than usable.
Last time I hung around #django, they said admin pages are for prototyping and technical admins only. The admin is not meant to be extensible. For true power without issues, you should roll your own CRUD views. Django has features which make that easier, such as viewsets.
I see your point, and I agree it's definitely not a good solution to some problems. But I think it can be very productive if your constraints allow you to use it.
That would be fine if the attitude were 'best tool for the job'...
Often there is the people element to tech, though, and that isn't the attitude. Normally the optics are: your tech wiz kid is doing something cool with (metaphorical) peanut butter sandwiches, so let's build our next office complex with peanut butter sandwiches. Architecture? What's that? Move fast, break things, eat lots of sandwiches.
Javascript is as privileged as programming languages get. It's welded into all web browsers except perhaps Lynx. To get traction, an alternative language would need to be simultaneously included in several browsers. Good luck with that.
Some people are used to using javascript packages (so importing a whole framework when they are only using a small portion of the functionality) to help with UI like collapsing/expanding menus depending on the screen width. You can do a lot of those things with pure CSS now, but that's a more recent thing and a lot of popular tutorials are still JS + CSS.
From what I can tell the short answer is that you're right, there's really no good technical reason for all that weight. Which is to say, it's just bloat.
The JS performance difference between high- and low-end devices is stunning (9s vs 32s load times, based on the Medium article). Web devs, who are often used to the latest and greatest devices, will have no idea how terribly their code performs on slower devices. And I fear that modern CPUs with excellent JS performance will only exacerbate this issue.
For the last 7 years I had always used 2-3 year old Android phones, and I was unable to comprehend how anyone uses the web on mobile, because everything was so incredibly slow in the browser while native apps were fine. Then I got the latest iPhone and my mind was blown at how I could scroll a website at 60fps. I guess if you have the fastest phone on the market, everything seems fine.
It's strange as poor performance is known to be off-putting to users, which presumably translates into less traffic and thereby less profit for the site.
As an Android developer who's now making a hobby project that has a web UI, I don't understand it either. I'm doing it the old-fashioned way and it's so fast that my browser doesn't have the time to display a loading animation when I refresh the page.
It's a bit more complicated than that. Modern web development done correctly can help: Pages are generated in the server, no JavaScript required. If you start with a PWA mentality, then your application/site is progressive and you should cover everyone; or almost.
However:
1- Progressive is expensive. You'll need more time as you need to sort what features can work; and work your way up to the full experience. The full experience, however, is what you are getting paid for (or showing). Unless the client or the company cares about these categories, you don't want to expend budget on that.
2- Web development is becoming complicated. You can get started quickly with Nextjs/Gatsby thanks to "DX"(Developer Experience). The reality is that you don't understand much of what's going on behind the surface and if you bootcamp-ed your way in 3 months, you probably have no idea what's going on. But it works!
I agree with you. It definitely does not serve the user. I have two thoughts, as a nobody.
1. Ads/trackers/etc. need javascript
2. It's a way of flexing and saying "we have resources to put into this webpage which makes us a serious business."
Things that run on JS on "regular" sites, off the top of my head:
- Toggling widgets such as menus, modals and other things you only want to show when the user requests it. This includes updating accessibility related HTML attributes.
- Filtering, sorting, etc. of larger data sets in the client.
- Live updates of fresh, time-related data.
- Search that doesn't force a complete reload, via AJAX or cached on the client.
- Smoother page / content transitions via AJAX.
- Everything related to forms / user input: you want to instantly react.
- Managing and preserving state / context per user.
- Visualizations / graphs that are explorable / interactive.
- Polyfills for older browsers that don't support optimizations such as lazy loading.
- Interactive widgets such as chat boxes (not a fan, but still).
Yes, ads/trackers/etc. are most likely one reason a webpage cannot be completely free of JavaScript.
Two other possible reasons off the top of my head:
If a web developer is hired to make a site they can probably charge more if it is a fancy javascript site. In some cases it might be in their self interest to up-sell this to a client that does not know better.
If a web developer makes a site for themselves I am sure many want to take the opportunity to get some experience in the latest web-tech while they are at it. Just as I will use an obscure programming language for my next side project..
The irony is that it took AMP, a proprietary solution, to get all these websites that mainly deal with text and images to start cracking down on all that stuff.
Remember "mobile first"? Well the majority of web developers clearly do not have a Mobile first strategy, when you look at how heavy their webpages are.
Even the javascript could be lighter, but since everybody is using node.js and NPM behind the scene with a lot of complex modules, well code bundles become crazy big.
I'm not either, but I've written quite a few webpages over the years, with only the occasional use of JS, and only when it's not possible to do without.
I suspect much of the superfluous JS comes from an "if you have only a hammer, everything looks like a nail" mentality.
What's with all the server round-trips? If you have a UI that takes user input and just reacts to it, without any data needed from the server -- why should it go on a full round-trip just to get a new UI element that it could create locally just as well?
Look at other things on your computer: a text editor, or a calculator. Would you expect every interaction to send a request to some remote server just for the sake of it?
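The calculator analogy is easy to make concrete. Logic like the following (a toy two-operand evaluator, purely illustrative) has no business making a network request at all:

```javascript
// A toy expression evaluator: user-input handling that needs no server round-trip.
// Supports only "a op b" with +, -, *, / — just to illustrate local computation.
function calc(expr) {
  const m = expr.match(/^\s*(-?\d+(?:\.\d+)?)\s*([+\-*/])\s*(-?\d+(?:\.\d+)?)\s*$/);
  if (!m) throw new Error("unsupported expression");
  const a = parseFloat(m[1]);
  const b = parseFloat(m[3]);
  switch (m[2]) {
    case "+": return a + b;
    case "-": return a - b;
    case "*": return a * b;
    case "/": return a / b;
  }
}

console.log(calc("2 + 3")); // → 5
```

Shipping this to a server and back would add latency and load for exactly zero benefit, which is the point being made.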
Shitty server code isn't an excuse for shitty frontend code though.
Moreover, you are conflating apps with the web. A calculator should probably never use the network...
Pretty much every web browser can and has to open and send network packets - as long as they are small and there is not a lot of state, you are fine. You can support almost any device from 20, 30 years ago.
'Modern' JS dogpiles huge swathes of mostly unused, uncompiled code, resulting in huge network transfer costs and extreme overuse of CPU and RAM, and it encourages a lot of e-waste because it mainly only works well on the latest devices...
It's arguably such bad engineering design, just to save some network packets, that I wouldn't be surprised if the carbon footprint of a JS developer were at least comparable to that of burning coal...
Developer productivity: Making complex pages and reusing components across pages is much easier with a library like React than with more vanilla approaches. For a lot of companies with large numbers of engineers, making sure that engineers can be productive without intimate knowledge of HTML/CSS ends up taking priority over things like performance and accessibility.
Branding/customization: The built-in HTML controls are difficult/impossible to style or customize. A lot of UI designers will design some fancy looking select dropdown, not appreciating the fact that they're forcing developers to reinvent the wheel in order to implement their design. Alternatively, there are cases where the UX for built-in controls is lacking enough that you're somewhat forced to implement a replacement (e.g. <select multiple>)
The two biggest rants on HN are 1) Why doesn't this site work without JavaScript? It's inaccessible. and 2) Why doesn't this app work offline? It invades my privacy.
Maybe I'm wrong, but as far as I can tell, you can't have both. Sorry folks.
You are right, JS is not needed if the site truly is static content. But if you try to make an interactive app that could be implemented client-side (AKA javascript) and attach a server to it, everybody will complain that the application doesn't respect the user's privacy, since it could be offline-only but it's not.
Don't get me wrong, I think "interactive" can meaningfully include a simple site with links if you are looking at it from a privacy angle. Just look at how StackOverflow recently was able to track all the pages their hacker viewed. [1] SO is pretty much static content. So do you want StackOverflow to work without JavaScript? Are you happy that, in so doing, it needs to phone home whenever you look something up? You can't have one without the other.
There is also the argument from scalability. Your servers will see fewer QPS if you implement a 3-step form with validation in the frontend and send off all the data in one go. It's also faster, better UX, and more resilient under bad network conditions.
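As a minimal sketch of that pattern, with hypothetical field names and rules: each step is validated locally, and the server only ever sees one request at the end.

```javascript
// Hypothetical validation rules for a 3-step form, checked entirely client-side.
const steps = [
  (data) => (data.email.includes("@") ? null : "invalid email"),
  (data) => (data.age >= 18 ? null : "must be 18 or older"),
  (data) => (data.terms === true ? null : "terms not accepted"),
];

// Returns the first error, or null when everything passes.
function validate(data) {
  for (const check of steps) {
    const error = check(data);
    if (error) return error; // caught locally, no server round-trip
  }
  return null; // all steps pass; only now would a single POST go to the server
}

console.log(validate({ email: "a@b.com", age: 30, terms: true })); // → null
console.log(validate({ email: "nope", age: 30, terms: true }));    // → "invalid email"
```

The server still has to re-validate on submit, of course; the point is that the N intermediate round-trips per user disappear.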
Edge computing is maybe an alternative there, but that doesn't address inherent privacy concerns of phoning home to a server.
Last, there is the reality of a spectrum of interactivity across websites. If you are doing a blog, sure, don't do it in JS. At that point you are making a decision that adding any interactive features requiring JS to your site will be difficult. If you are building an evolving app with interactive features, there aren't many options for easily mixing static HTML with interactive JS. You could see how far you get with static HTML, but then what if you need interactivity (JS/jQuery)? What if you need complex interactivity (React)? Are you willing to pay the costs of a heterogeneous app architecture of HTML mixed with interactive JS?
Think of Facebook. It is kinda like a blog, but what about infinite scroll? Etc.
Anyway, that last point, I think, is why people are excited about 37signals' Hotwire. [2] It's more of an HTML-but-interactive architecture, as opposed to the choice between fully-interactive JS/React and static HTML forms.
> You are right, JS is not needed if the site truly is static content. But if you try to make an interactive app that could be implemented client-side (AKA javascript) and attach a server to it, everybody will complain that the application doesn't respect the user's privacy, since it could be offline-only but it's not.
That's the gist of it. Don't use JS if not absolutely needed (or at the very least, don't make the site break without JS enabled). If needed, consider whether there is a valid technical reason why this should be a server-dependent web app in the first place - and if there is, consider supporting offline mode where possible anyway.
It's a perfectly valid position to hold. It's a user-respecting and waste-minimizing view.
I don't understand your second point. Which web apps work offline?? (Unless they are deliberately made for that purpose. Hell, even most Electron apps refuse to work without a connection.) They regularly make new requests; there is literally no difference between SSR and CSR in this regard. It seems a bit like you are arguing with a straw man. Like, what does a webshop written in React/whatever do when you go to the next "page" and it hasn't loaded yet?
Also, no one would even think it unreasonable for a WEBapp to phone home. What people have trouble with is tracking, which is orthogonal to the current topic and should be condemned.
I see this as part of the trend towards a bleak future of auto-generated, low-quality, generic content flooding the web. It already feels like it has been going on for a long time, even though it has not been fully AI-driven before. This is a natural evolution of the direction things are heading. I believe that non-algorithmic curation and aggregation will become even more important services that people will be willing to pay for.
I disagree with this perspective as it implies gate-keeping. Making websites easier to create is a good thing. If a tool like this gets more content onto the web then that's a good thing IMHO.
Besides, search engines already deal with a deluge of duplicated and low-quality websites...
Not all content is equal. "More content" does not imply good, useful or even correct.
Pandora's box has been open for a long time, but automation tools are now more convenient and accessible than ever. I shudder to think of the amount of drivel that will keep flooding the internet. I concede it will be "fun" (for lack of a better word) to see the eventual search-algorithm changes necessary to keep the web useful for research. That, or a massive curation effort. The sad thing is, for such a massive body of knowledge, the curation effort itself will have to employ automated tools, and the algorithms for those tools can eventually be cheated to allow low-effort content as well.
Interesting product, but the examples are weak. I expect AI to personalize landing pages for every visitor in the future. Still, right now, I don't trust it enough to touch my bottom line.