These are fantastic technical reasons when considering a SPA vs. a traditional server-side app, but to me the largest consideration, outweighing other technical decisions, is the overall experience the end customer is going to need and expect.
Migrating from a desktop-based application to a web version? That desktop version was likely extremely responsive, and full page loads may feel like a step backward.
Does the application require constant, fast interaction, such as data analysis, frequent order entry, or highly dynamic UX? That's the domain of the SPA. But if the situation is more of a request/response scenario, then a traditional server-side setup works wonders.
The technical difficulties surrounding SPAs vs traditional are for the most part solved. It does require more overall work, but that gap is shrinking. Therefore, IMO the largest deciding factor should be user expectations, and that can go either way.
The problem is that most SPAs today are cargo-culting.
They break middle mouse button (or Ctrl-Click or whatever your favourite "open link in new tab" shortcut is on your Apple computer), at the same time break browser history, and at the same time make you wait for any single interaction you're doing (show spinning wheel for 6 seconds before opening a confirmation modal)
The ideal solution is a) minimize server roundtrips (<100ms) and b) reload parts of the site via AJAX where it makes sense.
> They break middle mouse button (or Ctrl-Click or whatever your favourite "open link in new tab" shortcut is on your Apple computer), at the same time break browser history, and at the same time make you wait for any single interaction you're doing (show spinning wheel for 6 seconds before opening a confirmation modal)
I am not defending SPAs, but the points you've made are not indicative of SPAs, they're indicative of bad software. This problem existed before SPAs were a thing. Remember ASP.Net pages where the whole page is wrapped in a giant <form> and you couldn't use the back button most of the time? This problem also doesn't exist with properly built SPAs (e.g., by using <a> for links, using the history API correctly, and designing a sensible UX).
And certainly, if your app is sending 7MB of JavaScript over the wire and it's not a video game, something is very wrong--SPA or not.
I think SPAs get a bad reputation partially because they're trendy. Which means that inexperienced programmers flock to them and make them awful.
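To make the "properly built" part concrete, here's a minimal sketch of history-API-friendly routing in plain JavaScript. The route names and paths are invented for illustration; the browser-only wiring is left in comments:

```javascript
// Sketch of the history-API half of a well-behaved SPA.
// Pure routing-table lookup (illustrative routes, not from any real app).
function resolveView(path) {
  const routes = { '/': 'home', '/orders': 'orderList', '/settings': 'settings' };
  return routes[path] || 'notFound';
}

// Browser-only wiring (commented out so the logic above stays testable):
// function navigate(path) {
//   history.pushState({}, '', path);   // update the URL bar, add a history entry
//   render(resolveView(path));         // re-render only the changed region
// }
// window.addEventListener('popstate', () =>
//   render(resolveView(location.pathname))); // back/forward re-render, no reload
```

With this shape, the back button and bookmarkable URLs keep working because every "page" has a real path and a real history entry.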
I recently re-wrote an SPA from a previous dev into a regular set of query/response web pages. The SPA was 2,143K. The new set is 78K. A big part of that savings was because javascript is verboten for this project due to client hardware constraints.
Yes, the previous dev was not the best in the world, but IME SPAs are always heavier than their response/query counterparts.
Not trying to defend inexperienced programmers but I've seen plenty of 20+ year senior programmers contribute poorly to SPAs.
They refuse to learn the best practices / new patterns because "they have 20+ years experience".
I have been caught in the middle between some younger programmers and some elder folks for the better part of the past 3 years, and the bottom line is this:
Learn the suggested best practices, or just don't build apps like this.
The OP's original thoughts on simplicity do ring true, but pushing for better UX is actually a reason to redo something. There are plenty of legacy systems out there with frustrated users, and the "complexity" can be simplified if effort is invested to accomplish those goals.
Problem: Best practices change every 5 years; new patterns get deprecated by whatever hot new opinionated framework is trendy at the moment.
As a senior programmer, it's difficult to explain to jr's and sophomores not to approach every problem with the "disrupt everything" mentality, and to have continuous battles about the fact that we don't need TDD, that we don't need CI, that we don't build everything as a webpage, and that we definitely don't need Slack integration for everything.
Jr's and sophomores tend to chase trends in software development. They don't like to work on the boring stuff, the stuff that makes money (audit and management, government compliance, and payroll software); they'd prefer to go to FB and build the hot new AI for choosing the right ad to show you.
Calling TDD and CI "trendy" is...well...I'm gonna be real: those are junior attitudes anywhere I've ever been.
But then, I also don't know any senior developers who would write "fuck all JavaScript/NodeJS developers"[0]. Or randomly rant about how somebody who notes that features that break stuff aren't good features[1] (which I noticed because I was fascinated by your post and wanted to learn more about what you think--and boy, did I!).
Dude: maybe it's not your juniors who are a problem here?
For you? Or for everyone? The former is a reasonable position, which I assume of you because I think your posts tend to be solid and well-thought-out--and also because the latter is demonstrably cracked.
And even if you don't, personally, like TDD, ranting about junior developers "wasting time" using it is some weird stuff. Whose juniors actually want to be writing tests? Most times, you have to hammer it into them.
The distinction being that you may feel less productive when doing TDD--but that TDD very obviously yields significant benefits for developers who aren't you. I'm not saying that you should do TDD because it works for me, but I am saying that it very demonstrably does work for me.
If you don't like TDD and don't practice it for this reason or that, that's totally fine. Going out of your way to dissuade junior programmers from using it strikes me as crazy-town, which is kind of the point I'm making here?
First, thanks for taking the time to check my profile and comment history; I'm quite honored that you devoted that much time before replying to my comment. But usually when somebody does this, it's because they're trying to appeal to authority or make a plain ad hominem.
Second, I didn't say TDD/CI is trendy; I said "you don't need them". I probably should have added "at all times" to clarify.
Each year it becomes increasingly difficult to keep jr's from wasting time implementing TDD+CI+Slack+Hot-New-Thing on all the projects, wasting devops resources and time needed for the projects that actually do need them. Not all the projects you build in your lifetime need the same amount of bureaucracy as those that are clearly business-critical.
Let's think a little about this post and your prior one. Juniors don't like to work on "boring stuff" like testing and ensuring that your builds correctly complete, hmm? While you're complaining that they "don't need" stuff like TDD or CI. TDD is "wasting time", even though it's how you can actually specify software and then just knock out the issues to turn those tests green, increasing velocity once you're comfortable with the rhythm. CI is "wasting time", even though it's how you avoid rolling broken stuff out to production (and thus have to, yanno, do it again).
What I'm saying is that your post was incoherent (and this reply is worse), which is why I read deeper into your comment history. There's a lot more incoherence there, too. Frankly? It reads like a junior who's found what they think is a local maximum and is insufficiently humble to realize it's not a global maximum either. And I have a little bit of extra oomph on that one, I think, because--lest we forget--your public-facing profile says fuck all JavaScript/NodeJS developers.
Most people would pay a lot of money to juniors who wanted to do that stuff. Maybe those dang kids have a point.
> TDD is "wasting time", even though it's how you can actually specify software and then just knock out the issues to turn those tests green, increasing velocity once you're comfortable with the rhythm.
What kind of software are you writing that this works? full-blown TDD-as-a-specification is both too detached from the domain and too bogged down in details at the same time; you end up creating a Frankensteinian codebase that barely makes a coherent whole, but it's sure easy to test.
> CI is "wasting time", even though it's how you avoid rolling broken stuff out to production (and thus have to, yanno, do it again).
You need CI if you're working on a big project that will get deployed to multiple places (counting dev, staging and production machines). You don't need CI if the project is small and limited in deployment - about all it does then is protect you from the lazy-ass developer who thinks committing code that doesn't build is a good idea.
The thing that I believe 'SadWebDeveloper is aiming at is that hippie-devs mistake the finger for the moon; they think of TDD and CI and other popular practices as dogmas to follow, instead of potentially useful techniques that may or may not be beneficial to any particular project. Noticing the limits and the trade-offs comes with experience, I guess.
> Most people would pay a lot of money to juniors who wanted to do that stuff. Maybe those dang kids have a point.
Would they now? Paying structures in software companies have usually little to do with actual value provided to the project/company.
> And I have a little bit of extra oomph on that one, I think, because--lest we forget--your public-facing profile says fuck all JavaScript/NodeJS developers.
Maybe too strongly worded for my taste, but I can kind-of understand the sentiment. The software ecosystem around the web is just bonkers.
> What kind of software are you writing that this works? full-blown TDD-as-a-specification is both too detached from the domain and too bogged down in details at the same time; you end up creating a Frankensteinian codebase that barely makes a coherent whole, but it's sure easy to test.
For clients, where I most often practice TDD? Libraries. Web applications (mostly services, occasionally web). DevOps tools. Personally, where I do it for core logic when I understand the problem fully going in? I have a React Native app in the Play Store which I basically proved working via tests before I ever built a UI. I've built a service-oriented authn/authz stack I'm looking to open-source soon, and it has 100% test coverage and the test cases have been looked over by much wiser heads than me to bulletproof it. All sorts of stuff. I naturally write code that's decoupled sufficiently to avoid mocking (except when forced to by writing sufficiently downstack stuff, but that's rare enough these days).
Specifications are in English from the client (either from them directly or written by me to their approval), I turn them into tests as I rough out an overall architecture, and I then make those tests turn green. And, because I actually know how to write tests in the large and in the small, I have a reasonable--not perfect but reasonable--expectation that my code will work.
(I don't do this as much as I should for personal projects, but that's because most of them are much more exploratory or are effectively plumbing over somebody else's API.)
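As a toy illustration of that spec-to-test workflow (the spec sentence and the function are invented here, not from any real client): the client says "orders of $100 or more ship free; otherwise shipping is $8", and that English sentence becomes a failing test before the code exists:

```javascript
// Invented spec: "orders of $100 or more ship free; otherwise shipping is $8".
// In TDD the assertions below are written first, fail, and then this
// function is written to turn them green.
function shippingCost(orderTotal) {
  return orderTotal >= 100 ? 0 : 8;
}
```

The point isn't this trivial function; it's that each spec sentence maps to a small, checkable claim, and the green suite is the "reasonable expectation that my code will work" mentioned above.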
> The thing that I believe 'SadWebDeveloper is aiming at is that hippie-devs mistake the finger for the moon; they think of TDD and CI and other popular practices as dogmas to follow, instead of potentially useful techniques that may or may not be beneficial to any particular project.
This is the sort of in-your-own-head view that makes one prematurely old, and unnecessarily both unwise and unlikeable.
I know that, because I used to say what you do, and I was wrong. It used to be Ruby, though, not just JavaScript. Things change too fast. It's all too reckless. Then I started using them in earnest--first Ruby a few years ago, then Node and JavaScript over the last year. And I realized that the things that draw external criticism are things that exist for a reason. These things are not discussed merely because they're "popular", it's because they make the end result better and they are transferable from project to project in a way that amortizes the cost even for small ones.
But, more importantly, the notion that these are somehow "hippie-devs" because they care about practices you don't is one, disrespectful, and two, kinda really often wrong. (I know this because I thought this, and I was full of shit.) These are not stupid, uncritical, or foolish people. There are problems and there are gaps--the Node ecosystem's general lack of understanding of underlying Unix is continually troubling--but when it comes to the platform and the practices of it, the notion of "change for change's sake" is mostly unfounded.
Anyway, the software ecosystem around the web is bonkers because the web is bonkers. Because people want the web to do stuff it wasn't built to do and is really hard to do effectively. That's why there's so much churn. Because people are learning new things and applying them. (Heaven forfend.) But it's working. If you step into this stuff, today, here and now, with React and ES6, you get an experience that is flat-out better than any I have had with any system, ever--with WPF being the closest, but not that close, I can think of, and miles better than any server-side HTML chewing that I can think of.

Stuff is changing because stuff is improving. And it's also slowing down as practices gravitate towards what appear to be reasonable local maxima. It did for Ruby, it is now for JavaScript; you can write a frontend using React and be reasonably assured that it'll be supported for most of a decade and that you'll be able to find developers, and the JavaScript backend stack hasn't really changed much in a couple of years. It's part of the lifecycle. Whining about it is whining about the weather; nobody else cares, and nobody wants to hear it.
I run a devops consultancy. Sometimes this is marriage counseling for engineering teams; sometimes this is "come provide technical leadership." I am paid to have opinions and to lead people towards best practices. I have learned over time that holding those opinions so tightly that you say "fuck all JavaScript/NodeJS developers" doesn't make you good or wise or experienced or decent, it makes you a jerk, and it makes you unfit to interact with people, let alone mentor young adults. And it is unworthy of your defense.
Wow, that's a really long reply for a couple of short rants on HN. You either have tons of free time (kind of implied now that you've explained you work as a full-time freelancer) or really take this site seriously. Personally, I don't take this site seriously for that kind of discussion, because the vast majority of the audience here are not developers; I even have a comment in my history asking about this in a thread.
Anyways, I believe all your comments/rants can be summed up as you getting offended by my profile motto about JavaScript/NodeJS developers, my disdain for the testing-all-the-things mentality, and some out-of-context sarcasm that may or may not have hit hard (I'm guessing it hit, otherwise you wouldn't have tried to pull an ad hominem move like the one before). The only answer I can think of is... sorry that my jokes/rants got to you; it wasn't intended. It was a rant I wrote when I first joined HN almost 2 years ago.
Now that we have made that clear, let's rewind to my original rant, the one about jr's pushing TDD+CI+Slack+Hot-New-Thing on everything, and add that we were talking earlier about jr's' tendency to build everything as an SPA, and the OP's rant about seniors not grasping the need for that tech.
In today's "web development best practices", it is common to build things out of duct tape. Jr's like to bootstrap their environment with all the batteries included, and by today's "standards" that includes TDD+CI+Slack+Hot-New-Thing. Which is fine if you are going to build the next Uber, but not so fine if you get hired to build a simple webpage for a client (the boring stuff nobody wants to work on, which jr's usually had to grind through before touching business-critical infrastructure): a webpage that barely has any dynamic content but now requires a whole build process because the jr's thought they might need it in the future.
This trend leaves me, or any other senior, constantly talking to them about the good and bad sides of being a "lego/duct-tape developer", because not all jr's are full engineers or have a PhD in computer science; the vast majority join either fresh from college or having just taken a couple of online courses and managed to navigate the HR interview process.
That was my original rant, which you wrongly took as an attack on your beliefs (well, kind of your main job as a devops consultant). Maybe you have been working on big projects for so many years that you forgot not everyone is building the next Uber.
Kindest Regards
PS: Why am I not in your hall of shame? Don't I deserve that honor yet? ;)
Funny how you mention Ruby, because as I was writing the previous comment, I actually thought of it; I was a bit dismissive of the Ruby culture (that around Rails in particular) as well, but - unlike JS - it was entirely tangential to my work, so I didn't dwell on it.
Maybe I'm projecting a bit when I talk about "hippie-devs", but that's because I used to be one too (more than once), and I remember being eventually disillusioned about the things I was in so much awe of.
(Also, I don't call something "hippie-dev attitude" for just caring; the attitude I don't like is following and promoting popular trends without understanding the actual (not imagined) benefits and trade-offs they involve. Which is something I was both doing at times, and frequently witness on the web.)
--
> If you step into this stuff, today, here and now, with React and ES6, you get an experience that is flat-out better than any I have had with any system, ever--with WPF being the closest, but not that close, I can think of, and miles better than any server-side HTML chewing that I can think of.
I started playing with React recently and I admit, it seems to be great and living up to the hype (barring the broken setup process that suffers from ever-shifting JS dependencies, but that's orthogonal).
> you can write a frontend using React and be reasonably assured that it'll be supported for most of a decade and that you'll be able to find developers, and the JavaScript backend stack hasn't really changed much in a couple of years.
I don't feel confident about it. I mean, the last stable JS thing I remember was... jQuery. Everything else I've seen so far is shifting sands. Works today, broken tomorrow, works again next week, abandoned half a year later. But you have more experience there, so I'll take this as evidence toward "stability" hypothesis. I really do hope it's true - then maybe we can iterate on overall React performance, fix the damn unstable mess of underlying dependencies, and we'll have something solid for everyone to use.
> I have learned over time that holding those opinions so tightly that you say "fuck all JavaScript/NodeJS developers" doesn't make you good or wise or experienced or decent, it makes you a jerk, and it makes you unfit to interact with people, let alone mentor young adults. And it is unworthy of your defense.
Thinking about it more, I believe you're right.
--
Thanks for calling me on my bullshit. Really. That's one of the reasons I come to HN - so that people can tell me where I'm wrong, especially in the opinions I hold strongly.
Haha, hey, I appreciate your response - I recognized your posts, I figured you were an alright guy, so I gave you my honest take. =) Like I said, I've totally been there. I still rip on Golang more than I probably should, but that too is out of knowing how the whole thing works. (Before editing, I wrote "the whole tire fire" here. Old habits die hard.)
I think being disillusioned with tech is also part of the cycle. You can't be frustrated by something until you really know it and really like it. Usually anyway--my exception is Golang, I got deep into it but never liked it. I haven't hit the frustration point with Ruby personally, but I also don't use it as a substrate on which Rails runs. I think I would be much more frustrated by it if I did, because Rails seems to have gone down some weird holes (ones that make a lot of sense--if you're writing a Basecamp framework), but Ruby-the-ecosystem, not so much.
I do think React is going to be a Thing for at least five years. There's too much momentum behind it and, unlike Angular, it seems to have a straight-up smart basis for existence. Making the UI fundamentally stateless (or easily made stateless--I think component state is going to be left out of "React: The Good Parts") will age pretty well, and it's already really fast at DOM rewrites in ways I tend to think browsers can key in on with batch APIs down the line (to avoid context switches into/out of JS-land).
Couple that with already having stuff like React Native out there, and it seems pretty open-and-shut that it'll be a worthy choice for a while even if Facebook abandoned it--which I don't expect them to do unless something clearly superior, and worth the cost of migrating a lot of stuff, comes up.
As another "senior", I think these things almost always depend on context. Some of the shiny new tools and techniques really do move the field in a significant way. Most of them don't. The ones that do might advance one area but might not be particularly relevant or helpful in other areas.
For example, personally I find it easy to imagine projects that don't have much need for CI tools, particularly not the formal ones that have "CI" in their name. Maybe a project is small enough that the only developer(s) involved are constantly making builds for test purposes and rebasing is standard practice when pushing to shared branches. Maybe a very large project has a complicated and carefully controlled branch/merge plan for their repository, and merges at key points are systematically checked following a whole review process. Either way, the extra effort to run a dedicated CI tool might not provide much benefit.
Of course, in other cases, you'll be in a sweet spot for this kind of tool. Having something to automatically warn you if a key branch stops building could save a significant amount of time and trouble. The point is that it depends on your situation. Automating something means finding, learning and deploying software and infrastructure to do that job for your particular project. That's time and resources you aren't spending on something else, and these things are always trade-offs.
> But we’ll see if I’m actually following my own advice 15 years from now.
The one thing I've learned after doing this for 5 years is you probably won't be needing to follow best practices in the next 3 months.
I've been in my current role about three years. Most of my 2015 was spent building an Angular 1.x transactional app. By the time it was released, I was told to start learning HapiJS and EmberJS because we were transitioning as a group to those. Then, 8 months and several smaller apps later, I was told we were going back to AngularJS and transitioning to Angular 2.0, so I had to almost completely relearn that. Then, after one huge disaster trying to integrate it into our services, we gave it up and spent the rest of 2016 moving that app to ReactJS.
I then spent most of last year jumping around (because now all these apps we built needed supporting) between those three frameworks (Angular, React and Hapi). Now there's talk of moving to VueJS but there is a loud push to just stick with one (Angular or React) and be done with this.
My point here is you may want to learn best practices, but with how fast my org was changing, and building things quite literally on the fly, you just don't have time. Sure, in my own private projects I learn best practices and use them myself. For our customers and "the business"? No time, man, no matter if it will save us time or not. Getting something shipped that's working is far more important.
Ironically, you've just provided an excellent illustration of some real good practices: find tools that are good enough to do the job, only depend on things that are built to last, and concentrate on actually doing that job. The web development world's obsession with constantly moving to the next thing that does essentially the same job as the last thing but maybe 5% better (or maybe not) is a huge productivity sink and best avoided as much as possible. Getting something shipped that's working is far more important than exactly how you made it, and it always has been. If "best practices" aren't helping with that, are they really best practices at all?
While this is generally true, the situation with SPAs is a bit more complicated. Each popular approach early on had a lot of serious issues and pitfalls. IMO React was the first thing to really break through this and while it’s not perfect it addressed the biggest issues Angular and Ember had quite well.
However, at this point, once you’ve adopted React, NG2, Ember 2.x, or Vue, there’s not much to be gained by switching between them. They’ve all sort of gotten past the initial immaturities of the SPA approach.
This is really something you ought to carefully evaluate once and be done with it.
Sometimes I find myself fighting the framework and someone inevitably shows me the best practice and I see how it would have just worked had I done more reading.
The trouble is that web development is such a young field and moves so fast (or at least appears to) that the whole idea of "best practices" is fallacious. In many areas, we barely have enough experience to distinguish good practices from bad ones. Even then, most of the good practices that will stand the test of time are about much more fundamental things than which framework or build tool you're using.
It’s definitely not about frameworks or build tools. It’s just about applying whatever you’re using well. It certainly doesn’t have to be perfect, but it should at least be clear that an effort was made.
That makes no sense. What other "practice" besides "best" would they be using, given that SPA didn't exist 20 years ago? Would they look at the SPA sample and say "I see that you're using history to have the back button work, but I'm removing that code"? Obviously not.
If bad behaving SPAs are commonplace then there is still a lot of work to be done to improve the tools and frameworks. But after many years of churn in tools maybe the problem runs deeper: the language and platform. I wonder if my time is best spent in this field.
> SPAs are always heavier than their response/query counterparts
Heavier by what metric? Initial load size? Sure. But the static assets will be cached after the first page load, after which an SPA should definitely be faster than a traditional server-side web app given that a) less data is sent over the wire for each page load (you don't need to re-send headers/footers, JSON is usually smaller than HTML, you can implement intelligent caching strategies, etc), and b) only a part of the page is re-rendered, rather than the entire thing every time. So "heavier" really depends on what you're optimizing for.
I measure by the amount of data sent to the client to accomplish a task. I keep a very close eye on this because it's part of my POP (proof of performance) reports each month.
Don't forget that plain query/response caches many static assets, too.
Sure, but what about responsiveness and other metrics? Reducing the absolute amount of data sent over the wire may be what you are optimizing for, but it's not by any means the absolute metric of performance. Personally, as a user, for many web apps I would rather see a longer initial page load but have all subsequent actions appear instantaneous (a la Gmail), than spread the load times out across every action.
My point isn't that SPAs are better than traditional web apps or anything like that, just that these decisions depend entirely on what you are optimizing for and what your priorities are. If initial page load time is your priority, then an SPA may not be the way to go; however, if your goal is to maximize responsiveness or interactivity, then the extra 2MB of JS will likely pay off.
> I am not defending SPAs, but the points you've made are not indicative of SPAs, they're indicative of bad software.
When it's a bug that happens on Facebook, which has millions and millions of dollars spent on their web UI and on developing whole entire frameworks for them, you can't wave it away as an application problem anymore.
I agree that correctly written SPAs shouldn't break middle/ctrl click, but browsers make it harder than it should be. A few years ago I was implementing a SPA. I carefully made every clickable element that takes you to a new permalinkable "page" an <a> tag with the correct href for that permalink, and then set up some click handlers to intercept the user clicking the link and do an AJAX call instead of a full page load.
Unfortunately some browsers would still send a click event for a middle or ctrl-click, which meant that if I wanted to avoid breaking that I had to write extra code to check if it was a middle click or ctrl-click and not call preventDefault if so. My recollection is that Chrome got it right but Safari and/or Firefox had the behavior I'm describing.
Browsers should make it easy for apps to do the right thing, not make it harder.
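In the meantime, one way to sidestep that trap is a small guard that only intercepts plain left-clicks and lets the browser handle everything else. This is a hand-rolled sketch, not from any particular framework:

```javascript
// Decide whether a link click should be handled by the SPA router or left
// to the browser (open in new tab/window, etc.). Written as a pure function
// over the event's fields so it can be tested with plain objects.
function shouldIntercept(event) {
  return event.button === 0 &&  // left button only (middle click is button 1)
         !event.ctrlKey &&      // Ctrl-click: open in new tab (Windows/Linux)
         !event.metaKey &&      // Cmd-click: open in new tab (macOS)
         !event.shiftKey &&     // Shift-click: open in new window
         !event.altKey;         // Alt-click: often "download link"
}

// Usage in a click handler: only preventDefault() when actually intercepting.
// link.addEventListener('click', (e) => {
//   if (!shouldIntercept(e)) return;  // let the browser do its thing
//   e.preventDefault();
//   router.navigate(link.pathname);   // hypothetical SPA router call
// });
```

The key detail is returning early *before* calling `preventDefault()`, so a modified click falls through to the browser's native link behavior.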
It sounds like your experience with SPA's is the exception, not the rule. I have never built a product or website using Angular or React that performs as poorly as you describe.
- Routing modules in all the major frameworks interact directly with the history APIs, and are fast as f*.
- There are only a handful of locations in the US where you can expect sub-100ms server round trips; usually they're around 300ms, but let's call that negligible. The real time spent waiting is usually when you're executing some complex database query over large data sets. This has nothing to do with SPA's vs Monoliths. Even so, it sounds like you just experienced bad code if you're waiting 6 seconds for a modal to open.
Also...React with React DOM is only 10kb larger than jquery, and when gzipped it's only 37kb.
I was going to say the same thing you said. Angular/React routing, if done correctly, doesn't have those problems at all. If you let the SPA framework do your page-changing for you, open-in-tab and history still work. If you do amateur stuff like binding a button to a function that does a document.location.href update, then you're going to have some weirdness.
My biggest problem with SPA the last time I did it (a year ago) was optimizing it for search engine crawling.
One of my biggest problems in writing data-heavy web applications is that browsers seem to be really lousy at handling the data. In fact I have 2 applications in this boat. My queries run fast (couple/few hundred ms), but displaying 10's of MB of data, over 10's of thousands of rows of data, has been brutal for a browser.
Trying various standard JS table widgets only made the problem worse, but I've discovered ag-grid (a paid JS table widget) which made this tolerable. It takes about 6 seconds to render my big page of data (which is actually on-par with a WinForms version of the system I wrote, using DevExpress).
It sounds like you have the opposite problem, and I'm wondering how you display "large data sets" from "complex database queries" in sub-second renders in a browser. Incremental loading (like a news feed) won't really work here, because my users need to be able to sort and filter and rearrange the data as a whole. I guess I could "chunk" the data to be able to see it sooner, but it's still going to hold up the browser until it finishes. I've been looking for a better way for a long time.
You aren't going to like this answer at first, but bear with me. There is another way. When you see the other way done well, you may conclude, as I have, that it is the best trade-off given the technology you've chosen (the web).

You want to use pagination. That is, your user won't be able to see 10s of thousands of rows at once; they will have to page through them. That doesn't mean your user won't be able to sort and filter and rearrange the data as a whole, but after each action, they will only see the first page of the changed result. If you do this well, every user action will result in a near-instantaneous change on screen.

You'll need to make every link in the chain as fast as it can be. The key is going to be the database. The first thing you'll need to know is the fastest way to do pagination in the particular database you are using. You need to be able to do something like getCustomers(81, 160) and get instant results. You'll need database connection pooling. You'll want to send the smallest possible payload over the wire. HTML, XML, and JSON are too verbose. Don't send the word "LastName:" with every row; this will double your payload. Include the column names only one time in the response. When the client gets the payload, slap HTML on it and inject it into the DOM (replace a div: <div id="results"></div>).

Let the user set their page size (40, 80, 160). If you play with this, you'll find that every device/browser/connection combination will eventually choke if you keep increasing the page size (and have a large enough result set), but every device/browser/connection will enjoy beautiful responsiveness at 40 or 80 results per page. When you feel what this is like, you won't want to go back to watching a spinner for 10 seconds and then watching your browser hang up every time you scroll.
And your user who is using an iPhone 4 with some old version of Safari on a 3g connection really doesn't want to watch a spinner for 30 seconds, then see a blink of results, then see their browser crash. But that is exactly what will happen if you give them 10s of thousands of rows.
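To make the compact-payload idea above concrete, here is a minimal sketch. The response shape ({cols, rows}) and the function name renderPage are my own conventions for illustration, not anything from a real library: column names are sent once, each row is a bare array, and the client turns the whole page into one HTML string before touching the DOM.

```javascript
// Compact payload: column names sent once, rows as plain arrays.
// Conceptually the server ran something like getCustomers(81, 160)
// (e.g. SELECT ... LIMIT 80 OFFSET 80) and serialized only the values.
function renderPage(payload) {
  const header = '<tr>' + payload.cols.map(c => '<th>' + c + '</th>').join('') + '</tr>';
  const body = payload.rows
    .map(row => '<tr>' + row.map(cell => '<td>' + cell + '</td>').join('') + '</tr>')
    .join('');
  return '<table>' + header + body + '</table>';
}

const payload = {
  cols: ['Id', 'LastName'],            // sent once, not repeated per row
  rows: [[81, 'Smith'], [82, 'Jones']] // no "LastName:" label on every row
};
const html = renderPage(payload);
// In a browser you would now replace the results div in one shot:
//   document.getElementById('results').innerHTML = html;
```

The single innerHTML write is what keeps each page flip feeling instantaneous: one network round trip, one string build, one DOM update.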
Even better is where you clearly separate your use-cases to avoid pagination except when you really need it. "If someone really needs to be able to visit 104th result, they need a different workflow anyway."
It's especially dangerous when you have some "Find Order" UI, and then some executive chimes in with: "My group could really use some totals about the result-set. Since this is already 90% done..."
Eventually you discover the "Find Customer" UI is abominably slow, having sacrificed the ability to find a customer for a bunch of quasi-reporting bells and whistles.
I'd bet the winforms version would be faster without DevExpress. We use the web version at work and it's extremely apparent that their use cases are small shops without a lot of data; they do a lot of things in incredibly inefficient ways that require/encourage n+1 problems, and things like server-side (database) paging are like pulling teeth. To get a page working with any sort of efficiency you have to fight the framework, which takes more time and effort than you save from using it.
For the JS stuff, a lot will depend on how you add the data, because things like DOM redraws can be brutal. If you have a thousand items in a list, for instance, adding them to the list 1 at a time could trigger the DOM to do whatever it does a thousand times, but if you can add them in one go then the DOM only changes once (with a decent framework). Whatever you're using, you have to know how it interacts with the DOM.
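A tiny sketch of that batching idea, with made-up names (buildListHtml, items): build the markup for all thousand items first with zero DOM interactions, then do a single write to the live DOM.

```javascript
// Build the whole list as one string: no DOM work happens here at all.
function buildListHtml(items) {
  return items.map(item => '<li>' + item + '</li>').join('');
}

const items = Array.from({ length: 1000 }, (_, i) => 'Item ' + i);
const listHtml = buildListHtml(items);

// In a browser this is now a single write, triggering one layout pass:
//   document.getElementById('my-list').innerHTML = listHtml;
// versus 1000 individual appendChild calls, each of which can invalidate layout.
```

A DocumentFragment filled off-screen and appended once achieves the same thing when you need real nodes rather than a string.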
I'm not sure how ag-grid works, but I'd be willing to bet that any table built with a DOM like structure will always be orders of magnitude slower (and eat more ram) than a real grid that simply paints.
> but displaying 10's of MB of data, over 10's of thousands of rows of data, has been brutal for a browser.
I can display >~14K rows a second with jQuery DataTables if I load the data on the client side; up to ~100K rows it only slows down slightly (on my ThinkPad's i7-7700HQ), and that's doing some funky stuff with dynamically created elements mounted with Vue. I was frankly surprised how fast browsers have gotten when you optimise in a certain dimension.
I had to do some slightly funky stuff to do it but I was surprised how little code it actually took, jq.dt is damn impressive for the sheer scope of what it does.
If I use its AJAX functionality I can display several million rows (but actually only however many are currently on screen) in the time it takes the DB to respond.
Not possible, because the speed of light is too low.
I live in Australia. Servers tend to be in America. That hundred milliseconds has just been blown in latency, along with another hundred or two, depending on where in the US the servers happen to be.
Take something like FastMail. Our servers are in the US, our team is mostly in AU. FastMail couldn’t be fast for us and many of our customers without being crafted in the SPA style—and even customers in the US benefit from its latency-compensation techniques, which reduce at least 50ms of delay (plus further page load, so call it 100ms at a minimum, but probably more like 250ms) to zero.
Please don’t blame navigation-breaking on the SPA style of app; blame it on developer ignorance and occasionally laziness, with perhaps a side of unfortunate browser APIs (onclick is a poor event for this sort of thing for various reasons). And I assure you, simple JavaScript-enhanced pages manage to have at least as much problem with such links as SPAs do.
I agree with the others. Those are signs of bad software dev, not SPAs in particular.
It's relatively easy now to break up the 7MB bundle into chunks and load on demand. So you get the benefit of both the fast, small initial load while other stuff loads in the background.
Then I'm not sure what you're referring to on links and waiting for single interactions...
With SPAs it's actually easier, with the History API, to create deeply nested stateful apps via the URL / links. As for the confirmation modal, maybe your backend API sucks? I don't know what you're talking about with a 6s delay.
On top of that, a pure SPA means you have a static website, which means you can host the whole app off something like S3. We save so much money by hosting our stuff statically.
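The "load other stuff in the background" idea above boils down to lazy, cached chunk loading. Here is a miniature version of what bundlers generate from a dynamic import(): the names makeLazy and getEditor are invented for illustration, and the loader is a stand-in that a real bundle would replace with import('./editor.js').

```javascript
// A chunk is just a function that resolves to a module, loaded once and cached.
function makeLazy(loader) {
  let cached = null;
  return function () {
    if (!cached) cached = loader(); // first call kicks off the (network) load
    return cached;                  // later calls reuse the same promise
  };
}

// Fake "chunk" standing in for a dynamic import() in a real bundle:
let loads = 0;
const getEditor = makeLazy(() => {
  loads += 1;
  return Promise.resolve({ open: () => 'editor open' });
});

const first = getEditor();  // triggers the load
const second = getEditor(); // reuses the cached promise: first === second
```

The initial bundle stays small because the heavy module only crosses the wire the first time a user actually reaches for it.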
> They break middle mouse button (or Ctrl-Click or whatever your favourite "open link in new tab" shortcut is on your Apple computer), at the same time break browser history, and at the same time make you wait for any single interaction you're doing (show spinning wheel for 6 seconds before opening a confirmation modal)
> [...]
> No need to shovel 7MB of JavaScript to the user.
Well that's exactly the problem. Someone thinks they don't need all the "dead weight" of something like angular that solves every one of these problems and instead they do a half-assed version where it's all broken.
I see no way that a properly built SPA could perform worse than a properly built server-rendered app from a network bandwidth perspective. JSON is always smaller than HTML.
Let us all give some credit to LastPass, which manages to tick all of the points you're making here, all while being a simple browser plugin.
I use Trello, Asana and Toggl, all single page apps (all fairly "trivial" in my book).
In all of them, every now and again something will go wrong.
Like yesterday, I was trying to replace a project on a task in Toggl and it kept reverting back. In Toggl, if you click start/stop too quickly to create a bunch of tasks to add, it doesn't register. Asana's just super slowwwwwwww. Trello has a wildly inconsistent UI: some places save on edit, some need you to click a save button.
And given these are all well funded apps with good dev teams, doing extremely simple apps, it shows just how hard it is to get SPA right and I don't really think the difference is shrinking much at all.
And I think this professionally too: a previous contractor turned part of the SaaS I'm working on at the moment into a SPA, and frankly, it's a nightmare (it doesn't help that it's Durandal). They're so inflexible and so brittle.
That's SPAs for you, there's always something a bit wrong with them, they're extremely hard to get right, unlike traditional web apps. I often find myself rolling my eyes at YetAnotherPointlessSPA because there will always be something you have to fight with in it.
> In Toggl, if you click start/stop too quickly to create a bunch of tasks to add, it doesn't register.
> That's SPAs for you, there's always something a bit wrong with them, they're extremely hard to get right, unlike traditional web apps. I often find myself rolling my eyes at YetAnotherPointlessSPA because there will always be something you have to fight with in it.
Okay, I'll bite: What would have been the alternative in these cases? Having each action submit a form POST to refresh the entire page, like in the 90s? Then you wouldn't have been able to click it multiple times in the first place, because each one would have caused a page refresh. Or are you saying that they could have implemented it in JavaScript without implementing an entire SPA? In which case, the problem sounds like it's with the logic, and has little to do with the fact that it's a SPA, so you would have likely seen the same bug regardless.
I totally understand all the frustrations people have with modern web apps, but your frustrations are with sloppy engineering, not with the idea of SPAs. I believe that, in most cases, the superior user experience that SPAs provide far outweighs the extra margin for error that comes with the territory.
Another user made a similar comment and although you are (both) correct that the issue is sloppy engineering and not directly related to the idea of a SPA, I think you are (both) missing the point.
The issues pointed out above are likely not manifesting out of laziness or inexperience; rather, the sheer amount/complexity of JavaScript used to create an SPA can cause seemingly simple issues (like the one above) to crop up, and often makes fixing them far more complex than updating a few lines of code.
If I make a page with a few simple JS functions and one is used to "add items to a list on click" (or whatever), yeah, detecting the cause and identifying a solution is trivial. But these days we have an entire front-end ecosystem of controllers, components, events, models, callbacks etc. with many of the above including numerous life-cycle states and actions themselves. Add to that the sheer amount of indirection many of the popular JS frameworks employ, and it no longer takes a sloppy developer to create a bug like the example. In fact with today's component-based architectural style, it's likely the functionality to add items to that list was developed and tested in a completely different context than where the user above was encountering it.
I think it is entirely possible (maybe even reasonable) to assume that the folks who built that web app are aware of that bug, but know that the solution isn't worth the amount of time needed to deploy a fix (in relation to other bugs of course) or, even worse, are constrained in such a way that prevents a fix from being possible.
> But these days we have an entire front-end ecosystem of controllers, components, events, models, callbacks etc. with many of the above including numerous life-cycle states and actions themselves.
Can you be more specific? Because to me this looks like essential complexity that's going to exist somewhere, regardless of what's on the client and what's on the server.
While you are absolutely correct that many of the "pieces" of an application I list above are going to have to exist somewhere, "Essential complexity" is a loaded statement here.
There are different contexts and use-cases for complexity. A traditional 3(or 4)-layer MVC web application will obviously contain quite a bit of complexity surrounding data persistence/access, enforcing business rules, and view semantics, but it likely won't have much complexity surrounding "adding an item to a list" on the client. In fact, up until the last 10 years or so, my simple example of how that might look in JS is probably how that would have been implemented on a client (a full page reload is a moot point here).
With a SPA, we are adding another MVC (or MVVM) application on the client that is entirely focused on managing the view. That is the complexity to which I'm referring in the point I made above. I named it a "front-end ecosystem" as to not be opinionated on how it might work.
I'm certainly not arguing against developing SPAs, rather, adding some context surrounding the OP about how/why bugs develop. To reiterate, I think it's an oversimplification to simply hand-wave the OPs example bug as sloppy engineering that has nothing to do with creating a SPA, because doing so glosses over "why" that bug may be occurring in the first place. The point that I am trying to make is that it may very well be the case that such a bug is occurring because of the added complexity an SPA tends to introduce into the view. Does that make sense?
It's a bit misleading to say we're adding another MVC app on the client. By adding that complexity, we reduce complexity on the backend. Views on the backend are no longer true views, but rather serializations of the models. And models on the frontend don't duplicate business logic. That's still all on the backend.
The real complexity is in the front-end frameworks like Angular and React. Sorry, someone had to spell it out.
When you learn these frameworks you have to wrap your head around a lot of concepts that make the browser behave like a desktop application. And you may find yourself trying to do things that your framework doesn't like early in your project.
Compared to that dealing with business logic or a straightforward server side MVC might be trivial in many projects.
I don't know Angular too well, but React is really just a templating system. It doesn't add much more complexity than any templating system, including those you would necessarily have on the backend in a multi-page app.
I didn't really get a chance to reply yesterday but u/kingdomcome50 perfectly expressed my thinking.
While you think it's not more complex, I've done plenty of things like Toggl's "add extra time" buttons many times outside of SPAs, but saved via a form POST. They worked flawlessly and never broke.
Because you have to insert the "update the database ajax, [then refresh data]" step, which is abstracted away inside react/redux or angular or whatever, it suddenly becomes very hard to be in total control of what's going on.
You say you've abstracted away MVC on the server, but you haven't really: the server is still there and it's still complex, it still has to run validation and business rules, etc., and it's still MVC, the view just happens to be a JSON object instead of an HTML page. You're still going to be stripping stuff off the object for security reasons or for performance reasons, so you still need an intermediate model.
And worse you now have to decide, do I add all those business rules and validation to the client too so I can make it super-responsive, or do I rely on my server to enforce all this, but then after every update I have to refresh the data, adding a potentially 500ms+ delay to the UI that is well in the realms of human detection (which is what happens in the Toggl case).
Using the traditional approach on a pure CRUD app, your server transforms form inputs to inserts, page requests to queries, and query results to HTML. Each of these transforms can be pure functions, and all the state updates can be handled by a boring database.
With pure functional components being one of the few ways to make code actually less complex, and handing things off to boring libraries being another.
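The "transforms as pure functions" point above can be sketched in a few lines. All the names and shapes here (formToInsert, resultsToHtml, the field/row objects) are invented for illustration; the point is that each step is a stateless mapping, with all state handled by the database.

```javascript
// Form fields -> a parameterized insert. Values stay separate from the SQL
// text (standard placeholder style) so the driver can bind them safely.
function formToInsert(table, fields) {
  const cols = Object.keys(fields);
  const placeholders = cols.map(() => '?').join(', ');
  return {
    sql: 'INSERT INTO ' + table + ' (' + cols.join(', ') + ') VALUES (' + placeholders + ')',
    params: cols.map(c => fields[c])
  };
}

// Query result rows -> HTML. Same input, same output, every time.
function resultsToHtml(rows) {
  return '<ul>' + rows.map(r => '<li>' + r.name + '</li>').join('') + '</ul>';
}

const insert = formToInsert('customers', { name: 'Ada', city: 'London' });
const page = resultsToHtml([{ name: 'Ada' }, { name: 'Grace' }]);
```

Because neither function touches any shared state, both are trivial to unit-test, which is a big part of why this style of server stays boring in the good sense.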
Along those lines I think the reddit mobile site is particularly bad. Every action takes ages, so you think it didn't "take". Then you press the Back button or other nav buttons. Then, at the last moment just before your finger hits the screen, it changes to the "page" you requested, and you navigate back to your original page. Frustratingly, the SPA navigation doesn't give feedback like plain page-based navigation would. Makes you want to throw your smartphone out of the window.
The irony being that if you switch to desktop mode on Reddit, it's much, much faster and totally usable, a small number of stylesheeet tweaks would make it great.
Reddit's mobile version is a travesty of engineering, a total waste of time and money.
I would have personally fired the engineer who came to me with that thing, I don't understand why they keep pushing it.
Being in the middle of a non-trivial SPA build, I can't agree that the technical difficulties are "solved", especially if you want to build on open source tech. Specifically, my project was prototyped in Firebase and for various reasons needs to be moved off. Now I'm left building a lot of plumbing that just isn't available outside of Firebase. In the end, our result will be less capable than the Firebase equivalent because we don't have time to engineer features like offline support and optimistic updates. There are a lot of tradeoffs to be made in order to avoid "hard problems" that are still very prevalent. I'll be happy to be proven wrong :)
Firebase is a solution, not the problem. The problems are:
* How do you allow users to continue using your SPA without a network connection?
* How do you enable snappy and responsive UIs when the network is slow?
* How do you synchronize state between client/server?
These are just some of the "hard problems" that you'll face building an SPA. Firebase has its own solutions but many other tradeoffs. All I'm saying is that we don't have hardened open source solutions for these problems yet, in the same way that we have Rails/.Net Core MVC/Django for traditional websites.
> How do you allow users to continue using your SPA without a network connection?
How do you do this with a traditional web site/app?
> How do you enable snappy and responsive UIs when the network is slow?
How do you do this with a traditional web site/app?
> How do you synchronize state between client/server?
At worst you do it just like a traditional web site/app
True, these are all problems that you deal with when building a single-page app, but the worst case scenario is that you end up with some of the same problems as traditional web sites. None of these are new problems that are introduced by the fact that you're building an SPA.
The new problems that SPAs introduce are things like history/back button breaking and how do you let users bookmark things or share deep links.
I think user expectations (and/or product/UX expectations) have changed along with the rise of the SPA.
In the days of server-rendered apps before SPA was a thing, the answer to all of those questions would've been some version of "You don't," followed by "Why would anyone expect a website to work offline, or be fast when the network is slow?"
But SPAs are in a sort of "uncanny valley" where they sorta look and feel like native apps and users expect them to behave like native apps but doing that is much more work than building a simple webapp.
There's a bit of cargo culting here too, I think. Not everything needs offline support and instant updates from Firebase. But $fancy_native_mobile_app has it, so our imitation SPA must too.
Also, the first rise of the Web was enabled by residential broadband adoption, so that most of your site’s users had a stable internet connection. Now that mobile devices are the most common, flaky networks are back to being a problem.
Service workers for caching content to be available offline (you can cache the most recent result from an API call, so you don't even need to change anything on the frontend).
localStorage for offline state: if your service worker reports the app is offline, then any changes of state go into localStorage and are queued for updating when the network is back online.
Any changes to the server-side state can be diffed against the localStorage changes, and you deal with it according to your priorities, i.e. is the user always right or does the server take priority?
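A minimal sketch of that "queue writes while offline, replay when back online" idea. Everything here (OfflineQueue, the change shape, memoryStorage) is invented for illustration; the storage is injected so the same logic runs against window.localStorage in a browser or a plain object in Node, and conflict resolution (server wins vs. client wins) is deliberately left to the flush caller.

```javascript
// Queue state changes in a localStorage-compatible store while offline.
class OfflineQueue {
  constructor(storage, key = 'pendingChanges') {
    this.storage = storage;
    this.key = key;
  }
  pending() {
    return JSON.parse(this.storage.getItem(this.key) || '[]');
  }
  enqueue(change) {
    const queue = this.pending();
    queue.push(change);
    this.storage.setItem(this.key, JSON.stringify(queue));
  }
  // Call when the service worker reports the network is back:
  // replay every queued change through `send`, then clear the queue.
  flush(send) {
    this.pending().forEach(send);
    this.storage.setItem(this.key, '[]');
  }
}

// In-memory stand-in for localStorage (same getItem/setItem surface):
const memoryStorage = (() => {
  const data = {};
  return {
    getItem: k => (k in data ? data[k] : null),
    setItem: (k, v) => { data[k] = String(v); }
  };
})();

const queue = new OfflineQueue(memoryStorage);
queue.enqueue({ type: 'rename', id: 7, name: 'New name' });
const sent = [];
queue.flush(change => sent.push(change));
```

In a real app `send` would be a fetch to your API, and the diffing-against-server step from the comment above happens before the replay.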
> Migrating from a desktop-based application to a web version? That desktop version was likely extremely responsive, and full page loads may feel like a step backward.
I'm working in this space of migrating existing desktop apps, and what I've seen is that most SPAs are so heavy that they end up feeling sluggish, and that, paradoxically, server-side apps can feel more responsive and snappy to end users.
Particularly if you consider a fast server-side app combined with perceived performance enhancements like Turbolinks. I know it didn't have the most popular start, but these days Turbolinks is rock solid and basically removes page reloads with a single line of JavaScript. It's an effortless improvement.
For me user expectations and whether you actually have the resources to produce a quality SPA are the biggest deciding factor.
I don't have a strong opinion on SPAs in the general case, but I find it worrisome that it seems that SPAs are used as the default starting point without considering if the gains actually outweigh the cost.
E.g. at my last job I built the initial version of the application alone, and although I'm pretty confident in my JavaScript skills, I opted for a traditional server-side app to keep complexity and cost of maintenance as low as possible. JavaScript was later added on top to make common interactions like sorting a table feel more responsive.
With most SPAs, actually, as a user I get less responsiveness due to overcomplicated UIs, memory leaks and, for instance while browsing on a train, bad handling of poor connections.
Facebook is a great example. It's almost impossible to have it opened for a long time in a few tabs simultaneously on a PC with 8 GB RAM. I often end up switching to m.facebook.com full page load version, cause it just works and it works fast.
I'm struggling with this (SPA vs no SPA), and think you are stating the obvious:
overall experience the end customer is going to need and expect.
data analysis, frequent order entry, or highly dynamic UX
The hard part is: how do you measure that? Obviously some web apps need a highly dynamic UX (a spreadsheet) and some don't (a blog); choosing between those two is easy.
But most web apps sit somewhere in the middle, I don't know the answer, but for example, how much will github users benefit from the experience of a SPA?
GitHub actually straddles the line a little bit. You'll note that when browsing around through folders in a repo on GitHub that you get the faux loading indicator along the top, and then it shows the contents you're navigating to. Same with switching between code and issues, for example. But switch to a different repo and you get a full page refresh.
For my company's product, we're much more on the side of a traditional server-side app, because users will send off a batch of information (their customers to open a line of communication with), and then wait for the responses to roll in. But the dashboard page where we show responses is a partial-SPA, similar to GitHub: new responses are loaded asynchronously, and you will soon be able to switch to different views of the data without a reload. It's a good balance for our application.
I realize it's not answering your question about how to measure it, but hopefully it provides some anecdotes.
One thing I learnt to love about SPAs is how they distribute responsibilities: obv you get 3 layers:
- your DB which stores your data
- your backend, which accesses your data, retrieves it safely, and delivers it without having to know anything about the client it serves.
- your frontend, which handles how/when it needs data, which data it needs, and how it caches it.
Separation is good, since you can test your API without needing your client, you can test your client code with mock data, you can build different clients (web, native apps, ..) over the same API (and even do not need to build an API if you're using things like graphql)
To be clear, SPAs do not distribute responsibilities any differently than a traditional 3-layer MVC web application. You still have:
- an infrastructure layer for managing data
- a domain layer for enforcing rules and encapsulating data access/management (completely independent of the view)
- (optional) an application/service layer between your domain and presentation to provide a common interface for convenience
- a presentation layer for handling view semantics
A service layer can be very useful if migrating from a traditional app to an SPA because it makes creating your API almost trivial.
There is nothing about creating a SPA that makes a 3-layer architecture any simpler or cleaner. In fact, these days it is quite the opposite. You have all of the above systems in your application PLUS another MVC (or MVVM) framework on the client. The entire point of the MVC architecture is to allow for the switching/replacement of the "V".
1000x this. People seem to ignore this fact, that a lot of times you will need an API anyway (multiple clients), so really adding in the SPA isn't any extra work at all.
1) Hard and easy are relative terms. It feels hard to you because it is foreign to you, not because it is intrinsically hard. If it seems hard, get comfortable with the new pattern, or create a pattern that is more familiar to you, but please don't tell me this is hard because you have to write more code or because there is one more thing you have to know and maintain. That is not hard, that is being laaaaazy.
2) The 2 points become irrelevant if you do an SPA using server rendering. To give an example using C# and MVC: I have 1 JavaScript function with 0, I repeat, 0 business logic. All my MVC methods return a partial view with a tag saying where it goes. The JavaScript function does an AJAX call to the controller and, on successful return, looks for the tag and does a $('#' + tagvalue).html(partialView). All the business logic is on the server, all the drawing is on the server, all the validation is on the server. The only UI validation logic is the same you would write in the non-SPA case. There is no data synchronization because the server manages the data. The browser is reduced to a pretty HTML document display, which is all it is.
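A rough sketch of the pattern just described, under my own assumptions: the "tag saying where it goes" is modeled here as a data-target attribute on the partial view's root element (an invented convention), and one generic function dispatches every response. Only the attribute extraction is shown as running code; the jQuery injection itself is browser-only and left as a comment.

```javascript
// Pull the target container id out of returned markup like
// '<div data-target="orders">...</div>'  ->  'orders'
function extractTarget(partialView) {
  const match = partialView.match(/data-target="([^"]+)"/);
  return match ? match[1] : null;
}

// The single client-side function (jQuery-style, browser only):
//   function refresh(url) {
//     $.get(url, function (partialView) {
//       $('#' + extractTarget(partialView)).html(partialView);
//     });
//   }

const partial = '<div data-target="orders"><table>...</table></div>';
const target = extractTarget(partial);
```

Because the server decides both the markup and its destination, the client stays a dumb dispatcher with no business logic, which is the whole point of the approach.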
1) instead of using browsers based on standards with hundreds of millions invested in developing them for things like routing, caching, history, etc, you are using ad-hoc amateur implementations of all those things that are full of bugs and unimplemented corner cases.
2) Instead of executing your code in a safe environment you control you are executing it in an unsafe environment on an unknown runtime that you have no control of.
3) Instead of processing stateless requests with large chunks of synchronous code backed by a database that imposes some sane limits on state manipulation you are maintaining all of your app state yourself, continually updating it asynchronously, in an insecure environment, with dropped requests, and layers of hacked together client/server data synchronization and caching.
I'll agree that SPAs are a bit of a departure for folks who have been building legacy ASP.NET or JSP / Java Request / Response apps.
There are some new things they must learn, so they may have a higher learning curve. Additionally, the technology is really unstable. The build tools, for example (webpack, babel, gulp, grunt), have changed drastically, and in breaking fashion, over the past 4-5 years.
Lots of legacy technology has stable tooling, and so a more "seasoned" programmer may be a bit put off by that. Maybe that's this guy's problem.
He should Green Eggs and Ham this mofo and just "try it" earnestly before writing a total dump of a blog post and sharing it on Hacker News.
Doing SPA commercially for years now, never needed any of those: webpack, babel, gulp, grunt, etc.
If the developer understands the technology that they are built upon, he will realize that there is no need for them. Surely they add nice things, but in my experience they add nothing that is worth the learning curve.
BTW, I have 25+ years developing software, so I kind of consider myself a "seasoned" programmer, and I can tell you, I am not put off by technology. If anything, every once in a while I get a good chuckle out of how the JavaScript world is reinventing what was done 20 or 30 years ago. And I don't know many people who become "seasoned" programmers and then get put off by what the new kids are doing. Actually, I don't know any at all. Most of the time the reaction is: Do you remember when we did something similar on that old xyz computer? Yeah! And we had to do a, b and c to adjust to the lack of memory space; now this library is wasting it (https://news.ycombinator.com/item?id=15472325)
Your thoughts on webpack/babel/grunt/gulp well echo my sentiment towards seasoned programmers.
I've only been at this for around 15+ years right now. I agree I am starting to see problems solved anew in JS frameworks that I've seen solved in the past.
That's part of the fun in what we do for a living though. I consider myself at least mildly seasoned, but still a ton to learn from those fresh to the field and those who have been around the block at least twice.
I understand quite well what webpack is doing, and find it very useful. Anytime I find myself thinking I'll make my own recipe for a build system with custom scripting, I seek a well-supported framework instead, you know, on the off chance someone else wants to understand and contribute to the build system.
I think it's a poor choice frankly to build 100% custom frameworks for building when you can use a widely adopted community choice. I was more pointing to the volatility of JS frameworks vs stable frameworks in Java like Spring.
As we age in our careers, it seems we should avoid condescending tones with more junior programmers, as they may have a unique solution to problems. I've seen some junior folks get quite offended when a non-producing "seasoned" programmer gives them the "I've been doing this for 20 years" schtick, then turns around to spend a few days on a PowerPoint instead of contributing to the team.
I hear you. Instead of thinking about a build system (custom or not), I normally begin by trying to keep the project simple enough that file copy is my build and deploy mechanism.
I don't mean to be rude, but you clearly aren't talking about the same kind of application. I have worked on a SPA before that used Knockout and jQuery with an ASP.NET MVC backend and Razor-rendered 'templates'. It was a much different beast than an Angular or React app.
I am observing that you are right, there is no definition globally accepted by the community on what an SPA is.
For me, a better definition is one that is independent of the technology used to implement it. An SPA is a way to structure an app; it is not a vendor feature or a tool.
While it is true that nowadays Angular et al. are the tools commonly used to create SPAs, the tool shouldn't have the power to change the definition, so your Knockout MVC app is still an SPA.
If the OP chose a complicated set of tools to develop an SPA, then that is his decision, and the difficulty lies in the tools chosen, not in the SPA type of app.
I don't know if folks much consider vanilla JS / JQuery apps Single Page Applications.
I think (but am not sure) that all SPAs are built these days with Angular, React, Ember, Elm, Vue, or some other framework I haven't paid attention to.
Usually tied into Restful webservices that don't do any templating server side unless it's via Isomorphic JS. Maybe I got it all wrong though.
The thing about build tools and frameworks is that once you have tools for managing complexity then the penalty for complexity goes down and complexity explodes.
The apps I have seen built with all of these "modern" tools are much more complex and difficult to maintain than the apps I was seeing 10 years ago that were architected and managed by hand.
That is why many of the people using these tools are switching toolsets every 12 months because they can't maintain what they are creating and have to throw it away.
My experience boils down to inconsistent mindsets on large teams too though.
You get a variety of folks in a room working on a complex business problem, some favor simple tools, some are constantly distracted by shiny new tools, you end up with crazy complexity at times in terms of the solution.
I think it's important to focus on a team and vision for managing the complexity and the rest should fall into place. It boils down to good engineering vision.
<sarcasm> I guess you must still be living in your parent's basement because you must not have many bills to pay </sarcasm>
LOL
Now seriously:
Quite the opposite, I always receive comments on the simplicity of my code.
Also, I code so the user has a pleasant work experience, and so the CPU executes the code faster; I don't pay much attention to the comfort of the next developer, but I am always willing to show and explain what I did and why.
I won't get into a pissing contest, but I've solved problems with, and managed, large and small teams. Folks don't like to maintain custom-built software solutions that don't use commonly accepted best-practice frameworks.
JQuery is widely accepted as an anti-pattern by huge swathes of the programming community.
You are right though, I've paid off most of my bills and do have a choice in what I work on anymore. I definitely don't live in a basement.
I agree about the pissing contest. Just to document the issue: the same way my sarcasm touched a nerve with you, the comment "Does anyone else even have a desire" can be read as belittling and condescending. But I'd rather laugh it off with sarcasm.
I have never been a follower of what the community says. Half of the time figureheads make bold statements based on an economic agenda more than a technical one. If jQuery is an anti-pattern... I don't know and I don't care. What I do know is that it is light enough and provides a simple compatibility layer across browsers. That is good enough for me.
Technically speaking, the only reason to avoid jQuery now is that direct DOM manipulation is more costly than letting a more efficient virtual DOM manage the lifecycle of UI updates on a screen.
jQuery may be a fast download, but jQuery applications can be quite costly to run, especially in a non-browser-refresh environment.
I'm sure your JQuery applications are quite efficient, but I am an advocate for choosing modern tools that developers want to work on when you inevitably move on from your projects/customers. I do also advocate for tools that are going to be around to stand the test of time, and in your favor JQuery is a ubiquitous library used all over the place, people do know it, and a customer of yours will have no issue finding someone who is willing to work on it.
By choice I avoid those projects, and I have worked with quite a few folks who do things the jQuery way even when we are using React or Angular.
Enjoyed the debate today, but my question was largely left unanswered. I was curious if you worked on a large team with your framework/JQuery or if it was a smaller deliverable managed by you and another person or maybe just you.
I have witnessed React and Angular apps with as little as one developer, but up to 30 in some cases all on the same application. Complex applications like Trading Systems and Document delivery with multiple connected users.
I'd wager it's tough to get all of that done without some documented tools and complexity, given the complexity of the business problems at hand.
JQuery is widely accepted as a productive and appropriate tool for many types of projects by huge swathes of the programming community.
I currently help maintain a React app that's approximately a one dev project. It's a beast though. 900 transitive npm dependencies. Huge, brittle build process that has broken in the past and prevented us from deploying critical changes for periods of time while we debugged it. Several long-standing bugs because it's such a hassle to get into. I'm a big fan of jQuery sprinkles in light of this experience.
What I understand by SPA (Single Page Application): no need to reload the full page every time, only to update the parts of the UI that have changed. It started with the GMail client and the ability to use AJAX. I like this definition because it is conceptual; it is not tied to any specific technique.
I am curious, in 5 years of doing SPA this way I have never hit a limit, what have you found?
Anything where you want low latency interactivity. For example, a graph where you are applying various filters. The more traditional “SPA” would fetch only the data and do local re-rendering on a canvas, SVG, etc.
Right, because you want to reduce network traffic by reducing the amount of data traveling, and it allows more frequent round-trip updates.
The times I got that kind of requirement, clients always jumped back to a WinForms/WPF UI client. So far I have not faced this problem in a web browser; sounds fun.
Can I ask where you work? I'm trying to get a better sense of who is still building native business apps and why. What leads you to build a WPF app vs. a web app?
I am an independent contractor; for NDA reasons I cannot tell you where, but here are some of the apps that I have developed recently:
- Image analysis for a massive amount of images. A system generates 60 images per second for hours. Each image needs to be statistically analyzed, and depending on the results, saved to a database for further processing.
- Underwater robot control software. The UI gets very complex as it has to process a couple of networked video feeds in real time, plus position information, and coordinate with other pieces of the system. It involves several custom network protocols used by other vendors.
- On the fly manipulation of mobile backups. There are certain types of data stored on mobile devices that need to be analyzed and secured beyond what the manufacturer provides.
- 3D data of cavern shapes. While the data could easily be processed in the browser, part of the requirements is to keep everything isolated to the machine where the software is running: sometimes to help with compliance, sometimes to help companies operate under different countries' specific laws, sometimes because legal restrictions mean there is no internet on site.
- Marketing data analysis. A couple of applications benefit from being desktop apps because they give an "Excel-like" experience that gives the client's customers confidence. By "Excel-like" I mean a desktop app with a single-file container for the data, using local CPU cycles, where 2 people can take the same data and manipulate it differently, disconnected from everything. In this project, another group has a web version of it, but it just does not gain user acceptance because of continuous performance problems compared with the desktop app.
I think your architecture is sound, and I think it's the easiest way to build a site.
But there are two aspects I think you're missing:
1. SPA means something different to everyone. I believe the most common definition is a site where the UI is rendered on the client and the server is just a bunch of API endpoints that return JSON. At least that's where the phrase started; however, it has grown to include some flavor of what you're doing (returning fragments from the server).
Some people think of SPAs as isomorphic or universal webapps, but those both leverage and suffer from the code sharing aspect.
2. Depending on the flavor of SPA, it's a different kind of hard, and I do think it is hard simply because of how drastically different each architecture is. Even sites that run on Node and share code can use anything from a traditional MVC, to using a Redux architecture, etc. And a new paradigm will always be "hard".
My overall point though, is that of all of the architectures, I do believe that rendering on the server and transporting fragments is by far both the cleanest and easiest to implement and rendering on the client will always suffer in complexity and/or performance.
But, the wealth of options does make everything a lot harder.
It started way before the GMail client. OWA was the first application to partially manipulate the UI on the client; GMail was the first decent app doing it in a modern way. Even AJAX was invented for OWA.
Your second option seems to suggest that the way to make building SPA's easy is to stop building SPA's and go back to building web apps the way we did in 2005?
1. To me, it is an SPA as long as you are not updating the whole page every time. The year of the tools has nothing to do with it.
2. I use the tools I use because they work and produce the proper and performant results. To be honest, I don't care for developer tool fashion statements, and so the year is irrelevant to me.
I need to get into server rendering more. I find myself doing double validation these days and server rendering could solve some of that. I validate client side, then again server side to make sure people aren't throwing garbage at my API.
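One common way to cut down on that double validation is to keep the rules in a single module that both the browser bundle and the Node server import, so they're written once and run twice. A minimal sketch, assuming a CommonJS setup; the `validateSignup` function and its field names are hypothetical, not from any particular framework:

```javascript
// Shared validation module: required by the client bundle for instant
// feedback, and by the server to reject garbage sent straight to the API.
// Field names ("email", "password") are illustrative examples.

function validateSignup(fields) {
  const errors = {};
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(fields.email || "")) {
    errors.email = "Please enter a valid email address.";
  }
  if ((fields.password || "").length < 8) {
    errors.password = "Password must be at least 8 characters.";
  }
  return { valid: Object.keys(errors).length === 0, errors };
}

module.exports = { validateSignup };
```

The server-side call is still mandatory, since the client can always be bypassed, but at least the rules no longer drift apart.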
This is a horrific pattern, and I know because I've done it in some half-assed jQuery/Razor/MVC projects in the past. As soon as things get more than the slightest bit complicated, this turns into a huge mess.
If you're building a SPA, and you need it to actually be a SPA, use a framework, don't try rolling your own mess.
Indeed, code reuse, testing, maintainability, etc are harder with the latter pattern. And with the latter you also end up using at least 3 languages in the same files. It's just really awful.
Yes, browser-rendering SPA frameworks are heavyweight, but they give an undeniably-better development experience. And that shows in the quality of the interfaces created through them, assuming your interface requirements are complex enough to justify it. I mean, I'm happy with Hacker News' interface and it's simple, but if I'm building the next Google Calendar, I am not using ajax and server-rendered templates.
Been watching this thread all day and I’m dumbfounded no one has spoken up for React with a modern router like found[0]. Once you learn to use these things, making an SPA is really nice, and you can easily introduce universal rendering which takes your SPA to the next level by rendering it on the server.
I find it strange that people talk like server-rendered pages with progressive enhancement a la 2013, aka Stack Overflow, aka GitHub, are easy to make and maintain. It’s 2018 and when you close an issue on GitHub the count doesn’t change until you refresh the page. It’s 2018 and when you go back on Stack Overflow the alerts you just dismissed are suddenly unread again. These are companies capitalized with hundreds of millions of dollars, yet they are incapable of making server rendering with progressive enhancement work. It’s not their fault; it’s the technology, it’s busted.
Yet for some reason people are lauding server rendering as effective and desirable and capable of producing rich UI that’s sane to build and maintain. It’s not. It’s unmanageable at any scale.
With React etc in 2018 all of these problems have been solved, and as I said above, the SPA is not even something to argue about anymore because you can render your app on the client or the server. This whole article is moot. All these problems have been solved and the solutions are free for anyone who has the interest and bandwidth to build products like it’s 2018.
> I’m dumbfounded no one has spoken up for React with a modern router like found.
You're dumbfounded that developers aren't powering their companies' multi-million dollar websites with tools that have a bus factor of roughly one[1][2]?
A couple of years ago I was handed a web app built by a team that had used tools with low bus factors. Many of these tools had been abandoned only months after the team started using them!
You’re right, of course. But that particular repo was one example. Yes, that repo is brand new, and it’s basically the product of one person’s work. On the other hand you have these multi-million-dollar companies: if their tech stacks were the end-all of stacks, wouldn’t they be able to accomplish what this lowly repo accomplishes? But they don’t. And why? Because this repo stands on the shoulders of many attempts and failures in routing with universal rendering, while GitHub and Stack Overflow are stuck in 2013, when people only rendered on the server, progressively enhanced, and back only kind of worked.
Anyway there’s also react-router, which is a project with probably 10k stars. I didn’t cite it because I think it’s not as good as “found”. But it works and will get you back functionality that works in client rendering and server rendering. So if you’re looking for bus factor, react-router has got that in 2018.
What do you mean by a "bus factor"? I'm assuming it's related to the number of contributors / contributions over time, but I've never heard the term before.
My takeaway from having worked on both traditional webapps and SPAs (2 with homegrown frameworks built on backbone.js, and 1 with ember.js): SPAs, being more complex, demand a higher standard of coding and code review to not have heisenbugs. I have seen a lot of effort wasted on making SPAs work when the customer would have been happier with a more reliable traditional webapp.
In one of these companies, SPA architecture was chosen because the backend was unacceptably slow. Instead of fixing that problem, we adopted more complexity by transitioning to SPA. When I left, the backend was still slow, and the SPA was breaking all the time. It felt like a waste of effort and I would never do an SPA by choice again.
This is why I'm a fan of having both - no need to limit yourself to one or the other
Data heavy pages with little or no interaction? Statically render it. Admin pages, terms of service, etc. Writing endpoints and serialization to move the data is a waste of time.
Sure, you might have to maintain two separate sets of minor components (header nav and such), but it's well worth it to lessen the development cost of low-impact pages. You can also get away with loading HTML partials from the backend and inserting them into the DOM whole, the perfect case for dangerouslySetInnerHTML or whatever; then you can reuse your nav and such.
I agree that SPAs are always harder. For me, there are two main reasons to consider an SPA.
1. It _can_ let you build a much nicer UI/UX. That said, it's probably easier to build an _okay_ UI/UX with a traditional webapp, and it's also easier to build a _terrible_ UI/UX with an SPA. But the upper limit for excellent UI/UX is higher with an SPA.
2. If you are planning to build a REST API for your system, then the best way to do this is to build a REST API from the start and dog food it with an SPA. Otherwise you end up doing a lot of the server side work twice, once for the HTML app and again for the REST API. It's better to bite the bullet and _just_ build the REST API server side.
The article had me confused until I realized this was 2013. Even then it is out of date.
Multi-page applications have had business logic on the client side as a technique to reduce perceived latency since at least 2005.
If by "Traditional" web apps you mean web apps from the mid to late 90's, sure it was done entirely with POST and no business logic at all client side.
At the very least, data validation has been done client side for as long as JavaScript has existed (and also server side as a second level of checks, if you care about data integrity and security).
SPAs did not appear overnight. They are a logical evolution of what has been happening for years.
> sure it was done entirely with POST and no business logic at all client side
Even if you don't have business logic on the client side, you still have a state: tab position, state of controls, etc. These are not part of the data you really care to store on the server, and yet they have to be sent there at each and every page reload, as they might be needed in order to build the response.
I have a simple rule: if the app requires real-time updates and push, then SPA it is. But you can also mix server-side rendering with SPAs, aka microservices; e.g. the "chat" can be its own component.
The argument for SPAs is that they perform better, but I don't find that to be the case with all the bloat on real-world "apps". Browsers are good at rendering static content, and a server-rendered site will perform well even on low-end mobile devices. With today's hardware the server can render even the most complex site in less than a millisecond, and with today's networks the latency, often as low as a few milliseconds, is imperceptible and even faster than clicking around in something using a JS framework.
...but much easier and more maintainable than a "server-side" webapp with a mess of jQuery, etc. layered on top to make the experience "interactive". Not to mention the ease of unit testing user-interface code in an SPA.
> If you need to restrict access to a field (e.g. social security number) such that only certain users can see it, you need the authorization check twice, once when writing the JSON and again when building the HTML.
I think the article may have some points worth considering, but in what world should security code for restricting access to display a social security number ever be in the browser? The browser should never get data that it would be dangerous for the user to have access to.
3. Slap on Turbolinks (because it works super easily with any back end framework)
4. Sprinkle in JS as needed
This results in very fast development, very fast page loads and no duplication of things like validations / schemas. The best part is you can use whatever you want on the back end too. I've followed this pattern successfully with Rails, Flask and Phoenix.
Certainly though SPAs are overprescribed and old fashioned template based sites work great for many projects.
But I think for many enterprisey platforms it's likely the opposite: front-end design logic is usually done by a different team than the back-end data provider, so it's a natural split that is more effective than a server-generated page. Not to mention most back-end devs suck at design.
An easier way to reason about the modern web, and port this knowledge from and to native platforms at a minimal cognitive cost, is to think about a web application as consuming from an API or database layer which already enforces authorization and provides authentication, instead of mixing all these things in a single request.
Once you do this, the idea of duplicated authorization goes away, as the only thing you are doing on the front-end is error representation (and focusing on the design).
Then, you pre-build, or run+cache this same application on the server side to pre-render pages for search engines, first page load, accessibility etc. so that you are actually delivering an application for the semantic Web.
Viewed from this angle, it does not really involve more steps, but it does create more separation of concern. This in turn allows more specialization, scaling, and is more secure.
You can apply this approach to build very simple pages (e.g. blog consuming data from Markdown files), which are not SPAs, but are very easy to turn into SPAs.
So it's not really that SPAs are this hard, it's that the "do everything in one request" thinking is still very pervasive and making it harder to reason about building modern websites in a simple manner.
Well, I think that many people are not aware that SPA != SPA. Actually, SPA only means that the site never does a full page reload; apart from that, very little is defined about SPAs. You can build them with fluid 60 fps interfaces or with server-side HTML rendering causing huge delays. You can add aria attributes or render everything within a big canvas. It's still an SPA.
So no wonder there is so much disagreement about SPAs.
A first step towards a more defined and reasonable term is PWA, as that term requires the use of some specific technologies (HTTPS, ServiceWorker, etc.). Nevertheless, it is still very broad. For example, what happens to a PWA when the server is not available anymore? Does it keep running forever? Not defined, so some do, others don't.
I really like well made PWAs, but in general most of them have a too strong server dependency (in my opinion). I would like to download a PWA on day X and install it on my PC 5 years later and it should still work (as other software does). As the old server might not be available anymore, I would also like to enter a new server in the settings.
The technology behind PWAs would allow such use-cases, it is just not that simple to use nowadays.
I don't want to beat up too hard on a five-year-old article. But in case anyone reading it now might not know better, I want to mention that a lot of the reasoning here is plain bogus.
You don't do complex authorization on the client. If your backend is serving SSNs to a user who isn't authorized to see them, you've already lost. Trying to hide them in the UI avails you nothing beyond a false sense of security.
Cache invalidation is trivial. HTTP provides, in entity tags, a very friendly mechanism for checking whether a resource is stale without incurring an expensive refetch unless one is needed. Use it. Resolving write conflicts is more complicated, sure - exactly as complicated as it is to do on the backend.
It's true that SPAs are differently complex. You're doing work on the client that'd traditionally be done on the server, and with different tooling and not all the same paradigms. But more complex? Not once you understand it.
Frankly, I think both reasons given for the supposed greater difficulty of building SPAs are wrong, and that SPAs have been successful exactly because they're much easier to develop.
Additional logic tier: exactly the opposite is true. An SPA allows you to get rid of (almost) all the logic on the server, leaving only a shallow layer of services. Traditional web applications always had the problem of a complex logic layer on the backend, plus the need to manage complicated interactions happening only client side (because even traditional web applications couldn't afford an entire page reload for simple things like reordering a table or displaying an error message). Part of the logic had to be replicated on both sides, with all the problems that creates.
Data synchronization: again, given that the entire state of the application resides only on the client and is rendered only by the client, data synchronization becomes trivial; it's just a matter of calling some backend service to store or retrieve the full state.
In short, the problem with traditional web apps is that they've always had to manage state on the client as well as on the server; and the client's state was lost at every page reload, or had to be sent to the server with some hack to be restored after the page reload. A nightmare.
When Tim Sweeney needed to make an engine, he didn't simply hardcode each of the tasks: Update position, update velocity, play animation...
Instead, he created a system that allowed those things to become implicit. They synchronize themselves across the network by default. There's no special logic required.
So when I look at SPAs, I see a similar sort of situation. Rather than hardcoding each of our endpoints, why don't we come up with a system where we can synchronize some variables from the server to the client? Then adding some new data is as simple as declaring a new variable. No need to create an endpoint for every single thing.
This is vague and handwavey, but there's an underlying idea here worth examining. It seems like we could leverage abstraction in a way that unifies these systems (as opposed to simply shoveling another layer of abstraction on top of the current pile).
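The "synchronize some variables" idea can be sketched as a tiny replicated store: declare a value once, and every subscriber (standing in here for a connected client) sees updates automatically, with no per-field endpoint. This is a toy illustration of the concept, not a real networking layer:

```javascript
// Toy replication layer: one declared state object, automatically
// pushed to every subscriber on change. Adding new synced data is
// just adding a key, not writing a new endpoint.

function createReplicatedStore(initial = {}) {
  const state = { ...initial };
  const subscribers = [];
  return {
    subscribe(fn) {
      subscribers.push(fn);
      fn({ ...state }); // send the current snapshot on "connect"
    },
    set(key, value) {
      state[key] = value;
      for (const fn of subscribers) fn({ ...state }); // replicate
    },
    get(key) {
      return state[key];
    },
  };
}
```

A real system would add transport, conflict handling, and interest management on top, which is where the hard parts live.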
As someone who's worked with Unreal Engine 3 in a professional capacity, I can tell you that there's a fair bit about those systems that is hard-coded.
They just look very nice from a 1000ft perspective. There are parts of UE(like the material editor) which are very modular but the networking stack is not one of them.
This is not a knock against UE3; more that things are more complex than they first look.
What you're describing is called "replication" in Unreal Engine. It helps tremendously with writing declarative code, and I've found it to be exceptionally powerful and predictable.
I love this architecture so much that I brought it into the world of React and Redux a couple years ago (see https://github.com/loggur/react-redux-provide and https://github.com/loggur/redux-replicate). Dan Abramov and the ReactTraining guys immediately shit on it for some odd reason (I probably did a horrible job of explaining it), saying it wouldn't work in the real world, but I (and my team) have been using it very successfully... in the real world.
With that said, I do need to update the docs and improve some internals (and remove some unnecessary stuff like query handling), but there's only so much time in a day. :)
I don't really want to contribute to the vagueness of the discussion, and one failed SPA project is all I can claim, but I'm sceptical this can be done. The synchronisation unit is the transaction: what subsets of data must be updated atomically, which business rules apply before the data can be committed, which chunks of data in the UI should be as up to date as possible when an editing form is presented to the user. This requires thought and specialized code (structure).
Not saying that every form must be coded ad-hoc, but I doubt that a well-engineered SPA can be written by anyone without solid domain experience by just grabbing the next framework.
I'm building an SPA with React and instead of hard-coding each and every form I have a few high level components that take some data and render my forms complete with validation and rules for transforming input back into the store.
The web server is configured to respond with the main `index.html` file for any path that is requested. So, using the browser history with JavaScript I'm able to show the user any URL and dynamically render the page for it. The only endpoints that I need to define are for the API server.
If I wanted to, I could make the client completely dynamic. Although I don't have time for that right now, it's not a far leap. So, I expect systems like this are coming.
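The catch-all setup described above boils down to one routing decision: API paths go to the JSON backend, and everything else falls through to `index.html` so the client-side router can render the URL. Sketched here as a pure function; the `/api` prefix is an assumption, not from the original comment:

```javascript
// Routing rule behind a history-API SPA. Anything under /api is
// handled by the backend; any other path gets the SPA shell so the
// client router can take over.

function resolveRequest(path) {
  if (path === "/api" || path.startsWith("/api/")) {
    return { handler: "api", path };
  }
  // A real server would check for static assets (JS, CSS, images)
  // here before falling back to the shell.
  return { handler: "static", file: "index.html" };
}
```

In nginx the same fallback is typically expressed as `try_files $uri /index.html;`.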
It's quite different in games, where what you want to sync to users is just the global state of everything at a hard-defined frequency (10 Hz, 20 Hz, whatever), with well-defined heuristics like planar distance from a character. That's not what you want or what you get in web apps.
I meant aimed specifically at SPAs. Firebase is nice, but it doesn't really help reduce code bloat on the front end. E.g. if you're making a react app, your instinct is to reach for redux + reducers, even though that means dozens or hundreds of lines of boilerplate code.
In my experience you can/should skip Redux if you use Firebase Firestore. One good reason is that if you try to use them both, you will end up writing a lot of "me too" actions and reducers that just wrap Firebase update events. It is really not necessary to always use Redux.
The answers to the first difficulty are called "Universal Rendering" and "Isomorphic Rendering". Universal Rendering will render the page at any URL server-side, then the client will render other pages from there. Isomorphic Rendering is a special case of Universal Rendering in Node.js apps where the same JavaScript code is used to render pages in the client and the server.
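In miniature, isomorphic rendering looks like this: one render function produces the markup as a string on the server for the first paint, and the same function re-renders on the client afterwards. The component and state shape below are hypothetical:

```javascript
// One render function shared by server and client.
function renderGreeting(state) {
  return `<h1>Hello, ${state.name}!</h1>`;
}

// Server side: embed the rendered string plus the state so the
// client can pick up exactly where the server left off.
function renderPage(state) {
  return `<div id="app">${renderGreeting(state)}</div>` +
         `<script>window.__STATE__ = ${JSON.stringify(state)}</script>`;
}

// Client side (in the browser), the same function re-renders:
//   document.getElementById("app").innerHTML =
//     renderGreeting(window.__STATE__);
```

Frameworks like React formalize this with `renderToString` on the server and hydration on the client, but the principle is the same.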
Anyway, the article is right. Of course presenting users the current state of things in real time and allowing them to mutate this state in real time will always be harder than issuing requests to query the current state and requests to mutate the current state. It'll always be harder because it's way more complex and powerful from a user perspective.
The article helpfully considers the issue of logic being duplicated across layers, and questions the choice of potentially unnecessary layers of business logic being in the architecture at all.
I wonder, though, how this changes in a modern world of web services like Amazon's, where your web host provides REST APIs to talk to your database (e.g. DynamoDB), and you can serve your SPA as static files out of an S3 bucket, not needing to worry about server-side code at all.
Taking out the server layer altogether may not work if you need a specific user auth model which your web services provider doesn't support, but this simplified serverless model has the advantage that attackers can't hack your server code if you don't have any server code.
The frameworks that are popular at the moment make it harder.
I put together a simple set of frameworks, engines and libraries that let me build real-time SPAs really quickly and easily. I've built a complex production app with it in a couple of weeks and it works really well. I tried to promote it a while ago and those who tried it seemed to really like it but it never became popular.
Promoting technical solutions these days is literally impossible if you're not a rock star developer.
I don't even care about promoting it anymore. It's my own secret toolkit now.
If people want to believe that SPAs are harder to build, that's fine. But I'm going to keep building them because for me with my tooling of choice it's much much easier.
Would you enumerate the libraries/frameworks/engines in your toolkit? I'm always interested in alternative approaches. Is it https://github.com/jondubois/nombo ?
The app I built has sign up, sign in, password reset, subscription-based payment processing, usage quota tracking, email verification, GitHub integration and supports different account permission levels and it can scale out indefinitely to pretty much any number of machines. I built it from scratch. Also it has fade in and fade out transitions when navigating between pages and all data throughout the app updates in real time... So for example if you do email verification in a different tab, your main dashboard will update in real time.
And it only receives real-time updates for pages that you're actually looking at, so it doesn't over-fetch.
This whole extra logic tier comparison doesn’t make much sense to me. The same logic exists on the server side anyways, you’re just moving it to the client. It’s not like making a site without substantial amounts of JS is really an alternative. You generally can’t just do everything with form submissions.
Your first point is dead on but your second point is way off and I would actually say the harder point is SEO. My clojurescript app is excellent at keeping front end and back end perfectly synchronized across many clients. However, my ability to generate pages that can be SEO'd is not as easy.
For WebRTC apps you really need an SPA; currently I use Vue 2. If you leave the page, you lose the session. There are other HTML5 APIs that are also much happier within an SPA, like WebMIDI and WebGL. But for text-based CRUD apps, Rails is still the best way to go. Or Express and EJS. If you're not using advanced APIs, Turbolinks is often a better way to make your app SPA-like without incurring SPA pain.
It's not tightly coupled to the database, it's just not on the browser. And because they aren't tightly coupled, you can still make the renderer the client of your REST/gRPC API just as you would the browser. Or it may be more practical to have your non-browser clients talking to your API while your renderer talks directly to the database.
Yes, that's my point. Decoupling it for the purpose of supporting multiple frontends may not be more practical than leaving it coupled and providing an API for other frontends.
"But if you're building an app that honestly is only going to serve 10 people, or 100---or 1000---do you really want to incur higher development and maintenance costs just for the sake of pushing the user-experience envelope?"
Yes, because modern tooling makes it pretty much as easy.