
These layers were created for efficiency, not for their own sake; there’s no “DOM transaction API”, so every DOM mutation causes a reflow. Thus, you mutate a virtual DOM, render the resulting changes to a new subdocument, and then replace an existing DOM node with that new subdocument.


Incorrect. This idea was a result of some early React users misunderstanding the purpose of the virtual DOM, and has unfortunately been thoughtlessly repeated ever since.

In reality, browser engineers are not that stupid. Mutating the DOM will queue a layout operation, but it will not actually occur until the current JS task has finished executing. Within a JS task, such as an event handler, an XHR completion, or a setTimeout, you can mutate the DOM as many times as you like, and it will only result in a single layout pass.
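The coalescing described above can be sketched with a toy model. To be clear, the `mutateDOM` helper and the microtask scheduling here are a simplification for illustration, not how browser internals actually work:

```javascript
// Toy model: mutations only mark the document dirty; a single layout pass
// runs after the current JS task (approximated here with a microtask),
// no matter how many mutations happened during the task.
let layoutCount = 0;
let layoutQueued = false;

function mutateDOM(description) {
  // A real mutation would dirty the DOM tree here.
  if (!layoutQueued) {
    layoutQueued = true;
    queueMicrotask(() => {   // stands in for "after the current task"
      layoutQueued = false;
      layoutCount++;         // one layout pass for the whole batch
    });
  }
}

// Three mutations inside one "task"...
mutateDOM('append <li>');
mutateDOM('set className');
mutateDOM('change textContent');
// ...still produce only a single layout pass once the task ends.
```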

(The exception is if you try to read back some measurement from the DOM, such as an element's bounding box, after mutating it. In this case, the browser does have to block while it performs layout, but that is not something that can be solved with a virtual DOM).

So, what is the purpose of the virtual DOM? It was invented to allow React to provide the illusion of full re-rendering. React's authors wanted to provide an experience similar to that found on the server, where an entire HTML page is re-rendered from scratch for every load. In that way, there is never any possibility of part of the HTML becoming stale, as it all gets recreated from scratch each time.

However, the browser DOM is not designed to be blown away and recreated from scratch all the time. Nodes are expensive objects, spanning the JS and C++ worlds. Recreating the whole DOM tree each time any part of it needed updating would be too slow. So, instead, they created the virtual DOM as an intermediate data structure. React renders the virtual DOM. The virtual DOM is diffed against its previous state, and then the changes are applied to the actual DOM tree. In that way, every component's render() method can be executed, but only those parts of the DOM that have actually changed will be updated.

It's a nifty optimisation, but it's not about avoiding reflow, it's just another method of dirty checking, similar to that done in other frameworks like Angular or Ember. It's just that React chooses to diff the data structure produced by render(), rather than diffing the model data that is later used for rendering.
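The diff-the-render-output idea can be sketched in a few lines. This is a toy diff over plain objects, not React's actual reconciliation algorithm:

```javascript
// Virtual nodes are cheap plain objects; diffing two trees yields only the
// patches for parts that actually changed.
function h(tag, props, ...children) {
  return { tag, props: props || {}, children };
}

function diff(oldNode, newNode, path = []) {
  if (oldNode === undefined) return [{ op: 'create', path, node: newNode }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    return oldNode === newNode ? [] : [{ op: 'replace', path, node: newNode }];
  }
  if (oldNode.tag !== newNode.tag) return [{ op: 'replace', path, node: newNode }];
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], path.concat(i)));
  }
  return patches;
}

const before = h('ul', null, h('li', null, 'a'), h('li', null, 'b'));
const after  = h('ul', null, h('li', null, 'a'), h('li', null, 'c'));
console.log(diff(before, after));
// Only the second <li>'s text yields a patch; the unchanged <li> is skipped.
```

The key point is that every render produces the full tree, but only the patches touch the real DOM.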


Please read this blog post titled "How to win in Web Framework Benchmarks" https://medium.com/@localvoid/how-to-win-in-web-framework-be...

It goes into some detail about how the different approaches in different frameworks work, and why. Moreover, it shows how you basically need to re-implement a Virtual DOM and other tricks in vanilla JS code to approach the same speed.

Yes, browser engineers are not that stupid. But they don't batch operations as efficiently as a proper Virtual DOM implementation would (including efficiently handling event listeners, looking for clues like `keys` on repeating DOM elements etc. etc.).


That article doesn't quite support what you're claiming. The Vanilla JS implementation uses optimised DOM mutation techniques that are inspired by how the Virtual DOM libraries do things, but it doesn't re-implement a Virtual DOM. Yet it's faster than any of the Virtual DOM implementations it's benchmarked against.

So the takeaway is that DOM mutation can be slower or faster depending on the techniques you use. Using a Virtual DOM library can help you improve performance, because these techniques are often baked in, but they don't intrinsically result from using a Virtual DOM: You could write a V-DOM library that didn't use them, and was slow, and you could write, for example, a static template library that did, and was fast.

In theory, a compiled, static template approach should be faster than V-DOM (or anything other than a set of totally bespoke and optimised vanilla JS functions for each app operation). Because while a compiled template does not have the Turing-complete flexibility of something like JSX, it has the advantage of knowing in advance exactly which parts of the DOM can change and how, which eliminates the need to build and diff the Virtual DOM tree.
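To make the contrast concrete, here is a toy sketch of what a template compiler might emit for something like `<h1>{title}</h1><span>{count}</span>`. The `dom` object and `writes` counter stand in for real DOM nodes; this is an assumed simplification, not any particular library's output:

```javascript
// A "compiled" update function: it knows statically which bindings exist,
// so it checks and writes only those, with no tree to build or diff.
const dom = { title: null, count: null };
let writes = 0;
let prev = {};

function update(state) {
  if (state.title !== prev.title) { dom.title = state.title; writes++; } // binding 1
  if (state.count !== prev.count) { dom.count = state.count; writes++; } // binding 2
  prev = state;
}

update({ title: 'Hello', count: 1 }); // both bindings written
update({ title: 'Hello', count: 2 }); // only the count binding is written
console.log(writes); // 3
```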


> In theory, a compiled, static template approach should be faster than V-DOM

Maybe faster in basic microbenchmarks. As soon as you start optimizing a UI library for complex applications, there are many other optimization goals you should focus on, like the size of the generated code per component type, the number of different code paths (increasing the probability that the executed code is JITed), etc.

Virtual DOM solves many issues; it is not just diffing. It is also a lightweight component model, the possibility to add lightweight synthetic events, a single code path for creating/updating/destroying DOM nodes, the simplest algo that handles complex tree transformations, etc.


> Moreover, it shows how you basically need to re-implement Virtual DOM and other tricks in vanilla JS code to approach the same speed.

Surplus [1] is the fastest in most benchmarks, and it doesn't use a virtual DOM.

[1] https://github.com/adamhaile/surplus


A couple of small changes to the Surplus benchmark implementation[1], and it is not the fastest anymore[2], even though the benchmark is super biased towards fine-grained direct DOM manipulation libraries like Surplus (ratio of data bindings per DOM element ~0.5, number of DOM elements 8000-80000).

1. https://github.com/localvoid/js-framework-benchmark/commit/1...

2. http://rawgit.com/localvoid/js-framework-benchmark/sandbox/w...


I'm not sure what table you're looking at, but at those links Surplus is still the fastest, behind only vanilla JS.


https://i.imgur.com/MG3eGTM.png

Inferno here is also slightly patched[1]: instead of stateless components, it is using stateful components, to demonstrate how quickly performance can degrade when you start using high-level abstractions in a library that focuses only on low-level primitives.

And if you really want to understand the fundamental flaw in libraries with fine-grained direct data bindings, try to reimplement these[2] 70 lines of React code with such a library.

1. https://github.com/localvoid/js-framework-benchmark/commit/2...

2. https://github.com/localvoid/uibench-react/blob/master/js/fc...


Ah, I forgot to check Ivi. Still, Surplus, Ivi and vanilla JS are the top 3. Clearly the virtual DOM is difficult to optimize. Surplus itself is also largely unoptimized, because it didn't need to be at the time. I had a discussion with the author about incremental reduce optimizations in the S.js issues section.

Anyway, I have nothing else to say on the matter. Clearly anything with only 10% overhead over vanilla JS is definitely fast enough, and Inferno is there now if you want something React-like.

As an aside, do you have any experience with Ivi? I'm looking to learn something else and wondering if I should dig into Web Components or something like Ivi. Web component performance was terrible last I checked.


> Clearly anything with only 10% overhead over vanilla JS is definitely fast enough

And now the main question :) If virtual DOM is competitive in a benchmark that is super biased towards direct data binding libraries, what is the point of using direct data binding solutions when they won't be able to handle even basic use cases that involve client-server communication where the server sends data snapshots? There won't be any information about data changes, and you'll end up reimplementing a tree diffing algo so that you can apply it to the data.


> what is the point of using direct data binding solutions when they won't be able to handle even basic use cases that involve client-server communications when server sends data snapshot.

I'm not sure what you mean. Surplus is built on S.js, in which all data is lifted into reactive expressions. S.js already handles that for you but at the data model level, where it arguably should be, not at the UI model level.


> S.js already handles that for you but at the data model level

It only works as long as it is able to track changes; many client-server applications don't send you a list of changes to apply to your data, they just send you data snapshots. With a virtual DOM library I'll just update my data and rerender everything that depends on it; with direct data bindings, good luck figuring out a solution to such a simple problem.


> With virtual dom library I'll just update my data and rerender everything that depends on this data

So you have to visit every data node, and then also visit every changed UI node. Whereas if you have deltas, you only visit changed data nodes and then changed UI nodes.

Re: data snapshots, it's easy to design your own service to use a delta protocol. But even when you can't, you can separate the code used to construct your reactive objects from the code that initializes them. This is just basic function abstraction, and it doesn't really add any work. From the example on the S.js site:

    const                                // 
        a = S.data(1),                   //     a() |   1     3     3     5
        b = S.data(2),                   //     b() |   2     2     4     6
        c = S(() => a() + b()),          //     c() |   3     5     7    11
        d = S(() => c() * a()); // t0    //     d() |   3    15    21    55
    a(3);                       // t1    //         +------------------------>
    b(4);                       // t2    //            t0    t1    t2    t3
    S.freeze(() => {                     //    
        a(5);                            //    
        b(6);                            //    
    });                         // t3    //
Now becomes (quick and dirty to convey the idea):

    const
        a = S.data(1),
        b = S.data(2),
        c = S(() => a() + b()),
        d = S(() => c() * a());

    function update(aval, bval) {
        a(aval);
        b(bval);
    }

    update(3, 4);
    S.freeze(() => update(5, 6));
S.js will detect whether the value you're providing is actually different, and will only propagate what has changed. Roughly the same number of lines of code, just more reusable.
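That "only propagate what has changed" behaviour can be sketched with a toy signal. This is an illustration of value-equality short-circuiting, not S.js itself:

```javascript
// A minimal read/write signal: setting it to its current value
// propagates nothing to subscribers.
function signal(value) {
  const subscribers = [];
  function s(next) {
    if (arguments.length === 0) return value; // read
    if (next === value) return;               // unchanged: no propagation
    value = next;
    subscribers.forEach(fn => fn());
  }
  s.subscribe = fn => subscribers.push(fn);
  return s;
}

let recomputes = 0;
const a = signal(1);
a.subscribe(() => recomputes++);

a(2); // changed: propagates
a(2); // unchanged: skipped
console.log(recomputes); // 1
```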


Sorry that I can't clearly explain this problem to you; you obviously don't understand what I am talking about. Maybe you can try to think about ordered lists and complex data transformations, not just basic values.

Or maybe I can just ask you one more time to reimplement these[1] 70 lines of React code with Surplus, but you will probably just ignore it :)

1. https://github.com/localvoid/uibench-react/blob/master/js/fc...


I've never worked with React, so that code is just noise to me right now.

In any case, there's no inherent difference between basic values and ordered lists and data transforms. The latter two are just recursive application of operations on basic values.


I finally figured out how to do it with Surplus :) Its documentation is super confusing: first it says there are no lifecycle hooks, but then I see lifecycle hooks[1]; then it says there is no diffing, and then I see an inefficient diffing algo[2].

It won't be so hard to solve the use cases I am talking about. Just create an index (HashMap), implement a simple diffing algo that updates the hashmap index and doesn't care about item positions, then from the ordered list generate a new list with values from the hashmap index, and then apply this inefficient diff algo that uses object identity to track objects[2].

1. https://github.com/ismail-codar/surplus-material/blob/3dce38...

2. https://github.com/adamhaile/S-array/blob/1046fca3032691d4ef...
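The snapshot-merging part of the approach described above might look roughly like this. The `makeStore` helper is hypothetical, not Surplus or S-array API; it only shows the index-and-preserve-identity step, leaving the position-aware diff to the library:

```javascript
// Index items by id, merge each fresh snapshot into the index so unchanged
// items keep their object identity, and rebuild the ordered list from it.
// Identity-based diffing downstream can then track moves cheaply.
function makeStore() {
  const index = new Map(); // id -> item (stable identity)
  return function applySnapshot(snapshot) {
    const seen = new Set();
    const out = snapshot.map(item => {
      seen.add(item.id);
      const existing = index.get(item.id);
      if (existing) {
        Object.assign(existing, item); // update in place, identity preserved
        return existing;
      }
      index.set(item.id, item);
      return item;
    });
    // Drop items missing from the new snapshot.
    for (const id of index.keys()) if (!seen.has(id)) index.delete(id);
    return out;
  };
}

const apply = makeStore();
const first  = apply([{ id: 1, label: 'a' }, { id: 2, label: 'b' }]);
const second = apply([{ id: 2, label: 'b!' }, { id: 1, label: 'a' }]);
console.log(second[1] === first[0]); // true: item 1 kept its identity
```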


When you don't have any information about how ordered lists were transformed (data snapshots), there is a huge difference between basic values, where you can just assign the latest value, and ordered lists, where you need to figure out how to rearrange items in the observable array so that it will be able to track changes and rearrange DOM nodes.


It seems odd to characterize a declarative approach to rendering as an optimization.

The draw for me is that I don't need to manage the state of a bunch of DOM objects anymore. I wouldn't even mind if React were slower; the performance isn't really the point.

I don't and will never want web components for this reason.


You can use a vdom in web components. This article, which you didn't read, talks about using one.


Virtual DOM isn't what makes things declarative. You can solve the same problem by diffing the model instead of the render output; the important part is that you have a comprehensive way to tell which parts of the view need to change.


> It's a nifty optimisation, but it's not about avoiding reflow, it's just another method of dirty checking, similar to that done in other frameworks like Angular or Ember.

There's also a lot of "implicit state" not captured in such a virtual DOM, like the user's current selection, the scroll position, form values, and so on. If these are blown away, it creates a really poor experience. The virtual DOM isn't just an optimization, it is actually a correctness fix.


> there’s no “DOM transaction API”, so every DOM mutation causes a reflow

Perhaps there should be. Has there been any discussion on standardizing such a thing?



