
I have to admit, I'm surprised they have no plans to migrate Firefox over to Servo. It seems like a waste, considering that desktops and laptops could both benefit from the increase in parallelism.


Baby steps! We're first trying to figure out how we can share components written in Rust - ideas people have had are URL parsing, media container parsing, image decoding, etc.
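The sharing model for pieces like these is typically a Rust library exposed over a C ABI so the C++ engine can call it. A minimal illustrative sketch (the function name and logic are hypothetical, not Servo's actual API):

```rust
// Hypothetical sketch of exposing a tiny Rust "component" to C/C++ callers
// via the C ABI -- the general pattern for sharing Rust code with Gecko.

#[no_mangle]
pub extern "C" fn is_url_scheme_char(byte: u8) -> bool {
    // URL scheme characters are ASCII letters, digits, '+', '-', or '.'
    byte.is_ascii_alphanumeric() || byte == b'+' || byte == b'-' || byte == b'.'
}

fn main() {
    assert!(is_url_scheme_char(b'h'));
    assert!(!is_url_scheme_char(b'/'));
    println!("ok");
}
```

On the C++ side this shows up as an ordinary `extern "C"` declaration, so the Rust piece can replace an existing component without callers changing.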

If (when!) Servo continues to execute well, plans for integrating more risky pieces will require more top-down planning than the bottom-up, grassroots work that is going on today. As much as I love Servo, even I wouldn't argue we should stop the world and move 200 developers off of Firefox onto working on it full-time for the next year. Such initiatives rarely go as planned.


The way I read it, they have no plans in 2015/2016.

Longer-term, we plan to incrementally replace components in Gecko with ones written in Rust and shared with Servo. We are still evaluating plans to ship Servo as a standalone product, and are focusing on the mobile and embedded spaces rather than a full desktop browser experience in the next two years.

https://github.com/servo/servo/wiki/Roadmap


Multi-threaded rendering is less of a win when a single desktop core is already pretty powerful. Servo is going to make a big difference on mobile platforms with multiple underpowered cores.


A 3 GHz Pentium 4 from 2005 may well beat a Core M from 2015 on certain single-threaded benchmarks. But if you're concerned about energy bills, emissions, fanless computing, or all-day battery life, then performance per watt matters even in x86-64 desktop environments.

From the article:

parallelism results in power savings. Multiple threads working in parallel on a page-rendering job allow the CPU to complete the entire page in the same amount of time while running at a lower frequency.
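The idea can be sketched with plain threads: split one rendering job across workers so each core has less to do in the same wall-clock window. This is an illustrative toy (summing fake per-node layout costs), not Servo code:

```rust
use std::thread;

// Toy sketch of fanning one "layout job" out across worker threads.
// Each thread handles a chunk of nodes; per-core work drops, which is
// what lets the CPU hit the same deadline at a lower frequency.
fn layout_cost_parallel(node_costs: &[u32], workers: usize) -> u64 {
    let chunk = node_costs.len().div_ceil(workers.max(1)).max(1);
    thread::scope(|s| {
        node_costs
            .chunks(chunk)
            .map(|c| s.spawn(move || c.iter().map(|&n| n as u64).sum::<u64>()))
            .collect::<Vec<_>>() // spawn all workers before joining any
            .into_iter()
            .map(|h| h.join().unwrap())
            .sum()
    })
}

fn main() {
    let costs = vec![3, 1, 4, 1, 5, 9, 2, 6];
    assert_eq!(layout_cost_parallel(&costs, 4), 31);
    println!("ok");
}
```

Real parallel layout is much harder than this, of course, since style and layout have data dependencies between nodes rather than being embarrassingly parallel.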


That's not true. Layout is still often slow (multi-hundred ms) on desktop. You can notice that.

Also powerful x86 CPUs are so good at shared-memory multithreading (cache coherency, large caches) that x86 is actually the best case for us (which is not to say we're bad on ARM, of course).


Multi-hundred ms is slow... But what's strange to me is that we have to jump through so many hoops and parallelize an entire browser engine to get to sub-100 ms layout. Could we not further improve the current engine instead?


That would kind of waste the opportunity to create a new product brand that's not tied to Firefox's current trajectory.

Since Mozilla doesn't have the ability to fund advertising campaigns the way Google and Microsoft can, they have to be more careful about momentum and not waste growth opportunities by spending them on an established product whose market share is consistently shrinking.



