This is mainly because Chrome allocates a process for each page whereas FF only allocates a thread. The important factor is that threads share an address space, while processes each get their own and thus consume more memory. Separate processes are arguably more secure, and if one process crashes it does not take down the whole browser, but all this comes at a cost. That cost is very apparent with many tabs open.
The primary security gain of sandboxing is making it harder for web content to hijack your computer by exploiting security bugs. Many common classes of bugs can be exploited for remote code execution in browsers because browsers are written in memory-unsafe languages (C/C++).
Both Chrome and Firefox have multi-process sandboxes now; Firefox is a little behind and plans to enable multiple content processes next year.
While Chrome's is more granular, it does share processes between web pages from the same domain, so it won't start a new one for every tab.
They're shared among site instances, not pages from the same domain. It has to do with web-standard semantics. Google has been working on https://www.chromium.org/developers/design-documents/site-is... for quite some time; it will probably ship in 2017. That's the next step beyond --process-per-site-instance, after --isolate-extensions.
Firefox is more than just a little bit behind when it comes to sandboxing, though... there's a LOT more to sandboxing than splitting up processes. Splitting alone doesn't result in any meaningful isolation without a lot of further work, and improving sandboxing is a major undertaking that Chrome has been working on for years.
But the OS is committing to supplying memory for any writable pages you duplicated, because after all you might write to them. The OS doesn't know which pages you won't duplicate.
Unless you set the vm.overcommit_memory sysctl to 1 (= never refuse an allocation), duplicated writable pages are going to count against available memory, even while all the copies are still identical.