Note that the geostationary measurement was almost certainly performed on a contended (TDMA), consumer-grade link oversubscribed 32:1 or worse.
Actual 1:1 dedicated geostationary, which is very expensive in $/Mbps, has a fixed, flat 492 to 495 ms RTT, plus or minus a tiny bit either way depending on the modem's encode/decode FEC type.
Consumer-grade geostationary could be anywhere from 495 ms in the middle of the night local time to 1350 ms or worse.
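For a rough sanity check of that fixed figure, here is a back-of-the-envelope Python calculation; the slant range and the zero processing delay are assumptions for illustration, not measured values:

    # GEO round trip = four space segments (user -> satellite -> teleport for the
    # request, then the same in reverse for the response), plus modem/FEC processing.
    C_KM_S = 299_792.458        # speed of light in vacuum, km/s
    GEO_ALTITUDE_KM = 35_786    # geostationary altitude above the equator
    SLANT_RANGE_KM = 37_000     # assumed slant range for a mid-latitude terminal

    def geo_rtt_ms(leg_km: float, processing_ms: float = 0.0) -> float:
        """Propagation delay for four space legs plus assumed processing."""
        return 4 * leg_km / C_KM_S * 1000 + processing_ms

    print(f"theoretical floor (sub-satellite point): {geo_rtt_ms(GEO_ALTITUDE_KM):.0f} ms")  # ~477 ms
    print(f"assumed slant range:                     {geo_rtt_ms(SLANT_RANGE_KM):.0f} ms")   # ~494 ms

Modem framing and FEC add the last few milliseconds on top of the propagation floor, which lands in the quoted 492 to 495 ms band.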
Re: the figure for terrestrial fiber service, I'm curious how the presumed residential last-mile "fiber" link in Geoff's example, which is not real gigabit service, would compare to one of the symmetric gigabit last-mile operators that exist in some cities, where you can see actual 980 x 980 Mbps speed test results from fast.com or speedtest.net in a browser.
I'm always very suspicious of anything that says it's fiber but is limited to something like 25 Mbps up: either it's a totally artificial limit, or in reality it's a VDSL2 link, or "fiber" delivered over DOCSIS 3 copper cable with limited upstream RF channel allocation, etc.
> Re: the figure for terrestrial fiber service, I'm curious how the presumed residential last-mile "fiber" link in Geoff's example, which is not real gigabit service, would compare to one of the symmetric gigabit last-mile operators that exist in some cities, where you can see actual 980 x 980 Mbps speed test results from fast.com or speedtest.net in a browser.
Well, if you pay for gigabit over GPON it means you have at worst a 2:1 split, which gives you 1.2/1.2 Gbps. Even assuming they're still using an MPoA/ATM transfer layer like they did on DSL (keep in mind these are ITU standards; GPON is not Ethernet. There are fiber networks that are Ethernet, but those are AONs by definition, with no splitters possible, and they use 1000BASE-BX10), that doesn't have nearly enough overhead to reduce 1.2 Gbps to below 1 Gbps.
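To put a rough number on that overhead argument: the split ratio is the one from the comment above, and the overhead percentages are illustrative assumptions rather than measured GPON figures:

    # ITU-T G.984 GPON downstream line rate is 2.48832 Gbps; with an assumed 2:1
    # split each subscriber's share is ~1.244 Gbps before encapsulation overhead.
    GPON_DOWNSTREAM_GBPS = 2.48832
    SPLIT = 2
    per_user_gbps = GPON_DOWNSTREAM_GBPS / SPLIT

    for overhead in (0.03, 0.08, 0.15):        # assumed framing/encapsulation overheads
        usable = per_user_gbps * (1 - overhead)
        print(f"{overhead:.0%} overhead -> ~{usable:.2f} Gbps usable")
    # Even at an assumed 15% overhead the per-user share stays above 1 Gbps,
    # so the encapsulation layer alone can't explain a sub-gigabit result.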
The exact likelihood of being able to max out your LAN-side 1GbE interface to a home GPON terminal also depends on factors you can't know unless you are the ISP, such as the usage patterns and traffic volumes of your neighbours on the same port. It could be wildly different if you happen to be in a condo with someone seeding popular torrents versus a neighborhood of mostly retirees, for instance.
Not sure what you mean by "at worst a 2:1 split", since a GPON last mile can be implemented at the physical fiber level in many configurations, such as an 8:1, 16:1, or 32:1 split. Your ISP isn't likely to share the optical link budget and split ratio of your segment with you.
One interesting upcoming latency twist to this will be when the Starlink inter-satellite optical links go online for the whole network. Version 0.9 is already used (and required) for polar, and new batches are all launching with them, but I don't think they've hit critical mass yet to bring it up. Once they do, though, that will be a significant shift for anyone where intercontinental servers form a significant part of their usage. The speed of light in conventional fiber is only about 70% of c, and of course for the vast majority of people the actual path their packets take through the network is very far from the ideal great-circle path between two points on the globe (i.e., they first have to travel to the nearest hub and then to the nearest subsea link, which in some cases can add massive travel distance).
But within the Starlink network signals will travel at essentially 100% of c, and as the constellation approaches design capacity the paths will get closer to ideal too (at least to the nearest ground station). At long enough range the ~40% speed advantage alone will make up for the orbital RTT penalty, even before path savings, which means Starlink will be able to offer much lower latency than fiber. I think it'll be the first time, though, that we see a weird split where your local connection speed is no longer the sole deciding factor and you can actually see a radical latency difference between local and very long-range traffic for two different WAN types.
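As a rough illustration of where that crossover could land, here is a hedged sketch; the fiber route factor, the LEO route factor, and the orbital altitude are all assumptions, and ground-station-to-server latency is ignored:

    # Compare RTT over a great-circle distance: fiber at ~0.68c along a detoured
    # route vs. a hypothetical dense-ISL path at ~c plus one up and one down hop.
    C_KM_S = 299_792.458
    FIBER_VELOCITY_FACTOR = 0.68    # light in fiber is roughly 2/3 c
    FIBER_ROUTE_FACTOR = 1.4        # assumed detour vs. great circle (hubs, cable landings)
    LEO_ALTITUDE_KM = 550
    LEO_ROUTE_FACTOR = 1.1          # assumed near-great-circle routing in a dense mesh

    def fiber_rtt_ms(gc_km: float) -> float:
        return 2 * gc_km * FIBER_ROUTE_FACTOR / (C_KM_S * FIBER_VELOCITY_FACTOR) * 1000

    def isl_rtt_ms(gc_km: float) -> float:
        one_way_km = gc_km * LEO_ROUTE_FACTOR + 2 * LEO_ALTITUDE_KM
        return 2 * one_way_km / C_KM_S * 1000

    for gc_km in (1_000, 5_000, 12_000):
        print(f"{gc_km:>6} km: fiber ~{fiber_rtt_ms(gc_km):5.1f} ms, ISL ~{isl_rtt_ms(gc_km):5.1f} ms")

With these assumptions the ISL path loses slightly at regional distances and wins well before transoceanic ones, which is the kind of split between local and long-range traffic described above.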
> Once they do, though, that will be a significant shift for anyone where intercontinental servers form a significant part of their usage.
I have strong doubts this capacity will be used for random user traffic. It's worth much more to use it to serve oceans, poles, islands, and areas where ground stations can't be built (yet). Other capacity could easily be sold to HFT firms, etc.
I'm not sure I buy this. There are a number of aspects you haven't considered here. For the inter-satellite links, every hop will include an optical/electrical/optical conversion with FEC overheads, etc. There is also the question of the nearest ground station: it's not clear how many ground stations there are or how far they are from the server you want to connect to (especially considering that traffic might need to go to a backup station due to weather). It's also still unclear to me how much latency the routing will add; those satellites move pretty fast, so during a connection you will likely have to make quite a few handovers, and I doubt you can hold an optimal path the whole time (my suspicion is that we will observe the same thing as in the article: somewhat higher latency with significant variation). That said, they are clearly better than GEO, but compared to fibre they will always be niche.
It will be very interesting to see how they manage queuing across the network of ISLs. The latency benefits can be real, but only if they don't allow queues to build in the satellites themselves. It's not too hard if you run the network at low utilization, but that would make it rather expensive. Running such a dynamic and meshy wide-area network at high utilization without significant queuing in the nodes is currently an unsolved problem in the networking research community. I do think it can be done (I have my own ideas how), but it's definitely not something that current networking algorithms can handle.
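A toy queueing-theory sketch of why "run it at low utilization" works; the link rate and packet size are assumed values, and real ISL scheduling is far more complex than a single M/M/1 queue:

    # M/M/1 mean time in system is 1 / (mu - lambda): negligible at low
    # utilization, exploding as the link approaches saturation.
    LINK_GBPS = 10.0
    PACKET_BITS = 1500 * 8
    service_rate_pps = LINK_GBPS * 1e9 / PACKET_BITS      # packets per second

    for utilization in (0.3, 0.7, 0.9, 0.99):
        arrival_rate_pps = utilization * service_rate_pps
        delay_us = 1e6 / (service_rate_pps - arrival_rate_pps)
        print(f"utilization {utilization:.0%}: mean per-hop delay ~{delay_us:.1f} us")

Multiply the per-hop figure by the number of ISL hops on a long path and the queuing delay can eat into the propagation advantage quickly once utilization climbs.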
This is also true of links on the ground: you can't get very far without at least running DCM or other active optical components, which add time as well. How it factors between the two really comes down to exactly how it's implemented, particularly on the Starlink side. There's a lot of "could" in terms of getting lower latency than ground links, but that doesn't mean it's actually the metric they'll want to chase with their system.
Don't forget that Starlink satellites have a design life of 5 to 7 years, so some of the earlier ones will be getting close to half of this already. All satellites launched in recent years will have a well-documented plan for their operational life including the usual deorbiting process.
I once downloaded a film over torrent via Starlink as a performance test. I saw up to 5 MB/s in the torrent client, which is an impressive speed to have in the woods. Sadly, Starlink is not yet suitable for audio or video calls because of frequent pauses.
While I admittedly am not using Starlink full time yet (my time is currently split between a city and a farm about 60 miles away), I have had virtually no issues with voice-only or video calls. I did a two-week stretch out here a month ago, and none of my teammates had any complaints about my video feed either.
Yesterday was the very first time I have seen any issues at all; during a pretty ugly freak wind/snow storm things did get a little spotty off and on for an hour, but the connection remained flawless throughout the rest of it.
Could be a regional thing too I suppose? I'm around 51N 111W and generally speaking it seems like I have really good coverage and maybe a relatively empty cell?
I worked on Google Fiber and one of the things I did was write a pure-JS speed test. At the time, speedtest.net still used Flash. Why did we need this? Installers used Chromebooks to verify an installation so we wanted to be able to tell if the install was successful. That means maxing out the connection (~940Mbps for a gigabit connection). This speed test is still up [1].
Actually figuring out the max speed for a connection is a surprisingly hard problem. Here are some of the things I found:
1. Latency is absolutely everything. With sub-2ms latency I could get 8.5 Gbps downloads in a browser in JS over 10GbE on a MacBook Pro. Bump that up to 100ms and that plummets (see the bandwidth-delay sketch after this list). I forget the exact numbers but this has real-world consequences. Australia, for example, rolled out its ridiculous NBN network with a max speed of 100Mbps. Well, Australia has a built-in latency of 150-200ms to the US just by distance, and the max effective download speed would be a mere fraction of that;
2. Larger blobs are better for overall throughput but, depending on your device, this may blow up your browser. Unfortunately for the Internet you're never really going to reliably get an MTU >1500 unless you control every node on the network;
3. This sort of traffic exposed a lot of weird browser bugs, even with Chrome. For example, Chrome could get into a state where, despite all my efforts, the temporary traffic would get cached and fill up your /tmp partition on Linux, blowing up with weird errors that don't really give you any clue that that's the problem, and only restarting Chrome would solve the issue. I could never figure out why. Not sure if it's still an issue;
4. The author I guess was talking about Linux defaults, but there are a lot of kernel parameters that affect this (e.g. RPS [2] is absolutely essential for high-throughput TCP beyond a certain point);
5. BBR was in development at the time (ironically I was next to that team at the time for a few months) so I can't really speak to how it changes things. I was doing this development back in 2016-2017;
6. Among people who knew more about this than me, the consensus seemed to be that BSD's TCP stack was superior to Linux's. Anecdotally this is backed up by real-world examples like Facebook having extreme difficulty moving WhatsApp away from FreeBSD to Linux. That took many years apparently; and
7. I agree with the author here on the impact of packet loss. Its effect on throughput can be devastating and (again, pre-BBR) the recovery time back to maximum throughput could be really long.
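Regarding point 1, a single TCP stream can carry at most its window divided by the RTT, which is why latency dominates. A minimal sketch of that bound; the window sizes below are assumed examples, not any particular OS default:

    # Max single-stream throughput = receive window / RTT.
    def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
        return window_bytes * 8 / (rtt_ms / 1000) / 1e6

    for rtt_ms in (2, 100, 200):
        for window_kib in (64, 256, 4096):
            mbps = max_throughput_mbps(window_kib * 1024, rtt_ms)
            print(f"RTT {rtt_ms:>3} ms, window {window_kib:>4} KiB: <= {mbps:8.1f} Mbps")

At 2 ms even a modest window can saturate a multi-gigabit link, while at 200 ms the same window is worth only a few Mbps unless window scaling and buffers are tuned up.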
Netflix's speedtest https://fast.com/ avoids a lot of the TCP tuning issues of the client/server by dynamically scaling the number of streams during the test to try to provide an all around peak number instead of a "precisely single session to this exact server" number.
Regarding BBR I've also found it to be a lifesaver for individual streams over high latency internet links, particularly when there is loss.
In a big city, if you're on a last-mile service like Webpass (Google Fiber), you are most likely no more than 2.5 ms from a speedtest.net server that has a dedicated 10GbE port off some regional ISP's aggregation router. Possibly the server is even hosted internally within the same ISP and the same ASN as yours.
The search result speed test is unrelated and was being developed at the same time, IIRC. It has a similar philosophy to the Ookla speed test: it just tests whether your connection is sufficient, not its max capacity.