> you don't actually have to form a consensus. You can split off whenever you want.
This is true and is a key property of open source.
But it's also true that network effects and economies of scale are key for how open source projects provide value to their users. Those effects mean that the value an open source project provides to its community is often super-linear relative to the number of users.
A concrete example: If someone writes a blog post about how to use some feature, every other user of the feature can benefit from it. But also every user can potentially write this kind of documentation. So the value people provide through documentation is very roughly quadratic in the number of people reading and writing docs.
Because value like that scales super-linearly with the number of people in the ecosystem, breaking a community in two can result in less total value even if the total number of users of both communities put together is the same.
If you fork and the forks diverge, a given bit of documentation may now only be relevant to one side of the fork. A given person writing docs may end up documenting things that are only true for one fork.
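A toy model of the quadratic claim above (my own illustration, not from the thread): if value flows between pairs of community members (one writes a doc, another reads it), total potential value grows like the number of pairs, and splitting the community roughly halves it even with the same total headcount.

```python
def pairwise_value(n):
    """Potential writer/reader doc-sharing pairs in a community of n people."""
    return n * (n - 1) // 2

whole = pairwise_value(100)     # one community of 100 people -> 4950 pairs
split = 2 * pairwise_value(50)  # forked into two communities of 50 -> 2450 pairs

# Same 100 people total, but the fork cuts the pairwise value roughly in half.
assert split < whole
```

This is deliberately crude (it assumes every pair is equally valuable), but it captures why super-linear network effects make forks costly.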
Ian Murdock was not forced out of Debian; he resigned in 1996 to focus on school, work, and family. He passed the leadership of the project to Bruce Perens at that time.
Now, since then, the structure has obviously changed... but it definitely didn't start as a coup d'etat... Debian now has a leadership structure that defines who the voting members are and how leadership positions are voted on. That's far different from a handful of people trying to unseat or take the project away without that infrastructure.
BDFL is perfectly fine as long as original ownership/management has interest in maintaining said position.
Until such a time when my compiler learns to take "the community" as input and still produce working binaries as output, things will remain all about the code. C'est la vie.
That's sacrilege, that game doesn't even come close to the quality of the Left 4 Dead series and suffers from just about every problem that plagues so many modern games.
Back 4 Blood was just another "live service" game that stirred up hype, released in an extremely buggy state, poorly balanced, with terrible AI that was never fixed, and without mod support or community servers. They cashed in on the initial surge of popularity, cashed in on the DLCs, and then it quickly died off because it didn't have any of the charm or replay value of the games they claimed to improve upon.
WiMAX hasn't been something anyone I know in the US has talked about in more than 15 years.
I've built fiber networks and fixed wireless networks. Almost ended up becoming an LTE network as well. It didn't make any sense in any sort of financial modeling, even with spectrum availability.
LTE helps solve "general connectivity". What it does not do is build scalable, reliable, high-speed, economically viable broadband infrastructure.
It was around that same timeframe that "TV Whitespace" was going to become the next big thing.
Anyway, LTE should be the literal last option. It requires more than 2x as many towers as fixed wireless, with gear more than 20x more expensive. That's also 2x-3x the required amount of battery backup systems, networking equipment, and land / tower leases.
If you have extreme density, you NEED fiber and you need WiFi. You extend from the fiber network with extremely high quality ngFW. To fill gaps, use satellite.
Fiber requires a certain density of subscriber/mile(km), the same as any technology.
Even with 0 labor cost, you still need to get conduit in the ground (materials), plus fiber, terminations, switching, routing, OLT/ONT costs, handholes, permitting and utility locates, horizontal boring equipment, jackhammers, splicers, etc. The upfront cost is many, many times higher for fiber, and if you're okay with your cost per passing being more than you would ever make back on customer ARPU, then sure, do that. Even if labor cost were 0, it would take YEARS longer to deploy and see a return on investment from, if ever.
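The cost-per-passing argument above can be sketched as a back-of-the-envelope payback calculation. All numbers here are hypothetical placeholders of mine, not the commenter's figures:

```python
def payback_years(cost_per_passing, take_rate, monthly_arpu, margin=0.5):
    """Years to recover the build cost per home passed from subscriber margin.

    cost_per_passing: build cost per home the network passes (dollars)
    take_rate:        fraction of passed homes expected to subscribe
    monthly_arpu:     average revenue per subscriber per month (dollars)
    margin:           fraction of revenue left after operating costs
    """
    monthly_profit_per_passing = take_rate * monthly_arpu * margin
    return cost_per_passing / (monthly_profit_per_passing * 12)

# Hypothetical rural build: $3,000 per passing, 40% take rate, $70/mo ARPU.
years = payback_years(3000, 0.40, 70)  # roughly 18 years to break even
```

If the cost per passing climbs while take rate and ARPU stay flat, the payback horizon quickly exceeds the useful life of the plant, which is the "more than you would ever make on customer ARPU" failure mode.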
It doesn't matter if there's broadband to the location if nobody at the location can afford it.
Nowhere with even just an "improved" road (i.e., gravel road, not "only" a path cleared of tall vegetation) is too low density for fiber.
Unless local conditions make you want to use aerial cable, you'll just cable plow a speed pipe and put in a small access riser every 2~3 miles.
You blow the cable in segment-by-segment, either splicing at these locations or spooling the ongoing length up before moving the blower and doing the next segment.
If the cable is damaged you measure with OTDR where the break is, walk there with a shovel, some spare speedpipe, and two speedpipe connectors.
You dig out the damage, cut it out, put good pipe in, join it to the open ends where you cut the damaged section out, and bury it while taking more care to make it last better this time.
Then you pull/blow out the damaged section of cable, blow in a fresh one, splice it to the existing cable at both ends of the segment, and the connection is fixed.
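The OTDR step above boils down to a time-of-flight calculation: the instrument launches a light pulse, times the reflection from the break, and halves the round-trip distance. A minimal sketch, assuming a typical group index of ~1.468 for standard single-mode fiber (the exact value depends on the fiber and wavelength):

```python
C_VACUUM = 299_792_458   # speed of light in vacuum, m/s
GROUP_INDEX = 1.468      # assumed group index for single-mode fiber

def break_distance_m(round_trip_s):
    """Distance to a reflective event (e.g. a break) from the pulse's
    round-trip time: light travels out and back, slowed by the fiber's
    group index, so divide the one-way figure by 2."""
    return C_VACUUM * round_trip_s / (2 * GROUP_INDEX)

# A reflection arriving 49 microseconds after launch puts the break ~5 km out.
distance = break_distance_m(49e-6)
```

That distance, read along the cable route, tells you where to start walking with the shovel.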
AFAIK cable plow for fiber in not-very-hard ground is cheaper than planting "telegraph" poles like they did in the old days.
The only expensive parts of fiber optic Internet are the fusion splicer (about $1k, versus the $5 LSA punch-down tool for attaching RJ-45 sockets to Cat.5/6/7 cable; this mostly just keeps DIYers from doing it easily) and digging up developed areas with more finely controlled tools than a literal plow, if speed pipe wasn't put in the last time the ground was dug up for any infrastructure at all (say, piped water).
Oh, and arguably the optics, if you expect it to be cheaper than copper over in-building distances at speeds under 10 Gbit/s.
Are we talking about Mumbai or an area with 0.2 homes per 10 sq km? Because I'm talking about how to do both. Vastly different challenges and economic viability, and I have experience building in both types of environments.