Whenever someone from Qatar decides to vandalize Wikipedia, the site is forced to (temporarily) block the entire country from editing. This has an adverse impact on the rest of the country.
Non-Qatari Wikipedia users also suffer, because Wikipedia makes those blocks very temporary (since they are effectively shutting off an entire country), which makes it easy for those vandals to regain access to Wikipedia quickly.
[0] This sad state of affairs is not solely due to IPv4 (incompetent/apathetic network administrators are also at fault), but it's a contributing factor.
To be fair, Qatar's situation has nothing to do with the IPv4 crisis. It's a country-wide firewall/proxy that exists due to political reasons and it wouldn't behave any differently if it was implemented in IPv6.
Qatar has a huge allocation of unused IP addresses. It's a matter of choice that their censorship proxy only exposes one IP address on the internet side.
I can assure you the US is feeling the brunt of the IPv4 crunch. I'm building something that needs massive amounts of public address space and I simply can't get IPv4 addresses for it. I've asked multiple providers. You are LUCKY if you can get a /28 at this point. Hoarding. Rolling with IPv6 instead.
I get you have an interest in the Qatar/Wikipedia situation, but it seems to be unlikely due to IP address resource management issues. Probably more like tyrannical control, if anything.
I know of multiple providers who have gotten allocations in the last month (up to /19) and will be requesting another one myself in the near future. ARIN has plenty of IPs.
Are you asking for more than you can immediately use?
Elsewhere you mentioned having difficulty getting more than a /28. That's pretty much the largest block you can get without some paperwork to justify your immediate usage. If you haven't already, take a look at the ARIN requirements in https://www.arin.net/policy/nrpm.html#four23 and make sure you have your ducks in a row before you go back at it.
How could NAT work for an entire country of people behind one IP? My understanding was that NAT allows communication over separate ports: for example, a router with IP 123.45.67.89 can serve three clients by sending their data over ports 50000, 50001, and 50002. How does this work when more than 65k people are trying to connect? There don't seem to be enough ports for that to be possible.
This is a dramatic simplification, but TCP uses the quadruplet of source IP/source port/dest IP/dest port as a session identifier, so you can potentially get 2 billion people using one outgoing port to 2 billion different servers with different IPs in the absolute best case (which wouldn't happen, of course).
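The 4-tuple demultiplexing described above can be sketched in a few lines of Python. This is a toy model (the names `flows` and `register_flow` are made up for illustration, not part of any real NAT), but it shows why one public IP and even one source port can carry many simultaneous sessions:

```python
# Toy illustration: TCP identifies a session by the full
# (src IP, src port, dst IP, dst port) quadruplet, not the port alone.
flows = {}

def register_flow(src_ip, src_port, dst_ip, dst_port, payload):
    key = (src_ip, src_port, dst_ip, dst_port)
    flows[key] = payload

# Same public source IP and same source port, different destinations:
register_flow("198.51.100.1", 50000, "203.0.113.10", 443, "client A's session")
register_flow("198.51.100.1", 50000, "203.0.113.11", 443, "client B's session")

assert len(flows) == 2  # two distinct sessions sharing one source port
```

The ~64k port limit therefore applies per destination, which is why CGNAT can stretch a small public pool much further than a naive port count suggests.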
Is there a minister of port forwarding? What happens when I fire up Chrome and Firefox with 80 pages each? Am I limited to a number of outgoing connections? I assume the entire country is running off a Windows 95 box with two NICs, but that's just the way I imagine most CG-NAT.
I've exhausted a SQL server for ports, for science. Yes, you are limited to ~64,000 outgoing ports. Chances are your (non-enterprise-grade) network equipment will run out of switching contexts before you run out of ports.
How many of Qatar's 2 million people use the internet, and how many of them pay for a VPN service? (https://www.bestvpn.com/blog/6715/5-best-vpns-for-qatar/ indicates there are a lot of internet cafes whose owners are using VPNs and passing on the access to customers.) It sounds to me like a person from Qatar wouldn't have a problem editing wiki if they wanted to, nor a vandal from vandalizing if they wanted to...
You act like Wikipedia has no agency in this matter and is simply forced to block Qatar's IP address from editing.
Wikipedia has choices, and it's choosing something well within its rights to prevent abuse of its property, but they could choose to do something different.
I am curious. Which other criterion than IP address could they use to prevent anonymous users from Qatar from vandalizing?
(I guess one method could be to force everyone from this IP address to create an account, but that goes against their openness/ease-of-editing-for-new-or-casual-users principle.)
I mostly object to the phrasing that Wikipedia is "forced" to block all of Qatar, as if Wikipedia is a mindless force of nature or not in control of its own actions.
You make a case why IPv6 is good for traceability of users.
On the other hand, IPv4 and NAT are a boon to privacy. We'd be exquisitely trackable if NAT didn't exist and every single device had a unique, unchangeable, life-long IP address. That's more-or-less how IP addresses were supposed to behave and IPv6 brings that back.
We need to be thankful for NAT for the privacy it brought by accident.
IPv6 only brings that back if you want it to. By default, Windows, OS X, and iOS have privacy extensions enabled, which generate random IPv6 addresses that change every few minutes; this makes tracking a device more or less as difficult as with NATted IPv4.
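Conceptually, privacy extensions (RFC 4941) keep the network prefix but pair it with a random interface identifier that rotates over time. A minimal sketch of the idea (not the actual kernel algorithm; the function name and example prefix are made up):

```python
import ipaddress
import secrets

# Sketch of the RFC 4941 idea: combine the advertised /64 prefix with a
# random 64-bit interface identifier, regenerated periodically, so the
# host portion of the address can't be used to track the device.
def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    net = ipaddress.IPv6Network(prefix)
    iid = secrets.randbits(64)  # fresh random interface identifier
    return ipaddress.IPv6Address(int(net.network_address) | iid)

# Two "rotations" on the same /64 yield unrelated host parts:
a = temporary_address("2001:db8:1:2::/64")
b = temporary_address("2001:db8:1:2::/64")
```

Both addresses stay inside the same /64, so routing is unaffected; only the trackable host part changes.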
This is an extremely US-centric article. ARIN was never in as dire straits as the other RIRs. In Europe and Asia the situation is much worse. For example, lack of IPv4 addresses delayed DigitalOcean's growth in Amsterdam, and carrier-grade NAT is already being used by some consumer ISPs in Europe and Asia.
The article also ignores the fact that running dual-stack (like Comcast) requires just as many IPv4 addresses as before. Even after ISPs get IPv6 working flawlessly, their customers still need to access IPv4 websites, and that means CGNAT or some messy kind of tunneling will still be necessary.
Then don't run dual stack. Run native IPv6 and NAT64/DNS64. Or in DO's case make IPv4 access optional for smaller droplets and charge $1/month extra for it. In reality things like DB servers or backend app servers don't need public IPv4 addresses, and this would speed up IPv6 adoption considerably.
Or you can run an IPv6-only core network and perform NAT64 on one side and NAT46 on the customer side. There is an implementation of this for Android [1], which is trivial to run on Linux (I've ported it; there's nothing difficult, as it doesn't use anything Android-specific). By the way, this technique is called 464XLAT. There was a nice presentation about it at the IETF, and you can find the slides online[2]. I believe T-Mobile USA is currently deploying this at production level and it works quite well.
I think it's very close to dual-stack, and this technique has the advantage of being extremely easy to deploy, especially if you already have NAT64 gateways set up in your network; then you have done more than half of the work :-).
Customer applications which require IPv4 and haven't been upgraded to support IPv6 still need a v4 address, and customers probably have to support their own clients that only use v4. You still have to ship the customer both L3 protocols. So the carrier would at least need to ship a NAT'd v4 address; v6-only would basically frustrate/anger/alienate a whole lot of customers, which is dumb from a making-money-with-my-company perspective.
In reality you can't just decide for your customers that they do or don't need something - you have to ask them what they need (if you want to continue having customers). Nobody's going to re-engineer all their shit to support your wonky network if they can just go to another company that provides them what they need.
No. You provide the best service to your customers for the best price. In DO's case, default to IPv6 and charge extra for each IPv4 address used. Currently, they charge $5/month for their cheapest droplet. Change that to $4/month and charge an extra $1/month for an IPv4 address. This way, if my setup is more complex than a single droplet running everything, I can save some money on the VPSes that don't need public IPv4 addresses (the database servers, application servers, etc.).
For Comcast and the like, once again give me the option to either do NAT64 or a full dual stack. As a regular consumer I probably won't care. As a gamer or a developer I might.
The IPv6 transition is going to happen sooner or later. Either you are going to make it painless for your customers by providing IPv6 early and using strategies to make the transition to IPv6-only smoother or you are going to make your customers suffer.
It won't help. If they charge $4+$1, it would not be seen as "they just changed the price structure"; it would be seen as "they are charging for something that used to be free". Especially as marketing folks won't miss the chance to advertise "our package is now $4" with small print "(additional charges apply)". Most customers will be royally pissed at that, both because they were advertised $4 but have to pay $5, and because something that used to be free now isn't. The fact that the whole package costs the same won't matter; people don't think in those terms. And then it would get the neutrality angle ("they now charge extra for using specific protocols! We warned about this and now it's happening!") and the whole thing would become a huge mess.
It's already happening. Most providers charge $1-2/month for every additional IPv4 address AND you have to justify why you need it. EC2's Elastic IPs also don't get added to EC2 instances automatically; you have to add them manually, and newly opened accounts are limited to, I believe, 10 IPv4 addresses across all EC2 instances. That's what IPv4 exhaustion looks like today.
This has nothing to do with net neutrality. It's progress of technology. Your ISP doesn't give you IPX access do they? Instead this is a natural progression of technology. Want to use old tech? Pay a premium!
That's additional IPs. But one IP has been given for free (well, not in all setups, but in common packages). Unlike IPX. This may be progress for you, but for those who still need their IP setups working it's nothing but trouble. That's why it doesn't change - because people hate to change already working setups.
EC2 instances not in a VPC get assigned a public IPv4 address when you create them. Elastic IPs are the re-assignable ones, and you only pay for them when they are unassigned. You don't pay for assigned IPs on Amazon.
ARIN requires the demonstration of need before it allocates space. RIPE does not; it's just first come, first served. I'm not sure about APNIC or LACNIC, and since AFRINIC is modeled heavily off of ARIN, I think they require need too, but things are so fucked over there I have no idea if that policy transferred.
It's here. IP addresses are costing more and providers are less generous with them. It used to be common to get a handful with a dedicated server, now you get one or two.
I have a friend who runs a small low-cost Minecraft hosting business. He stopped giving his customers dedicated IPs at all. They get a range of ports and a hostname with the appropriate SRV records added.
That's the other result: technical workarounds. You can point to a particular service on a particular port with an SRV record, host multiple websites on one IP, even SSL-enabled ones with SNI, etc.
But the 'superior solution' is... well, 'sort of', from the economic point of view in this case: it has costs that are, for some people, higher than those of the alternative.
There is always an endless line of other things to deal with in both the economic and engineering realms. Squeaky wheels tend to get the attention in both. When the IP situation causes enough discomfort, there will be an avalanche transition, and probably not a moment sooner.
Unfortunately if you have customers using IE on XP or older Android phones, SNI doesn't work and you need to use SAN certificates. EV-certificates with SAN is another huge amount of hassle I wish on nobody.
I have noticed a change in approach, even if unconscious. Instead of predicting doom, they have started to celebrate small victories (like Google IPv6 traffic passing 3%). I think that's natural when the size of this undertaking is so great.
Unfortunately IPv6 adoption is not a matter of just providing access lanes to this wonderful new technology. IP permeates too much of the infrastructure, tooling, etc. How could it not? Some companies might find the cost/ROI of working around IPv4 limited address space to be less than migrating to IPv6.
There are also tons of little things. Here in Indonesia, I know my ISP (FirstMedia) runs IPv6 internally on some things. Their web sites are available over IPv6. But I can't connect from home over IPv6. Supposedly the consumer hardware on my side supports IPv6 but it doesn't do so very well and I haven't bothered trying directly through the cable modem yet.
That was always a question I had in the back of my mind. We have this huge address space. Why not reserve a single prefix that means "look at the lower 32-bits of this address and use it as an IPv4 address"?
Every IPv4 packet needs a source address. If the client is IPv6, what is the IPv4 source address?
You would want to assign an IPv4 address to the client transparently; and, since they're scarce, share it with several clients. That's what NAT64 does.
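The address mapping NAT64 uses is standardized (RFC 6052): the IPv4 address is embedded in the low 32 bits of the well-known prefix 64:ff9b::/96. A minimal sketch of the synthesis step, using Python's standard `ipaddress` module (the gateway then translates packets between the two representations):

```python
import ipaddress

# Embed an IPv4 address in the NAT64 well-known prefix 64:ff9b::/96
# (RFC 6052). A DNS64 resolver hands out addresses like this so that
# IPv6-only clients can reach IPv4-only servers via the NAT64 gateway.
def nat64_synthesize(v4: str) -> ipaddress.IPv6Address:
    prefix = int(ipaddress.IPv6Address("64:ff9b::"))
    return ipaddress.IPv6Address(prefix | int(ipaddress.IPv4Address(v4)))

print(nat64_synthesize("192.0.2.1"))  # 64:ff9b::c000:201
```

Note the client never holds an IPv4 address at all; the sharing of scarce IPv4 addresses happens at the gateway, exactly as described above.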
Because, as I understand it, routing is still done from the MSB.
So you leave the IPv4 existing infrastructure in place with no modification, but all those /32 addresses now become useful and all the existing IPv4 infrastructure and routing paths still work.
Rather than having to replace every single piece of hardware, software, nameserver and routing stack that works on IPv4. I could plug my IPv6 router into any ISP that gives a v4 address and have as many v6 addresses as could ever be needed.
It makes no sense to me why you would route IPv4 addresses on v6 LSB, it completely ignores current internet infrastructure and routing.
What you propose already exists and is called Teredo, 6rd, or other tunneling protocols (6to4/6rd is probably the best fit, with a /48 per IPv4 address). Except they again map the IPv4 space into the suffix of the IPv6 address (or do no mapping, depending on the protocol).
But you don't want to do that forever, as you are then paying for an IPv4 header PLUS some more headers instead of just one IPv6 header if you have native IPv6.
No, I'm not talking about tunneling, I'm saying natively route at the v4 level, and the header will be less, because +160bit addresses will only be used when required.
so your routing table holds
"66/4 port 1"
,"* port 2"
instead of hundreds of millions of entries to get the same thing by having the "66" >64 bits deep into the address (or worse ::66:* port 1, which breaks everything - hell how is this even done now?).
My point is, if the v4 address was in the MSB as standard, IPv6 would be working in virtually every single IPv6 device already.
As it is, we are all still using workarounds (and VPNs).
> No, I'm not talking about tunneling, I'm saying natively route at the v4 level, and the header will be less, because 128bit addresses will only be used when required.
There are multiple issues with this. The first and probably most important one is that it doesn't address routing table fragmentation, which is pretty much solved with IPv6, because most ISPs will end up announcing on the order of 1 or 2 prefixes instead of dozens that can't ever be aggregated (as is the case in the IPv4 world right now, and it will only get worse).
The second one is that it doesn't gain you much in terms of deployment over IPv6 + tunnels.
> so your routing table holds "66/4 port 1" ,"* port 2"
> instead of hundreds of millions of entries to get the same thing by having the "66" >64 bits deep into the address (or worse ::66:* port 1, which breaks everything - hell how is this even done now?).
Ok.. I have no idea what you are talking about here (mainly your notation is leaving me confused)..
> My point is, if the v4 address was in the MSB as standard, IPv6 would be working in virtually every single IPv6 device already.
The routing table is the same as now; it's just that the IPv4 address becomes the network, which sub-routes. IPv4 hardware doesn't need to care about sub-routing.
My point is, with that notation: right now we have, say, a Google address of 66.249.73.108.
How does IPv6 handle retaining all the existing work and man-hours that have gone into making packets go to the 66.249.73/24 network as quickly as possible, from anywhere in the world?
It seems to me it expects every administrator from top to bottom to start from scratch, and then everyone is scratching their heads as to why that hasn't happened.
>Even in the presence of NAT?
No, and this is a good thing: only IPv6 devices which have their own IPv4 address/network can issue IPv6 addresses on that network.
e.g. I get 66.249.73 to be
42F9:4900:0:0 in hex.
In what universe does this need to be ditched and started from scratch, making it something:something:42F9:4900 and redoing, by hand, every NS, routing table, etc.?
Where did these hundreds of millions of higher level routing tables suddenly come from?
It keeps being phrased as "what is the source address of an IPv6 host on an IPv4 network".
It seems to me the answer should simply be "the first 32 bits of the IPv6 address" - and it seems stupid it's not structured like this.
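The "first 32 bits" idea can be sketched directly. This illustrates the commenter's hypothetical scheme, not how IPv6 is actually structured; the address `42f9:496c::1` is invented for the example (its top 32 bits are the hex encoding of 66.249.73.108):

```python
import ipaddress

# Hypothetical MSB scheme: the IPv4 address occupies the top 32 bits,
# so an IPv4-only router could route on the prefix it already knows,
# and the IPv4 source address falls out with a shift.
def v4_from_msb(v6: str) -> ipaddress.IPv4Address:
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) >> 96)

# 66.249.73.108 is 42 F9 49 6C in hex, placed in the most significant bits:
print(v4_from_msb("42f9:496c::1"))  # 66.249.73.108
```

The replies below explain why this still wouldn't let IPv4-only hosts parse the rest of the packet, but the arithmetic of the proposal itself is this simple.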
> No, and this is a good thing, only IPv6 devices which have their own IPv4 address/network can issue IPv6 addresses on that network.
So you will still need tunneling to make it work. As you will still have to run CGN to get all your customers online. Unless of course you do want to change the network infrastructure. At which point your solution gets much worse than plain and simple IPv6.
> It keeps being phrased as "what is the source address of an IPv6 host on an IPv4 network". It seems to me the answer should simply be "the first 32 bits of the IPv6 address" - and it seems stupid it's not structured like this.
Sure.. that solves the routing problem (except for the IPv4 routing table explosion), but it doesn't solve the problem that IPv4-only hosts still can't talk to IPv5 hosts. You send an [IPv4][IPv5] packet to an IPv4 host. Huh? What's that IPv5 thing? Or the other way around: an IPv4 host sees an AAA record (with an IPv5 address) in DNS..
Your proposal solves nothing that can't also be done with IPv6 and tunnels, if you are really keen on keeping your 5-year-old router running a few more months before throwing it away (which you can't do anyway, as your limited FIB size will force you to buy new hardware due to IPv4 route table fragmentation).
Thanks, very interesting read. While reading his argument about all the extensions that need to be done everywhere, I thought about this: since many protocols etc. have to be reworked, will that lead to consolidation (since some protocols will be left behind)? Can we say that currently the IPv6 Internet is a place (almost) free of legacy stuff?
I'm hoping the opposite, that people upgrading their routers to forward more than just IPv4 packets might mean we can finally start using SCTP in the real world. (It won't happen, of course, but one can dream)
> The day of reckoning still looms – it’s just been pushed out as the major Internet players have developed ingenious ways to stretch those available numbers.
To me, this indicates something either broken about IPv6 or a lessened severity of the IPv4 problem: If it's better to apply bandaids to IPv4 than to roll out IPv6, then either IPv6 is not easy and flexible enough to be a viable alternative, or the problems faced by IPv4 are not as intractable as was suggested.
It's both. IPv6 is not an easy migration, and people already have decades of experience squeezing the most they can out of IPv4, so the short-term solution has been to squeeze IPv4 a little harder until everyone working on IPv6 gets it fully operational.
This is an interesting article, but it contains some rather surprising innumeracy in its cavalier comparison of 2^128 to the number of grains of sand in the earth's crust. 2^128 is an enormous number, roughly 3.4e38:
2**128 = 3.4 * 10**38
grains of sand to one mile down [1] = 5.1 * 10**26
stars in the observable universe [2] = 7.0 * 10**22
estimated grains of sand [2] = 7.5 * 10**18
This means that every star can have 100 planets (7e24 planets in total), each with 50 trillion IPv6 addresses.
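The arithmetic checks out, using only the figures quoted above:

```python
# Back-of-the-envelope check of the figures above.
ipv6_addresses = 2 ** 128          # ~3.4e38
stars = 7.0e22                     # stars in the observable universe, per [2]
planets = stars * 100              # give every star 100 planets -> 7e24
per_planet = ipv6_addresses / planets
print(f"{per_planet:.2e}")         # ~4.86e13, i.e. roughly 50 trillion
```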
The main problem: 20 years ago, with the take-off of the internet as a mass networking standard, it was blindingly obvious that IPv6 was deeply flawed. IPv6 should have been taken out behind the woodshed back then and an IPv7 or IPv8 done properly.
When I looked at it, 19 of the 20 people involved in the RFC for IPv6 were from academia, plus one guy from Bell Labs.
Migration and interoperation should have been the highest priorities in the design of a replacement for IPv4.
Maybe. Or maybe we will suffer a tremendous amount of pain for some period of years, after which we will have a much brighter future, as opposed to an IPv7 that made too many concessions and left us with a mess that would be politically impossible to ever fix.
What you say makes sense, but what about the new thing that will have to change when we finally have our ideal IP version in place? There will always be a new hardware standard, a new language version, a new this or that, which can be done now or can be done well.
It is a balancing act, but at some point we have to cash in our chips and make things usable in the short term.
Mm, yes. Or maybe know when to let a flawed standard die when it is overtaken by events.
Even 20 years ago the Internet was obviously not going to be able to go on as before, where you could make major changes over a long summer weekend at the few core universities that were the internet.
Replaced with what? IPv6 has problems, sure, but the main problem is that IPv4 and IPv6 aren't compatible with each other. You're going to have that problem with any other replacement for IPv4 as well.
The problem is that the majority of the companies stalling the IPv6 upgrade are in the US, which, as chimeracoder stated, is not going to feel the crunch as badly as other countries. People are very short-sighted, for one; and for two, they are afraid of 'breaking' what works.
I have even set up organizations with native IPv6 addresses (no tunnel), only to watch them fear and lament it.
There are a thousand excuses, and people need to look upon this as an opportunity: to up their skill set and mentor a new generation.
Trying to plot this, it means the crunch will probably start hitting increasingly hard from about 2015 through 2016. You go from predicting two years out to one year out (two years later).
These aren't brick walls, but the shortage is already being felt. I know that at Efficito, one of the reasons we switched hosting providers was that we couldn't get IPv6 connections working flawlessly to our backup connections before (meaning more IPv4 space was required). I don't think we will ever hit exhaustion per se.
Truth is, NAT works just fine for the vast majority of cases, and makes a layered (i.e. not all-eggs-in-one-basket) approach to security much simpler.
The real problem is routing table size with BGP. As we continue to divide the internet into smaller routable blocks, this requires an exponential amount of memory in BGP routers. Currently, the global BGP table requires around 256 MB of RAM. IPv6 makes this problem 4 times worse.
IPv6 is a failure; we don't actually _need_ everything to have a publicly routable address. There were only two real problems with IPv4: wasted space on legacy headers nobody uses, and NAT traversal. The IETF thumbed their noses at NAT (not-invented-here syndrome) and, instead of solving real problems using a pave-the-cowpaths approach, opted to design something that nobody has a real use for.
Anyway, I'm hoping a set of brilliant engineers comes forward to invent IPv5, where we still use 32-bit public addresses to be backward compatible with today's routing equipment, but with some brilliant hack re-using unused IPv4 headers to allow direct addressing through a NAT.
Not a flame - your perspective is very typical for people that don't have a lot of experience with networking past the host or server level. (Very little experience with networking in the core, provider, or putting together network services architecture).
1. In theory the routing table with IPv6 can be smaller. The address design should be hierarchical, which means you should be able to have much fewer routes. It's too early to tell if this is actually true or not, but the addresses themselves are 4x larger - which isn't going to be the determining factor in routing table size.
2. Not everything needs to be publicly routable, true. IPv6 has the idea of link-local and autonomous-system-local addressing, which IPv4 didn't have; the RFC 1918 block was used instead. But think for a second: there are only 4 billion addresses (fewer when you count bogons and multicast ranges), and it's only a matter of time until those are taken up. So we can choose to do it now, 2 years from now, or 5 years from now, but devices are growing faster than ever and it's only a function of time.
3. NAT is not a security feature, is not good for the internet, and the sunk costs spent building an ALG for every protocol to work around it is a significant development sinkhole. It's a workaround often masqueraded as security, and does cause many application problems. It's just not normally the application developers that have to fix those problems - it's the network and security teams.
4. IPv6 was created in the late '90s. People have been waiting for brilliance to supersede IPv6 for a while. I'll admit it's not the easiest, but there are a certain set of problems you have when you expand the address space.
5. I'm familiar with all the IPv4 headers, and nearly all of them are used. ID is used for packet identification, particularly through network services, DSCP is used heavily, DF and other flags are used - they're just obscure. If you look at IPv6 those same headers are basically recreated, though with slightly different names. The ones that aren't included are addressable through the extension headers.
So, yeah. That's another perspective that may help you understand why IPv6 is a bit of a quagmire. The faster people understand this, the sooner we get to a place where the chicken-egg problem fades away.
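The address-scope distinction in point 2 (RFC 1918 private space versus IPv6's link-local and unique-local ranges) can be poked at directly with Python's standard `ipaddress` module, which recognizes all of these ranges out of the box:

```python
import ipaddress

# RFC 1918 private space: the IPv4 workaround for non-global addressing.
print(ipaddress.ip_address("10.1.2.3").is_private)       # True

# IPv6 link-local (fe80::/10): valid only on the local link.
print(ipaddress.ip_address("fe80::1").is_link_local)     # True

# IPv6 unique local addresses (fc00::/7): site-internal addressing
# without consuming globally routable space.
print(ipaddress.ip_address("fd12:3456::1").is_private)   # True
```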
I only care about one point. That NAT is not a security feature.
The original reason that I began using NAT was so that my ISP couldn't charge me per device. You just plugged in a NAT enabled router, and ran everything behind it. That became so ubiquitous that ISPs gave up on trying.
My concern about IPv6 is that ISPs will want to go back to charging per device. I didn't like that then, and I don't want it now.
From a host perspective it's a great security feature.
Having a local address means your host cannot be contacted from the outside world.
Want your host to have an IPv6 address? VPN into an IPv6 provider.
The fact that demand for this is so low, just goes to show it's not needed at the moment.
In fact, I can't think of a single reason "why" IPv6 would be needed.
I definitely don't want all my devices to have a web-reachable address; far from it, that's a total security nightmare.
one entry point - a VPN on IPv4 is just great thanks, secure and easy to manage. want to access my other devices, jump on the VPN.
In that sense, you can describe IPv6 security as configuring your VPN with no password and letting anyone connect to it.
The other way to look at it is that the successor to IPv4 is called Tor.
> one entry point - a VPN on IPv4 is just great thanks, secure and easy to manage. want to access my other devices, jump on the VPN.
There are 4 billion IPv4 addresses and 7 billion people on the planet. Before we even get into business use of IPv4 for servers and such we don't have enough addresses to do what you want.
Is that even remotely close to being true in practise? Would we expect to see it be smaller than IPv4? Given the quadrupling of address sizes, wouldn't that mean there'd need to be 1/4th the number of routes? And peering destroys the hierarchy, does it not?
I was under the impression that the hierarchical routing had an assumption that networks could renumber at will. So multiple subnets might map to the same host or something to that effect. Is that incorrect?
>3. NAT is not a security feature
Except it turns out that proper NAT is equivalent to a firewall with inbound deny, outbound allow. Which is a pretty good start for security.
>ALG for every protocol
Applications that break with NAT usually do so due to poor design (hey, SIP and FTP). With a firewall defaulting to inbound deny, programs can't just accept inbound connections without doing extra work anyway (UPnP or whatnot). Although, sure, it makes known two-way datagram applications easier, since you start transmitting and get a flow opened. It wouldn't help TCP-based applications, for instance.
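The "inbound deny, outbound allow" equivalence can be modeled in a few lines. This is a toy connection tracker (the names `outbound`/`inbound` are invented for the sketch), showing the behavior NAT gives you as a side effect of maintaining its translation table:

```python
# Toy stateful filter: outbound traffic records a flow; inbound packets
# are accepted only if they match the reverse of a flow we initiated.
allowed = set()

def outbound(src, dst):
    allowed.add((dst, src))  # remember the reverse tuple for replies

def inbound(src, dst):
    return (src, dst) in allowed

outbound(("10.0.0.2", 50000), ("203.0.113.10", 443))
print(inbound(("203.0.113.10", 443), ("10.0.0.2", 50000)))   # True: a reply
print(inbound(("203.0.113.99", 443), ("10.0.0.2", 50000)))   # False: unsolicited
```

A real stateful firewall does the same thing without rewriting addresses, which is the point: the filtering, not the translation, is what provides the protection.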
> Is that even remotely close to being true in practise? Would we expect to see it be smaller than IPv4? Given the quadrupling of address sizes, wouldn't that mean there'd need to be 1/4th the number of routes? And peering destroys the hierarchy, does it not?
No.. the point is that each ISP will get only one very large prefix (/32 or bigger) instead of many small ones, which can't be aggregated like it is the case for IPv4.
Right now there are about 46k ASNs in the legacy internet announcing about 490k IPv4 routes. Best case with IPv6, you would end up with 46k routes.
In practice, it looks like there are 8k ASNs on the internet announcing about 16k IPv6 routes. So while not perfect, it's still quite a lot better than the legacy internet.
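Using the figures quoted above, the average number of announced routes per AS works out as follows (a rough sketch; the counts are snapshots, not current data):

```python
# Average announced routes per AS, from the figures quoted above.
ipv4_routes, ipv4_asns = 490_000, 46_000
ipv6_routes, ipv6_asns = 16_000, 8_000

print(round(ipv4_routes / ipv4_asns, 1))  # ~10.7 routes per AS (IPv4)
print(round(ipv6_routes / ipv6_asns, 1))  # 2.0 routes per AS (IPv6)
```

Roughly a 5x improvement in aggregation, even before every AS has deployed IPv6.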
> Applications that break with NAT usually do so due to poor design
So how would you design a P2P application that has no poor design?
Might the current IPv6 numbers just reflect that a lot of people aren't peering or anything? I was under the impression that a lot of announcements were driven by the need for not relying on a single provider.
>So how would you design a P2P application that has no poor design?
SIP and FTP break even in non-P2P scenarios, so my comment was mainly directed at them. For P2P apps, NAT doesn't pose a whole lot more of a problem than a firewall with the same configuration. So you'd use UPnP or whatever protocol to get around it. At that point, it doesn't really matter, does it? The app talks to local gateway and ask for the IP and port forwarding either way.
> Might the current IPv6 numbers just reflect that a lot of people aren't peering or anything?
Peering doesn't require you to announce more routes per se, although some networks do it for traffic engineering purposes. From a BGP [1] perspective there is not that much difference between peering and transit.
> I was under the impression that a lot of announcements were driven by the need for not relying on a single provider.
Multihoming is another issue. And you can explain the difference in the number of AS [2] as networks not having deployed IPv6 yet. But the number of announced routes per network will be lower for IPv6 than it is for IPv4 (which hasn't even reached the worst case yet).
> For P2P apps, NAT doesn't pose a whole lot more of a problem than a firewall with the same configuration. So you'd use UPnP or whatever protocol to get around it. At that point, it doesn't really matter, does it? The app talks to local gateway and ask for the IP and port forwarding either way.
But that way you are still pushing more logic into the applications (namely, they have to implement UPnP), which might actually end up requiring more code than your actual application (SAFT [3], for instance). With the firewall, you could just allow known-good inbound ports and be done with it.
More than likely the people in charge will not act pre-emptively by upgrading to IPv6 during the normal upgrade/replacement cycle of their network hardware. Instead, they will wait until there's a real crisis so they can ask the government to fund their next hardware upgrade.
Not a comment on the article, but IPv6 adoption relies on significant upgrades to existing hardware. Consider the size of the lookup table that new hardware has to store and search compared to IPv4: significantly more processing power is required, particularly if the device does any kind of packet inspection, even basic inspection. It isn't just a case of switching end machines over to IPv6.
IPv6 makes the global routing table much smaller (because there's enough space for the subnets to be logical parts of the network, rather than squeezing as many addresses in as possible), so the core routers for the IPv6 internet ought to need less hardware than the IPv4 ones.
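The aggregation effect is easy to see with Python's `ipaddress` module: fragmented allocations need one route per block, while a hierarchical allocation collapses to a single prefix (the prefixes here are made-up examples, not real announcements):

```python
from ipaddress import ip_network, collapse_addresses

# Four contiguous /24s, the kind of fragments a network accumulates
# when it grows by grabbing whatever IPv4 scraps are available:
fragments = [ip_network(f'198.51.{i}.0/24') for i in range(100, 104)]

# Under hierarchical allocation these would be one route, not four:
collapsed = list(collapse_addresses(fragments))
print(collapsed)  # [IPv4Network('198.51.100.0/22')]
```

In the real IPv4 table such fragments often can't be aggregated at all because they belong to different networks; IPv6's sparse, hierarchical allocation avoids creating them in the first place.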
It's worth noting that during the transition period the core routers will actually need more memory and processing power, since they'll have to route both IPv6 and IPv4 prefixes for most destinations.
IPv6 also has a simpler, fixed-length packet header (with no header checksum for routers to recompute), so it's cheaper for routers to process. I suspect, however, that this will make more of a difference for high-end routers with lots of traffic.
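Concretely, the IPv6 base header is a fixed 40 bytes, versus IPv4's variable-length header with options. A sketch of packing one (documentation addresses; a real stack would of course set the flow label and traffic class as needed):

```python
import struct
from ipaddress import IPv6Address

def ipv6_header(payload_len: int, next_header: int, hop_limit: int,
                src: str, dst: str) -> bytes:
    """Pack the fixed 40-byte IPv6 base header (RFC 8200)."""
    ver_tc_flow = 6 << 28  # version 6, traffic class 0, flow label 0
    return (struct.pack('!IHBB', ver_tc_flow, payload_len,
                        next_header, hop_limit)
            + IPv6Address(src).packed + IPv6Address(dst).packed)

# 59 = "no next header"; an empty but valid packet header.
hdr = ipv6_header(0, 59, 64, '2001:db8::1', '2001:db8::2')
```

Fixed offsets mean a router can find the fields it cares about without first parsing a variable-length options area.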
As others have mentioned, the dual-stack transition does require more resources. Then again, you can implement NAT64 (or 464XLAT), which seem to be becoming more realistic, robust, and widespread solutions (running dual-stack also means managing two sets of routes, two sets of firewall rules, etc.).
(Disclaimer: this is not my day job, I just find it lots of fun to play with on my local network and the Montreal local city mesh.)
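The address side of NAT64/464XLAT is mechanical: an IPv4 address is embedded in the well-known 64:ff9b::/96 prefix (RFC 6052), and the translator rewrites headers between the two families. A minimal sketch of the embedding:

```python
from ipaddress import IPv4Address, IPv6Address

# RFC 6052 well-known NAT64 prefix, 64:ff9b::/96.
NAT64_PREFIX = int(IPv6Address('64:ff9b::'))

def embed(v4: str) -> IPv6Address:
    """Map an IPv4 address into the NAT64 well-known prefix."""
    return IPv6Address(NAT64_PREFIX | int(IPv4Address(v4)))

print(embed('192.0.2.1'))  # 64:ff9b::c000:201
```

An IPv6-only client resolving an IPv4-only host via DNS64 gets back exactly such a synthesized address, and the NAT64 box does the rest.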
Depends what they're doing; a typical home router isn't usually CPU-bound AFAIK. IPv6 packets can be larger, but conversely that means fewer packets for the same amount of data. Organizations that embrace IPv6 can probably simplify their networking by using globally-routable addresses for all devices (coupled with appropriate security measures - NAT qua NAT was never a security feature but it did give you an obvious single point at which to place a firewall). But I fear many companies will simply reproduce the complexities of their existing IPv4 networks in IPv6, at least initially.
Is that really that significant a problem? Moore's Law should have dealt with memory and processing power issues severalfold in the time since IPv6 rollout was called for.
It strikes me that it's more likely a lack of strategic investment in infrastructure.
Generally speaking, even now you can't do route lookups fast enough to handle provider traffic volumes with current general-purpose CPUs (although it seems to be on the edge of possible). Instead, routes are loaded into special ASICs (TCAM: http://en.wikipedia.org/wiki/Content-addressable_memory) which seem not to follow Moore's Law, be it due to low volume or other, perhaps non-technical, limitations.
Routers using TCAM seem to max out between 1-2M routes (divided across v4, v6, MPLS, etc.). More recent routers use somewhat more flexible processors, so there's some hope, but the router product cycle is fairly lethargic.
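What TCAM does in a single hardware cycle is longest-prefix match. In software it's a search over the table (or a trie), which hints at why general-purpose CPUs struggle at line rate. A deliberately naive sketch with a toy forwarding table (prefixes and next-hop names are made up):

```python
from ipaddress import ip_address, ip_network

# Toy FIB: prefix -> next hop. Real tables hold on the order of 1M entries.
FIB = {
    ip_network('0.0.0.0/0'): 'upstream',
    ip_network('203.0.113.0/24'): 'peer-a',
    ip_network('203.0.113.128/25'): 'customer-b',
}

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific covering route wins."""
    addr = ip_address(dst)
    best = max((net for net in FIB if addr in net),
               key=lambda net: net.prefixlen)
    return FIB[best]

print(lookup('203.0.113.200'))  # customer-b (the /25 beats the /24)
```

A TCAM answers this for every arriving packet in constant time regardless of table size, which is exactly the property that's hard to replicate cheaply in software.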
The growth in internet-connected devices is exponential. By the time the last blocks were issued, they were going out at one /8 (a former class A, i.e. 1/256 of the entire IPv4 address space) every month. So it's not worth the effort to recover existing addresses.
Looks like they've got more than one /8. ARIN has been trying to get the holders of legacy space to sign new SLAs and to get them to give over some of the space, but considering the IP address market, why not just sell the space instead?
Entire /8s wouldn't last a single month at current demand (and demand is only going to continue to increase), while consolidating usage to free up the /8 would take many times that long.
It's simply not worth the time and cost for such a short-lived bandaid.
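The arithmetic behind that: a /8 holds 2^24 ≈ 16.8M addresses, so even a successful reclamation effort buys only months at a one-/8-per-month burn rate (the reclaimed count below is hypothetical):

```python
# Back-of-envelope for the "/8 reclamation" question: a /8 holds 2^24
# addresses, and late-exhaustion demand ran at roughly one /8 per month.
addrs_per_slash8 = 2 ** (32 - 8)           # 16,777,216 addresses
reclaimed_slash8s = 4                      # hypothetical reclamation haul
burn_per_month = addrs_per_slash8          # one /8 per month, per above

months_of_runway = reclaimed_slash8s * addrs_per_slash8 // burn_per_month
print(addrs_per_slash8, months_of_runway)  # 16777216 4
```

Against reclamation projects that take years of legal and renumbering work, a few months of runway is indeed a bandaid.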
If only the available IPv4 address space were growing as rapidly as the available oil...
1960s prediction: peak oil in 1995, at 12.5 billion barrels/year.
Today's prediction: peak oil in 2035; current production is over 2.5x the previously predicted peak.
There's also an increasing expectation that rather than "peak oil" being a supply-side constraint, it will be a demand-side constraint as more efficient use and alternative energy sources become available - that is, we will never "run out".
Peak oil is really about EROI (energy returned on energy invested); we'll never run out, but there are reservoirs that aren't going to be produced because it doesn't make economic or energetic sense to do so.
Quite a bit actually. Our experience at Efficito may be interesting.
We previously had backups located in Denver and production servers in Europe, both through the same hosting provider (FDC). We could never get IPv6 working flawlessly intercontinentally (high packet loss, etc.), and the problems weren't on our side.
We moved to Hetzner and have our systems spread across their datacenters (backups and some backend systems on one side, customer data on the other) and have had absolutely no problems with IPv6.
What this tells me is that somewhere between the Czech Republic and Colorado there are routers which, although they nominally support IPv6, don't do so in a usable way.
Were they using HE as an ISP? I ask because HE makes a big deal of IPv6, and we regularly get packet loss between LA and Denver on IPv4. Way overloaded. FDC says they use Cogent -- Cogent used to be not-so-great (they're better nowadays).
We didn't look into it too closely; mtr would have told us, but we didn't dig in. They did tell us that if we upgraded our servers and switched data centers, we would get better peering. But we opted at that point to move off and go with Hetzner instead, because we weren't at a point where their higher-end servers would have made sense.
For an example of a real victim, look at Qatar, a country which only has a single IP address for the entire country (everyone sits behind a NAT): https://en.wikinews.org/wiki/Qatari_proxy_IP_address_tempora... [0]
Whenever someone from Qatar decides to vandalize Wikipedia, Wikipedia is forced (temporarily) to block the entire country from accessing Wikipedia. This has an adverse impact on the rest of the country.
Non-Qatari Wikipedia users also suffer, because Wikipedia makes those blocks very temporary (since they are effectively shutting off an entire country), which makes it easy for those vandals to regain access to Wikipedia quickly.
[0] This sad state of affairs is not solely due to IPv4 (incompetent/apathetic network administrators are also at fault), but it's a contributing factor.