
Can you convince me that end-to-end connectivity is desirable in most cases? I certainly don't want ingress from the public Internet to devices on my home network in the general case, and I think it's kind of nice that the Internet only knows the address of my router rather than that of my physical machine (of course, there are other ways to fingerprint devices, but let's not make it easier than necessary). You definitely want a gateway to implement firewall rules, and I'm not sure whether I care if that gateway is doing NAT as well or not? What can I do with a non-NAT-ing gateway? The only thing I can think of is that cloud IP addresses aren't as carefully guarded (I don't get up-charged for an un-associated elastic IP address).


Being able to talk directly between peers has advantages. Most services today basically require some kind of arbiter who has a public IP address.

While there are ways to poke holes in NAT, it’s not really scalable. Also, you might be behind more than one level of NAT and not even realize it today. E.g., when I ask a website for my IP, it shows something different than what my hotspot is assigned… and my hotspot is not reporting an RFC1918 address. This means I am already sharing ports with someone else on the public IPv4 address that the world sees. Also, there's no HTTP proxy in the middle here.

As for obscurity of addresses, NAT is pretty easily guessed in most scenarios today. IPv6 has far more address space per network making it really hard to scan. That combined with privacy addresses that change constantly is a pretty compelling reason to use IPv6 over IPv4+NAT if what you care about is people not being able to guess your IP.


> Being able to talk directly between peers has advantages. Most services today basically require some kind of arbiter who has a public IP address.

Right, but you can't do this without punching holes in your firewall, and I assert that's not a desirable tradeoff, at least for consumer use cases. As far as I can tell, you still need an arbiter with a public IP address.

> While there are ways to poke holes in NAT, it’s not really scalable.

Agreed, but this is a relatively infrequent problem. It seems like there is some belief that IPv6 is going to make p2p stuff painless, but for most use cases it's still going to require poking holes in something; however, for a few use cases (e.g., 2 game consoles on the same network) it will be significantly better. I definitely agree that there's some benefit to foregoing NAT, but it doesn't seem like it will improve most use cases, and it certainly doesn't seem like it will deliver the painless p2p experience that many people expect.

> As for obscurity of addresses, NAT is pretty easily guessed in most scenarios today. IPv6 has far more address space per network making it really hard to scan. That combined with privacy addresses that change constantly is a pretty compelling reason to use IPv6 over IPv4+NAT if what you care about is people not being able to guess your IP.

My concern about obscuring addresses was more about making fingerprinting more difficult (some website can't just see my IP address and associate it with my identity), albeit this isn't a well-founded concern, and it could be mitigated by rotating IP addresses.


> Right, but you can't do this without punching holes in your firewall, and I assert that's not a desirable tradeoff, at least for consumer use cases. As far as I can tell, you still need an arbiter with a public IP address.

That's true, but setting up these firewall rules dynamically is way easier than setting up NAT mappings (for example, UPnP through two different NATs never works properly).

And even for consumer use cases, gateways could provide a way to allow all traffic to a specific destination, as most operating systems should provide a proper firewall. Of course, there is still work to be done: the UI for these firewalls should be improved (for example, allowing an application to request to accept incoming packets, and letting the user choose whether to allow that only for the LAN, for the whole Internet, etc.).

Indeed IPv6 by itself will not make p2p stuff painless, but it's still a better basis than IPv4.


Well, the trusted-network ship has probably sailed, but giving up peer-to-peer in exchange for allowing sloppy endpoint security was a terrible trade.


I'm not sure what this means. Are firewalls "sloppy endpoint security"?


I think the person you replied to meant that gateway firewalls enable sloppy endpoint security (which I agree with).


I certainly don't want ingress from the public Internet to devices on my home network in the general case

This is ultimately an operating system issue. For most of the history of the web, we've used NAT routers and firewalls as a fig leaf over the operating system issue. What is it? Operating systems are extremely promiscuous about listening for traffic on a multitude of ports. Operating systems are promiscuous about including a vast number of daemons running in the background handling a variety of tasks. Operating systems are promiscuous about running a bunch of daemons that phone home all the time.

All of this stuff is completely opaque to the user. All of it occurs on a default opt-out basis. All of it requires an extraordinary amount of knowledge for the user to feasibly withdraw consent. This is the operating system problem.

In another world, I can envision computers running operating systems which are totally transparent and easily understood by their users. All running services would be opt-in and users would be fully aware of exactly what's happening on their machines. That would be the world where end-to-end internet connectivity is highly desirable.


I don't think it's just an OS issue, because people often want promiscuity within their home network, but want a moat and drawbridge keeping the rest of the world from that network. There's too much value in home / office situations where you want discoverability enabled, but only to other devices behind your gateway to the internet at large.


Not only that, but you don't need your OS handling and selectively allowing or dropping every random packet thrown at your IP either. You don't want to even have to worry about an OS inadvertently revealing info about your devices because of how they're accepting/dropping packets or screwing that up and letting in things it shouldn't. You can offload all that work to your gateway and free up your devices to only handle the traffic that they actually care about.

You can still have a DMZ, servers, and devices directly connected to the internet, but a gateway with a stateful firewall is a wonderful thing, and your typical gateway with NAT helps make things dead simple, solving far more problems than it causes.


Personally, I’d prefer not to have this isolation. I’d rather be able to access my home computer, printer, and other devices from anywhere in the world, not just when I’m at home. Moats and drawbridges are an anachronism from the Middle Ages.


Right, but you don't want anyone in the world to have access to your home computer and printer, right?

You're talking about a different problem: How can I extend the concept of my "home network" to the devices that I use and trust regardless of where I am? I'd argue that this is something that suggests that VPN functionality should get built into gateway devices.

Regardless, I don't want scammers in Malaysia port-scanning my 10 year old printer that's never going to get a security update.


I want anyone in the world to have access to my home computer and printer when I authorize it. Right now, to do that I have to configure my router as well as my operating system to allow it. But what if I'm not at home? I might be on someone else's network. Now I am at their mercy to configure the router so that my computer is accessible. In all likelihood, they will refuse to help me.


You're talking about making your attack surface as wide as physically possible (no virtual devices yet). Now you need to ensure every device that can see the internet is perfectly impenetrable. How feasible do you think that is?


Think doors and keys then. Or "smart locks" and "biometric scanners" if that's still not modern enough for you. There's a cost to convenience. Yeah, it'd be really convenient if your house didn't have any walls, you could just walk into any room from anywhere else. But so could any untrusted party.

Bugs and therefore vulnerabilities are inevitable. The larger your attack surface, the more likely some rando is to find a vulnerability and exploit it. No walls is real convenient up until someone unexpected walks right in and trashes the place.


> This is ultimately an operating system issue.

It's ultimately an issue at every layer, hence "defense in depth". Every layer does its part for security, we don't punt because some other layer ought to handle it.


That’s one way. Another way is to remove some layers. You don’t need to secure nonexistent layers.


You don't need to secure non-existent layers but you probably shouldn't remove your front door to prevent people from picking its lock.


The IP layer exists whether you're using v4 or v6.


I realize the OP is about IPv6, but my comment puts the blame on operating systems. Between the operating system running on the server, through the routers of the internet and a user’s home router, through the consumer operating system running on the user’s laptop, and all of the firmware and microcode along the way, there are many, many more layers involved than what is specified in the OSI model.

And so many of these layers exist for legacy reasons, business expedience, and market failure. They don’t actually make things better.


> In another world, I can envision computers running operating systems which are totally transparent and easily understood by their users. All running services would be opt-in and users would be fully aware of exactly what's happening on their machines. That would be the world where end-to-end internet connectivity is highly desirable.

Even then, you have the issue of bugs - not just in the programs themselves, but also in the kernel-mode stack and even in the hardware. As long as something is reachable from the Internet, it will get scanned and assaulted from the Internet - and the lower your attack surface is, the better.


operating systems which are totally transparent and easily understood by their users

I sort of glossed over this part so now I have a chance to elaborate. Alan Kay has put a ton of thought into this issue [1]. He firmly believes that we can build an operating system and application software with an extremely small footprint (lines of code) so that a single person can understand the whole thing.

Since he gave that talk, we've moved further and further away from Kay's vision. We've made things more and more complex, opaque, centralized, and difficult to change. We've given away our future to big tech companies. Heck, we've even given away the past. We've lost much of the freedom we had back in the 90's, let alone the 70's and 80's when Kay did so much of his work. We're going to have to work incredibly hard just to regain what we've lost.

[1] https://www.youtube.com/watch?v=oKg1hTOQXoY


No one human mind could ever grok the entirety of say, iOS or Android.

Do we just go back to the software Middle Ages?


Why would I opt for a network topology that restricts what devices/operating systems I can safely use on my network? Especially when I already have a solution that doesn't restrict me in this way?

It's like saying, if a person walking around at night gets mugged, it's a "them" problem for not carrying a weapon to defend themselves. Uh, no, let's create an environment where even a completely unprotected child is safe. Oh wait, we already have.



> Can you convince me that end-to-end connectivity is desirable in most cases? I certainly don't want ingress from the public Internet to devices on my home network in the general case […]

In the IPv4 case you have NAT and a firewall. If you have some software that you want others to connect to (communication, gaming, etc.) you have to punch a hole through the firewall (via UPnP, PCP), and then the software has to use a bunch of protocols to figure out what the public IP address of your router is: see STUN, TURN, etc.

See "How NAT traversal works":

* https://tailscale.com/blog/how-nat-traversal-works/

* https://news.ycombinator.com/item?id=30707711 (2022)

* https://news.ycombinator.com/item?id=24241105 (2020)

With IPv6 you just have a firewall, which you punch a hole through when needed (UPnP, PCP) and you're done (because there's no futzing about with determining the network address). When the P2P session is done, the hole is closed and you're protected again.

So if you have a 'home network', it cannot be reached from the Internet by default.

Note: you already have a device that's always on the Internet: your mobile phone. Lots of telcos are IPv6-only, and there's no NAT or firewall between it and the Internet.
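For reference, the STUN step mentioned above is a tiny UDP exchange. Here's a sketch of building the fixed 20-byte binding-request header from RFC 5389; a real client would send this to a public STUN server and parse the XOR-MAPPED-ADDRESS attribute out of the reply (both omitted here):

```python
import os
import struct

STUN_BINDING_REQUEST = 0x0001
STUN_MAGIC_COOKIE = 0x2112A442  # fixed value defined by RFC 5389

def make_binding_request() -> bytes:
    """Build a 20-byte STUN binding request header with no attributes."""
    transaction_id = os.urandom(12)  # random 96-bit transaction ID
    # message type (2 bytes), message length (2 bytes, zero: no attributes),
    # magic cookie (4 bytes), transaction ID (12 bytes)
    return struct.pack("!HHI", STUN_BINDING_REQUEST, 0, STUN_MAGIC_COOKIE) + transaction_id

req = make_binding_request()
```

The server echoes the transaction ID back along with the public address and port it observed, which is exactly the information a NATed peer can't learn on its own.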


In most cases no, but then again in most cases you would be perfectly happy with all your traffic going through a http(s) only proxy.

The two biggest use cases for direct p2p connections are multiplayer games and video calling. Latency is unavoidable if your traffic has to bounce through a third party.


> In most cases no, but then again in most cases you would be perfectly happy with all your traffic going through a http(s) only proxy.

Not at all; end-to-end encryption is still a very desirable property. I certainly don't want a consumer router decrypting my browser traffic even if it is re-encrypting it to send to my device. I'll tolerate HTTP proxies on the server side when I'm administering the proxy and I need layer 7 routing, but I want to avoid it wherever possible.

> The two biggest use cases for direct p2p connections are multiplayer games and video calling.

You still have to punch a hole in your firewall either way. The only advantage ipv6 has is that you can have two hosts listening on the same port (whereas port-forwarding in a NAT context only works for 1-host-per-port).

Tangentially, I was never a big fan of player-hosted games anyway because they tended to be more vulnerable to cheating and the host always had an unfair advantage (or else a dramatic penalty in the case of lag compensation). Moreover, it's much easier to send a malicious packet directly to another player than it would be to send it to the server and convince the server to proxy it through bit-for-bit (although a poorly written game server might still do just that).


>Moreover, it's much easier to send a malicious packet directly to another player than it would be to send it to the server and convince the server to proxy it through bit-for-bit

What prevents you from putting the same validation logic into the client, thus rejecting malicious packets at the destination?


I mean, correct validation logic is always ideal, but I'm positing a world in which software doesn't always get intentional validation logic. In particular, an intermediate server might prevent packets from flowing to the target client for any number of reasons which aren't intended as "validation". It's just harder to hack through an intermediary.


Ok, but I still don't see why you can't move that intermediary to the client. Spin up a docker container and run the game server there. Ta-da! You have the same security as with a remote server.

My point is that IPv6 restoring the end-to-end principle need not jeopardise the - real or perceived - security of multiplayer games.


It’s not clear to me who “you” is meant to refer to in this scenario.

If “you” refers to the user, then because the game isn’t architected to have a server running next to each client if the server binary is even distributed to users at all.

If “you” refers to the game publisher, then because they aren’t architecting it that way to begin with, because they aren’t thinking about running the server as a security feature.

Moreover, a game developer has incentives to protect its own servers; it has much less incentive to protect its end users. You might argue that end users being hacked is bad for business, but most end users wouldn't be able to attribute a hack to a particular piece of software or infrastructure, if they even know they're hacked in the first place (consider the rampant insecurity in the consumer router and IoT spaces).


I love player hosted gaming. But the people I game with are looking for "fun experiences" (kinda like going to a movie as a group, but more interactive) rather than competitively climbing a leaderboard.

Different use cases beget different requirements.


> You still have to punch a hole in your firewall either way.

No you don't have to do hole punching.


You have to do hole punching, but with IPv6, you just send packets from both ends to each other and it just works.

With IPv4 it is much much harder.
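To make the both-ends-send idea concrete, here's a minimal illustration with two UDP sockets on the IPv6 loopback. No firewall is actually involved on loopback, so this is purely illustrative; on a real network, each side's outbound datagram is what creates the stateful-firewall entry that admits the peer's packets:

```python
import socket

# Two "peers" on the IPv6 loopback; in the real scenario these would be
# global addresses, each behind its own stateful firewall.
a = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
a.bind(("::1", 0))
b.bind(("::1", 0))

# Both sides transmit first. On a real network this outbound traffic opens
# conntrack state so the other side's replies are let back in.
a.sendto(b"hello from a", b.getsockname())
b.sendto(b"hello from b", a.getsockname())

msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
```

With IPv4 the same dance additionally requires guessing what external address and port each NAT picked, which is where STUN and its friends come in.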


How do you have a direct connection between peers without one of them allowing ingress?


> Can you convince me that end-to-end connectivity is desirable in most cases?

p2p communications can be nice for latency-sensitive communications. Sometimes it's faster to communicate from user A to user B directly instead of going from user A to server Z to user B (although sometimes it's not faster... if latency is important, you really have to try all the accessible paths and use the best one, keeping in mind that paths may have asymmetric latency, so maybe you want A to send to B directly, but B should send to A through an intermediary; and path latency isn't static, so for a long session, if it's important, you need to probe throughout and change things around).

But, maybe you don't want your connection to be a full peer capable of receiving as well as initiating connections, you can run a stateful firewall on your end and drop incoming initiations. You'll still benefit from having end-to-end connectivity because it means your ISP can process your packets with basically no state, so there shouldn't be problems with connection state timing out and your connections being dropped without warning. If you run your own stateful firewall, you may still have that problem, but you might have less state required for a stateful firewall instead of a NAT, so maybe you can manage more connections.
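A toy sketch of that probe-all-paths logic. The probe callables here just sleep to simulate path RTTs; in reality each would be a real round trip (e.g. a UDP ping) over one candidate path, repeated periodically since latency isn't static:

```python
import time

def measure_rtt(probe_fn) -> float:
    """Time one round trip of an arbitrary probe callable."""
    start = time.perf_counter()
    probe_fn()
    return time.perf_counter() - start

def pick_best_path(paths) -> str:
    """paths: dict of name -> probe callable; return the lowest-RTT path name."""
    return min(paths, key=lambda name: measure_rtt(paths[name]))

# Simulated paths: the direct path is faster than the relayed one.
paths = {
    "direct": lambda: time.sleep(0.01),
    "relay": lambda: time.sleep(0.05),
}
best = pick_best_path(paths)
```

A real implementation would also probe each direction separately, since, as noted above, a path can be fast one way and slow the other.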


> I certainly don't want ingress from the public Internet to devices on my home network in the general case

This is a job for a firewall. NAT is not a firewall. You can easily filter incoming connections to untrusted devices when using IPv6, with the advantage that when you want to allow a certain kind of traffic in you can do so without messing around with port forwarding or dealing with multiple devices competing for access to standard port numbers on a single public IP address. That's assuming you actually get a public IP address; if you're behind CGNAT then port forwarding isn't even an option, since it would need to be configured on the ISP's side and not just in your router.

If you enable UPnP for automatic port forwarding, as most do, then NAT isn't blocking much of anything. The only difference between NAT with UPnP and IPv6 with no filter preventing incoming connections is in whether devices which open ports but don't set up forwarding can assume that incoming connections probably came from the same local network. However, it's considered poor practice to treat access to the local network as a means of authentication. (Note that with NAT alone, if your router receives a packet addressed to your local network's private IP range, and not the router's public IP address, it will forward it unmodified; preventing that is a firewall function, not a NAT function.)

> and I think it's kind of nice that the Internet only knows the address of my router rather than that of my physical machine

If you use IPv6 with privacy extensions enabled then the Internet will only know your /64 network prefix, which is basically the same thing (unique per subscriber and subnet). The rest of the address will be randomly generated and short-lived, unless you choose to assign an additional long-lived address e.g. for a server.
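Roughly what a privacy address amounts to, sketched with Python's `ipaddress` module: the ISP-visible /64 prefix plus a randomized 64-bit interface identifier the host regenerates periodically. (Simplified: real RFC 4941 temporary addresses also manage lifetimes and handle the universal/local bit; the prefix here is the documentation range.)

```python
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix with a freshly randomized 64-bit interface ID."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "SLAAC privacy addresses assume a /64"
    iid = secrets.randbits(64)  # the part an outside observer can't track
    return net[iid]

addr = temporary_address("2001:db8:1234:5678::/64")
```

Observers see the stable /64 prefix (comparable to seeing your NAT router's address today), while the host-identifying half of the address keeps changing.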

> I'm not sure whether I care if that gateway is doing NAT as well or not? What can I do with a non-NAT-ing gateway?

Doing NAT isn't the problem, requiring NAT is. When the architecture requires NAT devices can't receive incoming connections without port forwarding even when you want them to. We've gotten rather good at working around NAT's limitations (not without cost), but with IPv6 those workarounds are unnecessary. For example, any peer-to-peer multiplayer game, video chat, or file transfer app where both sides are behind NAT depends on third-party servers for NAT traversal. (Note that the fact that this works at all without actually forwarding all data through the third-party servers shows that NAT is not a reliable system for preventing incoming UDP connections: it can be tricked into thinking a connection is already established.) With IPv6 you don't need the third-party servers as the peers can connect to each other directly.


> With IPv6 you don't need the third-party servers as the peers can connect to each other directly.

This will never happen. NAT gets replaced with a stateful gateway still doing conntrack (look at OpenWRT...) and p2p works exactly the same. UPnP, port forwarding, STUN are still relevant and work the same... Except IPv6 hexadecimal addresses are a usability disaster and dual stack will forever be a security disaster. Worst technology ever.


> NAT gets replaced with a stateful gateway still doing conntrack…

Yeah, blocking incoming connections by default is a bad habit and needs to stop. It's fine for untrusted devices or private VLANs which shouldn't be accepting direct incoming connections in the first place (like cheap IoT gadgets), and should probably be additionally filtered to prevent inter-device connections and access to arbitrary Internet sites, but a laptop, phone, or tablet is perfectly capable of deciding on its own whether to accept or reject an incoming connection, and moreover, as a mobile device, it must assume the network could be hostile anyway.

> Except IPv6 hexadecimal addresses are a usability disaster…

How are IPv6 addresses "a usability disaster" when you never see them? Just use DNS like a sane person.

> …and dual stack will forever be a security disaster.

That's a new one to me. How is dual-stack (IPv4+IPv6) any worse security-wise than any other situation where you have multiple "upstream" Internet connections, e.g. for failover or load balancing?


Blocking incoming connections by default is what I like about the current scenario.

You don't trust "cheap IoT gadgets". I would like to be able to trust any/all my devices. But I don't.

I don't trust M$/Apple/Linux - AND any associated applications people might want to use at home(kodi, plex, screencast, NAS for example) - to be 100% perfect when it comes to "deciding on its own whether to accept or reject an incoming connection".

I see "block by default" as being a layer of security - one bit of defence in depth.

Happy to drop NAT (with its IP<->port mapping complications) for a straight IPv6 firewall though.

EDIT: concision


If you really want to block all incoming connections by default on your own network you can. Personally I think if a reasonably capable (i.e. non-IoT) host opens up a port to accept incoming connections, and there isn't a specific rule set by the local admin to block that port or host, then incoming connections should be allowed. NAT certainly doesn't stop all incoming traffic given that UPnP is enabled by default on most routers, not to mention all the methods available for UDP NAT traversal. It just makes it more complicated.

If you've ever connected your phone or laptop to a public WiFi network (or for that matter, the cellular data network) then it's been exposed to an environment where there is no extra layer of protection from incoming connections beyond that implemented by the host itself. We generally expect that to work without major security issues. Non-mobile, "appliance"-type devices might need stronger filtering if they weren't designed to be connected directly to the Internet, but that assumption is becoming less common as more devices require authenticated connections rather than trusting the local network.


And that's the thing. With a firewall and IPv6 we can each configure for what we want without the NAT hassle/expectation.

I would aim for a default block with allowList and agree with you that a non-IoT host using a UPnP-like mechanism (does UPnP cover IPv6 firewall like scenario?) is probably ok.

Ideally I'd like some kind of notification system where I can click "allow" for the firewall. (Maybe the firewall notifies my phone?) I think UPnP as it currently stands is a bit too hands off but can understand not every user wants to deal with this.

And we agree regards mobiles being in a default hostile environment and expecting it to work. But I see that as a matter of fit-for-purpose. I don't trust every computer I have to that level.


> does UPnP cover IPv6 firewall like scenario?

The miniupnpd UPnP daemon (used e.g. by OpenWRT) includes code[0] to handle IPv6 "pinhole" requests—not port forwarding, which isn't required for IPv6, but rather just opening a port in the firewall to permit incoming connections to a certain host.

[0] https://github.com/miniupnp/miniupnp/blob/b734f94bdf6ff555a2...
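As a rough illustration of what such a pinhole request looks like on the wire, here's a sketch of building the AddPinhole SOAP body. The argument names follow the IGD:2 WANIPv6FirewallControl service; SSDP discovery and the actual HTTP POST to the gateway's control URL are omitted, and the address below is a documentation-range placeholder:

```python
SERVICE = "urn:schemas-upnp-org:service:WANIPv6FirewallControl:1"

def add_pinhole_body(client_ip: str, port: int, protocol: int = 6,
                     lease: int = 3600) -> str:
    """Build the SOAP body asking the gateway to open a firewall pinhole."""
    args = {
        "RemoteHost": "",          # empty: allow any remote host
        "RemotePort": 0,           # 0: allow any remote port
        "InternalClient": client_ip,
        "InternalPort": port,
        "Protocol": protocol,      # 6 = TCP, 17 = UDP
        "LeaseTime": lease,        # seconds until the pinhole auto-closes
    }
    fields = "".join(f"<{k}>{v}</{k}>" for k, v in args.items())
    return (
        '<?xml version="1.0"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        f'<s:Body><u:AddPinhole xmlns:u="{SERVICE}">{fields}'
        "</u:AddPinhole></s:Body></s:Envelope>"
    )

body = add_pinhole_body("2001:db8::42", 51820, protocol=17)
```

Note the LeaseTime argument: unlike a static port forward, a pinhole expires on its own, which matches the "hole closes when the session ends" behavior described earlier in the thread.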


Awesome reference.

I wish I could upvote you multiple times. This interaction with you has been most enlightening. Thank you.



