NAT-PMP, UPnP, PCP, et al. primarily exist because consumer networks that have to share a public IP face more issues than simply opening a port up to the internet. Destination port conflicts, port remapping, and discovery of your public IP are huge fucking headaches that these protocols also assist with.
Given most consumer routers these days can be configured with a mobile app, I could easily foresee a saner alternative where devices could simply ask the gateway if they could open up a port and have a notification sent to a mobile app to allow it.
But, that said, given how many devices are mobile these days I think the benefit of endpoint firewalls shouldn’t be underplayed either.
NAT gateways that utilize connection tracking are effectively stateful firewalls. Debating whether a separate set of ‘firewall’ rules does much good is a bit ignorant, IMO, because most SNAT implementations by necessity duplicate this functionality.
Meanwhile, an IPv6 network behind your average Linux-based home router is 2-3 nftables rules to lock down in a similar fashion.
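Purely as an illustrative sketch of that claim (interface names are assumptions, with eth1 as the LAN side):

    table inet filter {
        chain forward {
            type filter hook forward priority 0; policy drop;
            # let LAN devices out
            iifname eth1 accept;
            # let tracked return traffic back in; everything else hits the drop policy
            ct state established,related accept;
        }
    }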
It's also trivial to roll your own version of Dropbox. With IPv6 it's possible to fail to configure those nftables rules. The firewall could be turned off.
In theory you could turn off IPv4 NAT as well but in practice most ISPs will only give you a single address. That makes it functionally impossible to misconfigure. I inadvertently plugged the WAN cable directly into my LAN one time and my ISP's DHCP server promptly banned my ONT entirely.
> In theory you could turn off IPv4 NAT as well but in practice most ISPs will only give you a single address
So, I randomly discovered the other day that my ISP has given me a full /28.
But I have no idea how to actually configure my router to forward those extra IP addresses inside my network. In practice, modern routers just aren't expecting to handle this; there is no easy "turn off NAT" button.
It's possible (at least on my EdgeRouterX), but I have to configure all the routing manually, and there doesn't seem to be much documentation.
You should be able to disable the firewall from the GUI or CLI for Ubiquiti routers. If you don't want to deal with configuring static IPs for each individual device, you can keep DHCP enabled in the router but set the /28 as your lease pool.
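A hypothetical EdgeOS sketch of that approach, reusing the /28 mentioned above (addresses, interface, and the NAT rule number are examples — check `show service nat` in configure mode before deleting anything):

    configure
    # put the public /28 directly on the LAN interface; the gateway takes the first usable IP
    set interfaces ethernet eth1 address 198.51.100.49/28
    # hand the remaining public addresses out over DHCP
    set service dhcp-server shared-network-name LAN subnet 198.51.100.48/28 default-router 198.51.100.49
    set service dhcp-server shared-network-name LAN subnet 198.51.100.48/28 start 198.51.100.50 stop 198.51.100.62
    # drop the setup wizard's masquerade rule so traffic is routed instead of translated
    delete service nat rule 5010
    commit ; save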
In the US many large companies (not just ISPs) still have fairly large historic IPv4 allocations. Thus most residential ISPs will hand you a single publicly routable IPv4 address regardless of whether you're using IPv6.
We'll probably still be writing paper checks, using magnetic stripe credit cards, and routing IPv4 well past 2050 if things go how they usually do.
Went to double check what my static IP address was, and noticed the router was displaying it as 198.51.100.48/28 (not my real IP).
I don't think the router used to show subnets like that, but it recently got a major firmware update... Or maybe I just never noticed; I've had that static IP allocation for over 5 years. My ISP gave it to me for free after I complained about their CGNAT being broken for like the 3rd time.
Guess they decided it was cheaper to just give me a free static IPv4 address rather than actually looking at the Wireshark logs I had proving their CGNAT was doing weird things again.
Not sure if they gave me a full /28 by mistake, or as some kind of apology. Guess they have plenty of IPs now thanks to CGNAT.
More like even if they looked at the logs they aren't about to replace an expensive box on the critical path when it's working well enough for 99% of their customers.
I once had my ISP respond to a technical problem on their end by sending out a tech. The service rep wasn't capable of diagnosing and refused to escalate to a network person. The tech that came out blamed the on premise equipment (without bothering to diagnose) and started blindly swapping it out. Only after that didn't fix the issue did he finally look into the network side of things. The entire thing was fairly absurd but I guess it must work out for them on average.
Did you even read the second paragraph of the (rather short) comment you're replying to? In most residential scenarios you literally can't turn off NAT and still have things work. Either you are running NAT or you are not connected. Meanwhile the same ISP is (typically) happy to hand out unlimited globally routable IPv6 addresses to you.
I agree though, being able to depend on a safe default-deny configuration would more or less make switching a drop-in replacement. That would be fantastic, and maybe things have improved to that level, but then again history has a tendency to repeat itself. Most stuff related to computing isn't exactly known for a good security track record at this point.
But that's getting rather off topic. The dispute was about whether or not NAT of IPv4 is of reasonable benefit to end user security in practice, not about whether or not typical IPv6 equipment provides a suitable alternative.
> But that's getting rather off topic. The dispute was about whether or not NAT of IPv4 is of reasonable benefit to end user security in practice, not about whether or not typical IPv6 equipment provides a suitable alternative.
And my argument is that the only substantial difference is the action of a netfilter rule being MASQUERADE instead of ALLOW.
This is what literally everyone here, including yourself, continues to miss. Dynamic source NAT is literally a set of stateful firewall rules that have an action to modify src_ip and src_port in a packet header, and add the mapping to a connection tracking table so that return packets can be identified and then mapped on the way back.
There's no need to do address and port translation with IPv6, so the only difference to secure an IPv6 network is your masquerade rule turns into "accept established, related". That's it, that's the magic! There's no magical extra security from "NAT" - in fact, there are ways to implement SNAT that do not properly validate that traffic is coming from an established connection; which, ironically, we routinely rely on to make things like STUN/TURN work!
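To make that concrete, a sketch of the one rule that actually differs (assuming eth0 is the upstream interface):

    # IPv4 behind a single public address: rewrite src_ip/src_port and track the flow
    # (lives in a postrouting nat chain)
    oifname eth0 masquerade;
    # IPv6 with globally routable addresses: no rewrite, just admit tracked return traffic
    # (lives in a forward filter chain with policy drop)
    ct state established,related accept;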
> Dynamic source NAT is literally a set of stateful firewall rules that have an action to modify src_ip and src_port in a packet header, and add the mapping to a connection tracking table so that return packets can be identified and then mapped on the way back.
Yes, and that _provides security_. Thus NAT provides security. You can say "well really that's a stateful firewall providing security because that's how you implement NAT" and you would be technically correct but rather missing the point that turning NAT on has provided the user with security benefits thus being forced to turn it on is preventing a less secure configuration. Thus in common parlance, IPv4 is more secure because of NAT.
I will acknowledge that NAT is not the only player here. In a world that wasn't suffering from address exhaustion ISPs wouldn't have any particular reason to force NAT on their customers thus there would be nothing stopping you from turning it off. In that scenario consumer hardware could well ship with less secure defaults (ie NAT disabled, stateful firewall disabled). So I suppose it would not be unreasonable to observe that really it is usage of IPv4 that is providing (or rather forcing) the security here due to address exhaustion. But at the end of the day the mechanism providing that security is NAT thus being forced to use NAT is increasing security.
Suppose there were vehicles that handled buckling your seatbelt for you and those that were manual (as they are today). Someone says "auto seatbelts improve safety" and someone else objects "actually it's wearing the seatbelt that improves safety, both auto and manual are themselves equivalent". That's technically correct but (as technicalities tend to go) entirely misses the point. Owning a car with an auto seatbelt means you will be forced to wear your seatbelt at all times thus you will statistically be safer because for whatever reason the people in this analogy are pretty bad about bothering to put on their seatbelts when left to their own devices.
> in fact, there are ways to implement SNAT that do not properly validate that traffic is coming from an established connection; which, ironically, we routinely rely on to make things like STUN/TURN work!
There are ways to bypass the physical lock on my front door. Nonetheless I believe locking my deadbolt increases my physical security at least somewhat, even if not by as much as I'd like to imagine it does.
The difference is that with IPv4 you know that you have that security, because there is no other way for the system to work; with an IPv6 router you need to be a network expert to reach that conclusion.
Look at this nftables configuration for a standard IPv4 masquerade setup:
    table ip global {
        chain inbound-wan {
            # Add rules here if external devices need to access services on the router
        }
        chain inbound-lan {
            # Add rules here to allow local devices to access DNS, DHCP, etc., that are running on the router
        }
        chain input {
            type filter hook input priority 0; policy drop;
            ct state vmap { established : accept, related : accept, invalid : drop };
            iifname vmap { lo : accept, eth0 : jump inbound-wan, eth1 : jump inbound-lan };
        }
        chain forward {
            type filter hook forward priority 0; policy drop;
            iifname eth1 accept;
            ct state vmap { established : accept, related : accept, invalid : drop };
            # Allow traffic that was DNAT'ed in prerouting (our published web server),
            # which otherwise arrives here in state "new" and hits the drop policy
            ct status dnat accept;
        }
        chain inbound-nat {
            type nat hook prerouting priority -100;
            # DNAT port 80 and 443 to our internal web server
            iifname eth0 tcp dport { 80, 443 } dnat to 192.168.100.10;
        }
        chain outbound-nat {
            type nat hook postrouting priority 100;
            ip saddr 192.168.0.0/16 oifname eth0 masquerade;
        }
    }
Note, we have explicit rules in the forward chain that only forward packets that:
* Were sent to the LAN-side interface, meaning traffic from within our network that wants to go somewhere else
* Are part of an established packet flow that is tracked, which in this simple setup means return packets from the internet
* Were deliberately DNAT'ed in prerouting, i.e. the web server we chose to publish
Everything else is dropped. Without these rules, if I were on the same physical network segment as the WAN interface of your router, I could simply send it packets destined for hosts on your internal network, and they would happily be forwarded on!
NAT itself is not providing the security here. Yes, the attack surface is limited, because I need to be able to address this box at layer 2 (skip ARP entirely: send the TCP packet with the internal dst_ip I want, addressed to the Ethernet MAC of your router), but if I compromised routers from other customers on your ISP I could start fishing around quite easily.
Now, what's it look like to secure IPv6, as well?
    # The vast majority of this is the same. We're using the inet table type here
    # so there's only one set of rules for both IPv4 and IPv6.
    table inet global {
        chain inbound-wan {
            # Add rules here if external devices need to access services on the router
        }
        chain inbound-lan {
            # Add rules here to allow local devices to access DNS, DHCP, etc., that are running on the router
        }
        chain inbound-nat {
            type nat hook prerouting priority -100;
            # DNAT port 80 and 443 to our internal web server
            # Note, we now only apply this rule to IPv4 traffic
            meta nfproto ipv4 iifname eth0 tcp dport { 80, 443 } dnat to 192.168.100.10;
        }
        chain outbound-nat {
            type nat hook postrouting priority 100;
            # Note, we now only apply this rule to IPv4 traffic
            meta nfproto ipv4 ip saddr 192.168.0.0/16 oifname eth0 masquerade;
        }
        chain input {
            type filter hook input priority 0; policy drop;
            ct state vmap { established : accept, related : accept, invalid : drop };
            # A new rule here to allow ICMPv6 traffic, because it's required for IPv6 to function correctly
            icmpv6 type { echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept;
            iifname vmap { lo : accept, eth0 : jump inbound-wan, eth1 : jump inbound-lan };
        }
        chain forward {
            type filter hook forward priority 0; policy drop;
            iifname eth1 accept;
            # A new rule here to allow ICMPv6 traffic, because it's required for IPv6 to function correctly
            icmpv6 type { echo-request, echo-reply, destination-unreachable, packet-too-big, time-exceeded } accept;
            # We will allow access to our internal web server via IPv6 even if the traffic is coming from an
            # external interface
            ip6 daddr 2602:dead:beef::1 tcp dport { 80, 443 } accept;
            ct state vmap { established : accept, related : accept, invalid : drop };
            # As before, allow the IPv4 traffic DNAT'ed in prerouting
            ct status dnat accept;
        }
    }
Note, there are only three new rules added here; the other changes are just so we can use a dual-stack table, so there's no duplication of the shared rules in separate ip and ip6 tables.
* 1 & 2: We allow ICMPv6 traffic in the forward and input chains. This is technically more permissive than it needs to be; we could block echo-request traffic coming from outside our network if desired. destination-unreachable, packet-too-big, and time-exceeded are mandatory for IPv6 to work correctly.
* 3: Since we don't need NAT, we just add a rule to the forward chain that allows access to our web server (2602:dead:beef::1) on ports 80 and 443 regardless of what interface the traffic came in on.
None of this requires being a "network expert". The only functional difference between an actually secure IPv4 SNAT configuration and a secure IPv6 firewall is...not needing a masquerade rule to handle SNAT, and adding the traffic you want to let in to forwarding rules instead of DNAT rules.
Consumers would never need to see the guts like this. This is basic shit that modern consumer routers should do for you, so all you need to think about is what you want to expose (if anything) to the public internet.
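(For the curious: on a typical Linux router either ruleset above would live in /etc/nftables.conf and get loaded with `nft -f /etc/nftables.conf`; `nft list ruleset` shows what's currently active.)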
With partitioning? No, you don't. It gets a bit messy if you also want to partition a table by other values (like a tenant ID or something), since then you probably need to get into using table inheritance instead of the easier declarative partitioning - but either technique just gives you a single effective table to query.
If you are updating the parent table and the partition key is correctly defined, then an update that puts a row in a different partition is translated into a delete on the original child table and an insert on the new child table, since v11 IIRC. But this can lead to some weird results if you're using multiple inheritance so, well, don't.
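A minimal sketch of the declarative flavor (table and column names are made up):

    CREATE TABLE events (
        created_at date NOT NULL,
        payload    text
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE events_2024 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
    CREATE TABLE events_2025 PARTITION OF events
        FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');

    INSERT INTO events VALUES ('2024-12-31', 'boom');
    -- Queries target the single parent table; since v11, an UPDATE that moves
    -- created_at across a boundary becomes a DELETE from events_2024 plus an
    -- INSERT into events_2025, as described above.
    UPDATE events SET created_at = '2025-03-01' WHERE payload = 'boom';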
I believe they were just pointing out that Postgres doesn't do in-place updates, so every update (with or without partitions) is a write followed by marking the previous tuple deleted so it can get vacuumed.
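You can watch that happen via the hidden ctid column (a sketch; the exact tuple positions will vary):

    CREATE TABLE t (id int PRIMARY KEY, v text);
    INSERT INTO t VALUES (1, 'a');
    SELECT ctid FROM t WHERE id = 1;    -- e.g. (0,1)
    UPDATE t SET v = 'b' WHERE id = 1;
    SELECT ctid FROM t WHERE id = 1;    -- now (0,2): a new tuple was written;
                                        -- the old one is dead until VACUUM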
There's a huge divide between abusing rebase in horrible ways to modify published history, and using it to clean up a patch series you've been working on.
"Oops, I made a mistake two commits ago, and I'd really like to get some dumb print statements I added out before I send this off to get merged" is perfectly valid; I did exactly that yesterday. A quick `git commit --fixup` followed by `git rebase -i --autosquash HEAD~3` and the dumb debugging code I'd left in was stripped out.
Then, there's other perfectly valid uses of rebase, like a simple `git rebase main` in an active development branch to reparent my commits on the current HEAD instead of having my log messed up with a dozen merge commits as I try to keep the branch both current and ready to merge.
So, yes, I do think editing history is a grand idea that should be used regularly. It lets me make all the stupid "trying this" and "stupid bug" commits I want, without polluting the global history.
Or, are you telling me you've also never ended up working on two separate tasks in a branch, thinking they would be hard to separate into isolated changes, and they ended up being more discrete than you expected so you could submit them as two separate changes with a little help from `git cherry-pick` and `git rebase` too?
Editing history isn't evil. Editing history such that pulls from your repository break? That's a different story entirely.
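As a concrete sketch of that fixup workflow (the hash and path are hypothetical):

    # the stray printf went into commit abc1234
    git add src/server.c
    git commit --fixup abc1234           # records a "fixup! <subject of abc1234>" commit
    git rebase -i --autosquash abc1234^  # rebase from its parent; autosquash reorders
                                         # and squashes the fixup into it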
Editing history lets people hide information, intentionally or not. You are bold to claim you know better than future people what information they will need.
What's it matter if you have an extra commit to remove a file before merge? Perfectly valid, and doesn't hide anything.
Caring more about a "visually pleasing log" when you could care about an information-rich log doesn't jibe with me. Logs aren't supposed to be "clean".
If I want features in two branches, I make two branches. Cherry-pick is also bad for most people, most of the time.
I care about having a commit log that's useful and easy to scan through, it's not about it being "visually pleasing". Having a dozen "oopsie" commits in the log doesn't make my life any easier down the road, all it does is increase noise in the history.
Again, once something hits `main` or a release/maintenance branch then history gets left the hell alone. But there really is no context to be gained from me fixing stupid things like typos, stripping out printf() debug statements, etc. being in the commit logs before a change gets merged.
> Editing history lets people hide information, intentionally or not. You are bold to claim you know better than future people what information they will need.
You're already deciding what information is important to the future when you decide at which points you commit.
Reductio ad absurdum: why not commit every keystroke, including backspaces? By not including every keystroke, you are hiding information from future people!
It is used for tracking; that's the whole point of the header. "Who's sending me all of this traffic" is a useful, non-invasive thing for websites to have access to. You can use rel="noreferrer" on a link to disable the header for a specific link, as well as the `Referrer-Policy` header and `<meta name="referrer" />` to have some additional control (the 'origin-when-cross-origin' value can be useful in some cases, so destination sites can attribute what origin traffic came from, but not the specific page, while you can still track it on your own origin - the stricter 'strict-origin-when-cross-origin' is the default behavior in browsers these days).
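For example (a sketch; the URL is a placeholder):

    <!-- suppress the Referer header for one outbound link -->
    <a href="https://example.com/" rel="noreferrer">example</a>

    <!-- page-wide policy: send only the origin on cross-origin requests,
         the full URL within your own origin -->
    <meta name="referrer" content="origin-when-cross-origin">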
"Simple" VPS providers like DigitalOcean, etc. really need to get the hell onboard with network virtualization. It's 2026, I don't want to be dealing with individual hosts just being allocated a damned /64 either. Give me a /48, attach it to a virtual network, let me split it into /64's and attach VM's to it - if I want something other than SLACC addresses (or multiple per VM) then I can deal with manually assigning them.
To be fair, the "big" cloud providers can't seem to figure this shit out, either. It's mind-boggling. I'm not saying I've gone through the headache of banging out all the configuration to get FRRouting and my RouterOS gear happily doing the EVPN-VXLAN dance; but I'm also not Amazon, Google, or Microsoft...
Do you think anything other than trivial internal networking is a common requirement on DO? I'm not saying it's not, I really don't know - I haven't been in the production end of things for a while, and when I was, everyone was basically using AWS et al. for non-trivial applications. They make it easy enough to set up a private IPv4 subnet to connect your internal services. Does that not satisfy your use case, or are you just avoiding tooling that might be obsolete sooner than IPv6?
Are people with lower incomes entitled to all the benefits of the state without paying their own fair share?
If anything, there’s plenty of literature showing that social programs and tax exemptions on the poor make underpaying them possible to begin with. Walmart couldn’t pay $12/hr. if tax exemptions and SNAP and other aid didn’t fill the gap.
We don't have to go to the extremes of employers that pay what are effectively poverty wages relative to cost of living.
The household that brings home $80K/yr would always spend a larger percentage of their income on taxable consumption than an executive that takes home multiple millions per year. Progressive income tax brackets are a better tool for making sure those who are able to pay a larger share of the common good do so.
Unfortunately, we still have not come up with a realistic way to deal with the hoarding of wealth - both by individuals and by corporations like Apple with massive war chests. Even some more broadly accepted ideas like an LVT have some issues if the future really does trend towards "AI" displacing people from their jobs.
One way or another, the reality is that the tools we have right now have persisted because they do their job well when politicians act in good faith and don't implement poor fiscal policy emphasizing short-term gains that result in long-term pain. But, they're still fundamentally flawed, and something is going to have to change if we do see dramatic changes to society in the coming decade due to developing technologies.
> The household that brings home $80K/yr would always spend a larger percentage of their income on taxable consumption than an executive that takes home multiple millions per year. Progressive income tax brackets are a better tool for making sure those who are able to pay a larger share of the common good do so.
"Progressive income tax brackets" don't actually do this. The people with so much money they can't spend it all use various tax shelters as it is. They typically manage to not even pay tax on the amounts they do spend, because they borrow money and spend it instead of recognizing it as income first. So they would be paying more under a flat consumption tax than they do under the status quo. The "progressive income tax system" doesn't actually work the way it's claimed to.
On top of that, the problem is essentially fake. People absolutely can and do spend millions of dollars a year. Cardiologists making seven figures buy huge houses with multi-car garages full of exotic makes etc. It's spending billions of dollars a year that nobody is really going to do, but that's such a tiny percentage of people that it's ridiculous to design a tax system being imposed on everybody else on the basis of that, and those are the exact people who aren't paying the high rates under the existing system anyway.
Here's a proposal: Have a flat consumption tax, and then have an income tax where the rate is 0% up to the 99.9th percentile income and only the top 0.1% even have to file a tax return. The latter is going to be avoided in the same ways it is now, but at least then you can't say the billionaires don't have a higher nominal rate, right?
Is it though? Both social security and 401k withdrawals are taxed under the existing income tax, so they'd just be paying it as consumption tax instead.
Also, aren't people with an enormous amount of stored wealth "the rich"?
You don’t have to have an enormous amount of stored wealth to be on a livable fixed income (e.g. a municipal pension) and that income could be very lightly taxed today relative to a viable consumption tax.
Government pensions seem like the easy one. The state would be getting the revenue from when they spend the money, so they could use it to adjust the amount of the pension ("cost of living adjustment") and it would be revenue-neutral.
But also, government pensions tend to be, shall we say, unreasonably generous, because they live in that sour spot between "the legislature doesn't have to pay for this in the current year's budget" and "the union negotiates reasonable-seeming rules it knows it can game against public officials who are in their pocket or DGAF" e.g. pension is based on compensation in the last year before retirement and overtime is "awarded" based on seniority, so that people put in 80 hours of overtime every week in their last year. And then we're back to, aren't those the people we want to be taxing anyway?
Are state government pensions worse? Folks live and work for a state that provides a pension, e.g. Illinois, then retire and move out of the state, no longer contributing to that state's economy, just drawing on it. Thoughts?
> If anything, there’s plenty of literature showing that social programs and tax exemptions on the poor make underpaying them possible to begin with.
That literature is playing fast & loose with terminology to justify a preexisting conclusion.
Anyhow, we know what life was like before Great Society programs, and it wasn't higher wages for the poor, we've just forgotten because it's been so successful. That memory hole oddly works in favor of both those who promote expanding welfare and those who oppose it.
> Walmart couldn’t pay $12/hr. if tax exemptions and SNAP and other aid didn’t fill the gap.
From a basic macroeconomic standpoint, most welfare programs push wages up by marginally reducing the labor pool. In a free market, how would Walmart be forced to pay a "livable wage" if entitlements didn't exist? Do you really think people would just choose not to work and starve if their wages didn't cover all their expenses? Out of spite? It doesn't make sense, and it certainly doesn't comport with history. It makes even less sense when people buy this argument yet also support minimum wage laws.
The counterexample is the Earned Income Tax Credit (EITC). EITC increases as your wages increase, theoretically incentivizing work, rather than diminishing as you earn more. This would increase labor supply. What tends to happen to prices (i.e. wages, the price of labor) when supply increases but not demand? Presumably the more cogent literature bemoaning Walmart's labor practices is primarily relying on EITC while hoping the reader glosses over the distinction.
> Anyhow, we know what life was like before Great Society programs, and it wasn't higher wages for the poor, we've just forgotten because it's been so successful.
That doesn't tell you the answer because the programs were instituted prior to the productivity increases in the 20th century. Are people better off now than they were before the general availability of electric light or mechanized transportation? Probably, but that doesn't mean you can trace the development of modern agriculture to the existence of SNAP.
> In a free market, how would Walmart be forced to pay a "livable wage" if entitlements didn't exist?
People frequently have choices between jobs that are easier or otherwise more pleasant and jobs that pay more. For example, long-haul truck drivers get paid significantly more than short-haul drivers, but they also sleep in their trucks and don't get to see their families most nights. Likewise, a lot of jobs require you to get a degree or certification, which can be a lot of work, which people may not be willing to do if they don't need to.
If you give them "benefits" then they take the easier job over the better paying one. Which allows the employer offering the easier job to pay less and still get applicants. It also creates a poverty trap if the benefits are contingent on not making more money, because then the compensation advantage of the higher-paying job is much smaller -- in some cases negative.
> EITC increases as your wages increase, theoretically incentivizing work, rather than diminishing as you earn more.
Except that it does diminish as you earn more, because it has an aggressive phase out. For a single person with no dependents, the phase out kicks in below federal minimum wage. If you had a minimum wage job at 30 hours a week and wanted to work 40 hours, increasing your hours would cause you to receive a smaller EITC.
There is a reason the EITC represents ~0.1% of the federal budget, and it's not because it's a bad idea, it's because it's implemented in a way that prevents people from getting much from it.
> People frequently have choices between jobs that are easier or otherwise more pleasant and jobs that pay more. For example, long-haul truck drivers get paid significantly more than short-haul drivers, but they also sleep in their trucks and don't get to see their families most nights. Likewise, a lot of jobs require you to get a degree or certification, which can be a lot of work, which people may not be willing to do if they don't need to.
That's a sleight of hand. There's value in choice, and that value is being reaped by the worker precisely because poverty programs make it possible.
But let's go with that example. You're assuming the number of truckers and trucker-hours would remain constant. But they wouldn't. That's just not how dynamic systems work. There are other people for whom short-haul trucking is the less desirable choice than what they're doing now, or who work fewer hours than they're doing now. Without the welfare subsidies, the supply of short-haul trucking labor would likely increase--more people working more hours. Similarly, you're assuming the demand for short-haul trucking would remain the same at higher wages. But demand in economics is not the same thing as "I would like" or even "I need", and at higher wages the demand would likely diminish.
The whole argument is the economics equivalent of a perpetual motion machine, and it's sold by throwing contrived complexity at people and hoping they don't think it through. Like perpetual motion or free energy machines, at the most minuscule scale there are exceptions and caveats (maybe short-haul wages in particular would rise, especially after accounting for the totality of labor economy changes), but those exceptions don't scale to a systems level. That doesn't stop con artists from selling their Rube Goldberg machines, though, knowing the vast majority of people won't think it through.
What the rhetoric is trying to do is bolster support for a livable wage through radical policy changes by drumming up anti-corporate sentiment. It's in service of a normative argument (a "livable wage" is a reasonable social ask, IMO, notwithstanding its amorphous nature), but disguised as a scientific argument that can only result in failure by setting wrong expectations about how markets and policy operate, ultimately reinforcing cynicism.
> There's value in choice, and that value is being reaped by the worker precisely because poverty programs make it possible.
It seems like you're ignoring the same thing you're objecting to: It's a dynamic system.
If long-haul trucking companies offer less desirable but higher paying jobs and easier jobs aren't paying a living wage then people would pick the harder job that lets them not starve. Which means the easier jobs would have to pay more in order to attract workers, unless those workers can get government assistance. If they can, the easier jobs can get people to work without paying more, because the assistance programs let them pick the easier job even at lower pay. In other words, the subsidies were supposed to go to the poor and instead they went to the lower-paying employers.
In a dynamic system the long-haul companies would then have to respond if it became more desirable to work somewhere the pay is low enough to get government assistance, but the phase outs give the low-paying employers another advantage.
Say the undesirability of the job is good for $15k/year in additional compensation. However, if you got paid $15k more, you'd lose $10k to government benefit phase outs and additional taxes - you keep only a third of each marginal dollar. To actually get paid $15k more, you'd have to "get paid" $45k more. Which is to say, the employer with the low-paying job can pay you $45k less.
But it's a dynamic system, so they might "only" pay you $35k less and then hire more people. The trucking companies would then have to pay $45k more than them when it used to be $15k. Even with Walmart paying less than before, their relative advantage has increased. And there are two ways to get something a long distance over land: A long-haul truck the whole way, or a short-haul truck to the rail yard, a freight train, and then another short-haul truck. So then instead of a truck driver getting higher pay per mile over 2000 miles of driving, a different one gets lower pay per mile over 60 miles of driving twice, and a rail company gets the rest.
So the low-wage subsidies cause the amount of higher-wage labor demand to go down by making it less competitive with non-labor alternatives to perform the same function, as labor is diverted to the lower-paying jobs even while enabling them to pay even less.
> There are other people for whom short-haul trucking is the less desirable choice than what they're doing now, or who work fewer hours than they're doing now.
All of that is already baked in to the existing numbers; the long-haul drivers get paid more because fewer people want to do it.
> Like perpetual motion or free energy machines, at the most minuscule scale there are exceptions and caveats, but those exceptions don't scale to a systems level.
Only they're not exceptions. If you subsidize something you get more of it. What happens if you subsidize low-paying jobs but not higher-paying jobs?
Yes, but ideas like FairTax exist that directly address this issue in some fashion. It's easy to come up with reasons why something won't work; it is a lot harder to find solutions.
Embraer has been working on their auto-takeoff system, E2TS, for some time. While improved safety during a critical phase of flight is a goal, airlines are looking at the possibility that it allows increased performance (higher MTOW, shorter runways, less fuel burn).