Hacker News | new | past | comments | ask | show | jobs | submit | bazfoo's comments | login

As with all things, context matters: here, the important information was on-platform and the emails were opt-in subscriptions. YMMV.

> It seems like a low cost to maintain people who subscribed but never loaded a tracker image.

On the contrary: each send costs money, and it adds up. Unbounded growth in possible recipients also adds cost to every process that touches the list.

Reputation with service providers is another concern. Google, for instance, will penalize a sender's deliverability if enough recipients never open emails, so failing to clean your list hurts delivery to your active users.

> What percentage of your addresses never phone home?

These were an extreme outlier, enough of one that it's simpler to send an email announcing the pending removal unless they opt back in.

> So pruning them from your list would remove potentially lucrative customers.

These decisions aren't made in a vacuum. Link clicks are tracked as well, and those users weren't clicking through from emails either.

On the system I was cleaning up, something like 20% of outbound emails had _zero_ engagement.


> You know, I wonder if there's something here that a next-generation language can't get in on, some sort of help to provide to the developer who says "OK, I'd like to upgrade this package for people, could you please help me ensure that I'm not going to break anybody in the process?"

Russ has proposed a "go release" command that is intended to help with that process. It's probably simple right now, but has lots of room to grow in that direction.

See: https://research.swtch.com/vgo-cmd
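As a toy illustration of the kind of check such a tool could perform (this is not what "go release" actually does, just the general idea), here's a sketch that diffs the exported top-level identifiers of two versions of a package using go/parser:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// exportedDecls returns the exported top-level identifiers in a source file.
func exportedDecls(src string) map[string]bool {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "pkg.go", src, 0)
	if err != nil {
		panic(err)
	}
	names := map[string]bool{}
	for _, d := range f.Decls {
		switch d := d.(type) {
		case *ast.FuncDecl:
			if d.Recv == nil && d.Name.IsExported() {
				names[d.Name.Name] = true
			}
		case *ast.GenDecl:
			for _, s := range d.Specs {
				switch s := s.(type) {
				case *ast.TypeSpec:
					if s.Name.IsExported() {
						names[s.Name.Name] = true
					}
				case *ast.ValueSpec:
					for _, n := range s.Names {
						if n.IsExported() {
							names[n.Name] = true
						}
					}
				}
			}
		}
	}
	return names
}

func main() {
	v1 := "package demo\nfunc Get() int { return 1 }\nfunc Put(v int) {}"
	v2 := "package demo\nfunc Get() int { return 1 }"
	old, cur := exportedDecls(v1), exportedDecls(v2)
	for name := range old {
		if !cur[name] {
			fmt.Printf("removed exported identifier: %s\n", name)
		}
	}
}
```

A real tool would compare full signatures and types, not just names, but even this crude diff catches "you deleted something people import."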


On the flipside, I see a lot of tools pushed that are more complicated, less flexible, and far buggier than the old tools. And then inevitably in a few years they're deprecated in favour of something newer and shinier and the old tools are still perfectly fine (e.g. Make).


I concur with the sentiment.

The risk is still there for dependencies, but it helps that the community for the most part follows "a little copying is better than a little dependency" as an adage.
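As a hypothetical example of the adage in practice, a helper this small tends to get copied into the package that needs it rather than imported from a utility dependency:

```go
package main

import "fmt"

// clamp is the sort of three-line helper that's cheaper to copy
// into each package that needs it than to take a dependency for.
func clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

func main() {
	fmt.Println(clamp(15, 0, 10)) // 10
	fmt.Println(clamp(-3, 0, 10)) // 0
}
```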


I'm in Australia and looking at a box of plain Ibuprofen. It also recommends limiting to a few days at a time unless told otherwise by a doctor.


While the joke is relatively amusing, it would be better to see more useful solutions contributed to the ecosystem, like depscheck [1] for Go.

[1]: https://github.com/divan/depscheck


I found myself doing the same thing for Ansible.

The problem I ran into was wanting to test service restarts in a systemd-based environment. Older releases using sysvinit work perfectly fine.


This is why you should check out systemd-nspawn. It was designed especially for this use case.

Also, if you're on upstart, give LXC a shot. We currently test our Ansible scripts by deploying to LXC containers, giving each one a static IP on a bridged network to simulate our production environment. Just swap Ansible inventory files. Works like a charm.


This gist works with test-kitchen to run systemd in a CentOS 7-based container.

It should be simple enough to adjust to run ansible.

https://gist.github.com/glenjamin/2d04e9c2a163c7848173


This is a major problem with the now-in-vogue use of Docker for testing this sort of thing, yes. Containers aren't a replacement for a virtual machine, and testing against something that doesn't even resemble the deployment environment seems wacky to me.


I'm having trouble finding data that uses the same methodologies. The CIA World Factbook, specifically, seems to be lacking data for Australia.

It would be good to see some solid data, because one thing that surprised me while travelling in the USA was the extreme poverty I saw just down the road from extremely wealthy neighborhoods.

Not to say that Australia doesn't have its own set of problems, particularly in Indigenous Communities, but this seems to operate at a wholly different level.


I don't disagree. I've spent a lot of time in Canada and although Canada has a poverty rate not that different from the US, you just don't see the ghettos like you do in the US.

I'm not sure why that is. My initial impression is that Canadian cities just don't allow certain sections of their cities to stagnate and crumble (maybe funding?). I remember reading that at one point in time Philadelphia had 16,000 abandoned vehicles in the city. In Canada, if a car looks like it's abandoned on a public street, you'd be lucky if 2-3 weeks passed before it was towed. Maybe that extends to other efforts as well?


And you can have something fun like the following in your bashrc to attach to your running emacs daemon whenever you need it in the terminal:

    function semacs() {
      # open the file as root (via tramp's /sudo::) in the running daemon;
      # quote the realpath call so paths with spaces survive
      emacsclient -t -a "" "/sudo::$(realpath "$1")"
    }


Even more fun is using tramp to edit files as root on a remote machine via ssh. The whole "installed editor" canard is somewhat of a non-issue from the emacs side.
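For reference, with a reasonably recent Emacs the ad-hoc multi-hop syntax looks something like this (user, host, and path are placeholders):

    C-x C-f /ssh:user@host|sudo::/etc/hosts

That hops to the remote host over ssh, then elevates to root with sudo, all from your local emacs.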


Are there any good pointers to where the amendments have actually expanded surveillance capabilities? I've been going through the amendments and comparing them to the original bill, but so far I'm finding a lot of rewording.

For instance, the definition of computer[1] that is being suggested to allow monitoring of the entire internet is:

  computer means all or part of:
    (a)  one or more computers; or
    (b)  one or more computer systems; or
    (c)  one or more computer networks;
    (d)  any combination of the above.
Whereas the old definition[2] was:

  computer means a computer, a computer system or part of a computer system.
Both of these seem equivalent in my eyes. If so, the horse seems to have already bolted years ago.

Frankly I have no idea where to go from here. How does one talk to your local MP when the details of the proposed legislation are so muddy?

[1]: http://parlinfo.aph.gov.au/parlInfo/search/display/display.w...

[2]: http://www.comlaw.gov.au/Details/C2014C00613/Html/Text#_Toc3...


This is similar to the other metadata legislation where they neglected to provide an exact definition of what metadata is.

Computer system, I think, could reasonably be thought of as a local network. Whereas in [2] they are being more explicit to head off any issues with something wider.

There is really no pressure on them to limit the scope of this, so it doesn't surprise me that they would go for the widest possible definition and then rein it in if there is any resistance.


Those seem in no way equivalent. The old definition pretty clearly refers to one computer. You could certainly argue that several hosts represented one computer, but a judge would throw it out.

The new definition plainly allows "one or more computer networks" which means literally the entire internet since it's just "one or more computer networks".


It seems to me that the explicit separation (and vagueness) of 'computer' and 'computer system' is intended to have the latter cover networks. But perhaps you're right.

