Hacker News: gcd883's comments

> the genetic exposure risk is pretty directly about aboriginal and islander communities

Is it, though? Families and communities share much more than genes with each other, including big lifestyle choices, so we can't easily conclude that genes are to blame.


A partial link has been discovered between the biome and obesity. It's likely not down to a single gene but rather the total sum of factors.



I agree that poor lifestyle choices should be somehow disincentivized.

Leaving aside the fact that BMI isn't a great metric (as mentioned in another comment), we can probably come up with a better one.

80% of healthcare costs are driven by 15 conditions, which in turn are mostly affected by 8 behaviors [1]. I bet that the bulk of the cost is due to completely changeable lifestyle behaviors (diet, drinking, smoking, physical activity).

Alcohol and tobacco should be more heavily taxed (fast-food is trickier, as we need to make sure that there's healthy, equally-accessible food options out there). Education around healthy lifestyle choices should be more available, as well as gym memberships/fitness equipment/trainers. Sure they do cost money, but we're already paying so much through non-preventive healthcare.

Any other ideas on how to disincentivize unhealthy lifestyle choices?

[1] http://www.aon.com/attachments/human-capital-consulting/2012...


Be careful what you ask for. High taxes on alcohol would be like Prohibition "lite". Smuggling and corruption would increase. Drinkers would go blind from poorly distilled moonshine.


I really enjoyed "Deep C Secrets" as an intermediate C book:

Expert C Programming: Deep C Secrets https://www.amazon.com/dp/0131774298/

A little dated, but still full of relevant knowledge.


What I liked about this book is that it's very upfront about how messed up C is. I read about half of it and it made me never want to write any C ever again.

An especially memorable part for me was how it took 3 pages of language lawyering to explain why this doesn't compile:

    foo(const char **p) { }

    main(int argc, char **argv)
    {
        foo(argv);
    }


I don't see this as a good example of "how messed up C is". It doesn't compile because you're violating const correctness, and any other language with similarly sound correctness requirements would flag it in a similar way. If anything, this is one of those rare cases where C chooses correctness over convenience.

It takes quite a bit to explain because the "common sense" is that if one level of pointer indirection allows you to pass non-const where const is expected, then two levels shouldn't be any different. But common sense is wrong, and the compiler is right. And it doesn't require any language lawyering, either - all you need to do is slightly tweak the example to show why exactly it is unsafe.


A better example of messiness in that code is that it's not a compile-time error (though it can be nasty undefined behavior) for the function without an explicit return type (so defaulting to int) to end without returning anything.


Compilers can - and, indeed, do - diagnose UB as compile time errors, or at least warnings (which you can then turn into errors if you want) all the time.

Now, it is not undefined behavior for the function to not return anything despite having a return type. It is UB for the caller to try to use the returned value, but in this case it's not actually used.

The implicit int feature is really very much deprecated (in fact, it was already removed in C99, almost 20 years ago!). If, for some mysterious reason, you're trying to compile code like that, it's probably very old code dating to before C was an ANSI standard and before the void return type was a thing. In such code, it was pretty common for functions to not return anything, because semantically they didn't - it was just a quirk of the language that there was no way to express a non-value-returning function back then, and so returning an (undefined) int became idiomatic. In C89, this entire behavior was retained largely because backwards compatibility was necessary. C99 finally fixed it.


One of my all time C favorites.


To provide a slightly different perspective: I recently came across a great blogpost/newsletter: http://mailchi.mp/ribbonfarm/how-to-ride-your-brain-bicycle

It argues that the main reason we don't achieve our goals is not external factors or the lack of an effective productivity system; it's a commitment failure. We have subconscious second thoughts about whether our goals are actually important to us; we lack a sense of purpose.

"There is no point being focused, with a finely tuned productivity system, and maniacal discipline against distractions, if you're not sure what you're doing is worth doing"

So sure, our attention span is decreasing and we're becoming more easily distracted, but is that all there is to blame for the alleged productivity loss?


Exactly. And once the fear of consequences of missing a deadline kicks in, we gain a purpose - avoidance of the negative consequences. So we procrastinate until it’s almost too late.


The article even takes that a notch further: for things that really matter to you, once you find your true calling, you wouldn't even procrastinate in the first place. You get immersed in omnivorous curiosity around it, you get lost. There's no social media/tv/external factors that would distract you from it.


Farnam Street Blog also has their comprehensive list of mental models: https://www.farnamstreetblog.com/mental-models/


heck, they could even tell by sniffing wifi traffic


location + time + song = exact match

Of course, you'd have to trust Shazam to provide accurate data.


I wonder how many people actually end up connecting to the store wifi.


I think they could just get your MAC address if you have your WiFi turned on, without you actually connecting.


iOS doesn’t allow this any longer.


Or you install a DAS in the store and monitor the traffic of phones connected to that.


Wow. The article says that she was wearing daily disposables, which are supposed to be much safer. Did AK really set in within 24 hours?


I really wonder this too. I've known people who used daily contacts but would leave them in for multiple days at a time (one was a swimmer). I've swum a lot with my contacts myself, as I'm mostly blind without them and don't want prescription goggles. This article makes me a little more reluctant to go swimming with contacts, but I throw them out every night.


I wear monthlies, but when swimming I swap for dailies, chuck them right after and wear glasses for the rest of the day. I feel safer that way. But I don't take any kind of precaution in the shower. I didn't even realise there was a risk there.

One more thing to live in fear of besides climate change, cancer, drug resistant bacteria, nuclear war, terrorism and the singularity. Yay.

Should I feel better or worse that it's far less likely to affect me than any of those things?


Additionally in the U.K. hot water may not be as safe as it is in the US. Older houses tended to be built with a hot water tank in the attic (loft) and apparently some were somewhat open to the elements and animals. A legacy of that is the dual taps you find in bathroom sinks/tubs.


The contact lens cleaner solution always insists that you wash your hands with it instead of tap water. I think I'll do that from now on when changing contacts.


I think that's a little overboard. If you just wash your hands with soap and water and dry them with a clean towel, I'm guessing you shouldn't be at risk.


Why not use swimming goggles? Presumably those would reduce the chance of water getting into your eyes :)


I wear goggles. If you are a swimmer, you know goggles won't keep your eyes dry.


Yes, they will if you wear the proper size


After you are done swimming, you are likely to remove the goggles before you've completely dried yourself. Water can run from your hair down into your eyes.

Also, after swimming for an hour and taking a break, I like to remove them for a bit. It has happened that I got water in my eye in this way. Never had an infection because of it though!


They messed up and overbooked the flight, sure. But why on earth would they forcefully drag people out of the plane, while they could just find volunteers?

They could offer cash/miles to whoever volunteered, increasing the offer until someone accepted. I've seen other airlines do this on several occasions. They couldn't have handled it worse than they did.


They made offers -

1) $400

2) $800

3 and final) Leave the airplane or we'll beat the shit out of you.

Glad I made the decision to not fly United 4 years ago.


They don't even need to offer more money. Offer $1000 and two free return tickets to anywhere in the US, or miles.

I find it hard to believe no one volunteered, even for a small amount, unless the flight was already delayed.


When they do this, it's usually a shittier offer than it sounds. They're not offering cash, but value in tickets, which expires a short time in the future and can only be applied to certain flights.

It'd be interesting to see statistics on how often these offers are even redeemed.


The few times I've taken the offer, the tickets were good for a year. But for someone who doesn't fly a lot, that is a relatively short amount of time.


They did, they went up to $800 and nobody volunteered. They were too cheap to offer any more.


This is the problem here. If they would have offered $5000 they would have had four seats immediately. They're going to pay a lot more than $20,000 for this disaster.


Why offer $5000 when you can just beat people's faces in and drag them away?


What are they legally required to offer before they can just pick people?


If they pick someone, the passenger is entitled to 4x the ticket price, capped at $1350.


Which is why they "randomly" pick the people who paid the least for their ticket.


Hmm. So for Chicago to Louisville, $800 could plausibly have been 4 x ticket. So that could be the max that United was required to offer.

Note the weasel wording. I don't actually know.


$1350


> They messed up and overbooked the flight, sure.

To be clear, every flight is overbooked.

The airlines have run the numbers, and it is clearly more profitable to sell (for example) 105% of the plane, and then if more than 100% show up, pay people off to take a different flight.


> The airlines have run the numbers, and it is clearly more profitable to sell (for example) 105% of the plane, and then if more than 100% show up, pay people off to take a different flight.

I guess the OP's question is: why was it not done here?

(Sadly) Everyone has a price for everything. Obviously that guy's price was not met. As you say, the airline has done the math... does their math include the cost of dragging a passenger bruised and bleeding off of their plane? Probably... Does it include the cost of cameras capturing the whole thing? Probably not.


I imagine that there was effectively a "CAN'T HAPPEN" comment on what to do when nobody bit at $800. I can imagine the people writing the policy worrying that their employees might collude with friends on board to pocket the money if they let the offer get too high.


That's a problem with the processes in place in the airline. They have to do their own due diligence.

Who's to say that this man's price was the lowest price? Turn it into a bidding process, the lowest 5 bids on the plane get paid, and the airline gets their seats.

The potential for misuse is not a good excuse for accepting negative actions.


Oh, yes, I totally agree. When you have a "CAN'T HAPPEN" in a code comment that's a sign of laziness or bad design and the same applies to company procedures. Hopefully United will go back and seriously rethink how they go about this now. It's just sad it had to come to this to make it happen.


totally agree


That's a horrible reason in my opinion. What other industry could get away with doing this? If someone sold 105% capacity for a concert or sports game and just told the last 5% who arrived "sorry, no more room" people would be extremely upset.


The Telecoms have been selling > capacity for... ever(?). When a major event occurs & everyone picks up a phone to call in/out you get "all circuits are busy". Ever notice the hit to your inet speeds on holidays when all your neighborhood is home & idle?

On one hand, infrastructure costs for idle capacity would become cost-prohibitive. On the other hand, the provider should be held accountable for failing to provide reasonable uptime/service.


That's not really a fair analogy. It's difficult (or impossible) to predict major events like terrorist attacks or weather phenomena that cause phone circuits to overload. And when these events occur, telecom companies lose money.

But flights fill up every day. When they do, airlines maximize profit.


Agreed, major phenomena are exceptions to the norm, did not intend to convey any judgement on the practice. Those same phenomena affect airlines too. In fact, weather delays had hampered United's ops leading up to the ejection event in the news. My point was airlines are not the only ones who sell > capacity.


Hotels do it routinely. I've personally been moved to another hotel because we arrived later than enough of the checkins before us.


Don't hotels do the same thing too?


The problem is that's not what happened. They kicked this man off to prioritize their own employees. A merely overbooked flight would never have been fully boarded in the first place.


In United's public statement, they said they had to remove the guy because he wasn't "willingly volunteering" to leave.

Hmmmmm....


The idea of offering cash is specifically mentioned in the article.


Except, it would be nice to keep a few of the (recent) older kernels, in case things go awry with the new update.


This already happens: apt autoremove won't remove the package for the running kernel. It'll clean up "old" (N-1 and lower) kernels, but installing kernel N+1 won't allow kernel N to be deleted as long as kernel N is still executing.

Once you reboot/kexec into the N+1 kernel, it'll let you remove the N (now N-1) kernel, bringing you down to one. But at that point you've proven the new kernel works—at least well enough to get to a shell you can run apt autoremove from.

This is why autoremove isn't so auto: if it happened automatically after reboot, it might be running on a now-wedged system (e.g. one that can't bring up the display manager), removing the last-known-good kernel and leaving you with only the broken one.

I think the right middle-ground solution would just be for installing kernel updates to touch a file, and for Desktop Environments to notice that file and trigger a dialog prompt of "you've just rebooted into a new kernel. Everything good?"—where answering "yes" runs apt autoremove. On a wedged system, you can't answer the prompt, so the system won't drop the old kernel. (In other words, just copy the "your display settings were changed. Can you read this?" prompt. It's a great design!)


Fedora/RHEL yum has a much better solution: installonly_limit, defaulting to 3. Kernels which have been updated will only be kept up to this depth. The excess are automatically trimmed during update.
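For reference, a sketch of where that knob lives (the exact file varies by distro version, so double-check yours):

```ini
# /etc/yum.conf (yum) or /etc/dnf/dnf.conf (dnf)
[main]
installonly_limit=3
```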


Wouldn't a good solution then be to run autoremove before installing a new kernel?

That way, you have kernel N running, first autoremove wipes kernels N-1 and older, then it installs kernel N+1, so that when you reboot into N+1, you'll always have known-good kernel N if it doesn't work.

It's a very similar solution to how a good programmer solves an off-by-one error, doing a shift/rotate shuffle on a for/while loop.


What happens when you have a high-uptime system where you repeatedly "apt dist-upgrade" and end up installing packages for kernels N+1, N+2, N+3, etc., all without rebooting into any of them?

I agree that if the user manually runs an apt [dist-]upgrade—or really any manual apt command—that that's a good time to do apt maintenance work. (Homebrew does maintenance work whenever you invoke it and there haven't been any complaints so far.) But kernels usually get installed automatically, so it can't just run then.

Now, if there was a specific concept of a "last-known good kernel" (imagine, say, the grub package generating+installing a virtual package when you run grub-install, that depends on whatever kernel you specified as your recovery kernel, ensuring it remains around), then your approach could work—you'd always have two kernels, the LKG for a recovery boot, and the newest for a regular boot.


Exactly what happens on Fedora.


I agree.

I'm running Ubuntu 16.10 currently. A kernel upgrade hosed my setup yesterday, and having an older kernel available saved my butt. I was able to do another `apt-get update` and things eventually worked with the latest kernel.


Relevant: "No, I have no side code projects to show you" https://www.linkedin.com/pulse/i-have-side-code-projects-sho...

