nezumi's comments | Hacker News

I'd love to make a small change to the English language. When speaking of speculation and rent-seeking activities, don't say "make money", say "obtain money". Reserve the term "making money" for activities which create value.

Just try it: how successful and glamorous does your rich banker friend seem when you describe him as an obtainer, rather than as a creator of wealth?


Speculation actually does deliver value. Whoever sold to the speculator gained value - they got rid of some risk, and gained enough cash to make it worthwhile. The speculator might also deliver value to whoever they sell to, particularly if the speculator lost money on the deal. Plus they provided liquidity, which sounds like BS until you need that liquidity (e.g., try selling a house in a slow market).

Banks deliver value too, unless you prefer to do all your transactions in cash that you pull from under your bed and you never need a loan for anything.

Rent-seeking is indefensible pretty much by definition, although I think people tend to perceive many things as rent-seeking that actually aren't.


Speculation can create value by injecting capital into the right projects/resources that otherwise would not be funded.


Not all banking is rent-seeking; some of it creates some value.


Terrible reporting. Improving health outcomes by AI analysis of patient data is a much bigger prize - morally and commercially - than anything which could be achieved through ad targeting. Google is far too smart to squander such an opportunity by abusing patients' trust.


The big problem is the precedent it sets for data access.

What are the criteria for who gets access? What are the constraints of that access?

This story covers the latter being blown apart: the constraints were poorly defined and implemented, so even if the criteria are well defined, access to far more data was made possible.

I'm sure that few patients desire an end to research, or would argue that such access isn't a good thing... but what of the insurance industry? Should they have access? Would the NHS be able to define and enforce those constraints?

Perhaps that's an obvious no.

What then of an insurer partnering with a medical research company, from the viewpoint of "This costs insurance a lot of money, we'd like to fund a way to reduce that financial exposure".

The grey areas emerge immediately.

If we cannot control access to patient data (data from which it would be trivial either to strip the anonymity or to aggregate enough to still produce net negatives; correlating by postcode alone would reveal a great deal with little extra work), and if we cannot define and enforce the constraints of access, then we really shouldn't be sharing what is highly sensitive and personal information, originally disclosed only between a patient and a doctor under the premise that the conversation is covered by explicit and implicit confidentiality.

It's always worth remembering:

Data was acquired under doctor-patient confidentiality.

If we considered that data to have a licence, it would be the most restrictive licence possible. One could consider what has happened here a re-licensing without permission. Such an act could have a chilling effect on the relationship between doctor and patient.


You are making some implicit assumptions that the data access isn't highly controlled.

I have seen a few of these sorts of deals killed because of data access concerns, and/or computation requirements ("you can have access to anonymized data, but you have to run your code in a sandbox on our health servers").

And this is why we have legislation.


Less implicit, from the originally linked article:

> The scale of the sharing program was apparently misrepresented to the public, originally announced as an app to help hospitals monitor patients with kidney disease with real-time alerts and analytics. But since those patients don't have their own separate dataset, Google has argued it needs access to all patient data from the participating hospitals.

No assumption there: they didn't have a separate dataset, and so access was granted to all patient data.


"so granted access to all patient data"

Yes, but under what conditions? Many privacy laws apply here, and treating Google as some monolithic entity where everyone working there can now read anyone's personal health history is inaccurate.


It's pseudonymous data that the NHS has previously admitted can be deanonymized given sufficient effort, but such deanonymization carries criminal and civil penalties.


Nope. To set a precedent it would have to precede. Giving de-identified medical records to researchers is a long-standing, well-established and regulated process. The only interesting thing here is that it's Google and not some PhD's university lab.

Here's HHS on what HIPAA has to say about this: [0]

[0] http://www.hhs.gov/hipaa/for-professionals/privacy/special-t...


It so happens Google has the perfect means at its disposal for de-anonymizing large swaths of such data: trillions of user location records, calendar appointments, emails, and texts. It's not too hard to put all that together to match a specific encounter record, for example.
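To make that concrete, the attack is essentially a join between location history and the de-identified encounter's place and time. A minimal sketch, where every name, type, and threshold is my own assumption for illustration, nothing here is from the article:

```typescript
// Hypothetical linkage join: given a de-identified hospital encounter
// (place + time), find users whose location history puts them there.

interface LocationPing {
  userId: string;
  lat: number;
  lng: number;
  timestamp: number; // Unix epoch seconds
}

interface Encounter {
  hospitalLat: number;
  hospitalLng: number;
  admittedAt: number; // Unix epoch seconds
}

// Approximate distance in metres (equirectangular projection; adequate
// at city scale, which is all a linkage attack needs).
function distanceMetres(aLat: number, aLng: number, bLat: number, bLng: number): number {
  const R = 6371000; // Earth radius in metres
  const toRad = (d: number) => (d * Math.PI) / 180;
  const x = toRad(bLng - aLng) * Math.cos(toRad((aLat + bLat) / 2));
  const y = toRad(bLat - aLat);
  return Math.sqrt(x * x + y * y) * R;
}

// Users with a ping within `radiusM` metres of the hospital and `windowS`
// seconds of the admission time become linkage candidates.
function candidateUsers(pings: LocationPing[], enc: Encounter, radiusM = 200, windowS = 3600): Set<string> {
  return new Set(
    pings
      .filter(
        (p) =>
          Math.abs(p.timestamp - enc.admittedAt) <= windowS &&
          distanceMetres(p.lat, p.lng, enc.hospitalLat, enc.hospitalLng) <= radiusM
      )
      .map((p) => p.userId)
  );
}
```

Intersect the candidate sets across a few encounters for the same pseudonymous record and the list shrinks toward one person quickly.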


Which would both violate their contract and be illegal.


And Google would never break the law or breach a contract. Especially a contract they signed with the UK Government.

I mean, other than that time just a few years ago[0] when Google broke the law and then breached the contract they signed with the UK Government.

[0] http://www.bbc.com/news/technology-19014206


I do not trust Google and I am not being given a choice.


Not sure if you saw my comment downstream? I'd encourage you to read the original piece for a more nuanced presentation of the information: https://www.newscientist.com/article/2086454-revealed-google...

Happy to address criticism.


Hi Hal, I thought the article should have compared and contrasted with other government-run large data-sharing programs, such as the CMS Qualified Entity program or AHRQ HCUP.


Thanks for the comment. This would be interesting, but I'm not sure it would have made sense to pack it into one article that is already heavy with data terminology for a lay reader.

Will definitely be looking into healthcare data more, as this story has resulted in some interesting leads.


Google is not a monolithic entity. You'd have to trust the individual researchers who have access to the data, and we don't even know who they are. And the NHS didn't ask their patients for permission before handing this data over, so whether you trust them or not is irrelevant, they get your data anyway.


That seems incredibly naive. They utilize every other bit of information they collect; why wouldn't they utilize this data?

Google is a corporation. It can't have good intentions of its own. It's the thousands of employees who will potentially be working with and handling the data that you need to worry about.


Google places the most comprehensive controls on PII of any place I've seen, including hospital environments subject to HIPAA. (Mostly because, unlike the hospitals, they have the technical clue how to enforce it properly. The hospitals... are still learning about computer security, and it's not their forte: http://arstechnica.com/security/2016/03/two-more-healthcare-... )

Getting access to private information in Google is hard - my experience as a researcher here is that there's a strong incentive to find an open-source or non-PII dataset before touching user data. I'll go through my year here without ever touching even the most innocuous PII data.

It seems very unlikely to me that thousands of people will have access to this data. It's much more likely that a small handful will, and that they'll be supported by others with no access whatsoever. From the article, in fact:

"The agreement clearly states that Google cannot use the data in any other part of its business. The data itself will be stored in the UK by a third party contracted by Google, not in DeepMind’s offices. DeepMind is also obliged to delete its copy of the data when the agreement expires at the end of September 2017."

From an incentive perspective, the potential value-add of abusing the data is tiny compared to the potential costs and loss of user trust. Google's very aware of how important it is to maintain user trust -- http://www.techradar.com/us/news/internet/google-we-have-a-c...

Corporations don't have brains, but they have cultures, and Google's culture -- composed of those thousands of engineers -- is quite fanatical about protecting user privacy. It's one of the non-technical things that's impressed me most during my time here.

The risk with a company like Google is if the economic winds and culture change, but that's a long-term process, and is also the reason for legally-binding contracts to do things like delete the data (see above).

tl;dr: Google has the technical means to protect the confidential data better than almost any other agency, including from its own employees. The most important question to ask is whether the NHS structured the data sharing in a way that provides for long-term protection, and (IANAL!) it sounds like it from the article.

Source: I'm a professor who deals with our IRB occasionally, have colleagues doing joint CS-medical research, pushed patients around a hospital in a younger life, and am on sabbatical for the year at Google.


You have to account for competition.

Suppose there are two companies with the same business model, operating in the same countries etc. One files taxes "fairly" and one minimizes its tax burden as far as the law will allow. Which one do you think is going to be in business for longer? Which one would you buy shares in?

You can get as angry as you like at the companies, or the legislators, but in the end it's the system you need to be scrutinizing.


This is clearly nonsense: what serious competition does Google have in Europe? Or Apple? Apple are making billions in profits; they have no competitive need for lower prices!

Also, this is an advantage only massive multinationals have, so again, complete nonsense: they're already past that stage.


If I were in this position I would find the appropriate internal legal counsel and cc them on the email, including the words 'attorney-client privileged and confidential' at the top. This affords some protection against discovery were the information to become relevant in legal proceedings. Taking that additional caution on behalf of the company shows professionalism on your part and will be appreciated by management, who will see you're trying to contain and redress the situation rather than put the company at risk; otherwise, you may be seen as the risk yourself.

Another thought: it's possible that your supervisor's manager is aware of the action being taken against you. You might be able to get better advice talking to someone in a different reporting chain, if you can find them.


An employee is not the client of a company's legal counsel. The company is the client. So there is no expectation of confidentiality. Copying their legal team might make them more likely to act on the problem. But it won't protect the employee.


I got the impression that nezumi knew that, and was suggesting the marking as a way of protecting the current employer, to demonstrate to any more senior management who might become involved that the OP is not just trying to make trouble.

Whether that would actually help here and whether such markings have any weight in whatever legal system the OP is operating within are different questions, of course.


You misunderstood. CC'ing the counsel is a gesture of good faith towards the company, ensuring that the company is protected and emphasizing that you aren't looking to start legal action.


The last time I had a problem with AT&T I had to make multiple calls and spent over an hour in total on hold. How would you charge for that?

I'd have gladly paid for someone to take that pain away. There's probably a viable business model in there somewhere, if someone can independently put a competent customer service layer in front of companies like AT&T.


There's a service that will negotiate with them to reduce your bill. I remember reading about it last year; it's some Berkeley MBA who likes to practice negotiation. He might handle this too:

https://www.cabletipster.com/


This looked quite interesting and useful right up until it asked me if I /really/ wanted to close the page.

That's very poor negotiation in my books.


This would actually make a lot of sense if you could detect when a call came off hold. A small pool of operators could deal with calls as soon as the operator on the other end becomes available.
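One crude way to attempt that detection, and this is entirely my own guess at an approach rather than anything proposed above: hold music is usually near-continuous, so watch for a long quiet gap followed by renewed audio energy.

```typescript
// Naive off-hold heuristic: hold music is near-continuous, so flag the
// call when a sustained quiet gap (music stopped) is followed by renewed
// energy (likely an agent speaking). Thresholds are guesses needing tuning.

function rms(frame: Float32Array): number {
  let sum = 0;
  for (const s of frame) sum += s * s;
  return Math.sqrt(sum / frame.length);
}

// Feed ~20 ms PCM frames in order; returns true on the frame where the
// quiet-then-audio pattern is first observed.
function makeOffHoldDetector(quietThreshold = 0.01, quietFramesNeeded = 100) {
  let quietRun = 0;
  let sawQuietGap = false;
  return (frame: Float32Array): boolean => {
    if (rms(frame) < quietThreshold) {
      quietRun += 1;
      if (quietRun >= quietFramesNeeded) sawQuietGap = true; // ~2 s of quiet
      return false;
    }
    quietRun = 0;
    if (sawQuietGap) {
      sawQuietGap = false; // one-shot: alert a human operator once
      return true;
    }
    return false;
  };
}
```

A real system would need actual speech detection (IVR prompts and gapless hold loops defeat this), but it shows the shape of the problem.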


And if they're all busy, they could put AT&T on hold. That I would love to do, even if they just immediately went on to the next call.


This would be perfectly fair if there were a reliable way for ad-supported sites to restrict access to only clients which aren't running an ad blocker.
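For reference, the usual client-side attempt is a "bait element" check, and its weakness is exactly why "reliable" is doing so much work in that sentence. A minimal sketch, where the class names are just ones commonly targeted by filter lists (an assumption, not a spec):

```typescript
// "Bait element" sketch: most ad-blocker filter lists hide elements with
// ad-like class names, so create one and see whether it survives. Trivially
// defeated by a blocker that skips the bait, hence unreliable.
function adBlockerActive(): boolean {
  const bait = document.createElement("div");
  bait.className = "ad adsbox ad-banner text-ad"; // names commonly filtered
  bait.style.cssText = "position:absolute; height:10px; width:10px;";
  document.body.appendChild(bait);
  const blocked =
    bait.offsetHeight === 0 ||
    window.getComputedStyle(bait).display === "none";
  bait.remove();
  return blocked;
}

// Usage: gate the page on the check (easily bypassed by a tweaked blocker).
if (adBlockerActive()) {
  document.body.textContent = "Please disable your ad blocker to read this site.";
}
```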


I don't live in SF but I work for a Bay Area company and have often wondered, while waiting to get onto one of those shiny buses, why a city with such a high concentration of rich, smart people is failing its least fortunate residents so miserably and obviously. Why are the technological elite so helpless in the face of this problem on their own doorstep?

Reading the comments here it seems that there is a massive gap in understanding, empathy and plain old data - are there no studies answering the questions being speculated on here?

If you could find a way to address that gap of understanding - provide the technologists with data, and fully describe the constraints - perhaps there can be a useful discussion in the technology community.


I think the mistake you are making is talking about the city as failing "its least fortunate residents," without regard for the fact that those residents may only be there because of how well it does treat them.


In the follow-up article he talks more about the 'folk mythology of progress' and speaks of the cycle of nature (intelligent species and civilizations) as a more fundamental truth. Here's a counter-proposition: yes, the processes of nature are supremely powerful; yes, humans are apt to make life difficult for ourselves; and yes, progress isn't a given. What we call progress is really just humanity fulfilling our ecological destiny of adapting to new niches, just like every other species. But by destroying our environment, we are creating the very evolutionary pressure necessary to force our own adaptation - which we will continue to rightly term 'progress'.


Some features of the quoted 'Austrian School's theory' seem remarkably close to Bitcoin: it "arises out of an unplanned, decentralized process. This takes time. It takes a lot of time. It spreads slowly, as new people discover it as a tool of production..." and "becomes widely used as money as a result of innumerable transactions within the economy".

It's arguable that the 'unplanned' part is not a necessary feature of a currency. In fact, no fiat currency in circulation today has survived without a significant amount of planning.

In contrast, the planning which went into Bitcoin seems to have made it closely fit the theory's description of money. Arguing that its planned nature negates its 'moneyhood' (to coin a phrase... sorry...) seems a little like arguing that an artificial organ won't work due to not having been grown within the host, or that a genetically engineered organism will fail due to not having gone through an evolutionary process.


How about if you could opt in to cloud processing in return for no ads?

