- Google can't be trusted on anything to do with building responsible AI (they violated ACM Code of Ethics and their own AI at Google Principles).
- Google has no authority to talk about ethical use of technology and human resources. The main manager responsible for this kerfuffle brands himself as promoting diversity and responsible use of technology.
- Google can't lay claim to being a transparent company, both to its users and outsiders, and to its employees and insiders. Even Larry Page was blissfully unaware of this controversial project that directly goes against his motivations for leaving China in the first place.
- When you go work for Google, you'll have colleagues and managers who won't speak up if they're put on another unethical project; who will eschew core values to make their stock options grow; who want to build their own empires and collect positive performance reviews at all costs (even if this costs Google dearly in PR and culture damage).
- Google can't be trusted to be self-regulating, to put the user first, or to clean up any damage done by a top-level ethics violation. There is no objective ethics commission or employee ombudsman to keep the bulls in check.
- There are more than a few rotten apples in the upper echelons of Google. Perhaps such $$-eyes behavior is rewarded by growing the ranks and internal opposition is seen as a necessary evil to be managed.
- Regardless of all this, most people will still use Google Search & GMail for critical information and gain most of their entertainment and research from YouTube. They will also click on Google ads and watch videos on YouTube without ad-blockers. They'll use Google Fi or Nexus phones, will use Android OS on other hardware, and Chromebooks.
In other words, all of those things you comment on are true but ultimately meaningless. Google is not going to change because the users do not care. They care only about their own lives - when Google harms them, they may make a token effort to move away from the platform, but they'll come back.
Quoting from a book on information risk[0] I'm reading:
"... we have to remember how this form of loss [reputation damage] typically materializes for a commercial company - reduced market share, reduced stock price (if publicly traded), increased cost of capital, and increased cost of acquiring and retaining employees."
Iff we want Google to change, we should figure out a way to translate moral outrage into threatening some of the things mentioned in the quote.
--
[0] -- Measuring and Managing Information Risk: A FAIR Approach 1st Edition, p. 138.
Not necessarily. Asymmetric warfare works, as current geopolitical events show. Leaking, doxxing, etc. are all methods that could (and probably will) be utilized if they don't listen to the good guys within their own company.
I don’t think people are particularly crazy about Google. Google is the default search engine of all major browsers (i.e., not Edge!), I can only think of one other major video hosting website (Dailymotion), and you have to either buy a Google-powered phone or a $1500 luxury Apple product. And if you started using Gmail 10 years ago, it’s pretty painful to switch to a new email address, even more so than to switch banks.
The deterioration of their reputation will probably cost them in terms of regulation and political pressure rather than market share. Think of what happened to banks. We're starting to see this hostility toward big tech companies in Congress and in other countries.
Actually, if you don’t mind going back a couple of generations (and considering that Apple typically gives 5 years of support, compared to a typical 2-3 years for Android manufacturers, that’s not awful), then you can happily get an iPhone with a pretty competitive CPU for $450: https://www.apple.com/shop/buy-iphone/iphone-7
Way to miss the point :) many Android phones people use over here are several times cheaper. Many people just don't even ever think about buying Apple because of its price.
I make decent money (like most on HN, I assume) and my phone costs under $300, the improvements for phones above $300 are relatively minor besides the camera.
The biggest damage to google in my eyes was the recent redesign of gmail. At this point, I don't interact with anything that they offer anymore outside of search, because they have forced me into using another mail client.
Same with me; it was one of the straws that broke the camel's back for me. The redesign was clunky and slow, and loaded a dozen iframes in the background. Combined with all the privacy and security issues, I figured I'd just do the switch. Turns out it has been far less painful than I anticipated. Glad to have jumped ship.
Where are the fixes? What solutions would you need Google to put in place? Submitting bugs and knowing what the bug list looks like is one thing.
Once you start thinking about or working on solutions, you realize these issues are part of any large org (full of internal competition and ambition), and Google still handles them better than any other company I have worked at.
- A mea culpa. Drop Dragonfly, admit that the way it was managed goes against AI guidelines, have a third party, like ACM evaluate missteps, and show your commitment to thought leadership on responsible use for AI tech in the future. Or stop the hypocritical canvassing (outwards promotion of "do no evil"), scrap or de-emphasize those guidelines (which currently unfairly attract idealist AI researchers).
- Fire Beaumont.
- Make secrecy, and the shutting out of privacy and security teams, a violation of process. Inform key decision makers, such as Larry Page, of controversial projects. Punish violators in both cases. Make lying to or withholding facts from your employees at an all-hands meeting a fireable offence.
- Align incentives and OKRs. Promote and reward core values and those that make it sticky. Protect whistle blowers and conscientious objectors. Periodically review (and have subordinates review) managers and people in key positions, not for the profit or projects they launched, but for creating an inclusive collaborative working environment. Put less focus in hiring for skill and more focus in core value alignment.
- Appoint an employee ombudsman and objective ethics audit team. Give them enough authority, visibility, and power to make changes for the better. Make sure concerns of lower level engineers make it to the top. Make management justify putting profit over user safety.
- Offer a few golden handshakes to people high up in management who are directly or indirectly responsible for the public erosion of Google's core values. Be wise to the fact that the best and most productive/profitable leaders are also prone to shrewd and unethical behavior, and guard against this.
See "AI Applications We Will Not Pursue" at https://ai.google/principles. The Dragonfly project alone seems to violate 3/4 or 4/4 of them (depending on how you choose to interpret "Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.").
This will remain a matter of interpretation and, while my reasoning may be sound, your interpretation may differ (much like those employees who argue that designing and developing a censored, surveilling search engine app for China is consistent with "organize the world's information...").
First off, some premises:
- Information retrieval, ranking, spam filtering, etc. are part of AI, so these principles apply to Dragonfly.
- Publishing the AI at Google Principles and packaging it the way they did, allows me as an outsider to hold Google accountable to these principles, question their leadership, and critique them if they apparently skirt these principles.
- China's government spying on political dissidents violates international norms on surveillance.
- Google shut out privacy and security teams from evaluating project Dragonfly.
- Shutting out security teams makes it harder to build projects designed and tested for user security.
- Shutting out privacy teams makes it harder to build projects designed and tested for user privacy. Censored search terms are not transparent. You don't control which data of yours get shared with the government.
- Sundar Pichai lied to employees when he said the project was just an innocent proof-of-concept. There was no room for many voices in that conversation, because people lacked moral authority to form an opinion on the matter (they were kept in the dark).
- A fully operational Dragonfly project would make it impossible for Chinese users to use Google to find information about this very controversy (AKA: Google and its behavior itself becomes part of censorship)
- Human Right Organizations were correct in denouncing Dragonfly for its potential to do damage to Human Rights.
- Getting in trouble with the government over search terms that may denote a political preference opposed to the government causes an unjust impact.
- Censored search terms (without showing a notice: "Some results may have been censored in accordance with Chinese law") remove control from humans without any recourse or opportunity for feedback (or the option of choosing another company). By facilitating a censored search engine, Google can't point at a government and say: it was entirely their fault.
- A Chinese Google Search Engine which leaks user data to the government is easily adaptable to harmful usage (with little power for Google to push back/notice/monitor once deployed).
- A Chinese Google Search Engine will have significant impact.
- Google is deeply involved, making a custom solution, which enlarges their duties and responsibilities.
- The (user) benefits do not substantially outweigh the potential for grave harm.
- A censored and spying government-controlled search engine can be viewed as an information warfare weapon.
With these premises in mind, I see them violating all the principles, but one.
> The international norm (Microsoft, Apple, every other big company) is to obey China's command.
No, that's what companies without AI guidelines do. Violating international norms is what it would be if the governments of the US, Germany, or Japan surveilled as extensively and as invasively as China does; people would rightly complain.
China's surveillance apparatus is NOT the international norm!
> From the article it sounds like that happened in 2017, before the AI principles existed.
Yes. So Sundar Pichai introduced those guidelines knowing full well that they were dead in the water.
> It wouldn't make it impossible, it's already impossible.
One of the scariest conclusions. Really pause and take it in. How significant is this?
> What grave harm would be caused that doesn't already exist?
This reasoning is weird to me. It reads as similar to: people are going to die anyway, so what grave harm would be caused by murdering them with your own hands?
Maybe if the workers had a say instead of only unlimited desire for large sums of capital and power having a say, Google would be in a more trustworthy, ethical, & moral place.
Welcome to reality. The whole "Google is ethical" line is a mantra they've always repeated to attract naive programmers like you. The logic is: anything unethical but profitable will end up being done by someone else anyway, so why not do it yourself.
Morals only work at the scale of a closed society (be it a full country), not at the scale of the world.
To be fair, once upon a time they did make the decision to pull out of a very profitable and growing China, while their competitors didn't (they are still there, helping evil). Once upon a time, Google genuinely did put values above profit: maybe not perfectly, but certainly with significant and meaningful impact to their business. Once upon a time, anyway.