Your Android Phone Is a Security Key (blog.google)
198 points by arusahni on April 10, 2019 | 138 comments


I like the idea behind it in principle, since it will simplify 2FA for the masses and may lead more people to adopt it.

But, apart from that:

1. It's only on Chrome (for now?).

2. It's only for Google products (for now?).

3. It's only on Android, which Google fully controls remotely (and it will probably stay there).

All of these give even more power to Google in the name of convenience, and allow a single company to define "how things should be done".

And I should clarify that I am not against this tech specifically. If anything, I think Google has some of the brightest minds, so technically I'm sure that the product will be great.

The problem is that with great power comes something that Google (as a company) seems to be lacking lately.


Before the announcement, I'd seen references to a 'caBLE' ('cloud assisted Bluetooth Low Energy') transport on the Web Authentication mailing list and in W3C spec GitHub issues; this is exactly that. The VentureBeat article [0] confirms:

> “Under the covers, however, the phone and computer are communicating with the FIDO CTAP protocol over Bluetooth and the website and computer are communicating with the WebAuthn protocol and this adds the phishing-resistance. [...] But for now at least, the feature can only be used for 2FA on Google accounts. Google has submitted caBLE to FIDO and it’s under review by the working group.”

[0]: https://venturebeat.com/2019/04/10/you-can-now-use-your-andr...
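
To illustrate the split the article describes: the website only ever speaks WebAuthn to the browser, and the browser speaks CTAP to the authenticator over whatever transport is available. A minimal sketch of the website side, assuming a generic relying party (the names and values are placeholders, not Google's code, and 'ble' stands in for caBLE, which wasn't standardized yet):

    /* Hedged sketch of a WebAuthn assertion request. The challenge and
       credential ID are placeholders a real relying party fetches from
       its server. */
    async function requestAssertion(challenge, credentialId) {
      return navigator.credentials.get({
        publicKey: {
          challenge,                           // Uint8Array from the server
          rpId: 'example.com',                 // placeholder relying party ID
          allowCredentials: [{
            type: 'public-key',
            id: credentialId,                  // Uint8Array from registration
            transports: ['ble', 'usb', 'nfc'], // hint where the key may live
          }],
          userVerification: 'preferred',
        },
      });
    }

The returned assertion is signed over origin-bound client data, which is where the phishing resistance comes from.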


The problem isn't even Google; it's the lack of actual support from the services that need it. You have to have the right client and the right device, and every website has to implement it.

Government websites won't support it, nor will most financial services, your gym, school, job, etc. Sensitive records like your SSN will be kept in walled gardens accessible with a simple username and password, and maybe a security question. Most people will still be at significant risk of exposure, because the real targets of information theft aren't the accounts end users log into.

But your Gmail account will be locked down.


Admittedly, that's not a minor benefit. If someone has access to your Gmail account, there's all sorts of information there they can use to engineer access to other services.

The naive optimist in me wants to think that making security keys accessible to more users, and getting them accustomed to them, will lead to pressure on other services to follow suit. But then I think about how long people have been criticizing banks for the ridiculous password policies that are seemingly universal in the industry, and I know better. Or bad security practices at organizations in general. I'd very much like to be proven wrong.


I mean, the problems I have with banks are 1) you type your username and password on different pages, and 2) password length limits of around 20 characters.

We're in a day and age where people should be using password managers (even my tech-illiterate parents use them), so why are character limits set so low? I want my bank password locked down. And we're in a day and age where (1) shouldn't even be an issue. I'm not sure what other issues people face, but I've seen these patterns at multiple banks.

I think banks get more lip because they're a clear case where security should be VERY high, just like you should protect your email strongly (note that Google does (1) [0]).

[0] https://screenshots.firefox.com/v9AjmrW7jtr1aGg3/accounts.go...

[0'] If someone is an actual security expert, I'd like to know why (1) is an acceptable practice. (My bank does it, but it passes you to the password page no matter what you type in, which seems safer than what Google is doing.)


>If someone is an actual security expert, I'd like to know why (1) is an acceptable practice.

So your issue here is that Google tells you whether it's a valid email address before you enter a password?

You could validate email addresses yourself by sending out a ton of emails to different permutations of *@gmail.com and seeing which ones come back as undeliverable. An email address on its own isn't inherently private so this doesn't seem to be a security risk to me unless I'm missing something.


I interpreted the parent's complaint in (1) as the login form having the username/password entries split across two screens, not as a complaint that it tells you the account doesn't exist.

AIUI, splitting the entry across two screens like that breaks a lot of password managers, as they can't handle it. This hampers the adoption of password managers, which would largely help the average Joe's security.

Google supports external auth in some cases¹, and to know whether they need to redirect to that auth, they first need your username / email. Then, you're either redirected or you're shown the password entry prompt.

I don't know of any banks that do this, so this might not be applicable to them. (Theirs might just be bad design.)

¹GSuite, not consumer GMail, but I assume the flows are the same
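
A hypothetical sketch of why identifier-first login flows need the username before anything else; every helper name here is invented for illustration and is not Google's actual code:

    /* Hypothetical identifier-first login router; lookupSamlConfig,
       redirectTo and showPasswordPrompt are invented names. */
    async function routeLogin(email) {
      const domain = email.split('@')[1];
      const idp = await lookupSamlConfig(domain); // SSO configured for this domain?
      return idp
        ? redirectTo(idp.ssoUrl)                  // external IdP owns the password
        : showPasswordPrompt(email);              // otherwise show the password prompt
    }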


You both addressed different parts of my complaint, so thank you both.

I'm definitely dumb enough to not realize that email login might be a special case because you can check username validity another way (sending emails). And I didn't know that GSuite had external auth.

These split pages don't actually break LastPass, at least for me. One field is still called username and the other is called password, so they fill properly.


I'm not sure this is the right perspective. Is it really such a big deal if SSNs get leaked?

On the other hand, insecure e-mail has been at the center of massive political upheavals. I think securing e-mail is way more important than even banking information.


For a random subset of people, not necessarily.

For anyone who has, or will have, people interested in their particular accounts (not just bulk break-ins against the weak passwords that most sites should forbid), the risk profile is much worse, because you often can't stop services from using that data point as one part of identity verification.


Well, that depends where you live. Google is available for everyone, not just technologically-backward places.

Here millions are currently checking their pre-filled tax returns, which they have accessed using smartcard authentication, so Gmail ain't that far ahead.


Browser support is pretty good: https://caniuse.com/#feat=webauthn - and Google submitted the caBLE transport to the WebAuthn working group. Give it some time.

And don't count out governments just yet, U2F got some love: https://www.yubico.com/why-yubico/for-business/authenticatio...


In principle I don't see any reason why this couldn't be made to work with any browser/site through the WebAuthn standard. In fact, it wouldn't surprise me if that's exactly what Google's doing under the hood.


This is 2FA for Google accounts so of course it's on Google products?


Google Authenticator works with any service that uses the TOTP or HOTP standards.

https://en.wikipedia.org/wiki/Google_Authenticator
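
For reference, the algorithm is small enough to sketch. Here's a minimal Node.js version of TOTP (RFC 6238) on top of HOTP (RFC 4226), assuming the shared secret has already been base32-decoded into a Buffer:

    const crypto = require('crypto');

    /* HOTP (RFC 4226): HMAC-SHA1 over a big-endian counter, then
       dynamic truncation down to a 6-digit code. */
    function hotp(secret, counter) {
      const msg = Buffer.alloc(8);
      msg.writeBigUInt64BE(BigInt(counter));
      const mac = crypto.createHmac('sha1', secret).update(msg).digest();
      const offset = mac[mac.length - 1] & 0x0f;          // dynamic truncation
      const code = (mac.readUInt32BE(offset) & 0x7fffffff) % 1000000;
      return String(code).padStart(6, '0');
    }

    /* TOTP (RFC 6238): HOTP with the counter derived from the clock. */
    function totp(secret, step = 30) {
      return hotp(secret, Math.floor(Date.now() / 1000 / step));
    }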


This post is not about Google authenticator. It's about making a different 2FA solution for their accounts in particular.


Or you can just use https://krypt.co/, which uses the same FIDO standard, is open source, works on both Android and iOS (where it actually uses the Secure Enclave), and works in both Chrome and Firefox.


An important caveat with Krypton is that while it is open source, the published source is essentially useless because it is not licensed under a free license.


That should be sufficient to audit the code, and verify that the binaries distributed via app stores are actually compiled from it, no? For a security app, it's pretty useful.


You can see the code, but can't do anything with it? That'd be source-available, not open source.

Having the source publicly available is one of the prerequisites for something to be considered open source, but it's far from the only one.


I've been using this for the last month or so and have no complaints. Works well.


> Now, you have one more option—and it’s already in your pocket. Starting today in beta, your phone can be your security key—it’s built into devices running Android 7.0+.

You know, it's nice that they phrase this as an "option", but in my experience Google has a habit of forcing me to have my phone on me when I log in from a new location or device, something I never asked for and apparently cannot disable. [0] This has locked me out of my Google account more than once, which also locks me out of anything that sends 2FA to my Gmail or GVoice. I guess I'm thankful that I've learned this in non-emergency scenarios, as I'm now prepping to degoogleify myself, but it's user-hostile in my opinion. Security always has convenience trade-offs, but let the user decide where they want to draw that line.

[0] https://pbs.twimg.com/media/D3WJ0UdXkAASs_O.png


This happened to some of my friends and locked them out of their Gmail accounts (accounts with 2FA disabled). Google wouldn't let them log in even after they provided the correct password and SMS OTP.

The remaining options were: 1. give the date (month and year) of email sign-up, which most don't remember

2. password reset via an alternate email address, which wasn't set during signup.

The only way for free Gmail users to get help is a support forum run by Gmail user volunteers, which didn't solve the problem. To me, this approach to security just seems super paranoid.


It makes sense to me: security that only works part of the time can be bypassed when it doesn't work.

However, I've never encountered a 2FA service that lets you disable it in certain scenarios, so I may be wrong.


Google has always given me other options, does it really enforce having a phone now?


I don't know how they determine what options to offer, but using my phone was the only one given, despite entering a correct password. The only other option, which I found either from the "Learn more" link or after exhausting the "login with your phone" attempts, was to create a support ticket for my G-suite account, which, in this case, would have been slower than returning home a few hours later to where I had left my phone.


There's an option on https://myaccount.google.com/security to turn off 2-step login.


Sorry for being thick, but I'm not seeing it. This is a G-suite account (though I'm the only user / admin) so maybe it's different.


From my G-Suite account (where I'm the only user / admin), it shows two-step verification settings here:

https://myaccount.google.com/signinoptions/two-step-verifica...


That seems to redirect me to the same page linked earlier in this thread (https://myaccount.google.com/security). Taking a look in my admin console, it looks like "Allow users to turn on 2-step verification" is unchecked, so presumably 2-step verification is not enabled for this account. That's exactly what I want, but it seems Google is failing to abide when they think I'm a "hacker". Other people have had the same frustrations[0][1] but there is apparently no way to stop Google requiring additional verification at their whim. Ultimately that means Google controls when I can and can't login to my account, so it ceases to be a usable product for me.

I appreciate your help, though!

[0] https://support.google.com/mail/forum/AAAAK7un8RUP1RC23nwRZ4

[1] https://support.google.com/mail/forum/AAAAK7un8RUZvZQQfsawrE


Did you enable 2FA from https://admin.google.com/ for your account?

Dashboard -> select Security -> Basic Settings -> Two-Step Verification setting


I don't know how I feel about making a device so endlessly hackable a "security key".


All Android devices certified by Google will have a hardware security module which should keep the keys secure. Some cheap non-certified devices (mostly Chinese) might not have hardware backed keystores, but I doubt those devices would be able to run this Google app.


>> All Android devices certified by Google will have a hardware security module which should keep the keys secure.

Source? I understood that having a HW-backed key store is still entirely optional for the purpose of Android certification.

On top of that, I noticed some ambiguity about whether a TEE like ARM TrustZone qualifies as a hardware-grade protection mechanism in the same way a discrete, dedicated crypto processor does (I think the two technologies provide very different assurance levels).


Here's the CDD for Android 7.0: https://source.android.com/compatibility/7.0/android-7.0-cdd...

"When the device implementation supports a secure lock screen it MUST back up the keystore implementation with secure hardware and meet following requirements: MUST have hardware backed implementations of RSA, AES, ECDSA and HMAC cryptographic algorithms and MD5, SHA1, SHA-2 Family hash functions to properly support the Android Keystore system's supported algorithms. MUST perform the lock screen authentication in the secure hardware and only when successful allow the authentication-bound keys to be used. The upstream Android Open Source Project provides the Gatekeeper Hardware Abstraction Layer (HAL) that can be used to satisfy this requirement. "


Titan M. They have it built into their Pixel devices much like a tiny mobile TPM.

https://www.blog.google/products/pixel/titan-m-makes-pixel-3...

With that said, I can't find mention of this on the page, so it's probably not leveraging it.


It is. If you have a Pixel you can just use a button press, because the Titan M can directly sense the button state.


Not doubting you, but where did you find that information? The only thing I see is "it’s built into devices running Android 7.0+", and I found that on multiple pages.

As Android 7.0 is available to install on any device and I don't see anything about "certified Android devices", I assume they mean ANY Android 7.0 device? Then again, it only works with Google services at the moment, but I know you can sideload Google Play services, so...

https://support.google.com/accounts/answer/6103523


I just said I doubt it; I don't actually know. The combination of the Google web service and the Android keystore API probably makes it at least difficult. But if you're running some custom-compiled AOSP code, maybe it could be hacked.


1. This functionality only works on Android devices that have passed the CDD (try it). 2. Titan M is leveraged on the Pixel 3.



That's great, but it doesn't actually guarantee that this new feature requires the phone to use the hardware security module, just that you'd be able to prove whether it was.

Don't get me wrong, it's still better than TOTP in many ways, but having the actual security coprocessor is a big distinction.


There is no practical difference. For people who need the security, this API can guarantee it.


If a Google app is required for this, we've already entered the "can't be trusted" category.


Aren't you already using Authenticator running on an Android device?


Google Authenticator is just Google's front end to an open standard (TOTP). Authy, 1Password, various CLI apps, they all implement this same standard.

Lots of independent websites market this form of 2FA as "Google Authenticator". It's kinda like asking people to enter their gmail address (rather than their email address).

Either way, TOTP is a big step away from putting all your 2FA eggs in Google's basket.
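
That interoperability comes down to services handing out the shared secret in the de facto standard otpauth provisioning URI, usually rendered as a QR code (the account and secret below are placeholders):

    otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example&algorithm=SHA1&digits=6&period=30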


That doesn't mean I think it's a good idea.

A phone is better than nothing. A real token would be much better.


I'd argue using Authenticator is better than simply not using 2FA. Which is probably the choice for a lot of people for whom always carrying some dedicated hardware device is not really a realistic option.


absolutely. it’s a second thing to hack. just because it’s _possible_ to hack the second thing doesn’t mean it isn’t helpful.


In what way is carrying a dedicated device not realistic? My second factor lives on my keyring and is only a bit larger than a typical door key. Everyone carries keys.


I don't carry my keys when I'm traveling if I'm not driving my own car. So, no, not everyone carries keys.

ADDED: I do have other 2FA hardware as well. But I assume I'm not guaranteed to have it with me when I need it.


Ah, you have separate keyrings for your home keys and car keys? I'd say that's not usually the case.


No. I have a keypad on my door. So when I take a trip by air (which is common), I just leave my keys at home. But I agree that my scenario may not be super common.


It's common in any situation where you share a car with other people.


Which, in turn, is more common outside of the US.


I don't normally drive a car (I take transit to work) so I don't carry car keys. I have an electronic lock on my door at home, so I don't carry house keys. I don't carry keys of any type.

My phone case has a slot for credit cards, so all I normally carry with me is my phone, a credit card, my work badge, and my transit pass.


It doesn’t add value for most applications, from the service provider's perspective.

TOTP for GMail assures Google that the same person who enrolled the account was given custody of a key.

The physical token only adds value in scenarios where phones aren’t available or you need to assure the identity of the individual.


> In what way is carrying a dedicated device not realistic?

For most people, it's so far outside what they're familiar with that it feels alien and incomprehensible. It makes absolutely no sense to them. So they're not going to do it or adopt it quickly.

User education will catch up in time, but that will take quite a long time.


>For most people, it's so far outside what they're familiar with that it feels alien and incomprehensible. It makes absolutely no sense to them.

The Google Titan Key looks like a car keyfob, so I don't really agree with you on this.


Um, yes. Plus it uses "Google Cloud".

A security key device ought not to be remotely writeable.


Looks like Google has used the open Web Bluetooth specification (that only Chrome currently supports) along with the open FIDO Bluetooth spec ( https://fidoalliance.org/specs/fido-u2f-v1.2-ps-20170411/fid... ).

A read-only, non-wireless security key like Yubikey would be even more secure, but this is an improvement over TOTP codes, which can be phished.

This is also better than SMS 2FA, which is prone to phone-number theft.

It's also better than push notifications for 2FA, which rely on third-party servers.

This solution uses Bluetooth between your phone and the Chrome browser, offering a good balance of security and convenience.


FWIW that is a really old spec. The FIDO 2.0 Bluetooth transport is described at: https://fidoalliance.org/specs/fido-v2.0-rd-20180702/fido-cl...

But an article from VentureBeat[0] mentioned it's a new transport called 'cloud assisted Bluetooth Low Energy' or caBLE, which they've submitted to FIDO for standardization.

[0]: https://venturebeat.com/2019/04/10/you-can-now-use-your-andr...


This does not use Web Bluetooth, but rather an implementation directly in the browser through WebAuthn. A critical part of the phishing resistance of WebAuthn is that websites do not get to talk directly to the authenticator.

To make this work, we made an extension to WebAuthn+FIDO for pairing-free BLE as Rafert points out. We're already in the process of making that open together with these standards bodies, stay tuned!


I don't believe Web Bluetooth should be used with security keys (in fact, I believe communication with those is specifically blocked as a part of Web Bluetooth).

This is because the browser needs to pass the origin to the device and ensure the webpage can't impersonate another origin.


You can already use a Yubikey with Google. I have mine set up.


They don't make this very obvious but this only works in Chrome. So you'll have to use SMS codes, the Authenticator app, or backup codes everywhere else. (edit: they explicitly say so when activating it but not as clearly in the docs)


> all you need is an Android 7.0+ phone and a Bluetooth-enabled Chrome OS, macOS X or Windows 10 computer with a Chrome browser


Heh, Microsoft seems better than Google at supporting Linux in new products now. VS Code is amazing, as is .NET Core.

Who would have thought 5 years ago.


I suspect they just don't want to do tech support for Bluetooth in Linux on random hardware. Can't blame them.


Of course they do. This is the “embrace” phase.


I could see an argument for WSL being that. But making your software work on the Linux desktop/server? I really don't see how that puts you in a position to extinguish anything.


Especially not when they have open sourced it under a liberal license.


Not really; it's because Windows Server isn't that popular, and if they want developers to stay on Windows while still using MS tech for their deployments, they need some sort of Linux support.


This is great. Hopefully it'll get more people using 2FA. I don't think this is the best security practice, as others are noting how insecure Android is, but it's better than nothing. It also pushes more people toward FIDO and 2FA. This is for the average person; if you want more, get a YubiKey or something.


Edit after trying: It is a little disappointing that it is Chrome only.


Implementing and maintaining the code in the browser that manages the local BLE channel to a FIDO authenticator is unfortunately a significant undertaking.

We (the team behind this at Google) work actively with FIDO and the W3C on the open standards behind this so that other browsers can support this as well in the future.


I'm very happy with the work and understand it is no trivial feat. But as a FF user I can't really try it out.

> We (the team behind this at Google) work actively with FIDO and the W3C on the open standards behind this so that other browsers can support this as well in the future.

Super excited to hear this.


This is great, other than the fact that Google is involved. Now, I will admit it's unlikely (effectively impossible) that this would have come to Android without Google being involved, but I'm not interested in adding Google to more of my life. I'm looking to cut them out more and more wherever I can.


Most security keys don’t have app stores full of malware.


Now they do.


Your yubikey isn't owned by an evil megacorp.

Use a damn yubikey; they're practically free and don't monetize their users.


As someone who can't currently afford the two Yubikeys needed to finish securing their internal startup services...please think for just a second before you say things.


I'd have sent you one if you weren't such a dick. Even though your statement is not remotely believable.


I was slightly a dick because I didn't want anyone to offer - believe all of these statements or don't.


If you can't afford $20 for a yubikey, but have time to post snarky self-defeating comments on HN, you might consider why you don't have the $20.


> Your Android Phone Is a Security Key

no it's not. it's pretending to be, but without vendors actually maintaining and investing in their forks, and without the hardware having a known-good secure enclave, you might as well post your credentials on twitter.


We could test this claim pretty easily:

- you post your credentials on Twitter

- I'll store mine using this Android Phone

Let's see who gets hacked first!


That's a bit snarky. As other commenters mentioned, it can leverage TEEs via the Android Keystore for secure storage. And the way WebAuthn works means users are protected in case of a database breach (the database contains only public keys), and the protocol protects against phishing. Both are way better than usernames and passwords.

It got certified (at level 1[0]) too, in case that changes your mind: https://fidoalliance.org/android-now-fido2-certified-acceler...

[0]: https://fidoalliance.org/certification/authenticator-certifi...


It was intended to be a bit snarky indeed. I know TEEs and remote attestation are supposed to help out here, but I also know that even purpose-designed chips couldn't get it right the first few times (think MIFARE NFC, TPMs, and YubiKeys with Infineon cores). It's a very hard problem, and making light of it by having marketing (big assumption) play it as if a phone is now a security key seems a bit of a leap.

At the same time, WebAuthn is better in itself, but it's still not a silver bullet versus passwords and a password manager. We don't live in an ideal world, of course, but if we are going to turn commodity multipurpose devices into soft tokens, we might as well name them as such. (Though that definitely doesn't have the same ring to it: "Your Phone Is a Soft Token".)


Yeah, I don't see how this is any different from the standard Google Authenticator-style affair.

Without a security enclave (which devices are starting to include) I don't see how this is an improvement.


I think that using a soft token or push notifications is better than nothing, but a company putting an (often known-vulnerable) phone on the same level as a security key seems to set a bad precedent. Then again, at least it's not SMS verification. Perhaps in future leaks and scrapes of end-user devices we won't just get data dumps and passwords, but also internal key seeds and tokens...

It's the secure enclave and the purpose-built firmware that makes a security key a security key. I'm sure some specific Android devices have a safe implementation, and I'm sure that some SIM cards and perhaps the recent iPhone Secure Elements have properties that allow them to be safely used as a security key, but putting it forward that phones can be seen and used that way in general lacks that important distinction.


Android 7 added support for key attestation, making it possible to verify that a key came from a secure enclave. That's presumably why the blog post says Android 7 and above. https://source.android.com/security/keystore


It’s resistant against phishing. That’s solving the problem. OTPs are not.


Connects via Bluetooth to the device you wish to sign into.


I don't see how it's less secure than storing passwords in ~/Documents/Passwords.txt. And it's a second factor, so combined with the first factor the result is pretty secure. You can't browse other people's phones, even without a secure enclave.


Websites running javascript weren't supposed to browse other people's computers either, but we all know how that assumption went. Yes, it has gotten better the past few years, but the whole point of a security key is that it takes a purpose-designed piece of hardware and software, with a minimal attack surface. A phone is far from it.


My phone already prompts me when I login to a google account on another device.

Is this new / different?


I think the difference is that the prompt you describe still travels over the wire to get to your device, while this new thing uses Bluetooth, based on FIDO standards.

So the access key the client passes to the server during login comes from a local device you trust.


Ah thank you!

Sometimes Google's announcements really make it hard to understand their products, when there's overlap or similarity between them.


I thought that. AFAICT it's local via Bluetooth, as opposed to through Google servers and push messaging. Good idea, but better if it's open.

Of course, their motivation, as usual, may be to have users keep their WiFi/Bluetooth on all the time so they can reliably collect more location data.


As others pointed out, the difference is that the local connection to the phone, together with the Webauthn+FIDO protocols make this resistant to phishing.

That comes at the cost of needing Bluetooth, which isn't available on all computers. I.e. it's more secure than the prompt you have now, but that will at least work everywhere.

The right tradeoff between the security and the convenience/availability is something that is very context-dependent, and different for each user. Hence the multiple options.


Can someone explain how 2FA (or any security feature that relies on my phone) works when the phone is unresponsive -- dead battery, no cell or internet reception, hardware failure?


I am interested in how frequent travelers manage these security measures (especially abroad). For SMS: quickly obtain a burner phone, log in to Chrome, something something SMS or Authenticator? For Authenticator: log in to Chrome on any machine you can find that you trust? For printed backup codes: do you carry them with you as you travel, and through security?

I am trying to develop a security process that I can rely on. It only has to be better than what I have now, it doesn't have to be bulletproof.


When possible, I completely avoid services that use SMS 2FA. If given the option, I always opt for authenticator apps or codes-via-email 2FA, in that order. I use SMS 2FA so infrequently that I've never encountered a situation where I needed to get a code SMSed to me while abroad.

I store my printed backup codes for most of my services in an encrypted file in my Dropbox (encrypted with a different password than the password used for Dropbox).

I then also have printed backup codes for my primary email account and for my Dropbox account that I carry with me on an unmarked piece of paper stashed deep in a semi-hidden pocket in one of my bags. I also have printed backup codes for my email and Dropbox stashed in a semi-hidden place in my home, with the thought that in a last case scenario (or I lose my bags or something like that), I can phone my roommate and have him read me the code.

It isn't perfect and I feel like it could be improved, but so far it works fine.


It won't--so you'd have to use the backup methods they made you set up, like SMS codes, Authenticator, printable backup codes, etc.


SMS and authenticator won't work on a dead phone either, so you are left with keeping a paper with backup codes.


Or a second key.

What happens when your first and only Yubikey gets dropped in a puddle? You're also back to paper and backup codes.


>so you are left with keeping a paper with backup codes.

It's not magic, there isn't any other way.


Yes. You need an alternate backup mechanism, like pre-generated one-time codes, or (shudder) SMS.


The feature described in the article will work when your phone is offline. We'll publish instructions on how soon, but it will, e.g., involve manually waking the screen to trigger the local communication.

Of course it won't work if your battery is dead. :)


U2F fits into this scheme nicely


WebAuthn is the successor to U2F. This is just another transport (caBLE/"cloud assisted Bluetooth") for this standard in addition to NFC, USB and a direct connection to a Bluetooth authenticator (e.g. Feitian and Google Titan key).


Here in Iceland, there's a security key embedded in your SIM card that everybody uses as their 2FA solution. It's triggered via a GSM message to your phone, identifies what the authorization is for, and lets you enter the key's PIN code to accept.

The whole thing (except the UI) is isolated from the phone's OS so that even if your phone gets lost or compromised nobody else can auth as you.



Is this based on a hardware security module in the phone? I don't see this written anywhere in the blog post.

For something like this, especially with your phone, putting the private keys out of reach of the CPU/memory and hardened against side channel attacks is table stakes.


For phones that have a dedicated hardware module, such as the Pixel 3, yes the key material is generated and stored there. Using it requires a physical action that is hardwired to the hardware module.

We think the most pressing need right now is to protect users against phishing, which is a much larger threat than malware. Thus we think there's a lot of value in enabling this for all phone models where it's possible to run the protocols.

(I'm the TL for this at Google)


So, can the button be changed, or does it have to be standardized across devices (and is that why it's the volume-down button)? The squeeze function on Pixels or the assistant buttons on other devices could be used instead. That's just me wondering out loud about an alternative user-facing implementation, because the use of the volume button seems strange to me.


On Pixel 3, no, it cannot be changed. Only the volume down button is wired to the Titan-M. Why that button rather than others, I don't know.


My Pixel 2 already does this. When I sign into Google I get a notification that asks if I'm signing in, I click yes, done-zo.

Is the only thing new here the UI + that it's open for all Android 7.0 phones now?


Disclosure: I work at Google.

The thing you have now communicates that "yes" click over the internet. This new thing communicates through a local channel (Bluetooth).

Communicating over a local channel prevents phishing.

Consider this attack: an attacker hosts googlee.com and you get tricked into going there. The login site looks exactly like the Google site. You type in your username/password just like normal. In that moment they take your phished credentials and pass them to the actual Google server, like a man in the middle. Now you receive a prompt on your phone asking Yes/No. You click yes, okaying the attacker's login.

Now consider that same attack with local-channel communication: they can't take your signal and pass it on through to Google.
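
To make that concrete, here's a hedged sketch of the relying-party check that kills the relayed version (illustrative names, not Google's real implementation). The assertion the attacker forwards was signed over client data naming googlee.com, so it fails the origin check:

    /* Illustrative server-side verification of the signed client data. */
    function verifyClientData(clientDataJSON, expectedChallenge, expectedOrigin) {
      const c = JSON.parse(clientDataJSON);
      if (c.type !== 'webauthn.get')
        throw new Error('wrong ceremony type');
      if (c.origin !== expectedOrigin)       // the site's one real origin
        throw new Error('assertion minted on ' + c.origin + ': phishing relay');
      if (c.challenge !== expectedChallenge)
        throw new Error('challenge mismatch: possible replay');
      /* ...then verify the signature with the public key from enrollment. */
      return c;
    }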


As there's very little documentation on this right now, a couple of related questions (feel free to ignore them; I don't want to guilt you into it):

1. It doesn't seem to be using the Titan M flow on my Pixel 3 currently

2. After reinstalling GMS on my phone to try and get the Titan M working, it stayed registered as a key, but the prompt never shows up on my device.

I guess this is more of a "flag for internal review" vs a "please provide me with answers".


Thanks for this! I'm the TL for this at Google.

Re 1: The Titan-M specific flow is still rolling out, you should see your phone switch to the volume-down UI soon.

Re 2: I've flagged this and we'll look into it.


Hey, thanks so much!


Is there any way to get this working in Firefox? :(


It probably depends on Web Bluetooth, which Firefox doesn't yet support, see https://developer.mozilla.org/en-US/docs/Web/API/Web_Bluetoo...

As an aside, I'm not sure that I want random JavaScript to be able to do Bluetooth stuff. At the least, I'd want the ability to limit it on a per-website basis.
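
For what it's worth, Web Bluetooth already has a per-site gate (and note a comment above says this feature doesn't actually use Web Bluetooth): requestDevice only works from a user gesture and pops a device chooser the user has to approve. A sketch:

    /* Sketch of the existing Web Bluetooth gate: the call below must
       happen inside a user gesture, and the browser shows a per-site
       device chooser before the page learns anything. */
    const button = document.querySelector('#connect'); // hypothetical button
    button.addEventListener('click', async () => {
      const device = await navigator.bluetooth.requestDevice({
        acceptAllDevices: true,   // real pages scope this with filters
      });
      console.log('user approved device:', device.name);
    });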


As far as I know, Firefox doesn't support BTLE U2F devices yet (I work at Google, but not on this project)


Off topic:

This is the state of the web we are in, and this is coming from Google. [1] I have literally 20% of the screen displaying useful information; the rest is all useless navigation or related crap. Just seeing it nearly made me puke.

It is one of those problems that comes up in general when a web page is responsive and mobile-first.

[1] https://ibb.co/fCfmW6h


Sticky elements that follow you around while scrolling a web page should be banned like the blink tag. Especially in a mobile browser.

I know where the navigation bar is, if I want to use it, I'll scroll back up and touch something on it. If I want to read related articles, I'll scroll down past your piece, which is where that kind of nonsense always is.

But it gets better: you know what's almost always sticky? Those hideous share/"post this to a social networking site" badges! As shown in that screenshot, even Google can't resist that one!

Why are you covering my viewport with everything but the content you want me to read? Are you trying to make me leave? You know I only have so many pixels on my five inch mobile screen, right??


Put this JavaScript into the URL field of a bookmark (thereby making it a "bookmarklet" [1]) and then select the bookmark [2] on any page that has those sticky elements. Once run, the sticky elements will be gone. Note that the code spans multiple lines below, but it will all be one line in the URL field:

    javascript:(function () {
      /* Walk every element under <body> and remove any whose computed
         position is 'fixed' or 'sticky' (the kind that follows you as
         you scroll). Block comments are used so the code still works
         when flattened to one line in the URL field. */
      var i, pos, elements = document.querySelectorAll('body *');
      for (i = 0; i < elements.length; i++) {
        pos = getComputedStyle(elements[i]).position;
        if (pos === 'fixed' || pos === 'sticky') {
          elements[i].parentNode.removeChild(elements[i]);
        }
      }
    })();
uBlock Origin's custom filters can also be used to delete those sticky elements.
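
For example, procedural cosmetic filters along these lines (the domain is a placeholder; :matches-css is uBlock Origin's operator for matching computed styles):

    example.com##*:matches-css(position: fixed)
    example.com##*:matches-css(position: sticky)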

[1] https://en.wikipedia.org/wiki/Bookmarklet

[2] on Firefox for Android, 'selecting' the bookmark to run on a page entails touching the URL bar, then selecting the bookmark item into which you placed the javascript from the bookmarks list that will appear.


I fully 101% agree with you in principle, but in this particular case my experience is wildly different. As soon as I scrolled, the navigation got out of the way, and all I had was text, with the author's decision to keep large, wide white margins. (Vanilla Chrome on Windows; no extensions/plugins/blockers, no reader mode, etc.)

https://imgur.com/5DOqvj1


What happens when you scroll up a tiny amount? Do the floating dickbars come back?

In my case, I didn't opt in to running the site's javascript... so there were no floating bars. :-)


Well, in my case:

The "Hide Related Articles" bar at the bottom won't go away unless I actually click it.

The gigantic three-layer navigation bar appears when I scroll up just a little.

On Safari 12.1, macOS.


Looks like you're zoomed in or have a larger default font size, so it's not fair to say this is the state of the web


so the state of the web is fine as long as you don't have accessibility needs? neato.


Turning off JavaScript sometimes helps, but often just breaks the site. However, I also have a button to turn off CSS, which just flips a setting (dead simple addon, no security risk, can't break anything, and doesn't need a page refresh). I use it about once a week for articles with too low contrast, annoying fonts (this is getting better over the past year or so), or crappy sites like this.

The article after turning off CSS: https://snag.gy/RmXpxl.jpg

To my surprise, without JS the layout stayed the same, only the annoying elements on top and bottom when scrolling are no longer there. That's actually perfect in this case.

It's really weird to notice all the JavaScript still functioning despite having just transformed the site from Google style to motherfuckingwebsite style. The add-on I use: https://addons.mozilla.org/en-US/firefox/addon/css-toggler/


> The article after turning off CSS: https://snag.gy/RmXpxl.jpg

I found it amusing that, right after you mention turning off JavaScript, that link doesn't work without it.

The actual image is at https://i.snag.gy/RmXpxl.jpg


Heh :)

I don't browse with JS off, I only turn it off for sites that are a pain in the ass when it's on.


I see this: https://ibb.co/ZJQ7Z2h

The extra wide padding on the sides is still wasteful, but it is not as bad as your experience.

However, none of the JavaScript on the page is executed, because I also run NoScript in default-deny mode, and for the screenshot all the JS was blocked. It looks like the JavaScript is responsible for part of the extra you are seeing.

Yes, testing with JavaScript allowed, it is the JavaScript that adds the fixed top and bottom banners to the page.



