Having just gone through it today, I'm imagining getting this from my shiny new neural interface:
"Due to unusual account activity, you must change your password. Please enter 12 characters with at least three uppercase and four lowercase letters, punctuation, two UTF-16 characters and one unprintable ANSI character.
Error: You may not use any password you've ever used (or imagined) previously. Please try again."
This is awesome - when I first read the headline I totally expected something different.
The user has a password to start or stop the BCI from decoding what they are thinking - this way they have control over what is said out loud or translated. Seems like a no-brainer.
So it's still not unsettling to you that they came up with something that is actually capable of reading your very private thoughts? You're aware that the password protection is a secondary feature, and isn't what made this feasible, aren't you?
So... how soon will it start being used to read thoughts nonconsensually? Military and "law enforcement" have always wanted something that isn't torture but gets the information out of people.
Never. It requires several electrodes to be implanted into the patient first. Then there's an adaptation phase in which the patient trains the system. No spy network is going to be able to surreptitiously tap into your thoughts with this. Ever. The signal available outside the skull is way too weak and blurry.
What if you make people do the hard part voluntarily by making the device desirable to them? Including a receiver inside the skull. Then you just have to pick up the pieces.
> What if you make people do the hard part voluntarily by making the device desirable to them?
This. It's like if you want to collect biometric data about everyone's faces with different expressions, different angles, and how those faces change over time, you just make a mobile app where people voluntarily record themselves.
So, if the problems are:
>> It requires several electrodes to be implanted into the patient first. Then there's an adaptation phase in which the patient trains the system.
Then one possible way I can think of to make people do your work for you is to release a nice VR videogame, get it to the point where it becomes popular, and have some features that make it nicer if you use the implant ("enhanced controls", or "your HUD shows exactly what you want just by thinking it, like the Iron Man helmet", or whatever).
Taking an existing and popular videogame and making a mod like this would also work.
There's non-zero desire for full-dive MMORPGs, so marketing it as a step towards that would entice a non-zero number of gamers.
Once it's normalized in niches like that, you'll probably have a better time expanding beyond them, because by then it would be "that videogame tech thingy that cool and rich streamers use" rather than "the sus mind-reading stuff".
It doesn't need to be videogames, but the idea is the same: you make an "inoffensive" thing that people want to use, and then leech off the collected data.
You are wrong. The tech is already here. The recent advance has been the application of deep learning to decode the bioelectrical field of your brain. It's an ongoing side business for telecom companies.
> When a participant imagined the password ‘Chitty-Chitty-Bang-Bang’ (the name of an English-language children’s novel) the BCI recognized it with an accuracy of more than 98%.
I wonder how difficult having a conversation about that novel (or film) would be. I imagine you would accidentally start saying your thoughts out loud.
You could set your password to something like "hey siri", which is essentially a keyword to wake Siri up.
Not often, but sometimes, Siri wakes up on its own. I guess people were concerned about that early on, but nowadays it's just _another bug_ in the software.
I do not see the passphrase (I think "passphrase" is a better word for this feature) as a big issue at the moment.
Okay, I'll be honest, this looks very finicky. I've tried to understand the premise of this article, but it all looks like just a bunch of random facts and promises, none of which can be traced or confirmed.
I can't say with 100% certainty that the text was machine-generated, but I wouldn't be too amazed to find out that it was.
And there is no explanation of how this technology actually works.
Is there any way to encrypt your brain's traffic and then handshake a decryption key to the implant to ensure that accidental activations merely result in garbage output?
And I'm sure it will improve as the electrode placement and NN are optimized. Accuracy may also improve if the speaker can learn to slow their 'speech' and perhaps add brief gaps between words.
I wonder if trying to enunciate distinctly would help?
Its potential to recognize such a large range of words is also encouraging. That implies the signal is quite rich yet deconvolvable.
Agreed. It's fascinating to think about (no pun intended) where this could go, but I also can't keep myself from imagining a world where this tech is ubiquitous, everyone's wearing these casually, it's all "cloud"-connected, and it gets weaponized against users by governments and TLAs.