mindgam3's comments

Lost me at the first sentence.

> Deep neural network (DNN) is an indispensable machine learning tool for achieving human-level performance on many learning tasks.

Not to be pedantic, but words matter. Is anyone actually claiming that deep learning achieves true “human-level performance” on any real world open-ended learning task?

Even the most state of the art computer vision/object classification algorithms still don’t generalize to weird input, like familiar objects presented at odd angles.

I get that the author is trying to write something motivating and inspirational, but it feels like claiming “near” or “quasi”-human performance, with disclaimers, would be a more intellectually honest way to introduce the subject.


> Is anyone actually claiming that deep learning achieves true “human-level performance” on any real world open-ended learning task?

No, but the text you quoted doesn't say that.

Human level performance in this context means humans perform no better than some algorithm on some specific dataset.

Incidentally, that's also how you get to claim superhuman performance on classification tasks. Just include some classes that aren't commonly known in your dataset, e.g. dog breeds, plant species, or something like that. ;)


> No, but the text you quoted doesn't say that [deep learning achieves human-level performance]

Uh, it says DNNs are indispensable for achieving human level performance. That clearly implies that this level of performance is achievable, despite all evidence to the contrary.


This is a weird interpretation of that sentence. There are lots of fields where human-level performance has been achieved. See Go, for example.


Maybe you need an RNN to help parse that sentence!! :-)

Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo

etc


If you've been following the field at all (i.e. who the paper is aimed at), the sentence is obvious and non-controversial. There have been many tasks where deep learning has even exceeded human performance (a stronger claim than that sentence).

>Even the most state of the art computer vision/object classification algorithms still don’t generalize to weird input, like familiar objects presented at odd angles.

"some x are not y" does not invalidate "many x are y"


Words matter: concretely define "open-ended". Did you just add that phrase to preemptively nullify evidence to the contrary?

Deep learning has surpassed human level performance on many tasks [1][2]... (could add more you get the point).

[1] https://www.sciencedirect.com/science/article/pii/S2215017X1...

[2] https://arxiv.org/pdf/1502.01852v1.pdf


Agreed. Also, nobody is, or should be, using deep neural networks for legislation and law enforcement. Explainability should be a core design decision when making an algorithm, not slapped on top of an inherently black-box algorithm. Black boxes and even their explanations are used to launder bias and unfairness. And most of these tricks are not even explanations that can be trusted. "Oh look, the cat's head is highlighted, so that's why this picture was classified as a cat!" No insight, no justification, just hoping the network learned some higher-level features like humans do. But oh no, when we flip the picture it is suddenly a dog, and when we photoshop the background to be snow, now it is suddenly a polar cat or a penguin.

Let deep learning do what it is good at, without explaining its performance and errors to anyone: invading your privacy on social networks, helping hedge funds make more money by analyzing Elon Musk's tweets, and building military surveillance.

Leave the justifications and explanations to inherently white-box models (they are nearly as good in performance as black-box models now, at least for structured data), and hold off on firing radiologists for a few decades, even though your training-set performance is overfitted to be on par with "human-level".

Somehow, somewhere, the deep learning revolution started to drink its own Kool-Aid and became allergic to critique or solid, verifiable computer science. Explainable deep learning does not exist, since half of the time the engineer who built the system can't even explain why it works in the first place. "Strong, inspectable feature engineering is hard and time-consuming, so here we shook a box of Legos a million times, burned six holes in the ozone layer, and out comes a deep net optimized with gradient descent." End-to-end learning is supposed to be really end-to-end, including the explanation.


“Many learning tasks” is a wiggle term. Sure, edge cases exist, but the methods do work impressively well in many cases.



Amusingly, the transcript seems to have been generated by an "AI" tool (https://www.snackable.ai) and gets things wrong just enough to make it very annoying to read.


It labels almost every paragraph as a different speaker, which makes you wonder why they bother to try! Does their software really think the podcast has 20 different participants?


That's what I was thinking... I just read it like a dialogue between two people... made sense to me.



Let me get this straight. A company deeply embedded in the Democratic establishment (0) that has worked directly with Buttigieg — the candidate with close ties to Facebook aka the company undermining democracy since 2016 — managed to totally screw up, potentially undermining the campaign of Sanders, the anti-establishment candidate.

Yeah, democracy is fucked.

0. https://theintercept.com/2020/02/04/iowa-caucus-app-shadow-a...


If you are arguing for a conspiracy of some kind, it is probably best to rule out incompetence before moving on to active malfeasance. This is, after all, the same political party that managed the rollout of the non-functioning Obamacare website.


If I were a good politician, I would definitely know Hanlon's razor and make sure incompetence seemed plausible.


That's thinking in circles. If these hypothetical conspirators are so hyper-competent as to deliberately feign incompetence, then no possible evidence could dissuade you, as all evidence against the conspiracy would just be more evidence for it.

Eventually you have to step back, take stock of your own experience with people, software, and their interactions, and make your own judgment. If you have no experience with software and people, then ask someone you trust who does. I don't see a whole lot of people who write software who are that surprised at what happened, given both the timing and the resources put into this app.


Please provide a reason why incompetence is any likelier than malfeasance.


Because incompetence is vastly more common than malfeasance?


Well, there is long and detailed thread at the start of this page that begins with a comment from someone who started a different Democratic political software company and who turned down working on this app.

If you want a tl;dr, though, you might consider that (a) there's a paper trail, (b) the app was always intended to be optional and some precincts weren't using it in the first place, and especially (c) caucuses are not secret votes. That last one is how different campaigns had their own estimates before the official counts started coming out in haphazard fashion. Whatever bad things one can say about the idea of caucusing in general -- and we're hearing an awful lot of bad things -- they're actually pretty damn difficult to rig.

Buttigieg overperformed polling because polling only captures people's first choices. A lot of voters whose first choice candidates didn't meet the 15% qualifying point in the first round switched allegiance to him in the second, and the net effect was that a lot of moderates ended up in his camp rather than Biden's. This end result is probably an "only in a caucus" thing, but it doesn't require nefarious intent.


If you were going to sabotage the caucuses and you were writing the software that tracks them, there are a thousand far more effective ways to do so, and many of them wouldn't immediately shine a spotlight on you.


Because for malfeasance you need to be smart and clever, while for incompetence you need only be dumb, or just tired that day, and write a typo.


For malfeasance to remain undiscovered you need to be smart and clever.

For the kind of malfeasance that concatenates an SQL query with user input and then shows the whole thing to the user in an error message, you don't need that.
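For anyone unfamiliar, this is textbook SQL injection. A minimal sketch in Python's sqlite3 — the table and values are invented for illustration, and this says nothing about what the actual app did:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE precincts (name TEXT, votes INTEGER)")
conn.execute("INSERT INTO precincts VALUES ('Polk', 42)")

user_input = "Polk' OR '1'='1"  # attacker-controlled string

# Vulnerable: concatenating user input directly into the query text.
unsafe_query = "SELECT votes FROM precincts WHERE name = '" + user_input + "'"
rows = conn.execute(unsafe_query).fetchall()  # injection succeeds: returns every row

# Safe: a parameterized query treats the input as data, never as SQL.
safe_rows = conn.execute(
    "SELECT votes FROM precincts WHERE name = ?", (user_input,)
).fetchall()  # no match: the literal string is not a precinct name
```

The concatenated version quietly becomes `WHERE name = 'Polk' OR '1'='1'`, which is true for every row; the parameterized version stays a plain string comparison.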


Chances are, whatever software developers are used by the RNC also work for Trump or for whichever Republican's campaign. If you need something done, you will probably ask your buddy what they used rather than digging through the local classifieds for someone with zero connection to your personal network. If your boss tells you to find a dev tomorrow, you are probably just gonna text your fellow intern buddy which company their group went with, and go with that to make your life easier.

That being said, it's important to grasp what actually happened with the app rather than give in to misinformation and political apathy surrounding the news. Iowa voted by paper, so we are all just sitting tight while the hand count occurs as is traditionally done.


Never let facts or reality get in the way of a good old conspiracy. Everything you just said is extremely exaggerated.


I'm pretty sure caucus votes are public anyway, it's not a secret ballot.

You can't really rig votes even with an app. Hanlon's razor probably applies here; never attribute to malice that which is adequately explained by stupidity


“In countries like the United States, it's being built by corporations in order to influence our buying behavior, and is incidentally used by the government.”

No, he’s not right about it being “incidental”. The use of facial recognition by police is widespread and widely reported (0). Unlike other forms of digital surveillance, this one has serious and potentially deadly consequences when misused. Banning it at least temporarily is a major win for civil liberties and I would wager that anyone thinking this is a step backward is privileged enough to never have experienced police brutality.

0. https://www.nytimes.com/2020/01/18/technology/clearview-priv...


Facial recognition for Black people has been in play for decades. Generally if you’re Black you fit the description. Cameras actually are a huge net positive for this group.


I think nobody says that banning facial recognition is a step backwards. It just doesn’t solve the underlying issue and might even distract from the real problems.

Now politicians can say "but we banned facial recognition, isn’t that good enough?".


But this would be a "perfect is the enemy of the good" situation. Banning things piece by piece is easier than trying to ban everything at once.

I mean, we could curtail most of the surveillance tech development overnight if we banned most forms of advertising and burned the adtech industry to the ground. But good luck pushing that through politics.


> Now politicians can say "but we banned facial recognition, isn’t that good enough?".

And we answer "No!" and continue to push them further.


PSA, Landmark Forum was banned as a cult in France after the release of an investigative documentary [0] which they tried vigorously to suppress.

The group is a magnet for vulnerable people who respond to authoritarian style leadership and coercive psychological techniques including public shaming and gaslighting. They strongly discourage participants from taking notes. Anyone with a history of trauma, abuse or adverse childhood experiences (ACEs) should know that attending this kind of group can cause severe psychological harm.

There is a way to do deep personal transformation safely, but this group isn’t it.

0. https://wikileaks.org/wiki/Suppressed_French_documentary_on_...


Some do claim that it's a cult. But that's not my experience. Course leaders are generally highly charismatic, but are not typically authoritarian.

There is a heavy emphasis on participants sharing experience with the group, and being coached to see their machinery. But it's a stretch to call that "public shaming and gaslighting".

However, people are sometimes very attached to their interpretations, and it can get very intense. But you can leave, at any time. And if you leave early enough, they'll refund your payment.

Edit: This is representative, based on my experience: https://praxis.fortelabs.co/a-skeptic-goes-to-the-landmark-f...


I don't know anything about this particular case, but superficially, what you're describing is not atypical of well-established cults; rather, it is typical.

The authoritarianism increases as one moves up levels and more "secrets" are revealed. At the ground floor, it's welcoming and communal, but as you are further inculcated, the number of "secrets" or "mysteries" revealed, the amount of potential personal exposure (read: blackmail and/or punishment for treachery), and the amount of authoritarianism all rise steeply.


What you're describing sounds a lot like what I've read about Scientology.

But not at all like what I experienced in Landmark. Seriously, there are no "secrets" or "mysteries". It's all basically laid out in the Forum. The rest is all practice.


If by “some” you mean the government of France, then yeah.

Another little known fact: Landmark was based on the IP from Werner Erhard’s Est seminars in the 70s, which in turn incorporated specific techniques from Scientology [1]. Some of these fun Scientology influences still exist in today’s forum trainings.

1. http://www.skepdic.com/est.html


Of course it's based on est. I first did the Forum just after the name change. And the Six Day was still 100% est.

And yes, Werner did take some stuff from Scientology. But not the bits about ghosts of aliens nuked on Earth, millions of years ago ;)

Scientology is still probably behind many attacks on Landmark. And I wouldn't be surprised if they were part of the mess in France.


For the record: Landmark was not banned in France or anywhere else. The group is not a magnet for vulnerable people; in fact, they have a six-page form that participants sign in an attempt to screen out people who should be treated by mental health professionals.


The entire idea of an “AI arms race” that we are losing to China is fear-mongering by those with a vested interest in defense spending, ie military industrial complex.

This is a human rights issue more than an arms race. The fact that SF is banning facial recognition tech while the Chinese state is going all in (as another commenter notes) is a win.


I think furthermore, not only is it fearmongering, it's actually wrong.

What the article calls AI is just machine learning. And America leads the way on this when it comes to cutting edge. Look at self driving cars.

It seems the article hinges on implicitly defining AI as adopting mass surveillance/freedom restricting tech.

In reality, if America cares about winning the 'tech development war' (I think a better goal than the nebulous 'AI war') with China, it should be worried about improving its education system, and about working on reducing corruption (both in government spending and in private industry such as banking and health care). In the end, it was education, freedom and efficiency that allowed the west to beat out totalitarian governments, not the adoption of totalitarian systems of oppression.

Imagine the US trying to adopt the USSR's system of 'obtaining and classifying information' on dissidents because it was part of 'information technology'. I find the article to have a borderline fascist, anti-western-ideals-of-freedom undertone. Some people seem completely unable to learn from history.


"In the end, it was education, freedom and efficiency that allowed the west to beat out totalitarian governments."

What about the massive catastrophe that killed off tens of millions in the Soviet Union and devastated the country, while the US was left completely unscathed by comparison?

In many ways education in the Soviet Union was far ahead of the United States, particularly in mathematics.

Women were also far more equal to men in the Soviet Union, so in a way this is an example where there was more freedom in the Soviet Union than in the United States, since the roles for women in the US were far more restrictive and curtailed their potential to a far greater degree. The US was also one of the last countries in the world to outlaw slavery, and the lack of freedom that black people suffered under segregation in the US had no equal in the Soviet Union at the time (though the USSR had its own racism and discrimination against Jewish people).

The USSR suffered not just from a lack of freedom, but crucially from the concentration of power into the hands of a highly paranoid and ruthless elite and secret police who killed tens of millions of their own citizens, along with a callousness towards the deaths of millions more in the redistribution of resources and the overhaul of society in a race towards modernization.

The USSR also had to face the efforts of a far wealthier and equally paranoid adversary that was determined to see it fail.

If there had been cooperation and mutual aid instead, if the USSR had suffered no worse than the US during WW2, and if it hadn't been saddled with bloodthirsty paranoid tyrants for leaders, the outcome might have been quite different.


If ... if ... if ... might

3 ifs and one might. Let's see: If my grandmother was male and if she was catholic, she might be the pope. I only had to use two ifs to get to that one.

I'm really not sure what your point is.

Are you seriously arguing that overall there was more freedom in the USSR than in America? I just want to be totally sure I get where you are coming from, because my post was about general freedom, as in the literal definition of it: "the power or right to act, speak, or think as one wants without hindrance or restraint"


No. I'm saying it's not black and white, and the post I was replying to was overly simplistic and misleading.

It's interesting that your response was laser focused on freedom and utterly and completely ignored every other point I brought up.


I said the greater freedom in America helped it win the Cold War. Of course it is more nuanced than that. But that can literally be applied to anything and everything ever said: if someone said being outside jail is good, or not being addicted to heroin is good... well, it's more nuanced... maybe someone would benefit from being in jail or from being a heroin addict... sure, but at some point you aren't really increasing understanding. You are just pedantically noting things that are obvious, in a way that detracts from meaningful conversation.

It seemed to me you were arguing against my freedom point by trying to say America wasn't much more free than USSR. Since such a position seems so utterly disconnected from reality and history, I asked you to clarify your position, maybe I misunderstood.

I also asked what your point was, since I honestly can't see what you are trying to get at in the context of the conversation: should the US have more anti-freedom ML technology applied to mass surveillance and social control? Do you think that will help? Read the FA and opine, I'd be happy to hear a smart analysis. You seem to be able to do that, you seem quite smart. But picking at the edges of arguments without actually participating is kinda... detracting from the goal of conversation and moving towards ego boosting.

Also, even if you are smart, if I understood correctly that you honestly believe the USSR to be more free than the US in any significant manner based on the definition of freedom, then I'm not going to participate in this line of thought.

I once had a conversation with someone I had just met. He mentioned that 'dinosaur bones were placed there by the devil to trick us'. I asked if he meant it. With a straight face he said yes. You could say I laser-focused on that, because after it, I never went beyond 'how's the weather' with him. He has every right to see it that way; I and many others have every right to think of him as slightly less 'there' and therefore to avoid getting tangled with what we see as incoherent.


If you apply this logic to nuclear weapons, the inevitable conclusion is unilateral disarmament, followed by being conquered by those who didn't disarm. This kumbaya pacifism doesn't work in the real world and it's incredibly irresponsible to advocate it as a matter of policy. There absolutely is an AI arms race and we're losing it in part because of the naive utopianism of west coast tech activists who think that if we ban "bad" tech, nobody will use it.


No it’s not. It’s a huge deal. You’re just not being imaginative enough.


From a pure cold war mentality AI is absolutely terrifying to me. We're right in the uncanny valley where AI for weapons systems is starting to get to the point where it can feasibly make human soldiers in some positions obsolete. Why do we want fighter pilots when AI can vastly outperform a human? AI doesn't break a sweat in extremely long mission durations, AI doesn't need a massive heavy cockpit and canopy to fly the plane, AI doesn't pass out at high G loads and can take negative Gs and lateral Gs just fine. AI can push jets right to the brink of what the airframe is capable of.

We already have drones that have dramatically lowered the costs of waging war. We don't need to put boots on the ground in a lot of cases where drone strikes are feasible. What happens when it's not just a reaper and we can put tanks and guns on the ground while only putting actual soldiers inside of some small maintenance and supply base to support the machines that are actually on the front lines? Would the American people care even less than they already do about e.g. Iraq and Afghanistan?


> From a pure cold war mentality AI is absolutely terrifying to me. We're right in the uncanny valley where AI for weapons systems is starting to get to the point where it can feasibly make human soldiers in some positions obsolete. Why do we want fighter pilots when AI can vastly outperform a human? AI doesn't break a sweat in extremely long mission durations, AI doesn't need a massive heavy cockpit and canopy to fly the plane, AI doesn't pass out at high G loads and can take negative Gs and lateral Gs just fine. AI can push jets right to the brink of what the airframe is capable of.

Yeah, that's why our plane systems are being designed as such.

We don't call "AI fighters" "AI fighters". We call them surface-to-air missiles, air-to-air missiles, and drones. For the case where a human needs to be close by to support our "AIs" (aka missiles and drones), we're creating the F-35 as a nearby stealth supporter. That's why the F-35 isn't good at dogfights; it's assumed that drones and missiles will take care of that sort of thing in the future.

The dogfight race has been lost to air-to-air missiles. Humans can't take the kinds of Gs that a missile can do, you can't outrun something like that in a "fair" circumstance (outside of Blackbird-style "too high / too fast" situations).

-----------

> We already have drones that have dramatically lowered the costs of waging war. We don't need to put boots on the ground in a lot of cases where drone strikes are feasible. What happens when it's not just a reaper and we can put tanks and guns on the ground while only putting actual soldiers inside of some small maintenance and supply base to support the machines that are actually on the front lines? Would the American people care even less than they already do about e.g. Iraq and Afghanistan?

It seems odd to say that war requires human cost in order to be important. Perhaps the Iraq and Afghanistan wars were considered failures not because of their (relatively low) human costs, but because the politics didn't work out in their favor.


The US has basically 2 geopolitical threats - the EU and China. Destabilising the middle east might have inconvenienced them and denied access to the oil reserves there.

Apart from that silver lining, the US's adventures in that region have been expensive failures that have presumably spawned a generation of hatred and fear, and provided a cover of distraction from issues that might actually matter (like dealing with a debt burden that is on par with the World-War II response, or the dissolution of civil liberties in response to a threat that was extremely mild by-the-numbers). And the sheer futility and pointlessness of all the death, maiming and redrawing of maps is just breathtaking.

Imagine a world where political issues were dealt with starting with the largest and working to smallest. In this world, every debate would be mentioning the fact that entire countries are being persecuted for no particular gain. Mysteriously, this issue is not one of the most hotly debated issues (although it does get attention). Eg, when Trump talked about withdrawing the last few troops from Syria that was considered controversial. Would that more important people had courted more controversy before Afghanistan and Iraq.

All that is the long intro to the point: I think GP meant that more American voters need to be exposed to war to generate the appropriate political response. The whole last 20 years of American military action turned out to be no-brainer bad ideas, and people are still acting like they were defensible in some sense. The major anti-war voice in the Democratic primaries seems to be a veteran, which suggests that exposure to the situation on the ground helps form anti-war sentiment. If everything is automated, more idiots will think that the last 20 years of military activity were somehow appropriate and not crazy, and fewer sane people will have the needed exposure to argue with them.


Your examples aren't arguing AI, they are arguing software.

> Would the American people care even less than they already do

I don't think that's as much about the advancement of technology so fewer civilians know/think about the war. I think more the psychological component of the military (propaganda/PR, crafting messaging and wording that legislators use, choice to assist movies which glorify the US military or not assist those which are critical, choice not to publish images of US coffins in Dover[1]) and how politicians and media symbiotically weaponize the "us versus them" to drum up support, FUD, and urgency for non-necessary "wars" (in quotes because we haven't declared war since WW2... Korea, Vietnam, Kosovo, Iraq 1, Afghanistan, Iraq 2, et al were "police action"s or military "use of force" actions -- which further proves my point).

I think we need to make parents sign their daughters up for "Selective Service" (again with the propagandized terminology) if the ultimate goal is to get voting civilians to care about ending unnecessary wars.

[1] https://www.npr.org/templates/story/story.php?storyId=101137...


> The Equal Employment Opportunity Commission “found reasonable cause to believe that Uber permitted a culture of sexual harassment and retaliation against individuals who complained about such harassment.”

> Uber agreed to a settlement by establishing a $4.4 million fund to pay current and former employees who were sexually harassed at work.

Wow, a $4M fine for a $50B company with a corrosively toxic culture. That'll teach 'em a lesson! /s


Former world-ranked youth chess player here. 100% agree with the desire to reduce draws and get people out of opening theory. Not sure, though, why Kramnik felt the need to invent a new variant when Chess960 already exists, is thriving, and changes the game far more drastically than the no-castling rule.

The post reads more as an submarine advertisement for DeepMind (0) than a serious article for chess players.

[0] http://www.paulgraham.com/submarine.html


Kramnik briefly addresses this question in the article:

> Fischer Random is an interesting format, but it has its drawbacks. In particular, the nontraditional starting positions make it difficult for many amateurs to enjoy the game until more familiar positions are achieved. The same is true for world-class players, as many have confessed to me privately. Finally, it also seems to lack an aesthetic quality found in traditional chess, which makes it less appealing for both players and viewers, even if it does occasionally result in an exciting game.

(I have no idea if his view would be persuasive to you.)


Yeah, I don't find these arguments particularly convincing.

> In particular, the nontraditional starting positions make it difficult for many amateurs to enjoy the game until more familiar positions are achieved.

Actually the nontraditional start is part of what makes it fun. It's fresh and forces you to start using your brain from move one. I don't buy the claim that the main enjoyment of chess is due to familiar positions.

> The same is true for world-class players, as many have confessed to me privately.

I can't speak to Kramnik's private conversations, but as a serious player what I can say is that my competitive advantage over an amateur diminishes considerably in Chess960. Perhaps what Kramnik's friends are complaining about is the fact that it's harder for them to win at Chess960 than standard.

> Finally, it also seems to lack an aesthetic quality found in traditional chess, which makes it less appealing for both players and viewers, even if it does occasionally result in an exciting game.

Yeah, there's a difference in aesthetics for sure. But who is to say that it is "worse"? Some piece configurations do feel a bit "ugly" but to me this is more than balanced out by the beauty of new patterns emerging where they weren't expected.


He does briefly address this: Chess960 is complicated for beginners to enter (which is true). Given that no-castling actually makes the game simpler, it's worthwhile to consider it as an alternative.


Sure, if you're pitching this as "a simpler way to learn chess", I can get behind that. Anything to make chess accessible to more people is great in my book.

But he's selling it as a way to save late-stage classical chess from the era of boring draws and endless opening theory. We already have a better solution for that in Chess960.
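For readers unfamiliar with why it's called Chess960: the back rank is shuffled under two constraints (bishops on opposite-colored squares, king between the rooks), which yields exactly 960 legal setups. A quick sketch — a hypothetical helper, not anything from the article:

```python
import random

def chess960_back_rank(rng=random):
    """Generate one of the 960 legal Chess960 (Fischer Random) back ranks.

    Constraints: the two bishops stand on opposite-colored squares, and
    the king stands somewhere between the two rooks. Pawns are standard.
    """
    rank = [None] * 8
    # One bishop on a light square, one on a dark square.
    rank[rng.choice(range(0, 8, 2))] = "B"
    rank[rng.choice(range(1, 8, 2))] = "B"
    # Queen and both knights on any remaining squares.
    empty = [i for i in range(8) if rank[i] is None]
    for piece in ["Q", "N", "N"]:
        i = rng.choice(empty)
        rank[i] = piece
        empty.remove(i)
    # Rook, king, rook fill the last three squares left to right,
    # which automatically places the king between the rooks.
    for i, piece in zip(sorted(empty), ["R", "K", "R"]):
        rank[i] = piece
    return "".join(rank)
```

Counting the choices (4 light-bishop squares × 4 dark-bishop squares × 6 queen squares × C(5,2) = 10 knight placements × 1 forced R-K-R order) gives 4 · 4 · 6 · 10 = 960 positions, hence the name.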


Part of any game's popularity is having an exciting competitive scene and also an easy entrance for new players. Chess960 solves one of those; no-castling seems like a better solution on both fronts.


> Mr. Thiel has argued that Facebook should stick to its controversial decision... to continue accepting [political ads with no fact checking]

Okay, so just to recap:

1. Thiel is the investor who made billions off of Facebook before becoming a high-profile Trump supporter.

2. Trump is the guy who became president thanks in no small part to misinformation campaigns at scale made possible by Facebook's ad platform.

3. Thiel is now one of the strongest voices arguing that Facebook should remain in the political misinformation-for-profit business.

I couldn't read the entire article due to paywall, but forgive me for having doubts that Thiel's convictions are based on a desire to do what's right for democracy.


ignoring the totally insubstantiable claim that facebook ads got trump elected, the alternative is making facebook the gatekeeper of political truth. this scares me much more than six figure ad spends in a presidential election.


> the totally insubstantiable claim that facebook ads got trump elected

https://www.washingtonpost.com/news/politics/wp/2018/03/22/a...

> the alternative is making facebook the gatekeeper of political truth

This argument is so played out. If it cared enough, or was forced to by regulation, Facebook could deploy enough fact checkers to greatly reduce the problem.


> forced to by regulation

So now you want the government to tell Facebook what is truthful?

Government approved media is propaganda.

Do you not see how flawed this is!?

The argument is not played out. I do NOT want Facebook telling me what is the truth. Period.

If you want to limit their targeting tools for political ads, that's an okay route for me.


would you trust fox news' fact checkers? do you think fox news viewers would trust yours? any fact checking apparatus would immediately become politically contested and unworkable in the best case scenario; in the worst case, we have a privatized ministry of truth that acts unilaterally. why would you trust facebook with anything approaching control of political discourse after the experience of the last election? are you naive to think that the 'fact checkers' will always be on your side? if you are, do you think your opponents will just sit there and let them control discourse?


Not sure if those apps work as well as advertised.

https://www.zdnet.com/article/stingray-detector-apps-andorid...

Edit: fully removing amp from link



Fixed, thanks.

