Grover Norquist wants to cut government spending. It's difficult to do that directly because people often like government services. Instead he suggests cutting taxes. This is easier to sell to the public because no one likes paying taxes. This will drive the government into debt, forcing spending cuts due to the risk of default. This plan is called "starving the beast".
For this plan to work, people have to really hate paying taxes. So, despite the fact that Norquist and his allies often talk about "reducing the burden on the taxpayer", they have in fact acted to make paying taxes as unpleasant an experience as possible. This means deliberately underfunding the IRS and ensuring that filing taxes is slow, complicated, and expensive.
He is also the architect of the Taxpayer Protection Pledge, which is endorsed by the vast majority of Republican politicians currently in office. The pledge prohibits them from supporting any legislation that would increase taxes on people or corporations. The idea of "starving the beast" has been endorsed in as many words by a number of politicians, including George W. Bush.
Any time a Republican politician starts talking about reducing the deficit, know that they are lying. Over 95% of them have publicly signed a pledge meant to deliberately increase the deficit. This isn't a conspiracy theory. The plans are public.
His stated reasons are so fantastically stupid that I can't imagine them being legitimate. Return-free filing is the best tool we have to achieve his claimed goals of reducing the complexity and confusion of tax season. Can you think of an innocent explanation for his opposition?
> Grover Norquist wants to cut government spending. It's difficult to do that directly because people often like government services.
Reminds me of Grafton, where the population didn't care about cutting government services and only cared about lower taxes. Libertarians from all over the country moved there, and it ended with bear attacks because everyone was disposing of their trash incorrectly and the only policeman in town had a broken-down car.
The SAT has been demonstrated to be effective at predicting success in university. We have almost no evidence about the computer industry's hiring practices. It is completely unscientific. Interviews operate on folklore, not statistics.
This is something your HR department should be very concerned about. If the questions you ask during your interviews aren't useful for finding a good candidate, why are you asking them? This isn't just about wasted time, either: interviews are governed by some strict employment laws, so asking the wrong question could land you in court.
I know when we wanted to do a coding test, they told us we would need to spend 6 months giving everyone a coding test, having it independently graded by someone not involved in the hiring process. Then, after the people we hired had worked here for 6 months, we would examine their actual results and see whether the tests predicted anything useful. (or something like that - there is room in the scientific process for some variation)
The bar below which HR has to be worried is not "we've scientifically determined that our interview questions lead to good on-the-job performance". There has to be some reasonable sense in which you could argue the interview filters for good candidates, but no one is requiring you to run studies.
Google once did a retrospective study and found that interview scores for people we ended up hiring were not correlated at all with people's on-the-job performance. I'm pretty sure nothing really changed as a result of this. I think it's a combination of the industry, especially FAANG, being kind of "stuck" on these kinds of interviews, and a lack of clearly better alternatives (I think there are better alternatives but it's not like I can point to studies backing me up).
> I know when we wanted to do a coding test, they told us we would need to spend 6 months giving everyone a coding test, having it independently graded by someone not involved in the hiring process. Then, after the people we hired had worked here for 6 months, we would examine their actual results and see whether the tests predicted anything useful.
This is interesting but also way heavier weight than anything I've ever heard of. OOC where do you work? (Like vague description of kind of company, if you're not comfortable sharing the specific name).
> Google once did a retrospective study and found that interview scores for people we ended up hiring were not correlated at all with people's on-the-job performance.
This sounds like an unsound result. If you select on a criterion, the correlation with that criterion is usually diminished, and sometimes even reversed, in the selected subpopulation.
Like if you select only very strong people to move furniture, then measure their performance. Because they're all strong, you won't observe that weak people are bad at the job - plus you'll still have some otherwise-inferior candidates who were selected only because they were very strong, which can even reverse the correlation. But if you dropped the strength test you'd get many unsuitable hires (and suddenly find that strength was strongly correlated with performance among the people you hired).
This is actually confirmed by real-world data from professional football (player weight) and professional basketball (player height).
For offensive linemen in the NFL, there is no correlation between weight (which ranges from 300 to 360 pounds) and overall performance. A "heavy" 350-pound player is no more likely to do well than a "light" 310-pound player. But nobody who weighs a mere 250 pounds could realistically make the cut or perform well at the highest level.
For basketball players there is no correlation between height and performance, and there are several standout examples of players below six feet, so there's no hard cutoff. But if you compare the height distribution of the subpopulation to that of the general population, you'll see an extremely strong height bias.
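This range-restriction effect is easy to see in a quick simulation (a sketch with made-up numbers, stdlib only: interview score and job performance are both modeled as equally noisy readings of one latent "ability"):

```python
import random
import statistics

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient, stdlib only."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy model: interview score and on-the-job performance are both
# noisy measurements of the same latent ability.
n = 100_000
ability = [random.gauss(0, 1) for _ in range(n)]
score = [a + random.gauss(0, 1) for a in ability]
perf = [a + random.gauss(0, 1) for a in ability]

# Correlation across the whole applicant pool.
r_pool = pearson(score, perf)

# Now "hire" only the top 10% by interview score and re-measure.
cutoff = sorted(score)[int(0.9 * n)]
hires = [(s, p) for s, p in zip(score, perf) if s >= cutoff]
r_hires = pearson([s for s, _ in hires], [p for _, p in hires])

print(f"correlation in full pool: {r_pool:.2f}")   # ~0.5 by construction
print(f"correlation among hires:  {r_hires:.2f}")  # much weaker
```

The test is genuinely predictive across the whole pool, yet among the people actually hired it looks nearly useless - which is exactly the trap in a retrospective study that only looks at hires.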
> This sounds like an unsound result. If you select on a criterion, the correlation with that criterion is usually diminished, and sometimes even reversed, in the selected subpopulation.
Yeah that's very true and I think was part of why they maybe didn't react to it too much. What you really want is to find the people you rejected and see how well they're doing, but we don't have that data.
Still though, naively I would have thought that someone who gets great marks across the board would be more successful at Google than someone who barely squeaks by, and I do think it's kinda telling that that's not the case. But maybe I'm just injecting my own biases around the interview process.
edit: This reminds me a lot of an informal study that found that verbal and math SAT scores were inversely correlated - which seemed surprising, until people realized the samples were all drawn from single schools. Students at any given school generally had roughly similar combined SAT scores (lower and they wouldn't have gotten in; higher and they'd have gone somewhere more selective), so the variation within a school runs the other way: the higher you scored on math, the lower you must have scored on verbal to land at that school's "target" total.
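That school-level selection effect can also be reproduced in a few lines (hypothetical numbers: math and verbal are drawn independently, and one school "admits" anyone whose combined score lands near its target):

```python
import random
import statistics

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient, stdlib only."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 200_000
math_s = [random.gauss(500, 100) for _ in range(n)]
verbal_s = [random.gauss(500, 100) for _ in range(n)]

# In the full population the two scores are independent by construction.
r_all = pearson(math_s, verbal_s)

# One school's student body: combined score close to a "target" of 1000.
school = [(m, v) for m, v in zip(math_s, verbal_s) if 980 <= m + v <= 1020]
r_school = pearson([m for m, _ in school], [v for _, v in school])

print(f"correlation in population:     {r_all:+.2f}")    # ~0
print(f"correlation within one school: {r_school:+.2f}")  # strongly negative
```

Conditioning on the sum forces the two scores to trade off against each other, so a correlation that doesn't exist in the population appears, with the opposite sign, inside every school.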
At Google's scale, if they had an alternative basis for hiring, they could judge candidates by both methods, randomly use one or the other to make some of their hires, then compare performance over time and at least say whether there is a significant difference.
But as you note, the lack of obvious good alternatives is an issue... and we can't pretend that there isn't an enormous difference among candidates. If we thought that unfiltered candidates were broadly similar, then "hire at random, dismiss after N months based on performance" would be a great strategy, but I don't think anyone who has done much interviewing thinks that would be remotely viable.
(Though perhaps the differences between candidates are smaller than interviewing suggests, since the interviewee pool should be weaker than the employed population in general: bad candidates interview more often, because they leave jobs more often and take longer to get hired.)
>If we thought that unfiltered candidates were broadly similar, then "hire at random, dismiss after N months based on performance" would be a great strategy, but I don't think anyone who has done much interviewing thinks that would be remotely viable.
I know a fair number of companies that do essentially that. They hire contractors for 6 months, and at the end of the 6 months the good ones are offered a full-time position. The contracting company probably does some form of interview, but they are more interested in collecting 6 months of overhead on the contract than in finding quality candidates.
> since bad candidates interview more due to leaving jobs more often and taking longer to get hired
But there are also great people who interview badly.
The debate is about the quality and predictive value of the tests. Opponents claimed that the tests had a cultural bias so students from some backgrounds would do better than others, that students who had a good education before university would be better prepared, and that studying for tests or taking tests repeatedly has been shown to improve scores but is only accessible to people who can afford it. These are all claims that the tests are not good at predicting aptitude.
The arguments against these tests are, of course, awful. Objective tests are the best way we know of to remove human bias. Aptitude tests (basically IQ tests) are the best way we know of to measure someone's natural ability (determined in early childhood) with little influence from their experience. Since their arguments make so little sense, it is reasonable to wonder about the psychology of opponents of standardized testing. But their arguments are, at least on the surface, about predictive value.
> it is reasonable to wonder about the psychology of opponents of standardized testing
It is, at its core, a fear that testing largely reproduces the status quo. If one accepts the idea that there is an intellectual elite who constitute the highest stratum of society, and that their gifts are innate and heritable rather than trained, it follows that social mobility is pretty much dead. It is a bleak vision.
Personally I think there are different problems, much bigger and woollier, that keep people from non-elite backgrounds down regardless of test outcomes: the structure of the education sector and of employment more widely, expectations about life and the distribution of rewards, etc. We rarely have good-quality, nonpartisan discussions about these things, which I think pushes people to take views that are instrumental rather than informed.
>it follows that social mobility is pretty much dead. It is a bleak vision.
I have always found the idea of social mobility depressing. It assumes that we will always have a hierarchy, with some people who are powerful and prestigious and others who are poor and always feel inadequate. It assumes that we will always have an underclass but at least people can leave it.
The kind of social mobility that the SAT has some influence on is not really about "power and prestige", which I also think of as generally pathological dynamics. It's literally about how competent and professional you want to be, and how well you can perform your work duties. It's social mobility within the 'working' class, not really away from it.
Yes. The old saying among Labour party socialists in the UK was "rise with your class, not above it". They were in favour of a high floor on living standards and a low ceiling on wealth. It isn't a stretch to think that a more even playing field would be a better substitute for mobility.
>What's surprising though, is that APs and similar exams are not enough.
That isn't what they said. They said that access to those tests is not universal. Students from high schools that don't offer AP classes would have a hard time taking AP exams. This would exclude people from rural or impoverished areas.
This is why the SAT and ACT are useful: they are meant to be aptitude tests. They are IQ tests in disguise. If properly designed, they will measure intelligence with minimal influence from education or cultural background. Theoretically something like these tests could be administered to elementary school students and still be useful for predicting success in college a decade later.
> They said that access to those tests is not universal. Students from high schools that don't offer AP classes would have a hard time taking AP exams.
Yeah, I wish they'd just flat-out told me "we expect AP courses" before I applied to MIT back in the day. Would have saved me a lot of hassle that just resulted in "sorry, we wanted AP credits" in the end.
I passed the AP calc exam without a class. But that had a lot more to do with motivation and interest and a sense of entitlement than with aptitude. I wish everyone had my sense of entitlement, but they don't, and classes do seem to make a passable substitute.
Take anything Rasmussen says with a grain of salt. They are not out-and-out fraudulent, but they are generally the least accurate of all major pollsters.
They weren't great in years past, but endorsing a coup d'état is pretty bad in my eyes. I didn't know this previously; I'll have to adjust my opinion of them.
> Take anything Rasmussen says with a grain of salt. They are not out-and-out fraudulent, but they are generally the least accurate of all major pollsters.
>They weren't great in years past, but endorsing a coup d'état is pretty bad in my eyes. I didn't know this previously; I'll have to adjust my opinion of them.
How did you get the impression that they were "endorsing a coup d'état"? As your article said, they outlined a scenario where that could happen, but I couldn't find anything that amounts to an endorsement of it. If anything, the fact that they quoted Stalin by name makes me think they're against it.
First off, because everything they said about the law is bullshit. The vice president has no authority to throw out votes. This is a complete fabrication. Either the writer legitimately believes this is legal, in which case they are clueless, or they know it is illegal, in which case they are lying. Either is bad.
Second, because of this:
>If they are (as more than 70% of Republicans believe) certificates from non-electors appointed via voter fraud, why should he open & count them?"
"Many people are saying": the classic weasel words. They can endorse this with an appeal to the authority of common knowledge and discuss it as if it's a reasonable hypothetical, all the while maintaining plausible deniability that they're just repeating what "everyone knows". Well, I don't find it plausible.
Read the replies to the tweets. The fascist types definitely interpreted it as Rasmussen agreeing with their position. The non-fascist types interpreted it the same way. People do not communicate in formal logic. The meaning of speech is the meaning people take from it, not the meaning you would get from dissecting it on a whiteboard.
AI at Tesla is a scam, so it probably doesn't say what you think it says.
It's a shame that Karpathy actually seems to know what he's talking about. Maybe he originally bought into the Tesla vision and doesn't have the integrity to admit it's failed. Maybe he's like von Braun, doing great evil because he wants funding. Either way we lost a seemingly talented researcher.
Nevertheless, if someone who works in AI is impressed by an AI, I will take it seriously. I don't use TikTok, but the recommendation engine must be outstanding. I think I'll stay away; I don't want to lose any more time to online apps.
No, nor have I reported Deepak Chopra or Amway or Gwyneth Paltrow. There are blatant scams all around us. A lack of interest from the feds doesn't mean much.
Besides, the SEC has enough on their plate when it comes to Tesla.
>So it does say what he thinks it says.
Doctor Oz is, by all accounts, a brilliant surgeon. Would you follow the medical advice on his show?
Over the years, they repeatedly pushed this narrative to sell cars. Tesla promised on several occasions to have millions of fully autonomous "robotaxis" on the road in 2020, earning their owners in excess of $30,000 a year. Currently there are, of course, zero.
Take a look at this video for what Tesla describes as a beta for their "Full Self Driving" system. A beta is, of course, meant to be feature complete software in final testing. Does this look like it's nearly ready to be pushed out to every car on the road?
They continue to work on this project despite knowing it is fraudulent, which makes them complicit.
Besides, I don't see that they're doing solid work. It would be solid work if they ditched pure vision and moved to a system that works. Instead they are putting in heroic efforts on a dead-end technology. That is impressive in the same way that getting Doom running on a TI-84 would be impressive.
I don't know; it depends on what your goal is. If you want to emulate human driving, then pure vision is a viable way to go. After all, humans also drive using pure vision.
But your car has to drive a lot more cautiously than it would with more sensors, like lidar. Can't be 100% sure that white blob is not a truck? Then you have to slow down. With lidar you can be a bit more robust, but you still need vision to identify objects.
And if your goal is to build a non-human driver - that is, one that drives "perfectly" and pushes the speed envelope - then I think lidar is also only a stop-gap solution. What you'd want is active components in the street and in other cars; in other words, a virtual rail. In that case, you could accelerate and brake as aggressively as the humans inside would tolerate, accelerate together with the car in front of you in traffic jams, etc.
Not really. Why are you blatantly drinking the marketing Kool-Aid coming from a wack tweeting CEO? That so-called "pure vision" is backed by our brains, trained over decades. This is exactly why you have to be 18+ to be able to drive without any supervision.
Even if we gave them more time, it seems that it still doesn't work and is still completely unsafe for Level 5. Compared to other competitors when tested, this is all Tesla has to show for progress? [0]
A very long way to go for the Level 5 'robo-taxi' readiness claim by Tesla.
Obviously they've made some progress. It would be hard for them to have accomplished nothing at all after all they've invested. It's still nowhere near safe for public roads and there's no reason to think it ever will be.
Even their driver assistance features work worse than the competition.
> I think you misunderstand OP. The purpose of citations is to prove facts.
Those cases are covered by b)
But the person you're replying to pointed out other, non-adversarial reasons to ask for citations (which, depending on the tone of the question, may be misinterpreted).