boomzilla's comments

Sorry, could not resist the pun.


Two words: plausible deniability


How would failover work?


Computer: Here you take over...


With latency


The Peter principle and whatnot, but I think there is something deeper. Manager positions are designed, by definition of the word, to manage, and the top goal is to extract value from workers. Managers (and product managers in tech companies) are encouraged to create a `healthy` tension with line workers (software engineers included) over work estimation and commitments. This is supposed to make the work challenging enough, but not so demanding that it burns out the workers. The best managers can do that by providing intellectual challenges and motivational goals. Most resort to processes and plain OKRs though (reflected in the worst software tool ever, JIRA). Also, any line manager who's got a clue, meaning one who can provide technical/business direction, would be quickly promoted to director (where they are supposed to direct :-).

Protip for frontline managers: The percentage of time you spend on JIRA is negatively correlated with your chance of being promoted to the director level.


I am of the opinion that `true AI` is the science/engineering of understanding and replicating human intelligence. Why are we able to come up with abstract concepts from the surrounding physical environment? Why do we look at the stars and wonder what they are (and why)? How are we able to communicate with one another through pictures, words, writings, Snapchat? Is it something special about our brains, our collective society, or something else that enables such remarkably different behavior from any other animal on earth? I don't know which direction we can start down to answer these questions, but collecting good data sets is probably as good as anything. Maybe we'll get the `quantity` of smarter specialized systems first, and once we get the `quantity`, maybe the `quality` will follow?


I agree. I think the fields of "computational cognitive science" and developmental psychology are the ones to look into to make progress towards the "hard fundamental problems". Some of the leading labs working on this are MIT CBMM (https://cbmm.mit.edu/, they have a nice youtube channel) and Berkeley Cocosci (https://cocosci.berkeley.edu/index.php).

Google Brain/DeepMind are also pushing some of those ideas. They must be, since they aggressively poach all the top researchers from those labs...

Ng's approach is different: he wants a world powered by Deep Learning, so his goal is to make applied deep learning thrive. His strategy for doing that: give those data-hungry models even more data, which is completely reasonable.

Those two approaches - fundamental research and applied deep learning - are often referred to as AI, causing much confusion.


Well then the OP ignores the biggest advertiser of them all: the Communist Party.


Most of what the CP did, at least where I grew up, was not advertisement but slogans. Like "In God we trust" in the US. Nobody really takes it as an ad for God.


> Nobody really takes it as an ad for God.

Are you sure?

Seems like the right-wing evangelicals of the US definitely do.


Could you quote a couple of them saying so? Because it doesn't match what I know about evangelicals: they don't exactly consider God something you can buy.


I didn't mean it as a literal object you could buy.


The AirBnB story about `literally a month from being homeless` is total BS. Both Chesky and Gebbia worked for a few years before starting the company and the other guy went to Harvard.


You can read the comments (and the linked papers) first. This is an advanced algorithm that could take days (or weeks) to fully internalize the details. One can't expect to just read the code and build a mental model of the program in one pass, no matter how expressive the variable names are.


Don't think there is much rationale behind all these models. It's more like P(would buy a vacuum | bought a vacuum) > P(would buy X | bought a vacuum) where X is a single product. Now P(would buy a vacuum | bought a vacuum) < sum(P(would buy X | bought a vacuum)) for X that is not a vacuum, but what would the recommendation be? Hey, you bought a vacuum, come back and buy some non-vacuum stuff?

For most recommendation UIs, you need a hero item that makes people want to click. It might turn out that another vacuum is the best item for some people to click on, and they go on to buy other stuff once they are on the site.
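
As a toy sketch of that argument (with made-up co-purchase counts and item names, nothing from any real recommender), the single most likely follow-up item can still be another vacuum even though the combined probability of all non-vacuum items is larger:

```python
# Toy sketch with invented counts: compare the single most likely follow-up
# item against the total probability mass of all non-vacuum follow-ups.
from collections import Counter

# Hypothetical counts of "customers who bought a vacuum also bought X"
co_purchases = Counter({
    "another vacuum": 400,
    "vacuum bags": 250,
    "iphone case": 120,
    "desk lamp": 80,
})
total = sum(co_purchases.values())

def p_given_vacuum(item):
    """P(would buy item | bought a vacuum), estimated from the counts."""
    return co_purchases[item] / total

best_single = max(co_purchases, key=p_given_vacuum)
non_vacuum_mass = sum(p_given_vacuum(x) for x in co_purchases
                      if x != "another vacuum")

print(best_single, p_given_vacuum(best_single))       # the "hero item" slot
print("P(any non-vacuum | vacuum) =", non_vacuum_mass)
# The UI needs one concrete thing to show; "come back and buy some
# non-vacuum stuff" isn't a renderable recommendation, even though its
# total probability is higher.
```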


The reason you see such an obvious false positive in this case isn't because people who bought vacuums are likely to buy another, but rather that people who look at vacuums are likely to buy a vacuum, and the model hasn't accounted for whether you've already bought one.

A different recommender might use different types of conditionals (items bought instead of items looked at, for example), and also have success in different areas (like recommending iPhone cases for iPhone owners). In order to converge the models in a Bayesian framework you'd have to deal with the combinatorial explosion of products and event conditionals, which might be pretty gnarly. But some convergence work would be better than none, otherwise you end up with 20 different recommender widgets on a page.
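
A minimal sketch of that idea, with invented event logs and category names (not Amazon's system): build one co-occurrence model from views and one from purchases, then filter out categories the user already owns to suppress the "you bought a vacuum, here's another vacuum" false positive.

```python
# Minimal sketch with invented data: two recommenders built from different
# conditionals, plus a filter on categories the user already owns.
from collections import Counter, defaultdict

# Hypothetical event logs: (user, item, category)
views = [("u1", "vacuum A", "vacuum"), ("u1", "vacuum B", "vacuum"),
         ("u1", "vacuum bags", "accessory"),
         ("u2", "iphone", "phone"), ("u2", "iphone case", "accessory")]
purchases = [("u1", "vacuum A", "vacuum"),
             ("u2", "iphone", "phone"), ("u2", "iphone case", "accessory")]

def co_counts(events):
    """Count items that co-occur with each item for the same user."""
    by_user = defaultdict(set)
    for user, item, _ in events:
        by_user[user].add(item)
    counts = defaultdict(Counter)
    for items in by_user.values():
        for a in items:
            for b in items:
                if a != b:
                    counts[a][b] += 1
    return counts

view_based = co_counts(views)      # "people who looked at X also looked at..."
buy_based = co_counts(purchases)   # "people who bought X also bought..."
category = {item: cat for _, item, cat in views + purchases}

def recommend(seed_item, owned_categories, model):
    """Rank co-occurring items, skipping categories the user already owns."""
    ranked = model[seed_item].most_common()
    return [item for item, _ in ranked if category[item] not in owned_categories]

# A vacuum owner: the view-based model would happily suggest more vacuums,
# but conditioning on what they already own leaves only the accessories.
print(recommend("vacuum A", {"vacuum"}, view_based))
print(recommend("iphone", set(), buy_based))  # e.g. suggests the iphone case
```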

Overall I don't think Amazon's approach to date has been bad... it's just time to clean up a bit.


Be frugal. Don't buy a house unless you have at least 6 months of payments in the bank. Don't buy a car if you need to take a loan. Save as much as you can. Build and maintain a strong network outside work (family/friends/professional contacts).

Every time I get into a difficult situation at work, I take a deep breath and tell myself: "Don't worry, give it your best shot to resolve this. And if that's not good enough, you know you can walk out that door and take a break for some time." It's been working well for me.


There's a lot of truth to this, but if you've never worked this kind of job, you won't be prepared. Working this kind of job is what made me really understand the value of being able to say "FUCK YOU" and quit/not worry about getting fired. It drove me to build savings and now I insist on having no debt and at least $10k in the bank. It also drove me to move to a cheaper city, find a cheap apartment there, etc. $10k isn't a huge amount to some of you, but it's enough to pay the bills through a decent job search if your fixed expenses are low. Once I paid off debts and had that buffer, my soul-sucking horrible job seemed way better since I no longer feared being fired. It even allowed me to quit that job abruptly and enjoy life for a few weeks before going back to work.

We're pretty smart in some ways, but too many of us live essentially paycheck-to-paycheck. In an industry where jobs aren't hard to come by, and the pay's pretty good, it's easy to fall into that trap. If you're doing this, stop! Do whatever it takes to break the cycle. Sell your car. Sell your unnecessary stuff if that helps to start saving. Stop acting as if your future paychecks are guaranteed. You have no idea what a weight off your shoulders it is to not fear losing your job.


It's well-meaning advice, but understand that, in situations like these, certain personality types don't think as rationally as you would expect. One size does not fit all when it comes to handling adversity.


He lived in Pittsburgh, about 35-40 miles outside San Francisco. Maybe an hour's drive with no traffic, two with, or a 1.5-hour BART ride (and I think there's a transfer now, so it may be longer). Anyway, he didn't go out and buy a mansion; the Bay Area is so expensive that even with his pay he lived far from the city, where house prices are only crazy... not insane.

