It's ok to wait longer for a product to make sure it's safe, instead of the ol' "move fast and break things". Having ever-newer "interesting" stuff to play with to feed our endless boredom isn't the only thing worth caring about.
It's a race dynamic. Can you truly imagine any one of them stopping without the others agreeing? How would they tell that the others really have stopped? I think they do believe that what they're doing is dangerous, but they would rather be the ones to build it than let somebody else get there first, because who knows what they'll do.
It's all a matter of incentives and people can easily act recklessly given the right ones. They keep going because they just can't stop.
I don't really see an argument from Ng as to why they're not dangerous. I hardly ever see arguments at all; we're completely drowned in biases.
I know that he has often said we're very far away from building a superintelligence, and that is the relevant question. This is what is dangerous: something that plays every game of life the way AlphaZero plays Go after learning it for a day or so, namely better than any human ever could. Better than thousands of years of human culture around it, with passed-on insights and experience.
It's so weird, I'm scared shitless but at the same time I really want to see it happen in my lifetime hoping naively that it will be a nice one.
I think he said extinction risk. Obviously these tools can be dangerous.
The upcoming generation doesn’t know a world where the government’s role isn’t to take extreme measures to “keep us safe” from our neighbors at home rather than just foreign adversaries. It’ll be interesting to see how that plays out with mounting ethnic conflict as Boomer-defined coalitions fall apart.
Ironically AI’s place in this broader safety culture is probably the biggest foreseeable risk.
Do you think AIs are safe? I'd bet that if you had a convincing argument that they are, there wouldn't be a need for regulations. If you just assume it can't possibly be that bad, you should really read what the critics have to say. I don't see a way around regulations, and I'm hoping they'll get them right, because a mistake here will likely cost us everything.
I like jobs too but what about the risks of AI? Some people I respect a lot are arguing - convincingly in my opinion - that this tech might just end human civilization. Should we roll the die on this?
It has Go-style concurrency; look at `core.async`. Maybe the argument was that it wasn't lightweight enough, and that would have been due to the JVM not providing the primitives, but those are here now under the name "virtual threads".
IIRC somebody in this thread said it's not preemptive and doesn't have enough functionality. Quickly looking at it, it seems to be a fully opt-in / cooperative parallelism solution, which is IMO not good enough.
It's... close-ish? It could really use non-blocking IO by default, but honestly the reason no one has made an integrated solution yet is that passing a callback that pushes onto a channel is mostly fine. I'd like something a little more robust, obviously, but it's by no means a toy.
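For reference, the JVM virtual threads mentioned above (available since Java 21) can be sketched roughly like this. The class name, task count, and sleep duration are my own illustrative choices, not anything from the thread:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    // Run n blocking tasks, one cheap virtual thread per task, and count completions.
    static int runTasks(int n) {
        AtomicInteger done = new AtomicInteger();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    Thread.sleep(5); // blocking call, but it only parks the virtual thread,
                                     // freeing its carrier thread for other work
                    return done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println("completed: " + runTasks(1_000));
    }
}
```

The point is that plain blocking code scales to thousands of concurrent tasks without the cooperative/opt-in machinery `core.async` needs on older JVMs.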
> tests are much better at giving you a glimpse of what the code even does
Types do that too: they tell you the expected input and the expected output. No type or test is all-encompassing, of course, but each gives at least some information.
No test will guarantee that a function never returns a number. Types, if valid and without prototypal shenanigans, can.
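A minimal sketch of that point in Java (the `describe` function is a made-up example, not from the thread): the declared return type rules out a number coming back for *every* possible input, while a test can only check the cases it lists.

```java
public class TypesVsTests {
    // The signature promises a String for every int. The compiler rejects
    // any path that would `return n;`, so no runtime check (and no test)
    // is needed to rule out a numeric result.
    static String describe(int n) {
        return n % 2 == 0 ? "even" : "odd";
    }

    public static void main(String[] args) {
        // A test, by contrast, only ever samples specific inputs:
        assert describe(2).equals("even");
        assert describe(3).equals("odd");
        System.out.println(describe(42));
    }
}
```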