mochomocha's comments | Hacker News

I know nothing about photography, but I'll just comment on this point:

> (I'm guessing this is a CM4/CM5) is a disaster for a camera board. Nobody wants a 20s boot every time you want to take a picture, cameras need to be near instantaneous.

You can boot an RPi in a couple hundred milliseconds.
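Most of the stock boot time is firmware delay, the splash screen, and userspace services you don't need. A rough sketch of the config.txt side of the trimming (illustrative only; exact options vary by model and firmware, and getting to sub-second also needs a cut-down kernel and a minimal init that launches the camera app directly):

    # /boot/config.txt
    boot_delay=0            # drop the firmware's extra wait before loading the kernel
    disable_splash=1        # skip the rainbow splash screen
    dtoverlay=disable-bt    # don't probe peripherals you don't use
    initial_turbo=60        # run at max clock during the first 60s of boot

Plus "quiet" in /boot/cmdline.txt so the kernel doesn't spend time writing to the console.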


The article started really well, and I was looking forward to the empirical argument.

Truly mind-boggling times, where "here is the empirical proof" means "here is what ChatGPT says" to some people.


It's Vitalik, what do you expect? Do you think Bernie Madoff was speaking objectively when talking to his potential clients?


VB is a genius, no sweat (are you familiar with his work?). Madoff isn't in the same league at all, and it's disingenuous to imply otherwise.


Better than "according to Google" (pre-AI), which I saw cited too many times.

I have a feeling that people who have such absolute trust in AI models have never hit regen and seen how much truth can vary.


In no way is it better than "according to Google".


Maybe when Google actually did searches. A coworker today was unable to find a very straightforward quoted text on Google; on DuckDuckGo the first few hits were exactly what we were looking for.


On the other hand, Groq seems pretty successful.


Ha! I have spent the last 2 years on this idea as a pet research project and have recently found a way of learning the wiring in a scalable fashion (arbitrary number of input bits, arbitrary number of output bits). Would love to chat with someone also obsessed with this idea.


Also very interested. Do you have any code on github?


I'm also very interested. I played around a lot with Differentiable Logic Networks a couple of months ago, looking at how to make the learned wiring scale to a bigger number of gates. I had a couple of ideas that seemed to work at a smaller scale, but that had trouble converging with deeper networks.
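For anyone who hasn't seen these: the usual relaxation is to make each gate a softmax-weighted mixture over the 16 possible two-input boolean functions, evaluated on real-valued inputs, so the choice of gate becomes differentiable. A minimal NumPy sketch of a single relaxed gate (forward pass only, names are mine; in practice you'd learn the logits with whatever autodiff framework you like):

    import numpy as np

    def soft_gate(a, b, logits):
        # a, b in [0, 1]; logits has 16 entries, one per two-input boolean function
        ab = a * b
        ops = np.array([
            0.0,               ab,                 a - ab,        a,
            b - ab,            b,                  a + b - 2*ab,  a + b - ab,
            1 - (a + b - ab),  1 - (a + b - 2*ab), 1 - b,         1 - b + ab,
            1 - a,             1 - a + ab,         1 - ab,        1.0,
        ])
        w = np.exp(logits - logits.max())
        w /= w.sum()              # softmax over the 16 candidate gates
        return np.dot(w, ops)     # probability-weighted mixture; harden to argmax at inference

    # Logits peaked on index 6 (the XOR relaxation, a + b - 2ab) recover a XOR b on {0,1} inputs.

As far as I remember, the original formulation fixes the wiring between layers at random and only learns the gate choice, which is exactly why learning the connectivity itself is the interesting open part.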


What is so bad about free trade?

Is competition in free markets not something Republicans believe in anymore? Because forcing Americans to buy inferior locally-made products at a premium through artificial restrictions surely isn't that.

Free trade and globalization are also a pacifying force, by creating mutual dependencies between countries.

Protectionism doesn't work.


[flagged]


Breitbart is not a reputable source.


No, but the point is valid. Say country A decides to protect its environment and hence imposes costly pollution-control measures on its manufacturers. Country B meanwhile pollutes to the max. Country B's products are going to be cheaper than country A's. Therefore country A imposing a balancing tariff on country B (until it stops polluting) seems at least potentially reasonable.


Nothing substantive to add to the discussion, except to praise Min's blog posts, which I have found very well written and instructive.


11. notice that there's a unicode rendering error ("'" for apostrophe) on the kernel_initializer and bias_initializer default arguments in the documentation, and wonder why on earth one would want to expose lora_rank as a first-class construct in such a high-level API. Also, 3 of the 5 "Used in the guide" links point to TF1-to-TF2 migration articles - TF2 was released 5 years ago.


Yep, in Netflix's case they pack bare-metal instances with a very large number of containers and oversubscribe them (similar to what Borg reports: hundreds of containers per VM is common), so there are always more runnable threads than CPUs and your runqueues fill up.
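A quick way to see that effect from inside the box is the kernel's CPU pressure-stall counters; a sketch, assuming a kernel with PSI enabled (4.20+):

    # Fraction of recent time in which runnable tasks sat waiting for a CPU
    with open("/proc/pressure/cpu") as f:
        for line in f:                      # e.g. "some avg10=3.21 avg60=2.80 avg300=2.10 total=..."
            kind, *fields = line.split()
            stats = dict(kv.split("=") for kv in fields)
            print(kind, f"{stats['avg10']}% of the last 10s spent stalled waiting for CPU")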


I'm curious as to the capacity of the bare metal hosts you operate such that you can oversubscribe CPU without exhausting memory first or forcing processes to swap (which leads to significantly worse latency than typical scheduling delays). My experience is that most machines end up being memory bound because modern software—especially Java workloads, which I know Netflix runs a lot of—can be profligate memory consumers.


If you're min-maxing cost it seems doable? 1TB+ RAM servers aren't that expensive.


Workloads tend to average out if you pack dozens or hundreds into one host. Some need more CPU and some need more memory, but some average ratio emerges ... I like 4GB/core.
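A toy way to see which resource binds first for a given mix (numbers invented): at 4GB/core the host only ends up memory-bound when the blended requests average more than 4GB per requested core.

    host_cores, host_gb = 64, 256          # a 4 GB/core host
    requests = [(2, 4), (1, 8), (4, 8)]    # hypothetical (cores, GB) requests for three container shapes

    cores = sum(c for c, _ in requests)    # 7 cores per bundle of the three
    gb = sum(g for _, g in requests)       # 20 GB per bundle, i.e. ~2.9 GB/core
    bundles = min(host_cores / cores, host_gb / gb)
    print(f"CPU-bound at {bundles:.1f} bundles" if host_cores / cores < host_gb / gb
          else f"memory-bound at {bundles:.1f} bundles")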


Yep. In Netflix's case, each Titus bare-metal host can run hundreds of containers at any given time. One advantage of running a multi-tenant platform like this is that you get better observability on multi-tenancy issues, since you're doing the scheduling yourself and know who is colocated with whom. It's much harder to debug noisy-neighbor issues when they're happening on the cloud provider's side and your caches get thrashed by random other AWS customers.

One thing I was pitching internally when advocating for this platform is that once you have the scale for the economics to make sense, you can reclaim some of AWS's margins instead of having your cold tiny VMs subsidize other AWS customers' higher perf. If you run the multi-tenant platform yourself, you can oversubscribe every app in a way that makes sense for your business and trade latency or throughput for $ on a per-container basis, so you can make much more granular and optimal decisions globally, vs. having each team individually right-size their own app deployed on VMs, sharing CPU caches with randos.

I remember once at Netflix we investigated a weird latency issue on a random load balancer instance and got AWS involved: it turned out to be a noisy-neighbor on the underlying VM that gets chopped up into multiple customer-facing LB instances.
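For context on the per-container trade-off: with cgroup v2 the basic knob is a relative weight, so a latency-tolerant batch container can be given a small share of contended CPU while still bursting when the host is idle. A sketch (not how Titus actually wires it up; the group name is made up, and writing these files needs root):

    from pathlib import Path

    grp = Path("/sys/fs/cgroup/batch-container")   # hypothetical, pre-created cgroup
    (grp / "cpu.weight").write_text("25\n")        # default is 100; lower = smaller share under contention
    (grp / "cpu.max").write_text("max 100000\n")   # no hard quota, so it can still use idle CPU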


Aside: Is titus still being developed?

GitHub repo says it was archived 2 years ago: https://github.com/Netflix/titus


> Government is controlled by the highest bidder.

While this might be true for the governments you have personally experienced, it is far from being universally true.

