
> While they could use a huge model on the cloud, that would introduce a lot of latency.

With all the recent work to make generative AI faster (see Groq for LLMs and fal.ai for Stable Diffusion), I wonder if the latency will become low enough to make this a non-issue, or at least good enough.



If AI/ML home systems become significantly common for consumers before the onboard technology is capable, I could see home caching appliances for LLMs.

Like something that sits next to your router (or more likely, routers that come stock with it).
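A minimal sketch of what such a caching appliance might do, assuming a simple prompt-keyed cache in front of a remote model (the `cloud_call` function and all names here are hypothetical stand-ins, not any real API):

```python
import hashlib

class LLMCache:
    """Toy home caching appliance: serve repeated prompts locally,
    fall back to a high-latency cloud model on a miss."""

    def __init__(self, cloud_call):
        self.cloud_call = cloud_call  # hypothetical remote-model function
        self.store = {}
        self.hits = 0
        self.misses = 0

    def query(self, prompt):
        # Key the cache on a hash of the prompt text.
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.store:
            self.hits += 1
            return self.store[key]  # fast local path
        self.misses += 1
        response = self.cloud_call(prompt)  # slow network + inference path
        self.store[key] = response
        return response

# Usage: the second identical prompt is served locally.
cache = LLMCache(cloud_call=lambda p: "response to: " + p)
cache.query("turn off the lights")
cache.query("turn off the lights")
print(cache.hits, cache.misses)  # 1 hit, 1 miss
```

A real appliance would need semantic (not exact-match) lookup and cache invalidation, which is where it gets hard; this only illustrates the latency split.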


Does a robot that moves things in a home need this? The challenging decisions are (off the top of my head):

1. What am I picking up? - this can be AI in the cloud, as it does not need to be real time

2. How do I pick it up? - this can also be AI in the cloud, as the robot can take its time picking the object up

3. After pickup, where do I put the object? - localization while moving probably needs to be done locally, but identifying where to put it down can be done via cloud; again, no rush

4. How do I put the object down? - again, the robot can take its time

You can see in the video that the robot pauses before performing the actions after finding the object in its POV, so real time isn't a hard requirement for a lot of these.
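The cloud/local split described in the steps above could be sketched roughly like this (all function names and return values are made-up placeholders; the point is only which calls tolerate latency and which run in the tight loop):

```python
import time

def cloud_plan(task, scene):
    """Stand-in for a high-latency cloud model deciding what to grasp,
    how to grasp it, and where to place it. The robot can pause here."""
    time.sleep(0.01)  # imagine seconds of network + inference latency
    return {"grasp": "top-down", "target": "shelf"}

def local_localize(sensor_reading):
    """Stand-in for fast on-board localization; this must run every
    control tick while the robot is moving, so it stays local."""
    return {"x": sensor_reading * 0.1, "y": 0.0}

def pick_and_place(scene, sensor_readings):
    # Steps 1, 2, 4: slow, cloud-side decisions; the pause in the video
    # would happen here.
    plan = cloud_plan("pick", scene)
    # Step 3 (while moving): real-time local pose estimates.
    poses = [local_localize(r) for r in sensor_readings]
    return plan, poses

plan, poses = pick_and_place({"object": "cup"}, [1, 2, 3])
```

This is just the scheduling idea, not a controller: the slow planner is called once per task, the fast localizer once per sensor tick.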




