
> 1) Move to the edge. Especially for AI, there is really no need for a central public cloud, due to latency, privacy, and dedicated hardware chips. I.e., most AI traffic is inference traffic, which should be done at the edge.

Inference can be done at the edge, but training must be done centrally.
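The split above can be sketched in a few lines. This is a hypothetical illustration, not any particular product's pipeline: "training" needs the full dataset and many gradient steps, so it runs centrally, while inference is just a cheap forward pass over shipped parameters, so it could run on a device at the edge.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Central side: fit a tiny linear model y = w*x + b on synthetic data.
# This is the expensive, data-hungry part that stays in the datacenter.
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + 0.5  # ground truth, noise-free for simplicity

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * X + b
    grad_w = 2 * np.mean((pred - y) * X)  # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

# --- Edge side: only the learned parameters (w, b) are shipped to the
# device; inference needs no dataset and no gradients.
def infer(x, params):
    w, b = params
    return w * x + b

print(round(infer(2.0, (w, b)), 2))  # close to 3.0*2 + 0.5 = 6.5
```

The point of the sketch is the asymmetry: the training loop touches every sample repeatedly, while `infer` is a constant-time function of a few numbers.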



Right now, the only market participant I see doing any inference at the edge is Apple, with its photo-analysis features that run on the phone itself.

Everyone else is busy building little dumb cubes with microphones and speakers that send sound bites into clouds and receive sound bites to play back (heck, even Apple does it this way with Siri). Or other dumb cubes that get plugged into a wall socket and can switch lights you plug into them by receiving commands from a cloud (even if the origin of the command is in the same room). Or dumb bulbs that get RGB values from a cloud server which somehow inferred that the owner must have come home recently and then set the brightness of their RGB LEDs accordingly. Or software that lets you record sound bites, send them into the cloud, and receive transcripts back. Or software that sends all your photos to a cloud library where they are scanned and tagged so you can search for "bikes" or whatever in your photos.

No matter which of these products that consumers currently consider "AI" you look at, the inference (if anything like that happens at all) runs on some cloud server. I don't like that development myself, but unfortunately that's how it is.
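The round-trip pattern described above can be sketched as follows. Everything here is hypothetical (the clip IDs, the intent format, the function names); the cloud service is stubbed as a local function so the sketch is self-contained, where a real product would make an HTTPS POST with the raw audio.

```python
import json

def cloud_infer(request_json: str) -> str:
    """Stand-in for the remote inference service (speech-to-text plus
    intent parsing). All the 'intelligence' lives here, not on the device."""
    req = json.loads(request_json)
    # Pretend the server transcribed the audio and parsed an intent.
    intents = {"audio-clip-42": {"action": "lights_on", "room": "living"}}
    return json.dumps(intents.get(req["clip_id"], {}))

def dumb_device(clip_id: str) -> dict:
    # The device only shuttles bytes to and from the cloud endpoint;
    # it runs no model itself, even for commands originating in the
    # same room as the light it controls.
    response = cloud_infer(json.dumps({"clip_id": clip_id}))
    return json.loads(response)

print(dumb_device("audio-clip-42")["action"])  # lights_on
```

The sketch makes the complaint concrete: the device-side code contains no inference step at all, so latency and privacy are entirely determined by the network hop.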



