Hacker News

GPUs are [effectively] irrelevant for many use cases (IoT, embedded, most servers, etc)


On the Raspberry Pi, the GPU is the only thing that makes a responsive GUI or web browser feasible, and it is the primary reason most people can use the HDMI LCD screens for games etc. It also took a large effort to bring up a v4l2 kernel driver for the camera modules etc.

For example, software-decoding h264 or streaming a USB camera on the CPU can peg all cores. With the SoC's GPU decoding through the v4l2 interface, the same stream might take around 30% of one core (mainly to handle the network traffic.)
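To make the contrast concrete, here is a rough sketch of two GStreamer pipeline strings, one software decode and one using the Pi's v4l2 stateful decoder. The element names (`avdec_h264`, `v4l2h264dec`) assume gst-libav and the v4l2 plugins are installed, and the input file is hypothetical; this just illustrates the shape of the two paths, it isn't a verified benchmark setup.

```python
# Hedged sketch: contrasting software h264 decode with the SoC's v4l2
# hardware decoder. Element names assume a typical GStreamer install on
# Raspberry Pi OS; they may differ on other builds.
SRC = "filesrc location=test.h264 ! h264parse"

software = f"{SRC} ! avdec_h264 ! fakesink"    # decodes on the CPU cores
hardware = f"{SRC} ! v4l2h264dec ! fakesink"   # offloads to the SoC block

# You would run these with e.g. gst-launch-1.0 (shown here, not executed):
for name, pipe in [("software", software), ("hardware", hardware)]:
    print(f"{name}: {pipe}")
```

The only difference is the decoder element; everything upstream and downstream stays the same, which is part of why the v4l2 interface is valuable.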

The Raspberry Pi boards are not the fastest or "best" option (most focus on h264 or MJPEG hardware codecs), but the software/kernel ecosystem provides real value. Also, the foundation doesn't EOL its hardware often, or abandon software support after a single OS release.

A cheap RISC-V SBC is great, but the ISA versions are generally so fractured (they copied the worst ideas of ARMv6)... few OSes will likely waste resources targeting a platform that will have 5 variants a year and proprietary drivers.

A standard doesn't even need to be good, but it must be consistent to succeed. =3


The title says "... AI projects". Now, maybe our definitions are different, but you probably want some hardware acceleration.


Most likely coming via vector/matrix instructions or NPU-like blocks, not necessarily GPUs.


The chip (KY X1) comes with AI acceleration...


Low-power processors rarely put the AI-accelerated instructions in the GPU, instead opting either for dedicated matrix/tensor cores or, as in this case, adding the acceleration instructions directly to the CPU core.

This results in a higher performance per Watt, but doesn't scale well to higher-power applications.
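The core operation those matrix extensions and NPUs accelerate is a low-precision matrix multiply with a wider accumulator. A minimal NumPy sketch of that pattern (int8 inputs, int32 accumulation, as most NPU datapaths do it; the function name is illustrative, not any vendor's API):

```python
import numpy as np

def int8_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Sketch of the int8-in, int32-out GEMM an NPU or CPU matrix
    extension would offload. Widening to int32 before multiplying
    avoids the overflow that accumulating in int8 would cause."""
    assert a.dtype == np.int8 and b.dtype == np.int8
    return a.astype(np.int32) @ b.astype(np.int32)

a = np.array([[1, 2], [3, 4]], dtype=np.int8)
b = np.array([[5, 6], [7, 8]], dtype=np.int8)
print(int8_matmul(a, b))  # [[19 22]
                          #  [43 50]]
```

Doing this on dedicated silicon close to the CPU avoids the memory round-trips a discrete GPU needs, which is where the performance-per-watt win comes from at low power.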




