
Does anybody know whether GPU instances can be of any aid for building full-text indices (inverted lists) or other non-floating-point workloads? I was skimming the title of a recent paper presenting a sorting algorithm that exploits GPUs, but I'm still in the mental model of treating GPU workloads as having to do with floating-point operations.
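For context, an inverted index maps each term to the list of documents containing it, so Boolean queries become posting-list intersections. A minimal CPU-side sketch in Python (the function name and sample documents are illustrative, not from any paper):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """docs: dict of doc_id -> text. Returns term -> sorted posting list."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    # Sorted posting lists allow efficient merge-based intersection.
    return {term: sorted(ids) for term, ids in index.items()}

docs = {1: "the quick brown fox", 2: "the lazy dog", 3: "quick dog"}
index = build_inverted_index(docs)

# An AND query intersects the posting lists of its terms:
assert set(index["quick"]) & set(index["dog"]) == {3}
```

The per-document tokenize step is embarrassingly parallel, which is why it is a plausible GPU target; the merge of per-document postings into global lists is the harder part.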


There's a little work in this area, but it looks like people have just scratched the surface. http://news.ycombinator.com/item?id=1149800


I'm not a GPGPU expert, but one thing that is easy to forget is that floating-point types have a well-behaved integer subset: a 64-bit double represents every integer up to 2^53 exactly. For example, on 32-bit computers it can be beneficial to use doubles for extended precision in integer calculations. That said, when I've looked into potentially using GPGPU, the problem has been that branchy code is not a good fit.
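The integer subset of doubles is easy to demonstrate: IEEE 754 binary64 has a 53-bit significand, so every integer of magnitude up to 2^53 is represented exactly, and arithmetic whose results stay in that range incurs no rounding. A quick sketch:

```python
# IEEE 754 binary64 represents every integer up to 2**53 exactly, so a
# double can serve as a 53-bit exact integer type on hardware whose
# native integer registers are narrower.
MAX_EXACT = 2**53

a = float(MAX_EXACT - 1)
b = float(MAX_EXACT)
assert a + 1.0 == b        # arithmetic inside the range stays exact
assert b + 1.0 == b        # beyond 2**53, adjacent integers collapse

# Multiplication is also exact while the product stays in range:
assert float(3_000_000) * float(2_000_000) == 6_000_000_000_000.0
```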


Modern GPUs do have a full set of integer instructions; they just don't run as fast as floating-point instructions.

Depending on how well a problem maps to the massively parallel architecture of a GPU, this may not matter.
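The mapping problem usually comes down to branch divergence: GPU threads execute in lockstep groups (warps on Nvidia, wavefronts on AMD), so when threads in a group take different branches, both paths run serially with the inactive threads masked off. Data-parallel code therefore replaces branches with predicated selects. A scalar Python sketch of that rewrite (the function names are illustrative):

```python
def abs_branchy(x):
    # On a GPU this branch diverges: lanes with x < 0 and x >= 0
    # execute serially, each path masked for the other's lanes.
    if x < 0:
        return -x
    return x

def abs_branchless(x):
    # Predicated form: every lane evaluates the same instruction stream,
    # and the comparison result selects the answer arithmetically.
    mask = int(x < 0)
    return mask * (-x) + (1 - mask) * x

assert [abs_branchless(v) for v in (-3, 1, -2, 5)] == [3, 1, 2, 5]
```

Workloads that admit this kind of rewrite (dense scans, sorts, per-element transforms) map well; pointer-chasing, branch-heavy code generally does not.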


This is wrong. The per-clock rate of integer and logical instructions is equal to or higher than that of floating-point instructions. For example, each of an AMD GPU's 5-way VLIW units can execute, per clock: five integer/logical ops, or five single-precision flops, or one double-precision flop. Each streaming processor in an Nvidia GT200 GPU can execute, per (shader) clock: one integer/logical op, or one single-precision flop, or 0.5 double-precision flops.



