Does anybody know whether GPU instances can be of any aid for building full-text indices (inverted lists) or other non-floating-point workloads? I skimmed the title of a recent paper presenting a sorting algorithm that exploits GPUs, but I'm still stuck in the mental model of GPU workloads as having to do with floating-point operations.
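For what it's worth, inverted-index construction largely reduces to an integer sort: emit (term, doc_id) postings, sort them, then group runs of equal terms. That's exactly why GPU sorting papers are relevant here. A minimal pure-Python stand-in (the data and structure are illustrative, not from any particular system):

```python
from itertools import groupby

# Toy corpus, doc_id -> text.
docs = {
    0: "the quick brown fox",
    1: "the lazy dog",
    2: "quick brown dog",
}

# 1. Emit (term, doc_id) postings -- embarrassingly parallel.
postings = [(term, doc_id)
            for doc_id, text in docs.items()
            for term in text.split()]

# 2. Sort the postings. On a GPU this step would be a radix sort
#    over integer term ids; here Python's sort stands in for it.
postings.sort()

# 3. Group runs of equal terms into inverted lists.
index = {term: sorted({d for _, d in grp})
         for term, grp in groupby(postings, key=lambda p: p[0])}

print(index["quick"])  # -> [0, 2]
```

Steps 1 and 3 are trivially data-parallel, so once the sort is fast on a GPU, the whole pipeline is.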
I'm not a GPGPU expert, but one thing that is easy to forget is that floating-point types contain a well-behaved integer subset. For example, on 32-bit machines it can be beneficial to use 64-bit doubles for extended-precision integer arithmetic. That said, when I've looked into using GPGPU, the problem has been that branchy code is not a good fit.
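Concretely, a 64-bit double represents every integer up to 2**53 exactly, so float hardware can do exact "wide" integer arithmetic well past 32 bits. A quick sketch (Python floats are IEEE doubles):

```python
# A double holds every integer with magnitude <= 2**53 exactly.
LIMIT = 2 ** 53  # 9007199254740992

a = 9_000_000_000_000_007.0   # > 2**32 but < 2**53: stored exactly
b = 985.0
s = a + b                     # result still < 2**53, so still exact
assert int(s) == 9_000_000_000_000_007 + 985

# Past 2**53, consecutive integers stop being distinguishable.
assert float(LIMIT) == float(LIMIT + 1)
```

So "floating point only" hardware still buys you exact 53-bit integer math, as long as intermediate results stay under that limit.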
This is wrong. GPUs can issue integer and logical instructions at a rate equal to or higher than floating-point instructions. For example, each 5-way VLIW unit in AMD GPUs can execute, per clock: 5 integer/logical ops, or 5 single-precision flops, or 1 double-precision flop. Each streaming processor in Nvidia's GT200 can execute, per shader clock: 1 integer/logical op, or 1 single-precision flop, or 0.5 double-precision flops.
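And the GPU sorting algorithms in question really are pure integer/logical work. One LSD radix-sort pass, for instance, is nothing but shifts, masks, adds, and a prefix-sum scan (a classic GPU primitive). A sequential Python sketch of one such pass, just to show the op mix:

```python
def radix_pass(keys, shift):
    """One counting-sort pass on an 8-bit digit: only integer
    shifts, masks, and adds -- no floating point anywhere."""
    counts = [0] * 256
    for k in keys:
        counts[(k >> shift) & 0xFF] += 1
    # Exclusive prefix sum over the digit counts; on a GPU this is
    # the parallel scan primitive.
    total = 0
    for d in range(256):
        counts[d], total = total, total + counts[d]
    # Stable scatter into output positions.
    out = [0] * len(keys)
    for k in keys:
        d = (k >> shift) & 0xFF
        out[counts[d]] = k
        counts[d] += 1
    return out

keys = [0x1FF, 0x002, 0x101, 0x0FE]
keys = radix_pass(keys, 0)   # sort by low byte
keys = radix_pass(keys, 8)   # then by next byte (stability matters)
assert keys == [0x002, 0x0FE, 0x101, 0x1FF]
```

Note there's also barely any data-dependent branching, which sidesteps the divergence problem the earlier comment worries about.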