
Yes it is :). I've had mixed results with managed languages when loading 1.5 TB text indices into RAM (to say nothing of how much of NCBI's nt database will fit in 1.5 TB under different runtimes), so manual memory management seems like the smart move here. C++ also has many libraries we could lean on for manipulating very large genomics datasets, and Rust is growing a small ecosystem as well. I'm not familiar with a comparable selection of open source tools in C#, though I have no experience with the language, so that lack of exposure may not reflect the actual state of things.


What I can disclose is that they mostly use R, Java and C#.

C++ is used for research algorithms in HPC contexts and for the device drivers for the readers.

In this case the data sets are around 1 GB, but I don't know what they are actually loading into memory.



