
It feels generally a bit dangerous to use an AI product to work on research when (1) it's free and (2) the company hosting it makes money by shipping productized research.


I am not so skeptical about AI usage for paper writing, since the paper will often be public within days anyway (on pre-print servers such as arXiv).

So yes, you use it to write the paper, but soon it is public knowledge anyway.

I am not sure there is much to learn from the authors' drafts.


I think the goal is to capture high-quality training data to eventually create an automated research product. I could see the value of having drafts, comments, and collaboration discussions as patterns for LLMs to emulate.


Why do you think these points would make the usage dangerous?


They have to monetize somehow...



