jwan584 · 10 months ago | on: The impact of competition and DeepSeek on Nvidia
The point about using FP32 for training is wrong. Mixed precision (FP16 multiplies, FP32 accumulates) has been used for years – the original paper came out in 2017.
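For context, here is a minimal sketch of that mixed-precision recipe as it looks in PyTorch today: FP16 matmuls under autocast, FP32 master weights, and loss scaling to keep FP16 gradients from underflowing. The model and data are toy placeholders, and it assumes a CUDA GPU; nothing here is from the thread itself.

    import torch
    import torch.nn as nn

    device = "cuda"
    model = nn.Linear(1024, 1024).to(device)      # weights stay in FP32 (master copy)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()          # loss scaling against FP16 underflow

    x = torch.randn(32, 1024, device=device)
    target = torch.randn(32, 1024, device=device)

    for _ in range(10):
        opt.zero_grad(set_to_none=True)
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            out = model(x)                        # FP16 multiplies, FP32 accumulates
            loss = nn.functional.mse_loss(out, target)
        scaler.scale(loss).backward()             # gradients computed on the scaled loss
        scaler.step(opt)                          # unscales, then FP32 weight update
        scaler.update()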
eigenvalue · 10 months ago
Fair enough, but that still uses a lot more memory during training than what DeepSeek is doing.
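To put rough numbers on the memory point: with the classic FP16/FP32 recipe and an Adam-style optimizer, most of the per-parameter footprint is FP32 master weights plus FP32 optimizer moments, which lower-precision schemes can shrink. The accounting below is illustrative only, not DeepSeek's published breakdown; the "FP8-style" row is an assumption, and real frameworks differ in the details (activations, which mixed precision does halve, are also excluded).

    # Illustrative per-parameter training memory, Adam-style optimizer.
    # Tuples: (weight bytes, grad bytes, FP32 master bytes, moment bytes).
    SCHEMES = {
        "pure FP32":       (4, 4, 0, 8),
        "FP16/FP32 mixed": (2, 2, 4, 8),
        "FP8-style":       (1, 1, 4, 4),  # e.g. BF16 moments; an assumption here
    }

    for name, parts in SCHEMES.items():
        total = sum(parts)
        gib = total * 7e9 / 2**30  # footprint for a hypothetical 7B-param model
        print(f"{name:16s} {total:2d} bytes/param  (~{gib:.0f} GiB at 7B params)")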