Hacker News
agnosticmantis on Oct 7, 2021 | on: How to train large deep learning models as a start...
Isn’t the current best practice to train highly over-parameterized models to zero training error? That’d be a global optimum, no?
Unless we’re talking about the optimum of test error.
aabaker99 on Oct 7, 2021
If you find a zero of a non-negative function, I would call that a global minimum, yes.
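The point above can be illustrated with a toy sketch (my own hypothetical example, not from the thread): an over-parameterized linear model, with more parameters than training points, can typically interpolate the training data exactly, driving a non-negative loss to zero and hence to a global minimum of the training objective.

```python
import numpy as np

# Hypothetical toy example: over-parameterized linear regression.
rng = np.random.default_rng(0)
n_samples, n_params = 5, 20          # over-parameterized: 20 params, 5 points
X = rng.standard_normal((n_samples, n_params))
y = rng.standard_normal(n_samples)

# Minimum-norm least-squares solution; a random Gaussian X has full
# row rank (almost surely), so this interpolates the data exactly.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Mean squared error is non-negative by construction, so reaching
# (numerically) zero means we are at a global minimum of the training loss.
train_mse = np.mean((X @ w - y) ** 2)
print(f"training MSE: {train_mse:.2e}")
```

Zero *training* loss says nothing about test error, which is exactly the distinction the parent comment raises: the global minimum here is of the training objective only.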