
It's predicated on the assumption that a random discovery from a zero-comprehension state is more likely to get you to a goal than an evolution from a state that has at least some correctness.

More generally, it disingenuously disregards the fact that the definition of the problem itself brings an enormous set of preconceptions with it. Taken to its reductio ad absurdum, you should just start training a model on completely random data in search of some unexpected but useful outcome.

Obviously we don't do this; by setting a goal and a context we have already applied constraints, and so this really just devolves into a quantitative argument about the set of initial conditions.

(This is the entire point of the Minsky / Sussman koan.)
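
To make that quantitative argument concrete, here is a toy sketch (entirely hypothetical, not from the article under discussion): the same greedy hill climb run from a cold random start and from a warm start that already has "some correctness". GOAL and the start points are made-up illustration values.

    import random

    GOAL = 100  # made-up target for illustration

    def hill_climb(x, max_steps=10_000):
        # Greedy +/-1 moves toward GOAL; returns the number of steps used.
        for i in range(max_steps):
            if x == GOAL:
                return i
            x += 1 if x < GOAL else -1
        return max_steps

    random.seed(0)
    cold = hill_climb(random.randint(-1000, 1000))  # zero-comprehension start
    warm = hill_climb(90)                           # a start with some correctness
    print(f"cold start: {cold} steps; warm start: {warm} steps")

On a landscape this benign the warm start obviously wins; how much of the distance the initial conditions have already covered is exactly the quantitative question.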



> from a zero-comprehension state is more likely to get you to a goal than an evolution from a state that has at least some correctness.

I get that starting from a point with "some correctness" makes sense if you want to exploit that information (e.g. a standard starting point). But that information is itself a preconceived solution to the problem, which might not be that useful after all; you may not need it at all to find an optimal solution, and it can even steer the search away from one.
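
As a toy sketch of that (again hypothetical, with made-up numbers): on a multimodal landscape, a preconceived warm start can trap greedy search at a local optimum, while random restarts sometimes land in the global basin.

    import random

    def f(x):
        # Made-up landscape: global peak f(8) = 10, local peak f(2) = 5.
        return 10 - (x - 8) ** 2 if x > 5 else 5 - (x - 2) ** 2

    def greedy(x, step=0.1, iters=10_000):
        # Crude hill climb: move uphill until neither neighbor improves.
        for _ in range(iters):
            if f(x + step) > f(x):
                x += step
            elif f(x - step) > f(x):
                x -= step
            else:
                break
        return x, f(x)

    random.seed(1)
    print("warm start at x=2:", greedy(2.0))  # stuck on the local peak
    restarts = [greedy(random.uniform(-10, 20)) for _ in range(20)]
    print("best of 20 random restarts:", max(restarts, key=lambda r: r[1]))

Whether the warm start helps or traps you is a property of the landscape, not of the method, so it stays a quantitative question either way.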

> by setting a goal and a context we have already applied constraints.

I might be missing your point here, since the goal and constraints must come from the real-world problem being solved, which is independent of the method used to solve it. Unless you're describing p-value hacking your way out, which is a broader problem.



