
Let me suggest something that will turn everything upside down...

How much of our “logical reasoning” about the world is just really strong correlations?

Look, AlphaZero beat the best chess programs using self-play plus MCTS guided by a learned network, with no handcrafted chess knowledge. What does that say about finding the optimal solution to a problem?

It seems MCTS, at least when paired with a learned evaluation, beats even alpha-beta search by a wide margin.
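To make that concrete, here's a minimal UCT-style MCTS sketch in Python for a toy take-1-2-or-3 Nim game. This is just my own illustration of vanilla MCTS with random rollouts; AlphaZero's actual search replaces the rollouts with a learned policy/value network:

    import math, random

    class Node:
        def __init__(self, stones, parent=None, move=None):
            self.stones, self.parent, self.move = stones, parent, move
            self.children, self.visits, self.wins = [], 0, 0.0

    def moves(stones):
        return [m for m in (1, 2, 3) if m <= stones]

    def rollout(stones):
        # Random playout; returns 1 if the player to move now wins.
        to_move = 0
        while stones > 0:
            stones -= random.choice(moves(stones))
            to_move ^= 1
        return 1 if to_move == 1 else 0  # whoever took the last stone won

    def mcts(root_stones, iters=2000):
        root = Node(root_stones)
        for _ in range(iters):
            node = root
            # Selection: descend by UCB1 while the node is fully expanded.
            while node.children and len(node.children) == len(moves(node.stones)):
                node = max(node.children,
                           key=lambda c: c.wins / c.visits +
                           1.4 * math.sqrt(math.log(node.visits) / c.visits))
            # Expansion: try one untried move, if any remain.
            tried = {c.move for c in node.children}
            untried = [m for m in moves(node.stones) if m not in tried]
            if untried:
                node.children.append(Node(node.stones - random.choice(untried), node,
                                          untried[0] if len(untried) == 1 else None))
            # (simpler: pick a move, record it, descend)
            if untried:
                node = node.children[-1]
                node.move = node.stones  # placeholder, fixed below
                node.move = root_stones  # placeholder, fixed below
            # Simulation, scored for the player who just moved into `node`,
            # then backpropagation, flipping the result at each ply.
            result = 1 - rollout(node.stones)
            while node:
                node.visits += 1
                node.wins += result
                result = 1 - result
                node = node.parent
        return max(root.children, key=lambda c: c.visits).move

Actually, cleaner without the placeholder noise:

    def mcts(root_stones, iters=2000):
        root = Node(root_stones)
        for _ in range(iters):
            node = root
            # Selection: descend by UCB1 while the node is fully expanded.
            while node.children and len(node.children) == len(moves(node.stones)):
                node = max(node.children,
                           key=lambda c: c.wins / c.visits +
                           1.4 * math.sqrt(math.log(node.visits) / c.visits))
            # Expansion: add one untried move, if any remain.
            tried = {c.move for c in node.children}
            untried = [m for m in moves(node.stones) if m not in tried]
            if untried:
                m = random.choice(untried)
                node.children.append(Node(node.stones - m, node, m))
                node = node.children[-1]
            # Simulation, scored for the player who just moved into `node`,
            # then backpropagation, flipping the result at each ply.
            result = 1 - rollout(node.stones)
            while node:
                node.visits += 1
                node.wins += result
                result = 1 - result
                node = node.parent
        return max(root.children, key=lambda c: c.visits).move

    print(mcts(10))  # optimal play takes 2, leaving a multiple of 4

The point being: nothing in there looks like a "rule" about Nim. The move quality emerges purely from sampled statistics.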

So when you talk about applying rules, you're using a method that tries to reduce the world to a few rules and maybe apply them recursively. That may work for some things. Certainly it seems to work amazingly well for physics. But that's the unreasonable effectiveness of mathematics.

Ultimately, what if I suggested that "understanding something" with huge vectors or Monte Carlo search is actually far deeper understanding than a few rules?

I am talking about the actual territory, not the map. Yes, the model is neat and tidy, but it's only that simple so that humans can understand it!

What if an AI can no more explain its solutions to a human than a human can explain human decisions to a cat?



In general, I agree that for a very complicated task that a machine performs far better than any human being, human-like reasoning might simply be impossible. The logical formula might be too long for any human to follow.

But on the other hand, current AI has a hard time performing ordinary human reasoning at all, even though such reasoning mostly involves only a few logical clauses. Why?

Human-like logic usually hides common sense or context, in other words. There is no absolute logic like FOL when we talk about human-like reasoning. Traditional rule-based AI systems fail because of that.

On the other hand, deep learning is very strong at representing this hidden common sense and context. Look at word2vec.
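A hedged sketch of what I mean, using gensim's Word2Vec on a made-up toy corpus (the famous king - man + woman ≈ queen regularity only shows up reliably when you train on billions of tokens, not on a corpus this small):

    from gensim.models import Word2Vec

    # Hypothetical tiny corpus, purely for illustration.
    sentences = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["a", "man", "walks", "in", "the", "city"],
        ["a", "woman", "walks", "in", "the", "city"],
    ]
    model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=200, seed=1)

    # In a well-trained model, vector("king") - vector("man") + vector("woman")
    # lands near vector("queen"): context soaked up as geometry, not rules.
    print(model.wv.most_similar(positive=["king", "woman"], negative=["man"]))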

We are now at a point where we can combine these two approaches, not to build AlphaGo, but to enable ordinary human-like reasoning.

At least that is my hope.


That's like saying that causation doesn't matter; correlation will suffice. But the causal directionality of separating subject from object in a relation is essential to forming or sharing an idea usefully. In the synthetic game worlds you mention, causality is decided by play sequence. But at no point does the agent ever have to employ abstraction, or 'think about thinking', which greatly diminishes the depth of thought needed.

No, probability alone simply isn't expressive enough to support higher cognition. At best it will deliver a purely reactive agent, no more cognizant than Kahneman's knee-jerk "System 1" level of thinking.


Funny that you say that. David Hume argued that you can't meaningfully prove causation, only describe things and speak about correlation.

Also, time is a local phenomenon. You can't simply compare the times of A and B if they are really far apart. And at really small scales you'd be hard-pressed to say which event happened before the other, which is how you get seemingly uncaused events like virtual particles in quantum mechanics.

And when you have tons of complexity, then what do you mean by cause and effect instead of correlation?

https://www.quantamagazine.org/omnigenic-model-suggests-that...


I won't challenge Hume; I'm referring only to the fact that a causal relation implies necessity: B can't happen unless A acted to cause it. Probability can't provide this insight; it can only encourage or discourage it. The agent must formulate a mental model of the relation in order to hypothesize the presence of necessity/causality between a pair of events. A model that lacks that ability (e.g. pure probability) can lead to all sorts of nonsense in interpreting events, like proposing causality, or at least meaning, between a baseball pitcher's hat color and the weather on game day.

BTW, precision isn't necessary for relative time to have great value in a huge fraction of possible world events. Obviously all you need is relative temporal sequence to disprove causality - since the batter can't hit until the pitcher throws.

In a great many event pairs, a lot of temporal precision is unnecessary. Often it's only approximate relative time that matters, since the mental task intuits all the temporal constraints needed for understanding event coupling: knowing the parent-child relation implies the relative ages of each, and much more latent info that may or may not be relevant to a given interpretive task at hand.


And what I am saying is that when you say A implies B, you're making a ton of assumptions. If those assumptions are violated (e.g. "all swans are white", "all men are taller than all women") then suddenly you have caveats and hedges.

So what you actually use is not Propositional Logic / Boolean Algebra but Bayes’ Theorem. You only use rules of inference because you make assumptions.
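A toy sketch of the difference, with made-up numbers: one black swan falsifies the hard rule outright, while a Bayesian estimate just degrades gracefully:

    # Rule-based: a single counterexample breaks the rule entirely.
    observations = ["white"] * 99 + ["black"]
    rule_holds = all(color == "white" for color in observations)
    print("rule 'all swans are white' holds:", rule_holds)  # False

    # Bayesian: a Beta(1, 1) prior on "a random swan is white",
    # updated by counting. Belief weakens instead of collapsing.
    alpha, beta = 1, 1  # uniform prior
    for color in observations:
        if color == "white":
            alpha += 1
        else:
            beta += 1
    print("P(next swan is white) ~", alpha / (alpha + beta))  # ~0.98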

I am saying that may work sometimes but those models are not exact.



