So how does it work, anyway?

In developing a model, or algorithms, for cognition, Valiant identified four challenges.

These are the challenges of complexity, brittleness, semantics, and grounding.

Complexity - NP problems: you don't have more time than the lifetime of the universe to solve them.

Brittleness - the rules are so rigid that anything slightly different breaks the algorithm.

Semantics - defining the rules.

Grounding - recognizing absurd conclusions.

The idea of a mind's eye: a focus point through which we view the world. Our minds can hold only about five to nine things in short-term memory at once. We don't consciously perceive or record all of the inputs coming in, just the things we flag as important. Those flagged items then go into our long-term storage, which is practically infinite.
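As a toy sketch of that pipeline (my illustration, not a model from the book): a bounded short-term buffer that admits only flagged inputs and drains into an unbounded long-term store.

```python
from collections import deque

class ToyMind:
    """A bounded 'mind's eye': a short-term buffer of about seven slots,
    fed only by inputs flagged as important, draining into an effectively
    unbounded long-term store."""

    def __init__(self, capacity=7):
        self.short_term = deque(maxlen=capacity)   # 5-9 slots; oldest items fall out
        self.long_term = []                        # practically infinite

    def perceive(self, stimulus, important=False):
        if important:                              # unflagged input is never recorded
            self.short_term.append(stimulus)

    def consolidate(self):
        self.long_term.extend(self.short_term)     # flagged items reach storage
        self.short_term.clear()

mind = ToyMind()
for s in ["ball", "wind", "crowd noise", "glove"]:
    mind.perceive(s, important=s in ("ball", "glove"))
mind.consolidate()
print(mind.long_term)                              # ['ball', 'glove']
```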

It is like placing yourself in position to catch a thrown football: you keep your eye on the ball and match your angle to it so you and the ball meet. You don't track the hundred other variables, such as the release speed, the parabolic arc, friction, or wind. You just keep your eye on the ball.
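To make that concrete, here is a made-up little simulation of the idea (sometimes formalized as the "gaze heuristic"; the numbers and the control rule are my assumptions, nothing from the book). The fielder never solves the physics; he just moves so the ball keeps appearing to rise at a steady rate, and ends up where it lands.

```python
import math

G = 9.81  # gravity, m/s^2

def gaze_catch(v0=28.0, launch_deg=55.0, start_x=85.0,
               max_speed=8.0, dt=0.005):
    """The fielder never computes the trajectory. He locks onto the ball
    just after release, then simply steps toward wherever it keeps
    APPEARING to rise steadily (tan of the gaze angle growing linearly
    in time). All numbers are invented for the demo."""
    th = math.radians(launch_deg)
    bx, by = 0.0, 0.0                       # ball position
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    fx, t, c = start_x, 0.0, None           # fielder position, clock, rise rate
    while True:
        t += dt
        bx += vx * dt
        vy -= G * dt
        by += vy * dt
        if by <= 0.0:
            return abs(fx - bx)             # gap at landing; ~0 means caught
        if c is None:
            c = (by / (fx - bx)) / t        # lock the apparent rise rate
            continue
        want = bx + by / (c * t)            # spot that keeps tan(angle) = c*t
        step = max(-max_speed * dt, min(max_speed * dt, want - fx))
        fx += step

print(f"gap between fielder and ball at landing: {gaze_catch():.2f} m")
```

One observable quantity, one movement rule, and the hundred hidden variables never come up.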

Our knowledge is learned from real-world experience, but only after this filtering.

Define a scene with variables t1 … t20.

Define independently quantified expressions (IQEs) that match the scene.

Such as: there exists an elephant in the scene, and that elephant likes peanuts.

Then use a learning algorithm such as Winnow to remove the redundant IQEs, and store the valid IQEs in long-term memory.
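Winnow (Littlestone's algorithm) really does this kind of weeding: it multiplicatively grows the weights of features that matter and shrinks the ones that don't. Here is a minimal sketch, treating each IQE as a 0/1 feature of the scene; the 20 variables and the OR-of-two-IQEs target are my own toy setup, not an example from the book.

```python
import random

def winnow(examples, n, alpha=2.0):
    """Littlestone's Winnow: a mistake-driven, multiplicative-update learner.
    examples: (x, y) pairs where x is a 0/1 vector of n IQE truth values
    and y is the 0/1 label for the scene."""
    theta = n / 2                     # fixed prediction threshold
    w = [1.0] * n                     # every IQE starts equally plausible
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred == y:
            continue                  # no mistake, no update
        for i in range(n):
            if x[i]:                  # only IQEs active in this scene change
                w[i] = w[i] * alpha if y == 1 else w[i] / alpha
    return w

# Toy target: the label is 1 iff IQE 0 or IQE 3 holds;
# the other 18 IQEs are noise that Winnow should drive toward zero.
random.seed(0)
data = []
for _ in range(500):
    x = [random.randint(0, 1) for _ in range(20)]
    data.append((x, 1 if x[0] or x[3] else 0))

w = winnow(data, n=20)
print([round(wi, 2) for wi in w])
```

After training, w[0] and w[3] should sit well above the threshold while the noise weights decay, which is exactly the "remove the redundant IQEs" step.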

These are the ecorithms: learning, reasoning, and reasoning from learned data.
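The third one, reasoning from learned data, can be pictured as chaining the stored IQEs against a new scene. A toy forward-chaining sketch (my illustration, not Valiant's formalism):

```python
def reason(scene_facts, learned_rules):
    """Forward-chain learned if-then rules (stored IQEs) over a new scene
    until nothing new can be inferred."""
    known = set(scene_facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in learned_rules:
            if condition in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# The rule below is the elephant IQE from above, kept after learning.
rules = [("elephant_in_scene", "elephant_likes_peanuts")]
print(reason({"elephant_in_scene"}, rules))
# {'elephant_in_scene', 'elephant_likes_peanuts'}
```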

Cognitive concepts are computational in that they have to be acquired by some sort of learning algorithm.

Can the human mind be based on such a simple model? Leslie Valiant thinks so.

AdaBoost - a method that applies a weak learner repeatedly to produce a strong learner.
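A bare-bones sketch of the idea, boosting one-feature threshold rules ("decision stumps") as the weak learner. This is the standard textbook construction, not code from the book, and the diagonal-boundary data set is made up for the demo.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weak learner: the best single-feature threshold rule under weights w."""
    best = (0, 0.0, 1, np.inf)                       # feature, threshold, polarity, error
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, rounds=30):
    """Each round, fit a weak stump on the reweighted data, then up-weight
    the examples it got wrong so the next stump focuses on them."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        j, thr, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)        # this stump's voting weight
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)               # boost the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    votes = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
                for a, j, t, p in ensemble)
    return np.sign(votes)

# A diagonal boundary no single stump can draw, but the weighted vote can.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
H = adaboost(X, y)
print("training accuracy:", (predict(H, X) == y).mean())
```

Each stump on its own is barely better than a coin flip on this data, yet the weighted vote carves out the diagonal: weak learning applied repeatedly becomes strong learning.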

If a database of typical examples can be produced, then machine learning can be much more effective.