The speaker explained how human vision makes good perceptual guesses about objects using Bayesian influence graphs. To this end, he decomposed an object and its image into world/object properties and image features. The resulting probability distribution was then represented as a graph in which nodes represent random variables and links represent influences between them. He illustrated this approach with examples of discounting and cue integration. In the second half of his talk the speaker concentrated
on the V1 and LOC mechanisms involved in perception. His experiments revealed that perceptual organization correlated with reduced V1 activity and increased LOC activity. He also showed that V1 activity can predict the percept on the time-scale of behaviour. The decrease in V1 activity could mean one of two things: 1) predictive coding, or 2) sparsification. In predictive coding, high-level object models project predictions of the incoming data back to lower areas; a good fit then implies low activity in those areas, because the prediction is subtracted from the input (the "shut up" theory). In sparsification, a good high-level fit tells the lower areas to "stop gossiping": activity is amplified for features belonging to the object and suppressed for the rest. Since predictive coding and sparsification produce the same empirical fMRI observation, the experiments were inconclusive in deciding which of the two mechanisms (shut up or stop gossiping) reduces V1 activity when a higher-level area (LOC) has a good explanation for an object.
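As a hypothetical illustration of the cue integration mentioned above (not taken from the talk), two noisy Gaussian cues, say a visual and a haptic estimate of an object's size, can be fused optimally by weighting each by its reliability (inverse variance); the function name and numbers below are invented for the sketch:

```python
def integrate_cues(mu1, var1, mu2, var2):
    """Fuse two Gaussian cues; return (mean, variance) of the combined estimate."""
    # Reliability of each cue is its inverse variance.
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    # Reliability-weighted average; the fused variance is always smaller
    # than either individual variance.
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# A reliable visual cue (variance 1.0) dominates a noisy haptic cue (variance 4.0).
mu, var = integrate_cues(10.0, 1.0, 14.0, 4.0)
print(mu, var)  # 10.8 0.8 -- closer to the visual estimate, with lower variance
```

The same weighted-average form underlies discounting: as one cue's variance grows, its influence on the combined estimate shrinks toward zero.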
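A toy sketch (assumed for illustration, not the speaker's model) shows why the fMRI data could not separate the two mechanisms: under both predictive coding and sparsification, total lower-area activity drops once a high-level model explains part of the input, so a summed activity measure looks the same either way:

```python
import random

random.seed(0)
image = [random.random() for _ in range(100)]   # incoming feature activity
is_object = [i < 30 for i in range(100)]        # features explained by the high-level model

# Predictive coding ("shut up"): feedback subtracts the prediction for
# explained features, so only the residual error remains (zero for a perfect fit).
pc_activity = [0.0 if obj else x for x, obj in zip(image, is_object)]

# Sparsification ("stop gossiping"): object features are kept and the
# unexplained remainder is suppressed (factor 0.1 is arbitrary).
sp_activity = [x if obj else 0.1 * x for x, obj in zip(image, is_object)]

baseline = sum(image)
print(sum(pc_activity) < baseline, sum(sp_activity) < baseline)  # True True
```

Both mechanisms lower the summed activity relative to baseline, while the pattern of which units stay active differs; distinguishing them would require finer-grained measurements than overall V1 signal.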
Back to the main index for Inference and Prediction in Neocortical Circuits.