Perplexed monkeys and temporal learning of invariances.

Posted on September 26, 2010 by dileep

I read a very interesting recent paper from Jim DiCarlo's lab at MIT that gives another strong piece of experimental evidence for temporal-slowness-based learning of invariant representations. The paper, authored by Nuo Li and James DiCarlo, found that size tolerance in the visual cortex can be altered by changing the temporal statistics of the visual world. Their earlier studies showed the same effect for position tolerance; the new study shows that size-tolerance reshaping can be induced by naturally occurring dynamic visual experience, even without eye movements.

If we lived in a world where, whenever we saw a cat, we saw a dog right after, we would come to treat cats and dogs as the same thing! I am sure that Nuo's and Jim's experiments have left a lot of monkeys similarly confused.
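The cat-then-dog intuition above can be illustrated with a toy version of Foldiak's trace rule (a minimal sketch of my own, not the model from Li and DiCarlo's paper; all variable names and parameter values here are illustrative assumptions). A model neuron keeps a slowly decaying trace of its recent activity, and Hebbian learning against that trace binds together inputs that follow each other in time:

```python
import numpy as np

# Two "views" of the same object (think: the same shape at two sizes),
# plus an unrelated distractor, as toy 6-dimensional input patterns.
view_a     = np.array([1., 1., 0., 0., 0., 0.])
view_b     = np.array([0., 0., 1., 1., 0., 0.])
distractor = np.array([0., 0., 0., 0., 1., 1.])

w = np.full(6, 0.1)    # weights of a single model neuron
trace = 0.0            # slowly decaying trace of the neuron's activity
eta, decay = 0.1, 0.8  # learning rate, trace persistence

# Repeatedly present view_a immediately followed by view_b. The trace
# carries activity from the view_a step into the view_b step, so the
# Hebbian update wires BOTH views onto the same neuron, even though
# the two patterns never co-occur in a single frame.
for _ in range(500):
    for x in (view_a, view_b):
        y = w @ x
        trace = decay * trace + (1 - decay) * y
        w += eta * trace * (x - trace * w)  # trace rule with Oja-style decay

# After learning, the neuron responds to both views (size tolerance)
# but not to the distractor, which never appeared in that temporal context.
print(w @ view_a, w @ view_b, w @ distractor)
```

Swapping the temporal statistics (pairing view_a with the distractor instead) would wire a different pairing onto the neuron, which is the flavor of rewiring the paper reports in IT cortex.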

Unsupervised Natural Visual Experience Rapidly Reshapes Size-Invariant Object Representation in Inferior Temporal Cortex
Nuo Li and James J. DiCarlo
McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology



4 Responses to Perplexed monkeys and temporal learning of invariances.

  1. Dave says:
    September 29, 2010 at 10:44 am

    Hi Dileep,

    Thanks for the post, as usual. I have a pretty un-related question regarding HTMs:

    Say I wanted to train an HTM to do two things – say tell the difference between a dog and a cat and to tell the difference between running and walking. Does the HTM have some sort of intrinsic way of separating out knowledge of different types, or would I just have to use two different HTMs?

    I guess I am really wondering whether or not neurons have a way of differentiating themselves between different types of ideas – a mechanism for forming new connections, I suppose, instead of just modifying old ones -and if this is implemented somehow in HTMs.

I apologize if this question reflects an ignorance about cerebral function and HTMs, because it surely does.

    Thanks,
    Dave

    Reply
• dileep says:
      October 3, 2010 at 1:24 am

Dave, I don't think HTMs (or other similar algorithms) have any intrinsic way of separating out knowledge of different types as you describe. This would have to be conveyed to the network somehow. I think recognizing actions like walking and running would require a different hierarchy, with different parameters, than the object-recognition hierarchy.

      Reply
• Dave says:
        October 4, 2010 at 1:02 pm

        Thanks for the reply, Dileep! Just a thought – do you think that language might have something to do with this – i.e., language allows us to think more deeply about things, by providing a means of classification itself?

        Sean and I are discussing this over at his blog -

        htmwatch.blogspot.com/2010/10/complexity-of-problem-faced-by-numenta.html

        Reply
• dileep says:
      October 27, 2010 at 3:13 pm

      Dave,

Sorry for my late reply. No, there is no intrinsic way for the same HTM to learn both to distinguish running from walking and to distinguish dogs from cats. I think you would have to use two HTMs with different parameter settings. Your question is a really good one; you are thinking about the right things.

Enjoyed reading the post at htmwatch.

      Reply
