On using the Grossberg learning rule in CALM

12.23.2012 | Author: adriaan
Tags: calm, grossberg, neuralnetworks

Even though I have been working in software engineering for a large part of the past decade, I still periodically check what is going on in the field of Artificial Neural Networks. The other day, I bumped into an article that describes a variant of the so-called Categorization and Learning Module (CALM). This type of self-organizing neural network architecture was first proposed by Jaap Murre in “Learning and Categorization in Modular Neural Networks”. I have written an introduction to CALM that gives a quick overview of what this neural network is about.

The article that I found, “Self-Organizing Neural Networks for Signal Recognition” by Koutnik & Snorek in the Proceedings of ICANN 2006, replaces the learning rule with a simpler one:

    Δw_ij(t+1) = μ_t a_i (a_j − w_ij(t))

Compared to the original:

    Δw_ij(t+1) = μ_t a_i ([K_max − w_ij(t)] a_j − L (w_ij(t) − K_min) Σ_f w_if(t) a_f), where f ≠ j in the summation

This works fine if you have orthogonal input vectors, or at least vectors with very little overlap. However, the basic principle of the Grossberg learning rule is that when calculating the new weight, you need to take the total excitation of the receiving node into account. This makes it possible to discriminate between different inputs, so you can map even non-orthogonal inputs onto separate categories. This can be demonstrated by training a two-node CALM module to categorize 100 input vectors of length 2, with each element randomly set to a value between 0.0 and 1.0. The optimal categorization is always one where the first node is the most active for one class of inputs and the second node is the most active for the other, i.e. as if the inputs were [1, 0] versus [0, 1]. Nature rarely is that black and white, so a neural network must be able to get something useful out of a fuzzy sample.
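To make the difference concrete, here is a minimal Python sketch of the two update rules, using the indices from the formulas above. The parameter values (K_min, K_max, L, μ) and the crude winner-take-all stand-in for CALM's competitive dynamics are my own placeholders, not the values from the papers:

import numpy as np

K_MIN, K_MAX, L, MU = 0.0, 1.0, 0.5, 0.05  # placeholder parameter values

def koutnik_snorek_update(w, a_in, a_out, mu=MU):
    # Δw_ij = μ a_i (a_j − w_ij): each weight simply drifts toward the
    # receiving activation, ignoring what the competing nodes are doing.
    return w + mu * a_in[:, None] * (a_out[None, :] - w)

def grossberg_update(w, a_in, a_out, mu=MU):
    # Δw_ij = μ a_i ([K_max − w_ij] a_j − L (w_ij − K_min) Σ_{f≠j} w_if a_f):
    # the inhibitory sum ties each update to the excitation of the
    # competing nodes, so the weight change shrinks while competition lasts.
    total = w @ a_out                             # Σ_f w_if a_f per sending node i
    inhib = total[:, None] - w * a_out[None, :]   # drop the f == j term
    dw = mu * a_in[:, None] * ((K_MAX - w) * a_out[None, :]
                               - L * (w - K_MIN) * inhib)
    return w + dw

# One pass over 100 random length-2 inputs, as in the experiment below.
rng = np.random.default_rng(0)
w = rng.random((2, 2))
for _ in range(100):
    p = rng.random(2)
    act = p @ w
    a_out = np.where(act == act.max(), 1.0, 0.1)  # toy winner-take-all
    w = grossberg_update(w, p, a_out)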

I ran two versions of CALM 20 times, one with the Grossberg learning rule and one with the Koutnik & Snorek version. I trained the network for just one pass over 100 random input patterns and then plotted the response for all possible values of [x, y] in steps of 0.005. This gives us so-called convergence maps, in which red is node 1, blue is node 2 and black is non-convergence. In the latter case, the nodes oscillate without ever converging. The first image below shows the network using the Grossberg rule; the second shows the Koutnik & Snorek variant.
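For reference, the scan that produces such a map looks roughly like the sketch below. The categorize function is a toy stand-in for a trained CALM module (crude cross-inhibition plus clipping, not the real CALM dynamics); only the grid scan and the colour coding mirror the actual procedure:

import numpy as np
import matplotlib.pyplot as plt

def categorize(w, pattern, inhibition=0.8, max_iter=200, tol=1e-4):
    # Toy settling loop: returns the winning node (0 or 1) on convergence,
    # or None when the activations keep oscillating.
    a = np.full(2, 0.5)
    for _ in range(max_iter):
        net = w.T @ pattern - inhibition * a[::-1]  # cross-inhibition
        a_new = np.clip(net, 0.0, 1.0)
        if np.max(np.abs(a_new - a)) < tol:
            return int(np.argmax(a_new))
        a = a_new
    return None

w = np.random.rand(2, 2)  # stand-in for the trained weights

step = 0.005
xs = np.arange(0.0, 1.0 + step, step)
img = np.zeros((len(xs), len(xs), 3))
for ix, x in enumerate(xs):
    for iy, y in enumerate(xs):
        winner = categorize(w, np.array([x, y]))
        if winner == 0:
            img[iy, ix] = [1, 0, 0]   # red: node 1
        elif winner == 1:
            img[iy, ix] = [0, 0, 1]   # blue: node 2
        # black (all zeros): non-convergence

plt.imshow(img, origin="lower", extent=[0, 1, 0, 1])
plt.xlabel("x")
plt.ylabel("y")
plt.show()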

Grossberg:

[convergence map image]

Koutnik & Snorek:

[convergence map image]

Eyeballing these convergence maps tells us that CALM networks using the Grossberg learning rule always manage to map the input space to categories. In most cases they even approach the optimal categorization. The networks using the Koutnik & Snorek variant depend too heavily on what the first few input patterns are. When those do not overlap, the network finds a good categorization. But it adjusts its weights too strongly, so it either ends up favoring one node or fails to distinguish between overlapping inputs (as is quite apparent in the first convergence map).

Again, it is not a big issue if you stick to orthogonal patterns, but more often than not, sensory inputs are not orthogonal. In those cases it matters how much activation the other nodes have, i.e. how much competition there still is between the nodes, and weights should only be adapted slightly while that competition is undecided. That way the network can correct itself more quickly as the input space grows.

I was quite pleased, though, that Koutnik & Snorek used the Gaussian function for calculating the learning rate that I introduced in “Connection Science 2003, Vol. 15, No. 1”. It means my research has not been all that useless.

Look ma, no passwords!

01.31.2012 | Author: adriaan
Tags: 10.7, mercurial, python

Here’s a quick and easy way to handle authenticated Mercurial repositories on Mac OS X 10.6 and later. First, install the mercurial_keyring Python package using pip: `sudo pip install mercurial_keyring`. Then update ~/.hgrc as follows:

[ui]
username = Foo Barred <[email protected]>

[extensions]
# enable the keyring extension installed above
mercurial_keyring =

[auth]
# 'foo' is an arbitrary group name; the prefix is matched against repository URLs
foo.prefix = https://hg.bar.com
foo.username = someusername

You’ll then have to enter the password one more time, but after that hg will stop bothering you. However, if the authenticated Mercurial repository is also served with an unverified certificate, it’s a good idea to also execute the following commands so that you won’t have to stare at certificate warning messages from hg:

# generate a self-signed dummy certificate, valid for ten years
openssl req -new -x509 -extensions v3_ca -keyout /dev/null -out dummycert.pem -days 3650
sudo cp dummycert.pem /etc/hg-dummy-cert.pem

Once executed, add the following to the ~/.hgrc file:

[web]
cacerts = /etc/hg-dummy-cert.pem

Environment

10.24.2011 | Author: adriaan
Tags: erlang, macosx, ssh

Really just a note to myself, but I bet someone else might run into the same issue I had.

I’m revisiting the disco project, a nifty Python/Erlang-based map-reduce framework that I toyed with a few years back. It’s a few versions ahead now, with many improvements, and it sports a distributed key-value storage system. Alas, it wouldn’t run straight out of the box for me, failing to connect to an Erlang instance. As it happened, Erlang was installed using homebrew, which means the binary was not in a default path. Running a command via “ssh localhost”, as disco does, bypasses the user’s bash profile. Either you symlink the erl executable to /usr/bin/erl, or you create an ~/.ssh/environment file with a proper PATH definition (chmod 644). To get the latter to work, /etc/sshd_config must be edited so that the line containing “PermitUserEnvironment” is uncommented and set to ‘yes’.
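Spelled out, the two options look something like this (the /usr/local paths are an assumption based on a default homebrew install; adjust to your setup):

# Option 1: symlink the homebrew-installed binary into a default path
sudo ln -s /usr/local/bin/erl /usr/bin/erl

# Option 2: give non-interactive ssh sessions a proper PATH
echo 'PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin' > ~/.ssh/environment
chmod 644 ~/.ssh/environment

# ...and enable it in /etc/sshd_config by uncommenting/setting:
#   PermitUserEnvironment yes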
