Papers

  • 2006: A Neural Model for Context-dependent Sequence Learning

    A novel neural network model is described that implements context-dependent learning of complex sequences. The model utilises leaky integrate-and-fire neurons to extract timing information from its input and modifies its weights using a learning rule with synaptic noise. Learning and recall phases are seamlessly integrated so that the network can gradually shift from learning to predicting its input. Experimental results using data from a real-world problem domain demonstrate that the use of context has three important benefits: (a) it prevents catastrophic interference during learning of multiple overlapping sequences, (b) it enables the completion of sequences from missing or noisy patterns, and (c) it provides a mechanism to selectively explore the space of learned sequences during free recall.

    Manuscript co-authored with
    Luc Berthouze.
    Published in Neural Processing Letters, Volume 23, Number 1, Feb. 2006, pp. 27–45 [download]
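
    The abstract above mentions two concrete ingredients: leaky integrate-and-fire neurons and a learning rule with synaptic noise. The following minimal sketch illustrates only those two ideas; all parameter values and the additive-Gaussian form of the noise are assumptions, not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_step(v, i_in, tau=20.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire membrane.
    Returns the updated potentials and a boolean spike mask."""
    v = v + dt / tau * (v_rest - v + i_in)
    spiked = v >= v_thresh
    v = np.where(spiked, v_rest, v)  # reset fired neurons to rest
    return v, spiked

def noisy_hebb(w, pre, post, eta=0.05, sigma=0.01):
    """Hebbian weight update with additive synaptic noise
    (hypothetical form; the paper's exact rule is not reproduced)."""
    return w + eta * np.outer(post, pre) + sigma * rng.standard_normal(w.shape)
```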

  • 2004: Sequential Information Processing using Time-Delay Connections in Ontogenic CALM networks

    A variant of the Categorization-And-Learning-Module network is presented that is not only capable of categorizing sequential information with feedback, but can adapt its resources to the current training set. In other words, the modules of the network may grow or shrink depending on the complexity of the presented sequence-set. In the original CALM algorithm, modules did not have access to activations from earlier stimulus presentations. To bypass this limitation, we introduced time-delay connections in CALM. These connections allow for a delayed propagation of activation, such that information at a given time will be available to a module at a later timestep. In addition, modules can autonomously add and remove resources depending on the structure and complexity of the task domain. The performance of this ontogenic CALM network with time-delay connections is demonstrated and analyzed using a sample set of overlapping sequences from an existing problem domain.

    Published in IEEE Transactions on Neural Networks, Volume 16, Issue 1, Jan. 2005, pp. 145–159 [download]
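
    The time-delay connections described above can be pictured as a short buffer that releases each activation vector a fixed number of timesteps after it entered. The sketch below is a schematic stand-in only; the class name and the fixed-delay handling are illustrative assumptions, not the CALM mechanism itself.

```python
from collections import deque
import numpy as np

class TimeDelayLine:
    """Delays activation vectors by `delay` (>= 1) timesteps, so a module
    receives input that was presented `delay` steps in the past
    (schematic stand-in for CALM time-delay connections)."""
    def __init__(self, size, delay):
        self.buffer = deque([np.zeros(size)] * delay, maxlen=delay)

    def step(self, activation):
        delayed = self.buffer[0]  # oldest entry leaves the line
        self.buffer.append(np.asarray(activation, dtype=float))
        return delayed
```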

  • 2003: Improved Self-Organization with an Alternative Inhibition Gradient in CALMMap Networks

    Phaf, Den Dulk, Tijsseling & Lebert (2001) described a self-organizing variant of the CALM modular neural network model (Murre, Phaf, & Wolters, 1992), which has been coined CALMMap. The self-organization mechanism is driven by a specific inhibition gradient formula. The disadvantage of this formula is that it introduces new parameters and is not optimally adjusted to the size of a module. This research note provides an alternative to this inhibition gradient formula with the benefit that it operates autonomously: no extra parameters are introduced, and the strength of inhibition adapts to the size of a module. Simulations are described to show the effectiveness of the alternative inhibition gradient function.

    Published in
    Connection Science, Volume 15, Number 1, March 2003, pp. 63–71 [download]

  • 2002: Acquiring ontological categories through interaction

    We present an experimental framework for the study of a computational agent’s acquisition of ontological categories through interaction with a human agent. This interaction takes the form of joint attention in a shared visual environment. Through a game of guessing words, the computational agent adapts its categorization system – a CALM-based neural architecture with a set of Gabor filters – and develops a shared lexicon with the human agent. This paper reports initial experimental results.

    Manuscript co-authored with
    Luc Berthouze.
    Published in The Journal of Three Dimensional Images, 16(4), December 2002, pp. 141–147. Also featured at the Fifth International Conference on Humans and Computers, Aizu, Japan, September 11–14, 2002 [download]

  • 2001: Embodiment is meaningless without adequate neural dynamics

    Traditionally, cognition has been considered a domain of purely mental processes. Recently, however, there is a growing consensus that cognition should be embodied, i.e. that it emerges from physical interaction with the world through a body with given perceptual and motor abilities. Terms like emergence, enaction, grounding, and situatedness are often used, but little attention is being paid to actually understanding the neural-dynamics correlates of the emergence of cognition. Nor is much being done to investigate how the structure of the body-environment coupling is perceived and manipulated by our brain. It is as if talking about neural dynamics would somehow throw us back to the old cognitivist approach. In this paper we present a balanced view, in which we try to keep things in their respective places.

    Manuscript co-authored with
    Luc Berthouze.
    Published as a DECO-2001 conference paper, May 31, 2001 [download]

  • 2001: A neural network for temporal sequential information

    We report a neural network model that is capable of learning arbitrary input sequences quickly and online. It is also capable of completing a sequence upon cueing from any point in the sequence and in the presence of background noise. The architecture of the neural network utilizes sigmoid-pulse generating spiking neurons together with a Hebbian learning rule with synaptic noise.

    Manuscript co-authored with
    Luc Berthouze.
    Published in the Proceedings of ICONIP 2001, Shanghai, China, July 11, 2001. [download]

  • 2002: A Connectionist Approach to Processing Dimensional Interaction

    The difference between the integral and separable interaction of dimensions is a classic problem in cognitive psychology (Garner, 1970; Shepard, 1964) and remains an essential component of most current experimental and theoretical analyses of category learning (e.g. Ashby & Maddox, 1994; Goldstone, 1994; Kruschke, 1993; Melara, Marks, & Potts, 1993; Nosofsky, 1992). So far, however, the problem has been addressed through post-hoc analysis in which empirical evidence of integral and separable processing is used to fit human data, showing how the impact of a pair of dimensions interacting in an integral or separable manner enters into later learning processes. In this paper, we argue that a mechanistic connectionist explanation for variations in dimensional interactions is possible by exploring how similarities between stimuli are transformed from physical to psychological space when learning to identify, discriminate, and categorize them. We substantiate this claim by demonstrating how even a simple backprop auto-encoder combined with an image-filtering input layer already provides a limited but clear potential to process monochromatic stimuli that are composed of integral pairs of dimensions differently from monochromatic stimuli that are composed of separable pairs of dimensions. In addition, we show that adding a basic attention mechanism to backprop will give it the ability to selectively attend to relevant dimensions.

    Manuscript co-authored with
    Mark Gluck.
    Published in
    Connection Science, 14(1), 2002, pp. 1-48. [download]
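
    The "backprop auto-encoder" with an attention mechanism mentioned above might be sketched as follows. The network sizes, learning rate, and the per-dimension attention gains are assumptions chosen for illustration, not the architecture or filter bank from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyAutoencoder:
    """Minimal backprop auto-encoder with a learnable per-dimension
    attention gain on its input (schematic sketch only)."""
    def __init__(self, n_in, n_hidden, lr=0.2):
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W2 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.attention = np.ones(n_in)  # per-dimension input gain
        self.lr = lr

    def forward(self, x):
        xa = self.attention * x
        h = sigmoid(self.W1 @ xa)
        y = sigmoid(self.W2 @ h)
        return xa, h, y

    def train_step(self, x):
        xa, h, y = self.forward(x)
        err = y - x  # reconstruction error
        # backprop through output and hidden layers
        d_out = err * y * (1 - y)
        d_hid = (self.W2.T @ d_out) * h * (1 - h)
        self.W2 -= self.lr * np.outer(d_out, h)
        self.W1 -= self.lr * np.outer(d_hid, xa)
        # attention gains follow the gradient through the scaled input
        self.attention -= self.lr * (self.W1.T @ d_hid) * x
        return float((err ** 2).mean())
```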

  • 1998: Connectionist Models of Categorization: A Dynamical View of Cognition

    The functional role of altered similarity structure in categorization is analyzed. “Categorical Perception” (CP) occurs when equal-sized physical differences in the signals arriving at our sensory receptors are perceived as smaller within categories and larger between categories (Harnad, 1987). Our hypothesis is that it is by modifying the similarity between internal representations that successful categorization is achieved. This effect depends in part on the iconicity of the inputs, which induces a similarity preserving structure in the internal representations. Categorizations based on the similarity between stimuli are easier to learn than contra-iconic categorization; it is mainly to modify the latter in the service of categorization that the characteristic compression/separation of CP occurs.

        This hypothesis was tested in a series of neural net simulations of studies on category learning in human subjects. The nets were first pre-exposed to the inputs and then given feedback on their performance. The behavior of the resulting networks was then analyzed and compared to human performance.

        Before it is given feedback, the network discriminates and categorizes input based on the inherent similarity of the input structure. With corrective feedback the net moves its internal representations away from category boundaries. The effect is that similarity of patterns that belong to different categories is decreased, while similarity of patterns from the same category is increased (CP). Neural net simulations make it possible to look inside a hypothetical black box of how categorization may be accomplished; it is shown how increased attention to one or more dimensions in the input and the salience of input features affect category learning.

        Moreover, the observed warping of similarity space in the service of categorization can provide useful functionality by creating compact, bounded chunks with category names that can then be combined into higher-order categories described by the symbol strings of natural language and the language of thought. The dynamic models of categorization of the kind analyzed here can be extended to make them powerful models of neuro-symbolic processing and a fruitful territory for future research.

    Ph.D. thesis conducted under supervision of
    Stevan Harnad and examined by Jaap Murre and Itiel Dror. [download thesis and color figures]
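
    The compression/separation effect of CP described in the thesis abstract is typically quantified by comparing within-category and between-category distances of internal representations before and after category training. A minimal measure (illustrative only, not the thesis's exact analysis) could be:

```python
import numpy as np

def compression_separation(reps, labels):
    """Mean within-category and between-category Euclidean distances
    of internal representations; CP predicts that training shrinks
    the former and grows the latter."""
    reps = np.asarray(reps, dtype=float)
    labels = np.asarray(labels)
    within, between = [], []
    for i in range(len(reps)):
        for j in range(i + 1, len(reps)):
            d = float(np.linalg.norm(reps[i] - reps[j]))
            (within if labels[i] == labels[j] else between).append(d)
    return float(np.mean(within)), float(np.mean(between))
```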

  • 1998: Do Features Arise out of Nothing?

    This commentary questions the validity of the claim that new features can apparently be constructed out of nothing during categorization. Rather, it is argued that a minimal set of fixed features based on what human beings are able to detect is sufficient for categorization.

    Commentary on Schyns, Goldstone, & Thibaut (1998), “The Development of Features in Object Concepts”,
    Behavioral and Brain Sciences. [download]

  • 2001: Novelty-dependent learning and topological mapping

    Unsupervised topological ordering, similar to Kohonen’s (1982, Biological Cybernetics, 43: 59-69) self-organizing feature map, was achieved in a connectionist module for competitive learning (a CALM Map) by internally regulating the learning rate and the size of the active neighbourhood on the basis of input novelty. In this module, winner-take-all competition and the ‘activity bubble’ are due to graded lateral inhibition between units. It tends to separate representations as far apart as possible, which leads to interpolation abilities and an absence of catastrophic interference when the interfering set of patterns forms an interpolated set of the initial data set. More than the Kohonen maps, these maps provide an opportunity for building psychologically and neurophysiologically motivated multimodular connectionist models. As an example, the dual pathway connectionist model for fear conditioning by Armony et al. (1997, Trends in Cognitive Science, 1: 28-34) was rebuilt and extended with CALM Maps. If the detection of novelty enhances memory encoding in a canonical circuit, such as the CALM Map, this could explain the finding of large distributed networks for novelty detection (e.g. Knight and Scabini, 1998, Journal of Clinical Neurophysiology, 15: 3-13) in the brain.

    Manuscript co-authored with Hans Phaf, Paul Den Dulk, and Ed Lebert.
    Published in
    Connection Science, 13(4), 2001, pp. 293-321. [download]
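
    The novelty-based regulation of learning rate and neighbourhood size described above can be sketched in a Kohonen-style map as follows. The specific novelty formula and all parameters are assumptions for illustration, not the CALM Map equations from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

class NoveltySOM:
    """Kohonen-style 1-D map whose learning rate and neighbourhood
    width both scale with input novelty (schematic sketch only)."""
    def __init__(self, n_units, n_in):
        self.w = rng.random((n_units, n_in))
        self.pos = np.arange(n_units, dtype=float)

    def step(self, x, base_lr=0.5, base_width=2.0):
        d = np.linalg.norm(self.w - x, axis=1)
        winner = int(np.argmin(d))
        novelty = d[winner] / (1.0 + d[winner])  # squashed into [0, 1)
        lr = base_lr * novelty                   # novel inputs learn faster
        width = base_width * novelty + 1e-6      # and recruit wider bubbles
        nb = np.exp(-((self.pos - winner) ** 2) / (2 * width ** 2))
        self.w += lr * nb[:, None] * (x - self.w)
        return winner, novelty
```

    Repeated presentation of the same input drives its novelty toward zero, so learning slows down for familiar patterns without any external schedule.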

  • 1998: Movement in a Solid Sphere

    Margolis argues that a solid-spheres version of the Tycho model of planetary orbits is indeed possible, and only appears impossible because of a cognitive illusion. This is not quite the case, as can be clearly seen from an animated version of the Tychonic system.

    Psycoloquy commentary co-authored with
    Dr. Marcus Munafo, Oxford University, England. [download]

  • 1998: Categories as Attractors

    Poster presented at the Second International Conference On Cognitive And Neural Systems, Boston University, May 27-30, 1998.

    Co-authored with Rachel Pevtzow, Mike Casey, and
    Stevan Harnad. [download]

  • 1997: Warping similarity space in category learning by backprop nets

    This paper describes a series of simulations that reproduce those of Harnad, Hanson, & Lubin (1991, 1994), but subject them to more detailed analyses and extend them to cover more complex input patterns. The findings suggest a possible mechanism for categorical perception based on altering interstimulus similarity.

    Manuscript co-authored with
    Stevan Harnad.
    Published in Proceedings of the Interdisciplinary Workshop on Similarity and Categorisation, University of Edinburgh, November 1997. [download]

