Archive for December, 2012

I am taking a holiday

Monday, December 24th, 2012

Over the holidays and through January I will be taking a break from posting to Thoughts on Thoughts. This is because I need a break after four and a half years, I will be moving house, and my internet connection will be interrupted for a period sometime in January. I will begin posting again, with renewed enthusiasm, in February.

Merry Christmas and a happy New Year to all my visitors. See you in Feb.

Posted in Uncategorized | 3 Comments »

Attention is not a simple thing

Saturday, December 22nd, 2012

A recent paper (citation below) by a Canadian group led by J. Kam has looked at the effects of mind wandering on motor adjustments during a task. Among other interesting results, they indicate that the top-down control of attention is complex and not a single process. Nothing is ever as simple as it first appears.

 

In their conclusions, they write:

In particular, mind wandering is a phenomenon that spans an extended period of time (i.e., fluctuations of 10–15 s) exceeding a given single event, whereas attentional lapses tend to occur during a much narrower time window capturing the lapse at a single event level. Several recent theoretical and empirical papers have supported and validated these two related models of attention. Specifically, at a theoretical level, Dosenbach and colleagues have suggested there are multiple controlling systems operating at multiple scales of time. Further, in terms of empirical evidence, the findings of Esterman and colleagues suggested the occurrence of two attentional states—one tied to the default mode network (reflective of mind wandering) that is more stable and less error prone in terms of behavioral measures, and a second one tied to the dorsal attention network (reflective of attentional lapses) that requires more effortful processing. That the effects of mind wandering appear to parallel effects of attentional lapses actually lends support to the notion that task-related attention (or mind wandering) and selective attention (or attentional lapses) may exert similar forms of top–down attentional control on other neurocognitive processes. In the case of attentional control of sensory response, it has been suggested that there are at least two distinct control systems operating in parallel—one associated with rapid shifts of selective visual attention and another one associated with slower fluctuations in task-related attention. In the case of behavioral control, that Weissman and colleagues have demonstrated that attentional lapses impair goal-directed behavior and are associated with reduced pre-stimulus activation in the anterior cingulate cortex and that we found impaired adjustment of behavioral control are consistent with the idea that varying attentional control systems appear to have similar impact on various neurocognitive processes.
Taken together, mind wandering and attentional lapses do appear to be related conceptually, but future work needs to be done to disentangle the overlaying attentional influences linked to dissociable neural systems.

 

Here is the abstract:

Mind wandering episodes have been construed as periods of “stimulus-independent” thought, where our minds are decoupled from the external sensory environment. In two experiments, we used behavioral and event-related potential (ERP) measures to determine whether mind wandering episodes can also be considered as periods of “response-independent” thought, with our minds disengaged from adjusting our behavioral outputs. In the first experiment, participants performed a motor tracking task and were occasionally prompted to report whether their attention was “on-task” or “mind wandering.” We found greater tracking error in periods prior to mind wandering vs. on-task reports. To ascertain whether this finding was due to attenuation in visual perception per se vs. a disruptive effect of mind wandering on performance monitoring, we conducted a second experiment in which participants completed a time-estimation task. They were given feedback on the accuracy of their estimations while we recorded their EEG, and were also occasionally asked to report their attention state. We found that the sensitivity of behavior and the P3 ERP component to feedback signals were significantly reduced just prior to mind wandering vs. on-task attentional reports. Moreover, these effects co-occurred with decreases in the error-related negativity elicited by feedback signals (fERN), a direct measure of behavioral feedback assessment in cortex. Our findings suggest that the functional consequences of mind wandering are not limited to just the processing of incoming stimulation per se, but extend as well to the control and adjustment of behavior.

Kam, J., Dao, E., Blinn, P., Krigolson, O., Boyd, L., & Handy, T. (2012). Mind wandering and motor control: off-task thinking disrupts the online adjustment of behavior. Frontiers in Human Neuroscience, 6. DOI: 10.3389/fnhum.2012.00329

Posted in attention | 1 Comment »

Memory and retrieval compared

Wednesday, December 19th, 2012

ScienceDaily has an item (here) on memory retrieval by K. K. Tayler and others in Current Biology, Reactivation of Neural Ensembles during the Retrieval of Recent and Remote Memory.

It has been assumed that the hippocampus stores an event by keeping a key to the group of cortical neurons that are active during the event. Retrieving a memory is then the hippocampus reactivating that particular set of neurons using its key. But this has been difficult to demonstrate experimentally. This research provides a method for testing that picture of memory.

Tayler used a genetically modified mouse that carries a gene for a modified green fluorescent protein. When nerve cells in the mouse are activated, they produce a long-lived green fluorescence that persists for weeks, as well as a short-lived red fluorescence that decays in a few hours. However, the whole system can be suppressed by dosing the mouse with the antibiotic doxycycline, so Tayler and Wiltgen could manipulate the point at which they started tagging activated cells.

Using this system, they were able to track the formation of memories and their retrieval. It indeed appears to confirm the theory.

About 40 percent of the cells in the hippocampus that were tagged during initial memory formation were reactivated (during retrieval), Wiltgen said. There was also reactivation of cells in parts of the brain cortex associated with place learning and in the amygdala, which is important for emotional memory.
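The reactivation figure quoted above is, at bottom, a set overlap: of the cells tagged during encoding, what fraction light up again at retrieval? A minimal sketch of that bookkeeping, with entirely invented cell IDs chosen only to land near the ~40 percent figure:

```python
# Toy illustration of the reactivation measure: the fraction of cells
# tagged during memory formation (the long-lived green signal) that are
# active again during retrieval (the short-lived red signal).
# All cell IDs below are fabricated for illustration.

def reactivation_fraction(tagged_at_encoding, active_at_retrieval):
    """Fraction of encoding-tagged cells that are reactivated at retrieval."""
    tagged = set(tagged_at_encoding)
    if not tagged:
        return 0.0
    return len(tagged & set(active_at_retrieval)) / len(tagged)

# Hypothetical example: 10 tagged cells, 4 of them active again -> 0.4,
# in the ballpark of the ~40% hippocampal reactivation quoted above.
tagged = range(10)
retrieved = [0, 2, 5, 7, 11, 12]
print(reactivation_fraction(tagged, retrieved))  # 0.4
```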

Here is their abstract:

Background:

Episodic memories are encoded within hippocampal and neocortical circuits. Retrieving these memories is assumed to involve reactivation of neural ensembles that were established during learning. Although it has been possible to follow the activity of individual neurons shortly after learning, it has not been possible to examine their activity weeks later during retrieval. We addressed this issue by using a stable form of GFP (H2B-GFP) to permanently tag neurons that are active during contextual fear conditioning.

Results:

H2B-GFP expression in transgenic mice was increased by learning and could be regulated by doxycycline (DOX). Using this system, we found a large network of neurons in the hippocampus, amygdala, and neocortex that were active during context fear conditioning and subsequent memory retrieval 2 days later. Reactivation was contingent on memory retrieval and was not observed when animals were trained and tested in different environments. When memory was retrieved several weeks after learning, reactivation was altered in the hippocampus and amygdala but remained unchanged in the cortex.

Conclusions:

Retrieving a recently formed context fear memory reactivates neurons in the hippocampus, amygdala, and cortex. Several weeks after learning, the degree of reactivation is altered in hippocampal and amygdala networks but remains stable in the cortex.

Posted in Uncategorized | 1 Comment »

Simulating others

Sunday, December 16th, 2012

Riken Research (text) has news of a paper by S. Suzuki and others, Learning to simulate others’ decisions, in Neuron.

The ability to predict the actions of others is known as ‘theory of mind’. In effect, we model their thoughts. Does this simulation of another’s thought use the same neural processes as our own thinking?

(The researchers) used functional magnetic resonance imaging to scan participants’ brains while they performed two simple decision-making tasks. In one, they were shown pairs of visual stimuli and had to choose the ‘correct’ one from each, based on randomly assigned reward values. In the second, they had to predict other people’s decisions for the same task.

The researchers confirmed that the participants’ own decision-making circuits were recruited to predict others’ decisions. The scans showed that their brains simultaneously tracked how other people behaved when presented with each pair of stimuli, and the rewards they received.

“We showed that simple simulation is not enough [to predict other peoples’ decisions], and that the simulated other’s action prediction error is used to track variations in another person’s behavior”

The two steps here appear to be: (1) what would be the right decision if I were making that decision, and (2) how does the other person’s pattern of action differ from mine?
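Those two steps map naturally onto the paper’s two learning signals: a simulated-other’s reward prediction error (use my own valuation process for the other) and a simulated-other’s action prediction error (correct for how the other actually chooses). A toy caricature of one such update, not the authors’ fitted model, with invented learning rates and values:

```python
# Toy sketch of simulation learning with two hierarchical signals:
# (1) a simulated-other's reward prediction error updates the values the
#     other is presumed to assign to each option, and
# (2) a simulated-other's action prediction error updates how strongly
#     those value differences predict the other's actual choices.
# An illustrative caricature only; all parameter values are invented.
import math

def softmax_p(v_chosen, v_other, beta):
    """Probability assigned to the chosen option, given values and inverse temperature."""
    return 1.0 / (1.0 + math.exp(-beta * (v_chosen - v_other)))

values = {"A": 0.5, "B": 0.5}    # simulated-other's values (start neutral)
beta = 1.0                       # how deterministic we assume the other is
alpha_v, alpha_b = 0.3, 0.1      # invented learning rates

# One observed trial: the other chose A and was rewarded.
choice, unchosen, reward = "A", "B", 1.0

# (1) simulated-other's reward prediction error
rpe = reward - values[choice]
values[choice] += alpha_v * rpe

# (2) simulated-other's action prediction error: the observed choice (1)
#     minus the probability our simulation assigned to that choice
ape = 1.0 - softmax_p(values[choice], values[unchosen], beta)
beta += alpha_b * ape            # the simulation tracks behavioral variation

print(values, round(beta, 3))
```

The first error refines what the other values; the second tracks how the other’s behavior departs from the simulation, which is the paper’s point that direct recruitment alone is not enough.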

Here is the abstract:

A fundamental challenge in social cognition is how humans learn another person’s values to predict their decision-making behavior. This form of learning is often assumed to require simulation of the other by direct recruitment of one’s own valuation process to model the other’s process. However, the cognitive and neural mechanism of simulation learning is not known. Using behavior, modeling, and fMRI, we show that simulation involves two learning signals in a hierarchical arrangement. A simulated-other’s reward prediction error processed in ventromedial prefrontal cortex mediated simulation by direct recruitment, being identical for valuation of the self and simulated-other. However, direct recruitment was insufficient for learning, and also required observation of the other’s choices to generate a simulated-other’s action prediction error encoded in dorsomedial/dorsolateral prefrontal cortex. These findings show that simulation uses a core prefrontal circuit for modeling the other’s valuation to generate prediction and an adjunct circuit for tracking behavioral variation to refine prediction.

Posted in Uncategorized | 1 Comment »

What is memory for anyway?

Thursday, December 13th, 2012

It is almost inconceivable that a biological function would be dedicated to the past rather than the future of an organism. The only use for knowledge of the past is to prepare for a ‘good’ future by: learning from past experience, using the past to predict the future, judging choices by past outcomes, imagining possibilities and so on. A lot of research has gone into looking at how well memory records the past. Only a little research seems to ignore that and look at how well memory provides for a successful future. A recent review by D. Schacter (see Citation) looks at the research into the future aspect of memory.

 

The article points to some older research showing that the same core regions of the default network are used for memory of the past and imagining of the future, and that people with some types of amnesia also have difficulty imagining novel situations. But the body of the article deals with newer research.

Specifically, we have organized the literature with respect to four key points that have emerged from research reported during the past five years: (1) it is important to distinguish between temporal and nontemporal factors when conceptualizing processes involved in remembering the past and imagining the future; (2) despite impressive similarities between remembering the past and imagining the future, theoretically important differences have also emerged; (3) the component processes that comprise the default network supporting memory-based simulations are beginning to be identified; and (4) this network can couple flexibly with other networks to support complex goal-directed simulations. We will conclude by considering briefly several other emerging points that will be important to expand on in future research.

 

There is a very interesting distinction made about time. We have past, present and future; we can imagine various time relationships, such as imagining some time in the future from the perspective of looking back at it from even further in the future. But we can also abandon identifying a particular time when we imagine. For example, we can simulate what it would be like to be in another’s shoes or to be in a different place. Instead of time-traveling, we can space-travel or identity-travel. It seems that the evidence so far implies that future and atemporal imagined events are represented similarly, but that there are differences between temporal and atemporal imaginings. I find this distinction very interesting and something I had not really thought about before.

 

Another interesting idea (one I have thought about) is discussed, along with evidence for it.

The constructive episodic simulation hypothesis states that a critical function of a constructive memory system is to make information available in a flexible manner for simulation of future events. Specifically, the hypothesis holds that past and future events draw on similar information and rely on similar underlying processes, and that the episodic memory system supports the construction of future events by extracting and recombining stored information into a simulation of a novel event. While this adaptive function allows past information to be used flexibly when simulating alternative future scenarios, the flexibility of memory may also result in vulnerability to imagination-induced memory errors, where imaginary events are confused with actual events. … a process of ‘‘scene construction’’ is critically involved in both memory and imagination. Scene construction entails retrieving and integrating perceptual, semantic, and contextual information into a coherent spatial context. Scene construction is held to be more complex than ‘‘simple’’ visual imagery for individual objects because it relies on binding together disparate types of information into a coherent whole.

 

There is much more of interest in this review. If you are interested in memory or simulations or the default network – read the original paper.

 

Here is the abstract:

During the past few years, there has been a dramatic increase in research examining the role of memory in imagination and future thinking. This work has revealed striking similarities between remembering the past and imagining or simulating the future, including the finding that a common brain network underlies both memory and imagination. Here, we discuss a number of key points that have emerged during recent years, focusing in particular on the importance of distinguishing between temporal and non-temporal factors in analyses of memory and imagination, the nature of differences between remembering the past and imagining the future, the identification of component processes that comprise the default network supporting memory-based simulations, and the finding that this network can couple flexibly with other networks to support complex goal-directed simulations. This growing area of research has broadened our conception of memory by highlighting the many ways in which memory supports adaptive functioning.

 

Schacter, D., Addis, D., Hassabis, D., Martin, V., Spreng, R., & Szpunar, K. (2012). The Future of Memory: Remembering, Imagining, and the Brain. Neuron, 76(4), 677-694. DOI: 10.1016/j.neuron.2012.11.001

Posted in memory | 1 Comment »

More about dendritic spines

Monday, December 10th, 2012

There is a good posting at CellularScale (here) where they look at a Nature paper by Fu and others, Repetitive motor learning induces coordinated formation of clustered dendritic spines in vivo. It is an interesting posting; have a read.

Here is the abstract from the original paper:

Many lines of evidence suggest that memory in the mammalian brain is stored with distinct spatiotemporal patterns. Despite recent progresses in identifying neuronal populations involved in memory coding, the synapse-level mechanism is still poorly understood. Computational models and electrophysiological data have shown that functional clustering of synapses along dendritic branches leads to nonlinear summation of synaptic inputs and greatly expands the computing power of a neural network. However, whether neighbouring synapses are involved in encoding similar memory and how task-specific cortical networks develop during learning remain elusive. Using transcranial two-photon microscopy, we followed apical dendrites of layer 5 pyramidal neurons in the motor cortex while mice practised novel forelimb skills. Here we show that a third of new dendritic spines (postsynaptic structures of most excitatory synapses) formed during the acquisition phase of learning emerge in clusters, and that most such clusters are neighbouring spine pairs. These clustered new spines are more likely to persist throughout prolonged learning sessions, and even long after training stops, than non-clustered counterparts. Moreover, formation of new spine clusters requires repetition of the same motor task, and the emergence of succedent new spine(s) accompanies the strengthening of the first new spine in the cluster. We also show that under control conditions new spines appear to avoid existing stable spines, rather than being uniformly added along dendrites. However, succedent new spines in clusters overcome such a spatial constraint and form in close vicinity to neighbouring stable spines. Our findings suggest that clustering of new synapses along dendrites is induced by repetitive activation of the cortical circuitry during learning, providing a structural basis for spatial coding of motor memory in the mammalian brain.

The ‘take home’ from the blog is this:

The authors explain two possible functions for these spine clusters:

“Positioning multiple synapses between a pair of neurons in close proximity allows nonlinear summation of synaptic strength, and potentially increases the dynamic range of synaptic transmission well beyond what can be achieved by random positioning of the same number of synapses.”

Meaning spines that are clustered and receive inputs from the same neuron have more power to influence the cell than spines further apart.

“Alternatively, clustered new spines may synapse with distinct (but presumably functionally related) presynaptic partners. In this case, they could potentially integrate inputs from different neurons nonlinearly and increase the circuit’s computational power. “

Meaning that maybe the spines don’t receive input from the same neuron, but are clustered so they can integrate signals across neurons more powerfully.
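The common thread in both readings is nonlinear summation: clustered, coactive inputs can yield more than the sum of their parts, while the same inputs scattered along the dendrite would just add linearly. A made-up numerical caricature of that contrast (the threshold and boost factor are invented for illustration, not taken from the paper):

```python
# Toy contrast between linear and supralinear (clustered) summation of
# synaptic inputs on a dendritic branch. The threshold/boost rule is
# invented purely to illustrate nonlinear summation; it is not a model
# fitted to the paper's data.

def linear_sum(inputs):
    """Scattered inputs: the branch output is just the sum."""
    return sum(inputs)

def clustered_sum(inputs, threshold=1.5, boost=1.5):
    """Clustered inputs: if coactive inputs cross a threshold, the branch
    responds supralinearly (e.g. via a local dendritic event)."""
    total = sum(inputs)
    return total * boost if total >= threshold else total

two_spines = [1.0, 1.0]
print(linear_sum(two_spines))     # 2.0
print(clustered_sum(two_spines))  # 3.0: the coactive cluster crosses threshold
```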

Posted in Uncategorized | 1 Comment »

Dendritic spines

Friday, December 7th, 2012

ScienceDaily has an item on a paper in Nature by Harnett and others, Synaptic amplification by dendritic spines enhances input cooperativity; a very interesting subject to me.

Not all that many years ago, neurons were thought of as simple switches and relays. Then they were thought of as logic gates, then as little computers. But now we know that they are quite complicated. This paper looks at the spines on the dendrites of a neuron. A neuron has a ‘bush’ or ‘tree’ of dendrites. Along the length of these dendritic processes are short little spines, and at the end of each spine are one or more synapses.

These tiny membranous structures protrude from dendrites’ branches; spread across the entire dendritic tree, the spines on one neuron collect signals from an average of 1,000 others. … Dendritic spines come in a variety of shapes, but typically consist of a bulbous spine head at the end of a thin tube, or neck. Each spine head contains one or more synapses and is located in very close proximity to an axon coming from another neuron. … Scientists have gained insight into the chemical properties of dendritic spines: receptors on their surface are known to respond to a number of neurotransmitters, such as glutamate and glycine, released by other neurons. … the spines’ incredibly small size — roughly 1/100 the diameter of a human hair …

Why do neurons have their incoming synapses at the end of little stalks? It isolates them chemically and electrically from other synapses. We can think of spines like the brackets in mathematical and logical statements: the processing within the bracket is done before the result interacts with other terms. The spines can also amplify the signal before it leaves the spine. Further, because the spines have high impedance, they offer differing resistance depending on the frequency of the signal. The spines add an important layer of signal manipulation to the neuron.
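The amplification figures in the paper can be sketched with a simple voltage-divider picture: for current injected at the spine head, the head-to-dendrite voltage ratio is roughly 1 + R_neck / Z_dendrite. The ~500 MΩ neck resistance comes from the abstract; the dendritic impedance values below are invented to show how the 1.5- to 45-fold range could arise:

```python
# Toy spine-head voltage amplification as a function of the parent
# dendrite's input impedance, using the voltage-divider approximation
#   amplification ~ 1 + R_neck / Z_dendrite.
# R_neck ~500 Mohm is the figure reported in the abstract; the Z values
# below are hypothetical, chosen to span a plausible range from thick
# proximal dendrites (low impedance) to thin distal ones (high impedance).

R_NECK = 500.0  # spine neck resistance, Mohm (from the abstract)

def spine_amplification(z_dendrite):
    """Approximate spine-head-to-dendrite voltage ratio for a given
    parent dendritic impedance (in Mohm)."""
    return 1.0 + R_NECK / z_dendrite

for z in (11.0, 100.0, 1000.0):  # hypothetical impedances, Mohm
    print(f"Z_dend = {z:6.1f} Mohm -> ~{spine_amplification(z):.1f}x")
```

Low-impedance compartments give large relative amplification and high-impedance ones give little, roughly matching the abstract’s “~1.5- to ~45-fold, depending on parent dendritic impedance.”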

Here is the abstract:

Dendritic spines are the nearly ubiquitous site of excitatory synaptic input onto neurons and as such are critically positioned to influence diverse aspects of neuronal signaling. Decades of theoretical studies have proposed that spines may function as highly effective and modifiable chemical and electrical compartments that regulate synaptic efficacy, integration and plasticity. Experimental studies have confirmed activity-dependent structural dynamics and biochemical compartmentalization by spines. However, there is a longstanding debate over the influence of spines on the electrical aspects of synaptic transmission and dendritic operation. Here we measure the amplitude ratio of spine head to parent dendrite voltage across a range of dendritic compartments and calculate the associated spine neck resistance for spines at apical trunk dendrites in rat hippocampal CA1 pyramidal neurons. We find that neck resistance is large enough (~500 MΩ) to amplify substantially the spine head depolarization associated with a unitary synaptic input by ~1.5- to ~45-fold, depending on parent dendritic impedance. A morphologically realistic compartmental model capable of reproducing the observed spatial profile of the amplitude ratio indicates that spines provide a consistently high-impedance input structure throughout the dendritic arborization. Finally, we demonstrate that the amplification produced by spines encourages electrical interaction among coactive inputs through a neck resistance-dependent increase in spine head voltage-gated conductance activation. We conclude that the electrical properties of spines promote nonlinear dendritic processing and associated forms of plasticity and storage, thus fundamentally enhancing the computational capabilities of neurons.

There is another aspect of spines to my mind. They allow glial cells to make better contact with synapses. They are also part of the complex processes in and around synapses.

Posted in Uncategorized | 1 Comment »

Top-down control in action

Tuesday, December 4th, 2012

The prefrontal cortex can select a rule to deploy in a particular situation. How is this done? The group of neurons that deploy a rule oscillate in synchrony when that rule is to be used. This synchrony explanation is becoming quite common; synchrony is what produces functioning groups of neurons. Buschman et al. have looked at rule selection in detail. However, I cannot access their paper. Fortunately, two reviews of the paper are available (see citations).

 

Buschman used monkeys with electrodes implanted in their dorsolateral prefrontal cortex. They were taught to respond to the colour of a target and to the orientation of a target. They were presented with coloured targets that had an orientation. The decision for the monkey was which rule to use: colour or orientation discrimination. The monkeys were given a cue before the target that instructed them to use one rule or the other. In this way the researchers could see the events of choosing the rule.

 

The results –

The authors found that the local field potentials (LFPs) of a subset of the electrodes synchronized in the higher beta band (19–40 Hz) around stimulus onset when the color rule was applied. When the orientation rule was applied, LFPs from a different set of electrodes synchronized in the same frequency band. Crucially, when the more difficult color rule was applied, the researchers observed pre-stimulus synchronization in the alpha band (6–16 Hz) for electrodes showing a preference for the orientation rule.

The oscillatory activity had consequences for behavior: especially stronger alpha-band synchronization allowed the monkeys to perform the task faster. In line with the behavioral effects, the higher the anticipatory alpha power for the orientation ensemble, the higher the spike rate of the color-rule ensemble during stimulus presentation. Additionally it was demonstrated that neuronal spiking was phase-locked to beta oscillations. … The strong phase-locking between spikes and LFPs demonstrates that the timing of neuronal action potentials is determined by the phase of ongoing oscillations. As such, oscillations are intimately involved in controlling the dynamics underlying neuronal computations. Oscillations might not only be important for creating neuronal ensembles within regions, but also for communication between distant regions.
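Phase-locking of spikes to an ongoing oscillation is typically quantified by taking the oscillation’s phase at each spike time and measuring how concentrated those phases are around a single value. A minimal sketch of that standard statistic (the phase-locking value), with fabricated spike phases; this is the general measure, not the specific analysis in the Buschman paper:

```python
# Minimal phase-locking value (PLV): treat the LFP phase at each spike
# as a unit vector on the circle and measure the length of the mean
# vector. PLV near 1 means spikes consistently fire at the same phase;
# PLV near 0 means spikes are scattered across phases.
# The spike phases below are fabricated for illustration.
import cmath
import math

def phase_locking_value(phases):
    """Length of the mean resultant vector of spike phases (in radians)."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

locked = [0.1, -0.05, 0.08, 0.0, -0.1]                  # spikes near phase 0
scattered = [0.0, math.pi / 2, math.pi, -math.pi / 2]   # spread evenly

print(round(phase_locking_value(locked), 3))     # close to 1
print(round(phase_locking_value(scattered), 3))  # close to 0
```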

 

This is interpreted to mean that the executive functioning in rule selection involves beta wave synchrony. This synchrony gathers together what is needed for the use of that rule in a discrimination. The alpha synchrony appears to suppress the default (orientation) rule so that the non-default rule (colour) is easier to deploy.

 

This seems to be what top-down control looks like.

 

For those of you that can access it, the Buschman paper is also listed below.

 

Jensen, O., & Bonnefond, M. (2012). Prefrontal alpha- and beta-band oscillations are involved in rule selection. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2012.11.002

Engel, A. (2012). Rules Got Rhythm. Neuron, 76(4), 673-676. DOI: 10.1016/j.neuron.2012.11.003

Buschman, T.J., Denovellis, E.L., Diogo, C., Bullock, D., & Miller, E.K. (2012). Synchronous oscillatory neural ensembles for rules in the prefrontal cortex. Neuron, 76(4), 838-846. PMID: 23177967

Posted in