This is post #8 of this series on the Brain (First Post and Last Post). As I mention every time, these posts are from a series of lectures by Prof. Jan Schnupp, and I want to make sure he is properly quoted and credited. Many parts are his lectures verbatim. However, for any errors, mistakes or inaccuracies in anything I write in here, I take all the credit. I probably misunderstood him or made it up. His course material is available here.
We've talked about synapses and how they help the brain perform calculations, but they do more than that. This is a really important concept. When a synapse connects two neurons, the strength of the current that flows between them depends on several factors. It largely depends on how many receptors are present, whether enough neurotransmitter is released to activate all or just some of the receptors, and how effectively each receptor opens and allows current to flow through.
Each synapse has a certain synaptic strength based on how it is built. Some synapses might be tiny with only a few receptors, allowing very little current through. Others might be much larger, letting a significant amount of current through each time they are activated. When a neuron integrates information from multiple inputs, not all synapses have the same "vote." Some synapses have a stronger influence because they have more receptors or stay open longer.
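As a toy illustration (my own, not from the lectures): if we treat each synapse's strength as a single number, integration is just a weighted sum, and a stronger synapse simply gets a bigger "vote". All names and values below are made up for the example.

```python
# Toy model: a neuron sums synaptic currents weighted by synaptic strength.
# The weights and threshold are illustrative, not physiological values.

def membrane_input(spikes, weights):
    """Total input current: each active synapse contributes its strength."""
    return sum(w for s, w in zip(spikes, weights) if s)

weights = [0.1, 0.1, 0.9]   # two weak synapses, one strong one
threshold = 0.8

# The strong synapse alone can push the neuron past threshold...
print(membrane_input([0, 0, 1], weights) > threshold)  # True
# ...but both weak synapses together cannot.
print(membrane_input([1, 1, 0], weights) > threshold)  # False
```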
However, the strength of a synapse can change over time, which is a critical concept in neuroscience known as synaptic plasticity. If a synapse fires and allows a certain amount of current through, the next time it fires, that current might be larger or smaller. There are various reasons for this variability. One could be fatigue. If a synapse fires repeatedly, it must replenish its supply of vesicles and neurotransmitters. If it can't keep up, the amount of neurotransmitter released will decrease, weakening the connection. This is a form of short-term plasticity.
People often talk about paired-pulse depression, where firing a synapse twice in quick succession results in a strong response the first time but a weaker response the second time. However, you can also get paired-pulse facilitation, which is somewhat paradoxical. In this case, if you fire a synapse twice in a row, the second response is actually stronger than the first. This could be due to residual calcium left over from the first pulse, which strengthens the second release of neurotransmitters.
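Both effects can be sketched in a crude toy model (mine, not the professor's, loosely in the spirit of Tsodyks-Markram dynamics): release depends on the available vesicle pool, which depletes with each pulse, and on release probability, which is transiently boosted by residual calcium. Whichever effect dominates determines whether the second pulse is weaker or stronger. All constants are invented for illustration.

```python
# Toy short-term plasticity: depletion vs. facilitation, made-up constants.

def paired_pulse(u0, facilitation):
    """Return (first, second) response amplitudes for two quick pulses.
    u0: baseline release probability; facilitation: residual-calcium boost."""
    resources = 1.0
    u = u0
    # First pulse: release a fraction u of the available vesicle pool.
    first = u * resources
    resources -= first                 # vesicle depletion ("fatigue")
    u = min(1.0, u + facilitation)     # residual calcium raises release prob.
    second = u * resources
    return first, second

# High baseline release, little facilitation -> paired-pulse depression.
f1, f2 = paired_pulse(u0=0.7, facilitation=0.05)
print(f2 < f1)  # True
# Low baseline release, strong facilitation -> paired-pulse facilitation.
f1, f2 = paired_pulse(u0=0.1, facilitation=0.5)
print(f2 > f1)  # True
```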
While short-term plasticity is interesting and can lead to all sorts of fascinating phenomena, long-term plasticity is even more intriguing. Long-term changes in synaptic strength, such as long-term potentiation (LTP) or its counterpart, long-term depression (LTD), are thought to be the basis of long-term memory, learning, and how experience shapes the brain. The idea is that synapses can become permanently stronger or weaker over time.
Imagine you're a post-synaptic neuron receiving information from 10,000 other neurons. Some of these neurons are feeding you useful and important information, while others are just sending irrelevant noise. Over time, you strengthen the connections from neurons that provide meaningful input and weaken the ones that don't. This process of enhancing or diminishing synaptic strength is how the brain adjusts and learns through experience.
However, as a post-synaptic neuron, how do you know which inputs are useful? Neuroscientists are still studying this. Some stimuli are inherently recognised as important, like tasty food, which is what psychologists call an unconditioned stimulus. No one needs to teach you that good food is desirable; this is hardwired.
Now, imagine you're a neuron in Pavlov's dog. In the famous experiment, the dog was conditioned to associate the sound of a bell with food, causing it to salivate when the bell rang. As the neuron responsible for salivation, you're receiving signals from multiple sources. You smell the food (an unconditioned stimulus), but you're also receiving input from other neurons, some of which signal that the bell is ringing.
If the bell-ringing neuron fires just before the unconditioned stimulus (the food), you strengthen that synapse due to temporal coincidence. This means you start to associate the bell with food, reinforcing the synapse that told you the bell was about to ring. This is the basis of associative learning: connecting a neutral stimulus (the bell) with a positive one (the food). This concept is closely tied to the work of Donald Hebb, who proposed that neurons that fire together, wire together (see a previous post, written before this series).
It's called Hebb's rule after Donald Hebb, a Canadian psychologist who first proposed that neurons strengthen their connections when they are activated together. According to this principle, neurons that are frequently activated simultaneously will reinforce their connections, leading to learning and memory formation through association. This can be observed in brain regions known to be important for memory, such as the hippocampus and neocortex. You can even take slices of neural tissue, stimulate synapses with specific temporal patterns, and observe that these synapses become stronger over time.
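Hebb's rule is often written as Δw = η · (pre activity) · (post activity): the weight grows only when both neurons are active at the same time. A minimal sketch (my own illustration; the learning rate and activity values are arbitrary):

```python
# Minimal Hebbian learning: the weight grows only on coincident activity.

def hebb_update(w, pre, post, eta=0.1):
    """One Hebbian step: delta_w = eta * pre * post (activities are 0 or 1 here)."""
    return w + eta * pre * post

w_correlated, w_uncorrelated = 0.0, 0.0
# Input A fires together with the postsynaptic neuron; input B never does.
for _ in range(10):
    w_correlated = hebb_update(w_correlated, pre=1, post=1)
    w_uncorrelated = hebb_update(w_uncorrelated, pre=1, post=0)

print(round(w_correlated, 2))    # 1.0 -- "fire together, wire together"
print(w_uncorrelated)            # 0.0 -- no coincidence, no strengthening
```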
So, how does this work? It involves a particular type of synaptic receptor known as the NMDA receptor. The NMDA receptor is a type of ionotropic glutamate receptor, and you find many of them in areas of the brain like the hippocampus and the cerebral cortex. These receptors allow sodium and calcium to flow into the cell, but there's a catch.
The NMDA receptor is normally blocked by a magnesium ion, which sits in the receptor channel, preventing anything from passing through. The magnesium ion is positively charged and is held in place by the negative charge inside the cell. However, when the cell becomes depolarized (meaning it's already somewhat excited), the magnesium ion is ejected, clearing the channel. Once the magnesium is gone, calcium can flow in, allowing the receptor to excite the cell, but only when the cell is already excited.
This influx of calcium is special because calcium triggers additional signaling pathways within the cell, contributing to long-term changes in synaptic strength. Other glutamate receptors, like AMPA or kainate receptors, typically allow sodium to flow through but not calcium, which is why NMDA receptors play a key role in synaptic plasticity and learning.
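The voltage dependence of the magnesium block can also be put in numbers. A commonly used expression, from Jahr and Stevens (1990), gives the fraction of NMDA channels unblocked by Mg²⁺ as a function of membrane voltage; I'm using it here purely to show the qualitative behaviour described above:

```python
import math

def nmda_unblocked(v_mV, mg_mM=1.0):
    """Fraction of NMDA current not blocked by Mg2+ (Jahr & Stevens 1990 fit)."""
    return 1.0 / (1.0 + (mg_mM / 3.57) * math.exp(-0.062 * v_mV))

# Near rest (-70 mV) the channel is almost entirely blocked...
print(round(nmda_unblocked(-70), 2))  # 0.04
# ...but once the cell is already depolarized (-20 mV), most of the block is gone.
print(round(nmda_unblocked(-20), 2))  # 0.51
```

This is exactly the coincidence detector the text describes: glutamate alone does little at rest, because the voltage term keeps the channel shut.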
Basically, if I have a glutamatergic synapse with an NMDA receptor that gets activated while the cell is already excited, the NMDA receptor signals that something significant has occurred during an already active state. For example, in the classic Pavlovian scenario, you smell food, which depolarizes the cell, and then the bell rings (even though the order is reversed in this story). When the bell rings, calcium flows into the cell, triggering an intracellular second-messenger cascade. This process strengthens the synapse by inserting more AMPA receptors, making it more effective the next time it's activated, because it took part in this pre- and post-activation sequence.
How do we know this mechanism is tied to memory? A researcher in Edinburgh named Richard Morris studied memory extensively and developed a test known as the Morris water maze. In this experiment, a rat is placed in a pool of milky water, with a platform hidden just below the surface. The rat swims around, trying to find the platform to rest on. After a few trials, the rat learns where the platform is located and swims directly to that quadrant. However, if you administer a drug like AP5 (also known as APV), which blocks NMDA receptors, the rat is unable to learn and will continue swimming aimlessly, unable to remember the platform's location. This provides strong evidence that NMDA receptors are crucial for memory formation.
As I mentioned earlier, ethanol also affects these processes. Drinking alcohol in large amounts enhances GABA receptor activity, making you drowsy. If you consume even larger quantities, ethanol also blocks NMDA receptors, leading to memory impairments, hence the memory loss associated with heavy drinking. This demonstrates how NMDA receptors are involved in associative learning and how modifying the connections between neurons stores information and shapes the brain through experience.
This is an idea that we will hopefully revisit later in another post. It's a relatively recent finding that whether a synapse is strengthened or weakened may depend heavily on the timing of presynaptic excitation relative to postsynaptic excitation. If the presynaptic input arrives just slightly before the postsynaptic neuron fires, the synapse tends to strengthen; if it arrives just after, it tends to weaken. This mechanism is quite common in the nervous system and has led to various theories about how it impacts brain function, particularly how it may enhance temporal precision in neural activity. The timing involved here is very brief, on the scale of tens of milliseconds. For a synapse to become strong and effective, it must reliably fire within a few milliseconds of the neuron itself becoming powerfully excited.
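This timing rule is known as spike-timing-dependent plasticity (STDP). A standard way to model it is with an exponential window: pre-before-post potentiates, post-before-pre depresses, and both effects decay over tens of milliseconds. The amplitudes and time constant below are illustrative, not measured values:

```python
import math

def stdp(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change for one pre/post spike pair.
    dt_ms = t_post - t_pre: positive means the presynaptic spike came first."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # pre before post: potentiate
    return -a_minus * math.exp(dt_ms / tau_ms)       # post before pre: depress

print(stdp(10) > 0)        # True: pre fires 10 ms before post -> strengthened
print(stdp(-10) < 0)       # True: pre fires 10 ms after post -> weakened
print(stdp(5) > stdp(40))  # True: tighter timing, bigger change
```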