This is post #7 of this series on the Brain (First Post and Last Post). As I mention every time, these posts are from a series of lectures by Prof. Jan Schnupp, and I want to make sure he is properly quoted and credited. Many parts are his lectures verbatim. However, for any errors, mistakes or inaccuracies in anything I write in here, I take all the credit. I probably misunderstood him or made it up. His course material is available here.
We've already discussed the various receptors and how they function. Whether directly (if they're ionotropic) or indirectly (if they're metabotropic), they alter the cell membrane's permeability to ions, which, in turn, affects the neuron's membrane potential. Let's take a closer look at how this works.
The key point is that we want the post-synaptic neuron to receive information from a potentially large number of other neurons, integrate that information, and make sense of it. The neuromuscular junction we mentioned earlier is, in a way, an unusual synapse. Every time a motor neuron sends an impulse to a muscle fiber, the muscle fiber twitches. In this scenario, when the nerve says "jump," the muscle jumps, every single time. This synapse is very strong and reliable, but that's not how you'd want to design a brain.
In the brain, you want to be able to integrate and combine information. For example, you only want to "jump" if several specific conditions are met. You need a system that can integrate information from multiple inputs. How does the brain do this? It ensures that the depolarization created by a single excitatory synapse is not, on its own, strong enough to trigger an action potential at the axon hillock. The axon hillock is packed with voltage-gated sodium channels, and these channels will only initiate an action potential if the membrane voltage becomes sufficiently depolarized. Once that threshold is crossed, the opening of the sodium channels depolarizes the membrane further, creating a positive feedback loop that generates the nerve impulse.
There’s a threshold voltage at the axon hillock. If the post-synaptic current from a single synapse isn’t strong enough, the neuron won’t fire. One neuron might signal "jump," but the cell won’t respond unless the signal is strong enough or repeated. Two things can happen: either the neuron repeatedly signals "jump, jump, jump" (increasing the frequency), or multiple neurons simultaneously signal "jump."
These are two different types of synaptic integration: temporal and spatial. Temporal integration occurs when a single synapse fires repeatedly in quick succession, while spatial integration involves multiple synapses firing at the same time. With spatial integration, several synapses open their ion channels simultaneously, allowing currents to flow in. These currents combine, and if the total is large enough, the post-synaptic cell will fire. It's as if the cell is "counting votes" from its excitatory synaptic inputs: if it receives enough votes to cross the threshold, it responds and fires.
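To make the "counting votes" picture concrete, here's a minimal sketch in Python. The numbers (a 0.2 mV contribution per synapse, a 20 mV gap between rest and threshold) are illustrative assumptions, chosen so the answer matches the "roughly 100 synapses" figure that comes up below, not measured values.

```python
# Spatial summation as "counting votes": how many excitatory synapses
# must fire at the same moment to reach the firing threshold?
# All numbers here are illustrative assumptions.

epsp_mv = 0.2          # depolarization contributed by one synapse, mV (assumed)
v_rest = -70.0         # resting membrane potential, mV
v_threshold = -50.0    # firing threshold at the axon hillock, mV (assumed)

votes_needed = (v_threshold - v_rest) / epsp_mv
print(f"Simultaneous excitatory inputs needed: {votes_needed:.0f}")  # -> 100
```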
However, post-synaptic currents have a time course, and it's important to understand it. Receptor channels open briefly, allowing a current to flow in, but once they close, the cell begins to repolarize as current flows back out through potassium leak channels.
How long does this time course last? It ranges from about 3 milliseconds to 300 milliseconds, depending on how long the channels stay open; different channels have different dynamics. If the channels stay open long enough, a second depolarization can arrive before the first one has completely finished. In principle, action potentials can arrive at a rate of one every few milliseconds. The absolute limit is one per millisecond, set by the absolute refractory period. But you could have a couple hundred action potentials in a single second, and these overlapping post-synaptic potentials could sum into a strongly depolarizing current in the cell.
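To see how this plays out, here's a small simulation sketch. It treats each post-synaptic potential as a step of depolarization that decays exponentially; the 15 ms time constant and 1 mV step size are assumed values for illustration, sitting within the 3–300 ms range mentioned above.

```python
import math

# Toy model of temporal summation: one synapse delivers brief EPSPs that
# decay exponentially. Time constant and EPSP size are assumed values.

dt = 0.1        # simulation time step, ms
t_end = 100.0   # simulate 100 ms of input
tau = 15.0      # decay time constant of each PSP, ms (assumed)
epsp_mv = 1.0   # depolarization added per input spike, mV (assumed)

def peak_depolarization(rate_hz):
    """Drive the synapse at a fixed rate; return the largest depolarization reached."""
    v = 0.0                        # depolarization above rest, mV
    peak = 0.0
    interval = 1000.0 / rate_hz    # ms between input spikes
    next_spike = 0.0
    t = 0.0
    while t < t_end:
        if t >= next_spike:
            v += epsp_mv           # each arriving spike adds a step...
            next_spike += interval
        v *= math.exp(-dt / tau)   # ...which then leaks away exponentially
        peak = max(peak, v)
        t += dt
    return peak

for rate in (10, 50, 200):
    print(f"{rate:4d} Hz input -> peak depolarization {peak_depolarization(rate):.2f} mV")
```

At 10 Hz each potential has fully decayed before the next arrives, so the peak never exceeds a single EPSP; at 200 Hz (one spike every 5 ms) the potentials overlap and stack up to several times that height.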
In this way, you can achieve both temporal and spatial integration. While "integration" sounds like a complex term, in mathematical terms, it just refers to summing things up. Essentially, what’s happening here is that the neuron sums the currents it receives. A typical neuron in the central nervous system listens to many other neurons and adds up their input. These inputs can be positive (excitatory) or negative (inhibitory). The neuron combines them and assesses how this affects its membrane voltage overall. If the change in membrane voltage is enough to exceed the action potential threshold, the neuron will fire an action potential of its own.
It's estimated that each neuron in your cortex receives input from about 7,000 to 10,000 synapses. If roughly 100 excitatory synapses fire at the same time, that's enough depolarization to trigger an action potential.
That action potential, in turn, will be sent on to some 10,000 other neurons, providing about 1% of the activation each of those neurons needs to fire an action potential of its own. This creates a potentially risky situation: if each action potential contributes 1% of the required activation to 10,000 neurons, the result is a magnification factor of 100. A small impulse in the brain could quickly escalate into a widespread ripple of activity. But we don't want such uncontrolled ripples; the brain must avoid becoming overexcited.
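The magnification argument is simple arithmetic, and it's worth seeing how fast it runs away. Using the numbers from the text (10,000 target neurons, 1% of the required activation per input), a purely excitatory cascade would look like this:

```python
# Back-of-the-envelope cascade: each spike reaches ~10,000 neurons and
# supplies ~1% of the activation each needs to fire (figures from the text).

targets_per_neuron = 10_000
fraction_of_threshold = 0.01

gain = targets_per_neuron * fraction_of_threshold   # expected new spikes per spike
print(f"Magnification factor: {gain:.0f}")          # -> 100

active = 1
for step in range(4):
    print(f"step {step}: {active:,} active neurons")
    active = int(active * gain)                     # 1 -> 100 -> 10,000 -> 1,000,000
```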
Of the 10,000 synapses a typical neuron in the brain receives, not all are excitatory—though excitatory synapses outnumber inhibitory ones by about five to one. The brain relies on sufficient inhibition to prevent a chain reaction where one action potential triggers another 100, which then trigger 10,000 more, and so on. Without this inhibitory control, the brain could spiral into excessive activity.
How do we prevent this? By ensuring there are powerful inhibitory feedback loops, even though inhibitory synapses are fewer in number. Inhibitory synapses are often positioned strategically close to the cell body or axon hillock, where they can exert a strong dampening effect. This placement allows inhibition to effectively "clamp down" on overexcitation, preventing the brain from becoming overstimulated. If the balance between excitation and inhibition fails, it can lead to conditions like epilepsy, where excessive excitation spreads uncontrollably.
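To see why the strength of this inhibitory feedback matters, we can extend the toy cascade above. Suppose, purely as an assumption for illustration, that inhibition cancels all but a small fraction of the excitatory drive: the effective gain then drops below one, and activity dies out instead of exploding.

```python
# Continuing the toy cascade: inhibitory feedback cancels a fixed fraction
# of the excitatory drive. The surviving fraction is an assumed value.

excitatory_gain = 100.0
surviving_fraction = 0.0095      # fraction of drive not cancelled by inhibition (assumed)

effective_gain = excitatory_gain * surviving_fraction   # 0.95, i.e. just below 1
activity = 1.0
for step in range(5):
    print(f"step {step}: expected activity {activity:.2f}")
    activity *= effective_gain   # gain < 1 -> each wave is smaller than the last
```

With the effective gain just below one, each wave of activity is slightly smaller than the one before; tip the balance the other way and you get the runaway excitation characteristic of a seizure.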
To manage this, we use drugs that enhance inhibitory neurotransmission. For example, benzodiazepines and barbiturates act on GABA-A receptors, increasing the flow of chloride ions into neurons. This strengthens inhibition, making an epileptic seizure less likely. The downside is that such drugs cause drowsiness, so the dosage needs to be carefully balanced: enough to prevent seizures, but not so much that the patient can't stay alert.
Inhibition is crucial not just for preventing excessive neural activity, but also for enabling synaptic "arithmetic." It allows the brain to add and subtract currents, helping it process and integrate complex information effectively.
Neurons act like little computers, performing calculations by summing inputs. Each neuron receives a certain number of excitatory inputs, which it adds up. It also receives inhibitory inputs, which it subtracts from the excitatory total. The neuron then compares the result to a threshold: if the remaining depolarization is strong enough to bring the axon hillock to threshold, the neuron fires an action potential. If it doesn’t reach the threshold, the neuron doesn’t fire.
This process happens automatically as the excitatory and inhibitory currents flow toward the soma and the axon hillock. The neuron effectively asks, "Is my depolarization strong enough to trigger an action potential?" If yes, it fires; if not, it doesn’t. This is the basic computation that all neurons perform, and it can be modeled using a "leaky integrate-and-fire" model, a common approach in computational neuroscience.
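Here is a minimal leaky integrate-and-fire neuron, as a sketch. The membrane equation and the reset-on-threshold rule are the standard textbook form; the specific parameter values (a 20 ms membrane time constant, a 20 mV gap between rest and threshold, the net drive expressed directly in millivolts) are assumptions for illustration.

```python
# Minimal leaky integrate-and-fire neuron. Parameter values are
# illustrative assumptions; the drive is expressed in mV (current x resistance).

tau_m = 20.0      # membrane time constant, ms
v_rest = -70.0    # resting potential, mV
v_thresh = -50.0  # firing threshold at the axon hillock, mV
v_reset = -70.0   # potential immediately after a spike, mV
dt = 0.1          # integration time step, ms

def simulate(drive_mv, t_end=200.0):
    """Integrate dV/dt = (v_rest - V + drive) / tau_m and return spike times."""
    v = v_rest
    spikes = []
    t = 0.0
    while t < t_end:
        v += (v_rest - v + drive_mv) * dt / tau_m  # leak pulls V back toward rest
        if v >= v_thresh:                          # threshold crossed at the hillock:
            spikes.append(round(t, 1))             # "fire"...
            v = v_reset                            # ...and reset
        t += dt
    return spikes

# Below 20 mV of steady drive the leak wins and the neuron stays silent;
# above it, the firing rate grows with the net (excitatory minus inhibitory) drive.
for drive in (15.0, 25.0, 40.0):
    print(f"net drive {drive:4.1f} mV -> {len(simulate(drive))} spikes in 200 ms")
```

The "leaky" part is the term pulling the voltage back toward rest (the potassium leak currents from earlier), the "integrate" part is the running sum of the net drive, and the "fire" part is the threshold test at the axon hillock.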
Many researchers are now trying to simulate brain activity on computers—an approach known as "in silico" modeling. For instance, there’s a researcher in Switzerland using a computer the size of a house, consuming vast amounts of power, to simulate a mere cubic millimeter of cortical tissue. His model is incredibly detailed and realistic, but it's debatable how much is gained from such complexity.