The neurons in your brain are large cells consisting of a cell body and dendrites, branched extensions along which impulses received from other cells at synapses are transmitted to the cell body. The biochemistry of how all this happens is still being elucidated.
Kenneth S. Kosik, a neuroscientist at the University of California, Santa Barbara, explains:
“It’s fairly well established scientifically that the learning units in the brain are the synapses. Many neuroscientists think that synaptic learning requires new proteins made locally right at the synapse, which acts as its own control center.”
The sheer number of synapses, however, makes it unlikely that all of them can make the RNA responsible for creating new proteins.
Kosik, UCSB’s Harriman Professor of Neuroscience and co-director of the campus’s Neuroscience Research Institute, decided to do the math, computing the actual copy numbers of RNA in cells, which are transcribed from just two copies of each gene.
“Dendrites don’t actually have many RNAs, but they obviously have enough because they get the job done,” Kosik says. “What is a surprise is that they do it with a relative paucity of RNAs. That is, there are many synapses beyond the reach of any RNA and therefore those synapses are not accessible to plasticity. If you have a large portion of the brain that can’t engage in learning, then what’s going on here?”
Kosik noted that having a relatively small number of RNAs gives synapses a greater dynamic range.
The best analogy is an audio system that maintains sound fidelity at both low and high volumes. Dendrites need to function with fidelity whether their inputs are few or many. Having a small number of RNAs leaves quantitative room to enlarge the pool dynamically when traffic into the dendrite is high.
“The dynamic range allows dendrites to double or triple or even quadruple their learning capacity in accordance with the amount of information coming in,” Kosik explained. “It also allows for sparse coding.”
Sparse coding, another concept in neurobiology, shapes how neurons process incoming information: memories and perceptions are represented through the strong activation of a relatively small set of neurons. Each stimulus activates a different subset drawn from the large pool of all available neurons.
Kosik explained the concept in terms of discerning odors. Too many odors exist for each one to have a unique pattern of firing neurons.
Rather, the brain creates small maps. One odor might have 10 neurons that encode it, seven of which also encode a different odor, creating an overlap.
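The overlap described above can be sketched in a few lines of Python. This is only an illustrative toy, not a model from Kosik's work: the pool size is arbitrary, and the two odor codes are constructed by hand so that, as in the article's example, odor A is encoded by 10 neurons, 7 of which also help encode odor B.

```python
# Toy illustration of sparse, overlapping neural codes (hypothetical numbers).
pool = list(range(1000))      # a large pool of available neurons

odor_a = set(pool[:10])       # 10 neurons encode odor A
odor_b = set(pool[3:13])      # 10 neurons encode odor B, 7 shared with A

overlap = odor_a & odor_b     # neurons participating in both codes
print(len(odor_a), len(odor_b), len(overlap))  # 10 10 7
```

Even with 7 of 10 neurons shared, the two sets remain distinct, so the brain can discriminate the odors without dedicating a unique group of neurons to each one.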
“In the same way, this idea of the dendrites having relatively few RNAs allows them to receive a smaller number of inputs,” Kosik said. “There are a lot of inputs coming through the dendritic tree, but only a few of them are capable of learning. We call the type of learning that goes on in neurons plasticity. So the dendrite only learns from impulses where protein synthesis is available—an example of sparse coding.
“The RNAs near synapses tell us which synapses have undergone plasticity, but like learning itself, the RNAs are not static,” Kosik added. “The RNAs like where they are. They do a good job where they are, but they eventually degrade and have to be remade, and in so doing, they may not necessarily return to their original location. Those small changes may impair our access to a memory, but now another nearby synapse is open to novelty and has an opportunity to learn a new thing.”
Kosik’s UC Santa Barbara neurobiology lab focuses on the evolution of synapses that connect neurons and the genetics of Alzheimer’s disease. In particular, his team is interested in the underlying molecular basis of plasticity and how protein translation at synapses affects learning.