The Brain in Balance
A universal mathematical optimization principle regulates the computations the brain can do and how it does them.
This affects who we are as individuals and the narrative of our unique human experience. It might also lead to fundamentally new forms of machine learning and AI.
"The brain has no knowledge until connections are made between neurons. All that we know, all that we are, comes from the way our neurons are connected."
The brain makes use of a mathematically derived constraint principle we call the Refraction Ratio to optimize how it computes and processes information.
The algorithms of the brain evolved not just to accommodate this principle, but to use it to encode, represent, and process information.
Can shifts in the Refraction Ratio help explain what makes each of us unique as individuals, shaping how we take in, experience, and interact with the world and people around us? We are exploring this question.
We are also asking whether significant deviations are responsible for changes in information processing in the brain that clinically manifest as neurodevelopmental spectrum disorders, such as autism, and as neurodegenerative disorders.
We are also exploring how the Refraction Ratio, in combination with sophisticated and abstract branches of mathematics, in particular Category Theory, can be used to model and explain complex emergent functions in the brain, such as creativity, imagination, and inference.
What is complex emergence?
Complex emergent functions are properties that seem to 'emerge' from the biology and physiology of the brain in a way that exceeds what is understood about the details of relevant processes.
In other words, the whole is more than just the sum of the parts.
Complex systems, like the brain, are fundamentally different from merely complicated systems like, say, an airplane or a rocket ship. While a rocket ship is a very sophisticated and complicated piece of engineering and technology, an engineering blueprint exists somewhere that reveals how the placement and purpose of every bolt and screw contributes to the function of the whole object.
In a complex system, knowing how the individual pieces work does not by itself allow you to understand how the whole system works, or how and why it does what it is capable of doing. These emergent functions or properties cannot be explained or accounted for by understanding the individual parts in isolation. Something happens when those parts come together that results in functions and properties exceeding what a simple summation of the individual parts can accomplish.
To give an intuitive sense of what complex systems can do, consider another striking example of a complex system: ant colonies. While the rules that individual ants follow are relatively simple and few, what the collective intelligence of the colony can accomplish is nothing short of miraculous.
By taking advantage of what we are learning about the brain and how its algorithms optimize for the Refraction Ratio, we are leveraging the same mathematics to develop a conceptually new approach to data encoding and machine learning.
The core mathematical ideas and theory are allowing us to pursue engineering applications while we continue to study neurological disorders.
The algorithms that form the base code for this new machine learning are capable of encoding and classifying information and data without prior training or exposure to any conditioning data.
From an engineering functionality perspective, we are focused on exploring and developing algorithms for machine inference and problem solving that can constructively collaborate with humans.
From an engineering and machine learning application perspective, we are particularly interested in computational efficiency, space research and exploration, and national security applications. In all of these use cases, limited computational resources, incomplete and sparse data insufficient for training, and the need for genuine human-machine collaboration will be critical.
We are just getting started.
This is all a work in progress. The scientific implications, engineering applications, and impact on the human experience remain to be fully realized.
Over the next few sections we will explain in a bit more technical detail what the Refraction Ratio is, how the brain seems to use it as an optimization principle, some of the neuroscience methods and models we use, and how we are leveraging what we are learning about the brain to explore new machine learning models. So read on.
The Refraction Ratio explained.
What is the Refraction Ratio?
It is a mathematical principle we derived. It sets constraints on, and predicts optimal conditions for, information processing in networks that have a physical structural geometry and in which information flows at finite speeds.
Brain networks fall into this category, and as such are subject to this principle. The Refraction Ratio determines how efficiently the brain is able to learn, represent, and process information.
Neurons are the main class of cells in the brain. They are responsible for representing and processing information.
Connected neurons form extensive networks that organize themselves into brain circuits.
And groups of brain circuits across different parts of the brain are connected into networks of brain regions.
The fundamental signaling unit in the brain is the action potential: an electrical event that acts as a single bit of information passed between connected and communicating neurons.
The ratio between the distances the signals have to travel over the convoluted geometry of brain networks relative to how fast they are traveling needs to be carefully balanced with how long it takes individual neurons to process the information they are receiving.
This balance is critical for achieving optimal computation and information processing. How fast signals travel between neurons - the latencies created by the signaling speed over the distances imposed by the network geometry - must be carefully balanced with how much time neurons need internally to process that information before they send out a signal in turn.
We call this principle the Refraction Ratio. The brain balances this ratio as an optimization principle to achieve optimal computation.
The Refraction Ratio reflects a balance between local constraints (i.e. how long each neuron needs to process information) versus global constraints (i.e. how long it takes for signals to travel between neurons in the network given its geometry and signaling speeds).
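To make the balance between local and global constraints concrete, here is a minimal Python sketch. It is not the published formulation: it simply treats the ratio for a single connection as the receiving neuron's refractory period relative to the signal's travel latency along the edge, so a value near one means the two timescales match. The function names and numbers are illustrative assumptions.

```python
# Toy sketch of the local-versus-global balance (illustrative, not the
# lab's published formulation). For one connection: compare the signal
# travel latency along the edge (a global, geometric constraint) to the
# receiving neuron's refractory period (a local constraint).

def travel_latency(distance_mm, conduction_velocity_mm_per_ms):
    """Time for a signal to traverse one edge of the network."""
    return distance_mm / conduction_velocity_mm_per_ms

def refraction_ratio(refractory_period_ms, latency_ms):
    """Local processing time relative to global signaling latency."""
    return refractory_period_ms / latency_ms

# Made-up example: a 2 mm axonal path at 0.5 mm/ms, and a neuron with
# a 4 ms refractory period.
latency = travel_latency(2.0, 0.5)      # 4.0 ms of travel time
ratio = refraction_ratio(4.0, latency)  # 1.0, the theoretical ideal
```

A ratio far below or above one would mean signals arrive while the neuron is still refractory, or long after it is ready, which is the kind of deviation discussed below.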
The Refraction Ratio was not something we discovered doing experiments on the brain or its neurons. It was a mathematical prediction we derived from considering how the geometric structure of networks affects their dynamics.
An example of how mathematical theory can contribute to discoveries about the brain that can be translated into engineering and machine learning.
The theoretical, mathematically derived ideal for the Refraction Ratio is unity: a ratio that approaches a value of one. This reflects the physical balance between local and global processes on a geometric dynamic network.
It reflects a balance between the dynamics (i.e. signals and information propagating through the network) at a local (individual neuron - or node - scale) versus global whole network scale, given the physical constraints of the structural network geometry on which that dynamics is operating.
A deviation from the Refraction Ratio can dramatically affect how efficiently information can be represented and processed by a network, to the point that information processing can completely break down if the deviation is severe enough.
By measuring or computing the Refraction Ratio we can learn a lot about the specifics of the network being analyzed and studied, including possibly how to reverse a breakdown in information processing if we know something about the physical details of how the network is constructed.
This is exactly the approach we are taking in using the Refraction Ratio to both understand and possibly treat neurodevelopmental and neurodegenerative disorders.
Since we know quite a lot about the genetic, molecular and neurophysiological details of what goes wrong in neurodevelopmental and neurodegenerative disorders, measuring the Refraction Ratio in the brains of affected individuals may suggest interventions and approaches to correct deviations from the ratio in order to improve clinical function.
What happens when the Refraction Ratio deviates significantly? Here’s an example.
This simulation is of a computationally modeled neurobiological network. It consists of 100 neurons modeled as Izhikevich neurons, a mathematical model of biological neurons.
This is a geometric network, in the sense that the physical distances between neurons matter. How fast signals travel over those distances creates latencies, or delays.
Each neuron also has a refractory period: a period of time during which it is internally processing the information it receives and cannot respond to other signals.
Here's what happens when the signaling speed is changed while everything else is left exactly the same ...
The graphic on the right is what's called a raster plot for the simulated biological neural network illustrated above. It captures the dynamics of the entire network in one image, and is a common way neuroscientists illustrate neural activity.
Along the vertical axis each neuron in the network is enumerated, from 1 to 100, which was the size of the network we studied in this example.
Along the horizontal axis is time. In this case the length of time we ran the simulation, 4 seconds (which is the same as 4000 milliseconds).
Every tick mark along the time axis indicates that the corresponding neuron fired at that moment.
We externally stimulated the network for the first 500 milliseconds, then observed the network to see how it was able to sustain inherent recurrent signaling in the absence of additional external stimulation. At the lowest signaling speed, we saw recurrent low-frequency periodic sustained activity.
But when we increased the signaling speed by a factor of 100, there was no signaling past the externally-driven stimulus period. All the activity died away.
Why is this the case? It is the consequence of a mismatch, or deviation, of the Refraction Ratio. When signals arrive too quickly, the neurons do not have enough time to recover from their refractory periods, and the incoming signals have no opportunity to induce downstream activations. The activity in the entire network dies.
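The Izhikevich model mentioned above is a standard, published spiking-neuron model (Izhikevich, 2003). A minimal Python sketch of a single such unit, without the network geometry, conduction latencies, or refractory mechanics of the full simulation, looks like this; the parameter values give a regular-spiking neuron, and the constant input current is an illustrative choice:

```python
# Minimal single-unit sketch of the standard Izhikevich neuron model.
# The full simulation described in the text layers network geometry,
# signal latencies, and refractory periods on top of units like this.

def simulate_izhikevich(I=10.0, t_ms=1000, dt=0.5,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    v = -65.0          # membrane potential (mV)
    u = b * v          # membrane recovery variable
    spikes = []        # spike times (ms), i.e. one row of a raster plot
    for step in range(int(t_ms / dt)):
        if v >= 30.0:                  # spike: record time, then reset
            spikes.append(step * dt)
            v, u = c, u + d
        # forward-Euler update of the Izhikevich equations
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

spikes = simulate_izhikevich()  # constant drive yields regular spiking
```

Plotting each neuron's `spikes` list as tick marks along a time axis, one row per neuron, is exactly what the raster plot above shows for the whole network.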
The Refraction Ratio in the Brain.
We then explored if real biological neurons use the Refraction Ratio as an optimization principle. It turns out that they do. Strikingly, nature has evolved the shape (morphology) of neurons, the conduction velocity (signaling speed) of its action potentials, and their refractory period to result in an almost theoretically perfect Refraction Ratio value.
We tested this in Basket cell neurons, a type of inhibitory neuron in the cortex. Dynamic signaling in branching axons depends on a trade-off between the time it takes action potentials to reach synaptic terminals - the temporal cost - and the amount of cellular material associated with the wiring path length of the neuron's morphology - the material cost. Synaptic terminals are the ends of the neuron where it passes signals on to other neurons.
Dynamic signaling on branching axons is important for rapid and efficient communication between neurons, so the cell’s ability to modulate its function to achieve efficient signaling is presumably important for proper brain function. Our results suggest that the convoluted paths taken by axons reflect a design compensation to slow down action potential latencies in order to optimize the Refraction Ratio.
We calculated the Refraction Ratio for over 11,000 axon branches across 56 neurons by taking high resolution digital reconstructions and breaking up the branches into segments. Knowing the conduction velocity across segments and calculating the refractory period for each segment we were able to compute the ratio for each segment for all the branches. Each branch in this case represented an individual data point for which we calculated the Refraction Ratio.
The median value of the Refraction Ratio across all 11,575 branches was 0.92. Remember that the theoretically predicted ideal for the ratio is unity (a value of 1).
We also calculated the median of the medians for each of the 56 neurons. Because some neurons have more axonal branches than others, a neuron with many more branches than the rest could skew the results of the entire population of branches taken independently. We controlled for this by calculating the median ratio for the branches of each neuron and then taking the median of those values across the 56 neurons. The calculated Refraction Ratio again approached the theoretical ideal, with a value of 0.91.
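The median-of-medians control just described can be sketched in a few lines of Python. The branch ratios below are made-up numbers for illustration, not the lab's data:

```python
from statistics import median

# Each neuron contributes the median ratio over its own branches, so a
# neuron with many branches cannot dominate the population estimate.
# All values here are invented for illustration.
branch_ratios_by_neuron = {
    "neuron_1": [0.90, 0.93, 0.95, 0.88],      # a few branches
    "neuron_2": [0.91, 0.92],
    "neuron_3": [0.85] * 50 + [0.95] * 10,     # many more branches
}

per_neuron_medians = [median(ratios)
                      for ratios in branch_ratios_by_neuron.values()]
population_ratio = median(per_neuron_medians)  # median of the medians
```

Here `neuron_3` has ten times as many branches as the others, but its sixty data points collapse to a single median before the population value is computed.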
The class of neuron we looked at has evolved to maintain a nearly perfect Refraction Ratio as a way of optimizing its signaling dynamics and information processing.
We are starting to test how the Refraction Ratio acts as an optimization principle in networks in the brain at larger scales of organization.
The reduced experimental model of the brain we are using to explore networks of neurons is the human brain organoid. Starting with skin fibroblast cells, we use a carefully optimized chemical cocktail to de-differentiate them back into a stem cell state and then re-differentiate them into neurons.
These transformed neurons then naturally start forming connections and organize themselves into a pin-sized structure in a culture dish that spontaneously resembles a simple brain, both anatomically and functionally. It is an amazing, individually personalized model of the brain, and it has enough functional complexity to explore and calculate the Refraction Ratio. We are studying this in both neurotypical individuals and different patient populations.
We are also measuring the Refraction Ratio in humans at the scale of the entire brain across networks of brain regions.
We have designed a visual task to test if the brain slows down visual information that needs to travel between cortical hemispheres relative to information processed only on one side. This will allow us to test if the Refraction Ratio is also preserved at the whole brain scale.
Preliminary data suggest that it is.
To make the necessary measurements we use a unique and highly sophisticated brain imaging machine that records magnetoencephalograms (MEG). Orthogonal to every electric field, like the one produced by the electrical activity of the brain, is a magnetic field. The MEG measures the magnetic fields produced by the brain. But because they are so small - about a billionth the strength of the Earth's magnetic field - the instrument needs to be incredibly sensitive and heavily shielded, or the brain's fields simply get lost in the much larger magnetic field of the Earth.
MEG has sub-action potential temporal resolution, comparable to electroencephalography (EEG), but also has MRI scale spatial resolution. (EEG has great temporal resolution but poor spatial resolution.)
The MEG machine at UCSD - one of only three on the West Coast of the United States - is the most shielded MEG in the world. It sits behind a six-layer Faraday cage that weighs 26 tons.
New Machine Learning.
As described above, we are leveraging what we are learning about how the brain's algorithms optimize for the Refraction Ratio to develop a conceptually new approach to data encoding and machine learning: algorithms that can encode and classify information and data without prior training.
A piece of input data, like a picture, is transformed into a vector that then triggers activations of a subset of the input layer of a geometric artificial neural network. In our version of a neural network, properties like distances, latencies, and the Refraction Ratio matter, similar to how they are important in the brain.
The triggered input neurons set off a cascading dynamic through the geometric network that 'carves' out a path unique to the input data that triggered it. These dynamic patterns are then encoded as a single point in a metric space. And because it is a metric space, distances between points can be measured, and back-end classification algorithms (such as k-nearest neighbors, or even other neural networks) can be used to determine how related different points in the metric space are to each other.
Note that no training is required. If all you have is one piece of data, that encodes one point. If you have two, that's two points. If you have a million, you'll have a well-populated metric space to do post-encoding analyses on. There is learning taking place, using spike timing dependent plasticity (STDP) at the level of the network's weights as the dynamics converge to steady state, which takes a few hundred milliseconds. But there is no need for training on a dataset.
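A toy Python sketch of the encode-then-classify idea follows. The encoder here is a deliberately trivial stand-in: in the real system the point comes from the geometric network's dynamics, so the `encode` function, the feature vectors, and the labels below are all hypothetical placeholders:

```python
from math import dist

# Toy illustration of encode-then-classify (not the lab's network).
# Assume some encoder maps each input to a point in a metric space;
# classification is then nearest-neighbor distance in that space, with
# no training pass over a dataset.

def encode(features):
    """Hypothetical stand-in for the network's dynamic encoding:
    maps a feature vector to a point (mean, range) in 2-D."""
    return (sum(features) / len(features), max(features) - min(features))

# Each piece of data seen so far contributes exactly one point.
encoded = {
    "cat_photo_1": encode([0.9, 0.8, 0.85]),
    "dog_photo_1": encode([0.2, 0.3, 0.25]),
}

# A new input is encoded the same way, then related to existing points
# purely by metric distance (1-nearest-neighbor).
query = encode([0.88, 0.82, 0.80])
nearest = min(encoded, key=lambda name: dist(encoded[name], query))
```

With one stored example per class this is already a classifier; adding more data only adds more points to the space, which is why no separate training phase is needed.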
Image recognition tasks with no training have achieved classification accuracies as high as 97% in current results.
From an engineering perspective, we are interested in developing machines capable of inference and problem solving that can constructively collaborate with humans.
From an application perspective, we are particularly interested in computational efficiency, space research and exploration, and national security applications. In all of these use cases, limited computational resources, incomplete and sparse data insufficient for training, and the need for genuine human-machine collaboration will be critical.
"Science does not know its debt to imagination."