# On the Simulation of Large Populations of Neurons

I recently found a 2000 paper titled “On the Simulation of Large Populations of Neurons” by A. Omurtag, B. W. Knight, and L. Sirovich. This paper presents two different approaches to modeling large populations of neurons: one can simulate their behavior computationally, or one can describe their behavior with some clever math. Today I want to discuss the clever math.

Note: I just recently installed the MathJax WordPress plugin to display LaTeX, so until I get used to it some equations might show up a little weird.

I would like to start with a well-known equation from fluid dynamics, the continuity equation for an incompressible flow. The fundamental requirement of incompressible flow is that the density, $\rho$, is constant within an infinitesimal volume, $\mathrm{d}V$, which moves with velocity $\mathbf{v}$. The mass contained in a volume $V$ is then: $m = \int_V \rho \,\mathrm{d}V$

Conservation of mass requires that the time derivative of $m$ equal the net mass flux, $\mathbf{J} = \rho\mathbf{v}$, across the boundary surface $S$ of the volume: $\frac{\partial m}{\partial t} = -\oint_S \mathbf{J}\cdot\mathrm{d}\mathbf{S}$

Applying the divergence theorem grants us the following expression: $\frac{\partial m}{\partial t} = -\int_V (\nabla\cdot\mathbf{J})\,\mathrm{d}V$

Plugging the first equation for $m$ into the previous equation gives us: $\int_V \frac{\partial \rho}{\partial t}\,\mathrm{d}V = -\int_V (\nabla\cdot\mathbf{J})\,\mathrm{d}V$

Since this holds for any choice of volume, the integrands themselves must be equal, and we are left with the continuity equation: $\frac{\partial \rho}{\partial t} = -\nabla\cdot\mathbf{J}$
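To convince myself this conservation law survives discretization, here is a quick numerical toy of my own (not from the paper): a 1-D finite-volume scheme for $\partial\rho/\partial t = -\partial J/\partial x$ with $J = \rho u$ and a constant flow speed $u$. Because the update is written in flux-difference form on a periodic grid, total mass is preserved essentially exactly, mirroring the integral statement above.

```python
import numpy as np

# Toy illustration (my own, not from the paper): 1-D continuity equation
#   d(rho)/dt = -d(J)/dx,  J = rho * u
# discretized in conservative (flux-difference) form with upwind fluxes
# and periodic boundaries. Flux leaving one cell enters its neighbor,
# so the discrete total mass cannot change.

n, L = 200, 1.0            # number of grid cells, domain length
dx = L / n
u = 1.0                    # constant advection speed (u > 0)
dt = 0.4 * dx / u          # CFL-stable time step

x = (np.arange(n) + 0.5) * dx
rho = np.exp(-((x - 0.5) / 0.1) ** 2)   # initial Gaussian density bump
mass0 = rho.sum() * dx                   # initial total mass

for _ in range(500):
    J = rho * u                          # flux at cell centers
    J_upwind = np.roll(J, 1)             # flux entering from the left neighbor
    rho = rho - (dt / dx) * (J - J_upwind)  # flux out minus flux in

mass = rho.sum() * dx
print("mass drift:", abs(mass - mass0))  # near machine precision
```

The density bump simply advects around the periodic domain, but the total mass stays fixed to floating-point accuracy, which is the discrete analogue of $\partial m/\partial t = 0$ for a closed system.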

This result facilitates more interesting conclusions in fluid dynamics, an exercise I will leave to the reader.

The reason I bring this up is that in the paper the authors take a similar approach to analyzing large populations of neurons. Assuming that the number of neurons in the network is constant, one can write a continuity equation in the same fashion, although in the case of a neural network the quantity that flows is a density of neurons over a vector of state variables moving through phase space.
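In my own rough paraphrase (the symbols here are mine and may differ from the paper's notation), the analogy looks like this: each neuron occupies a point $\vec{v}$ in a phase space of state variables, and the population density $\rho(\vec{v}, t)$ obeys a continuity equation with a probability flux $\vec{J}$:

```latex
% Sketch of the population-density equation (my paraphrase, not verbatim
% from Omurtag, Knight & Sirovich): \rho(\vec{v},t) is the density of
% neurons at state \vec{v}, and \vec{J} is the flux of neurons through
% state space.
\begin{equation}
  \frac{\partial \rho(\vec{v}, t)}{\partial t}
    = -\nabla_{\vec{v}} \cdot \vec{J}(\vec{v}, t)
\end{equation}
```

Here $\vec{J}$ gathers the "streaming" of neurons through state space under their intrinsic dynamics together with the jumps caused by arriving synaptic input, playing the role that $\rho\mathbf{v}$ played in the fluid case.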

I find the connection here to be quite enlightening, yet somewhat forced at the moment; I still need to work with this approach to discover what merit it has. Just thought I'd share. I highly recommend looking through the paper.