20180919

Calculating the entropy of the Poisson distribution

The entropy of the Poisson distribution has no closed form. In nats, it is:

$$ \begin{aligned} H(\lambda) &= - \sum_{k=0}^{\infty} \Pr(k) \log\Pr(k) \\ &= - \sum_{k=0}^{\infty} \frac {\lambda^k e^{-\lambda}}{k!} \left[ k \log(\lambda) - \lambda - \log(k!) \right] \\ &= \lambda[1 - \log(\lambda)] + e^{-\lambda}\sum_{k=0}^{\infty} \frac{\lambda^k\log(k!)}{k!} \end{aligned} $$

These notes evaluate some numerical approximations to the entropy.

For high firing rates, the Poisson distribution becomes approximately Gaussian with $\mu=\sigma^2=\lambda$. Using the formula for the entropy of a Gaussian, this implies

$$ H(\lambda) = \tfrac 1 2 \log(2 \pi e \lambda) + \mathcal{O}\left(\tfrac 1 \lambda \right) $$

Compare this to the first few terms of the asymptotic series expansion, which is increasingly accurate for large $\lambda$, but diverges for small $\lambda$:

$$ H(\lambda) = \frac{1}{2}\log(2 \pi e \lambda) - \frac{1}{12 \lambda} - \frac{1}{24 \lambda^2}-\frac{19}{360 \lambda^3} + O\left(\frac{1}{\lambda^4}\right) $$

Another way to calculate the entropy for low rates is to evaluate the defining sum directly, truncated to a finite number of terms. For low rates, the probability of $k> \lambda + 4\sqrt{\lambda}$ events is negligible, so we only need to sum a small number of terms.

$$ H(\lambda) \approx - \textstyle\sum_{k=0}^{\lceil \lambda + 4\sqrt{\lambda} \rceil} \frac {\lambda^k e^{-\lambda}}{k!} \left[ k \log(\lambda) - \lambda - \log(k!) \right] $$

Numerically, you can use the truncated sum for $\lambda<1.78$, taking terms out to $k\le\lceil \lambda + 4\sqrt{\lambda} \rceil$. For $\lambda\ge 1.78$, the third-order asymptotic approximation for large $\lambda$ becomes accurate.

$$ H(\lambda) \approx \begin{cases} - \textstyle\sum_{k=0}^{\lceil \lambda + 4\sqrt{\lambda} \rceil} \frac {\lambda^k e^{-\lambda}}{k!} \left[k \log(\lambda) - \lambda - \log(k!)\right] & \lambda<1.78\\ \frac{1}{2}\log(2 \pi e \lambda) - \frac{1}{12 \lambda} - \frac{1}{24 \lambda^2}-\frac{19}{360 \lambda^3} & \lambda\ge 1.78 \\ \end{cases} $$

The relative error of this approximation is $|\hat H-H|/|H|<0.0016$.
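
This piecewise approximation is short to implement. A minimal Python sketch (the function name is my own; `gammaln(k+1)` computes $\log k!$ stably):

```python
import numpy as np
from scipy.special import gammaln   # gammaln(k + 1) = log(k!)

def poisson_entropy(lmbda):
    """Entropy (nats) of a Poisson distribution with rate lmbda > 0."""
    if lmbda < 1.78:
        # Truncated direct sum: terms above k = ceil(lmbda + 4*sqrt(lmbda))
        # contribute negligibly at low rates.
        k    = np.arange(int(np.ceil(lmbda + 4*np.sqrt(lmbda))) + 1)
        logp = k*np.log(lmbda) - lmbda - gammaln(k + 1)   # log Pr(k)
        return -np.sum(np.exp(logp)*logp)
    # Third-order asymptotic expansion, accurate for larger rates.
    return (0.5*np.log(2*np.pi*np.e*lmbda)
            - 1/(12*lmbda) - 1/(24*lmbda**2) - 19/(360*lmbda**3))
```

For example, `poisson_entropy(1.0)` is within the stated error bound of the true value $H(1)\approx 1.3048$ nats.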

20180502

An alternative form of the Kalman update

The Kalman filtering update as presented on Wikipedia is inefficient when the latent-space dimension $n$ is larger than the observation-space dimension $m$. This is because the covariance update involves matrix multiplications in the full latent space, which have $\mathcal{O}(n^3)$ complexity. Here is a faster way (all notation as on Wikipedia):

\begin{align} C_k &\gets H P_{k|k-1}\\ S_k &\gets R_k + C_k H^\top\\ K_k &\gets ( S_k^{-1} C_k )^\top\\ P_{k|k} &\gets P_{k|k-1} - K_k C_k \end{align}

where $C_k = H P_{k|k-1}$ is an $m\times n$ intermediate value. Complexity is limited by the $\mathcal{O}(m n^2)$ multiplication in the last step. This is in contrast to the standard update, which includes four $\mathcal{O}(n^3)$ multiplications in the covariance update. A speed-up of $\mathcal{O}(n/m)$ may seem minor, but this form of the update also improves constant factors, and the speed-up was substantial in my application. The Technical Note on Manipulating Multivariate Gaussian Distributions may be helpful.
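
As a concrete sketch, here is the measurement update in NumPy (the function name and argument order are my own; `solve` is used rather than forming $S_k^{-1}$ explicitly):

```python
import numpy as np

def kalman_update_fast(P, H, R, x, y):
    """Measurement update in the reduced-dimension form:
    C = H P  (m x n),  S = R + C H^T,  K = (S^-1 C)^T,  P <- P - K C.
    P: n x n prior covariance, H: m x n observation matrix,
    R: m x m observation noise, x: prior mean (n,), y: observation (m,)."""
    C = H @ P                      # m x n intermediate, O(m n^2)
    S = R + C @ H.T                # m x m innovation covariance
    K = np.linalg.solve(S, C).T    # n x m gain, equals P H^T S^-1
    x_new = x + K @ (y - H @ x)    # posterior mean
    P_new = P - K @ C              # posterior covariance, O(m n^2)
    return x_new, P_new
```

Since $P$ and $S$ are symmetric, $(S^{-1}C)^\top = C^\top S^{-1} = P H^\top S^{-1}$, so this gain is identical to the standard one; only the order of operations changes.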

Rotating the latent space so that the projection $H$ can be computed simply by dropping rows/columns also leads to significant speed-ups, as it removes two further matrix multiplications. In this form, the update uses only one $\mathcal{O}(m^2 n)$ and one $\mathcal{O}(mn^2)$ operation, in contrast to the eleven matrix multiplications in a naive implementation of the standard covariance update.

20170915

Numpy scalar types cheat-sheet (x86-64)

Saving this as a note-to-self: the NumPy documentation is thorough, but unfriendly for rapid scanning. Here's a cheat-sheet for Linux x86-64.

  • bhil / BHIL : signed / unsigned int-8,16,32,64
  • efdg : float-16,32,64,80
  • FDG : complex 2×(32,64,80)
  • ? : Boolean
  • S# / U# : length-# byte / Unicode strings
  • On Intel x86-64 the signed/unsigned pointer-length types intp p / uintp P are int64/uint64, as are longlong q / ulonglong Q.
  • All types are little-endian < by default (prefix > to specify big-endian)
  • A numeral suffix gives the item size in bytes (e.g. i4 = int32, f8 = float64); a numeral prefix gives a repeat count, producing a sub-array ("vector") type (e.g. 3f4 is three float32s)
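
The table above can be checked interactively; here is a quick sanity script (assumes Linux x86-64, where the platform-dependent `l` code is 64-bit):

```python
import numpy as np

# Integer codes: lowercase signed, uppercase unsigned.
assert np.dtype('b') == np.int8  and np.dtype('B') == np.uint8
assert np.dtype('h') == np.int16 and np.dtype('i') == np.int32
assert np.dtype('l') == np.int64        # 64-bit on Linux x86-64

# Float codes; 'g' is the 80-bit x87 extended double on x86-64.
assert np.dtype('e') == np.float16 and np.dtype('f') == np.float32
assert np.dtype('d') == np.float64 and np.dtype('g') == np.longdouble

# Complex, Boolean, and fixed-length strings.
assert np.dtype('D') == np.complex128 and np.dtype('?') == np.bool_
assert np.dtype('S8').itemsize == 8     # 8-byte byte string

# Endianness prefix and repeat-count (sub-array) dtypes.
assert np.dtype('>i4').byteorder == '>' # big-endian int32
assert np.dtype('3f4').shape == (3,)    # three float32s per element
```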

20170126

Dissociation between sustained single-neuron spiking and transient β-LFP oscillations in primate motor cortex

Chapter two of my thesis has just been published! Rule et al. 2017 [PDF] explores the neurophysiology of beta (β) oscillations in primates, especially how single-neuron activity relates to population activity reflected in local field potentials (a.k.a. "brain waves").

Beta (~20 Hz) oscillations occur in frontal cortex. We've known about them for about a century, but still don't understand how they work or what they do. β-wave activity is related to "holding steady", so to speak. 

Beta oscillations are dysregulated in Parkinson's, in which movements are slowed or stopped. Beta oscillations are also reduced relative to slow-wave activity in ADHD, a disorder associated with motor restlessness and hyperactivity.

I looked at beta oscillations during movement preparation, where they seem to play a role in stabilizing a planned movement. I found that single neurons had very little relationship to the β-LFP brain waves. However! This appears to be for a good reason: the firing frequencies of neurons store information about the upcoming movement, and neurons firing at different frequencies cannot phase-lock together into a coherent population oscillation.

Anyone who's played in an orchestra knows that when notes are just slightly out of tune, you get interference patterns called beats. The same thing is happening in the brain, where many neurons firing at slightly different "pitches" cause β-LFP fluctuations, even though the underlying neural activity is constant.
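
The beating phenomenon is easy to reproduce numerically: summing two constant-amplitude sinusoids at nearby frequencies produces an envelope that waxes and wanes at the difference frequency (a toy sketch; the specific frequencies are illustrative, not fitted to data):

```python
import numpy as np

fs = 1000.0                          # sampling rate (Hz)
t  = np.arange(0.0, 5.0, 1.0/fs)     # five seconds

# Two constant-amplitude "neurons" oscillating at nearby beta frequencies:
x = np.sin(2*np.pi*20.0*t) + np.sin(2*np.pi*20.5*t)

# sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2): the sum is a 20.25 Hz
# carrier whose envelope |2 cos(pi*0.5*t)| fluctuates at the 0.5 Hz
# difference frequency, even though neither component changes amplitude.
envelope = np.abs(2*np.cos(np.pi*0.5*t))

print(envelope[0])      # in phase at t = 0: envelope is 2
print(envelope[1000])   # out of phase at t = 1 s: envelope is ~0
```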

This result provides a new explanation for how β-waves can appear as "transients" during motor steady-state: the fluctuations are caused by "beating", rather than by changes in the β activity of individual neurons. This differs from the prevailing theory for the origin of β transients in more posterior brain regions.

Many thanks to Carlos Vargas-Irwin, John Donoghue, and Wilson Truccolo. You can grab the PDF here. Please cite as

Rule, M.E., Vargas-Irwin, C.E., Donoghue, J.P. and Truccolo, W., 2017. Dissociation between sustained single-neuron spiking and transient β-LFP oscillations in primate motor cortex. Journal of Neurophysiology, 117(4), pp.1524-1543.


Optogenetic Stimulation Shifts the Excitability of Cerebral Cortex from Type I to Type II: Oscillation Onset and Wave Propagation

Our new paper, Stewart Heitmann et al. [PDF], is finally out! It's a collaboration between the theoretical neuroscientists Stewart Heitmann and Bard Ermentrout at the University of Pittsburgh, and the Truccolo lab at Brown University. 

This work could help us understand what happens when we stimulate cerebral cortex in primates using optogenetics. Modeling how the brain responds to stimulation is important for learning how to use this new technology to control neural activity.

Optogenetic stimulation elicits gamma (~50 Hz) oscillations, the amplitude of which grows with the intensity of light stimulation. However, traveling waves away from the stimulation site also emerge. It's difficult to reconcile oscillatory and traveling-wave dynamics in neural field models, but Heitmann et al. arrive at a surprising and testable prediction: 

The observed effects can be explained by paradoxical recruitment of inhibition at low levels of stimulation, which changes cortex from a wave-propagating medium to an oscillator. 

At higher stimulation levels, excitation overwhelms inhibition, giving rise to the observed gamma oscillations. Many thanks to Stewart Heitmann, Wilson Truccolo, and Bard Ermentrout. The paper can be cited as:

Heitmann, S., Rule, M., Truccolo, W. and Ermentrout, B., 2017. Optogenetic stimulation shifts the excitability of cerebral cortex from type I to type II: oscillation onset and wave propagation. PLoS Computational Biology, 13(1), p.e1005349.

Phase-triggered wave animation loops

The model predicts high-frequency (60-120 Hz) gamma oscillations within the stimulated area. It also predicts that these waves should propagate into the neighboring tissue, but at half this frequency. The result is that every other wave in the stimulated area "sheds" a traveling wave into the areas nearby.

A re-analysis of the data recorded in Lu et al. confirms that this phenomenon is also present in vivo. The GIFs below show the waves evoked by optogenetic stimulation. These loops represent an average of the phase-locked Local Field Potential (LFP) activity in a 4×4 mm patch of primate motor cortex. On the left, we see the broadband LFP, which contains contributions from both high- and low-frequency gamma oscillations. The middle and right panels show narrow-band filtered low- and high-gamma, respectively, revealing the 2:1 mode locking.

JavaScript demos of the neural field model

A JavaScript demo of the neural field model in this paper is available on GitHub. Move the slider at the base to change the intensity of the optogenetic stimulation.

I've also prepared a more comprehensive simulation that lets you explore how the neural field dynamics change as various parameters are altered.

Many thanks again to Stewart Heitmann, Wilson Truccolo, and Bard Ermentrout.