20110513

Correlation, mean-squared-error, mutual information, and signal-to-noise ratio for Gaussian random variables

I was reading a paper and encountered a figure that showed the correlation, mutual information, and mean-squared prediction error for a pair of time series. This seemed a bit redundant; it turns out it was added to the paper at the request of a reviewer. If your data are jointly Gaussian, these all measure the same thing, so there is no need to clutter a figure by showing all of them.

For a jointly Gaussian pair of random variables, correlation, mean squared error, mutual information, and signal-to-noise ratio are all equivalent and can be computed from each other.

Some identities

Consider two time series $x$ and $y$ that can be well-approximated as jointly Gaussian. To simplify things, let $x$ and $y$ have zero mean and unit variance (the math still works out without this assumption, but it's also easy to ensure by z-scoring the data). Also, let $n$ be a zero-mean unit-variance Gaussian random variable that captures noise, i.e. fluctuations in $y$ that cannot be explained by $x$.

Let's say we're interested in a linear relationship between $x$ and $y$:

\[y = ax + bn.\]

The linear dependence of $y$ on $x$ can be summarized by a single parameter.

Since the signal and noise are independent, their variances combine linearly:

\[\sigma^2_{y} = a^2 \sigma^2_{x} + b^2 \sigma^2_{n}.\]

The sum $a^2+b^2$ is constrained by the variances of $x$, $y$, and $n$. In this example we've assumed these are all 1, so

\[a^2+b^2=1.\]

Incorporate this constraint by defining $\alpha=a^2$ and writing 

\[\sigma^2_{y} = \alpha \sigma^2_{x} + (1-\alpha) \sigma^2_{n}\]

and

\[y = x\sqrt{\alpha} + n\sqrt{1-\alpha}.\]

(We'll show later that $\alpha$ is the squared Pearson correlation coefficient, i.e. it is the coefficient of determination.)
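
As a quick numerical sanity check, here is a minimal sketch of this construction in Python/NumPy (the value of $\alpha$, the sample size, and the random seed are arbitrary choices): mixing independent unit-variance signal and noise with gains $\sqrt{\alpha}$ and $\sqrt{1-\alpha}$ yields a $y$ with unit variance whose squared correlation with $x$ is close to $\alpha$.

import numpy as np

rng = np.random.default_rng(0)
alpha, N = 0.75, 100_000

x = rng.standard_normal(N)                  # zero-mean, unit-variance signal
n = rng.standard_normal(N)                  # independent zero-mean, unit-variance noise
y = np.sqrt(alpha) * x + np.sqrt(1 - alpha) * n

print(np.var(y))                            # ~ 1, since alpha + (1 - alpha) = 1
print(np.corrcoef(x, y)[0, 1] ** 2)         # ~ alpha (alpha = rho^2, shown below)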

From this, the signal-to-noise ratio and mutual information can be calculated

The signal-to-noise ratio (SNR) is the ratio of the signal and noise contributions to $y$, and simplifies as

\[\text{SNR}=\frac{\sigma^2_{a x}}{\sigma^2_{b n}}=\frac{\alpha \sigma^2_x}{(1-\alpha) \sigma^2_n}=\frac{\alpha}{1-\alpha}.\]

For jointly Gaussian variables, the mutual information $I$ (in bits, if using $\log_2$) is a monotonic function of the SNR, and simplifies as:

\[I=\frac{1}{2}\log_2(1+\text{SNR})=\frac{1}{2}\log_2{\frac{\sigma^2_y}{\sigma^2_{b n}}}=\frac{1}{2}\log_2{\frac{\sigma^2_y}{(1-\alpha)\sigma^2_n}}=\frac{1}{2}\log_2{\frac{1}{1-\alpha}}.\]
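
For example, $\alpha = 3/4$ gives $\text{SNR} = 3$ and $I = \frac{1}{2}\log_2 4 = 1$ bit.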

Relationship between $a$, $b$, $\alpha$, and Pearson correlation $\rho$

Since $x$ and $n$ are independent, the samples of $x$ and $n$ can be viewed as an orthonormal basis for the samples of $y$, with weights $a$ and $b$, respectively. This relates the gain parameters to correlation: the tangent of the angle $\theta$ between $y$ and $x$ is just the ratio of the noise gain $b$ to the signal gain $a$:

\[\tan(\theta)=\frac{b}{a}=\frac{\sqrt{1-\alpha}}{\sqrt{\alpha}}\]

Since the cosine of the angle between two zero-mean, unit-variance variables is their correlation coefficient, $\cos(\theta)=\rho$, and $\tan(\theta)$ can be expressed in terms of $\rho$:

\[\tan(\theta)=\frac{\sin(\theta)}{\cos(\theta)}=\frac{\sqrt{1-\cos^2(\theta)}}{\cos(\theta)}=\frac{\sqrt{1-\rho^2}}{\rho}\]

This implies that

\[\frac{\sqrt{1-\alpha}}{\sqrt{\alpha}}=\frac{\sqrt{1-\rho^2}}{\rho},\]

which implies that $\alpha=\rho^2$, i.e. $a=\rho$. 

A few more identities

This can be used to relate correlation $\rho$ to SNR and mutual information:

\[\text{SNR}=\frac{\rho^2}{1-\rho^2}\] 

\[I=\frac{1}{2}\log_2{\frac{1}{1-\rho^2}}=-\frac{1}{2}\log_2(1-\rho^2)\]

If $\phi=\sqrt{1-\rho^2}$ is the correlation of $y$ and the noise $n$ (i.e. $\phi$ is the amplitude of the noise contribution to $y$), then information is simply $I=-\log_2(\phi)$. 
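
(Continuing the example above: $\rho^2 = \alpha = 3/4$ gives $\phi = 1/2$ and $I = -\log_2(1/2) = 1$ bit.)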

The mean squared error (MSE) between $y$ and $x$ (using $x$ directly as a prediction of $y$) is also related:

\[\text{MSE}=\operatorname E\!\left[(y-x)^2\right]=(1-\rho)^2+(1-\rho^2)=2-2\rho=2(1-\rho),\]

which implies that

\[\rho=1-\frac{1}{2}\text{MSE},\]

and gives a relationship between mutual information and mean squared error:

\[I=-\frac{1}{2}\log_2(1-\rho^2)=-\frac{1}{2}\log_2\!\left(1-(1-\text{MSE}/2)^2\right)\]
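
For example, $\rho = 1/2$ gives $\text{MSE} = 1$ and $I = -\frac{1}{2}\log_2(3/4) \approx 0.21$ bits.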

Correlation $\rho$, mutual information, and signal-to-noise ratio all increase together as the dependence between $x$ and $y$ strengthens, while mean squared error decreases; each is a monotonic function of the others. They all summarize the relatedness of $x$ and $y$. For purposes such as ranking a collection of signals $x$ by how much they tell us about $y$, they are equivalent.
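
Here is a minimal sketch that checks these identities on simulated data (again Python/NumPy; the value of $\alpha$, the sample size, and the seed are arbitrary choices, and MSE is computed with $x$ itself as the prediction of $y$, as above):

import numpy as np

rng = np.random.default_rng(1)
alpha, N = 0.6, 100_000

x = rng.standard_normal(N)
n = rng.standard_normal(N)
y = np.sqrt(alpha) * x + np.sqrt(1 - alpha) * n

rho = np.corrcoef(x, y)[0, 1]
mse = np.mean((y - x) ** 2)
info = -0.5 * np.log2(1 - rho**2)

print(mse, 2 * (1 - rho))                            # MSE = 2(1 - rho)
print(rho, 1 - mse / 2)                              # rho = 1 - MSE/2
print(info, -0.5 * np.log2(1 - (1 - mse / 2) ** 2))  # I from rho vs. I from MSE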

20110413

Limit of an infinite chain of first-order exponential smoothers

First-order exponential smoother

The simplest model of how the voltage $x$ at a synapse responds to input $u$ is a first-order filter:

$$\tau \dot x = -x + u.$$

This corresponds to convolving the signal $u(t)$ with the exponential filter $\frac{1}{\tau}\operatorname H(t) \exp(-t/\tau)$, where $\operatorname H(\cdot)$ is the Heaviside step function:

$$\begin{aligned}x(t) &= h(t) * u(t)\\h(t)&=\frac{1}{\tau}\operatorname H(t) \exp(-t/\tau).\end{aligned}$$
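
As a minimal numerical sketch (the time step, $\tau$, and the step input are arbitrary choices), forward-Euler integration of the ODE agrees with discretized convolution against this exponential kernel:

import numpy as np

dt, tau = 0.001, 0.05
t = np.arange(0, 1, dt)
u = (t > 0.2).astype(float)                 # a step input at t = 0.2

# Forward-Euler integration of tau * dx/dt = -x + u
x = np.zeros_like(u)
for k in range(1, len(t)):
    x[k] = x[k-1] + (dt / tau) * (-x[k-1] + u[k-1])

# Discretized convolution with h(t) = H(t) exp(-t/tau) / tau
h = np.exp(-t / tau) / tau
x_conv = np.convolve(u, h)[:len(t)] * dt

print(np.max(np.abs(x - x_conv)))           # small; discretization error only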

The alpha function

A first-order filter has a discontinuous jump in its response to abrupt inputs (like spikes). A more realistic response is the "alpha function" $t\cdot \exp(-t)$. The alpha function can be obtained by convolving two exponential decay functions (i.e. chaining together two first-order filters):

$$\begin{aligned}\tau \dot x_1 &= -x_1 + u\\\tau \dot x_2 &= -x_2 + x_1.\end{aligned}$$

This is sometimes written in the compact notation

$$ \left(\tau \frac{d }{dt} + 1\right)^2 x = u.$$
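
As a minimal sketch of this cascade (the time step and $\tau$ are arbitrary choices), driving two chained first-order stages with a discrete impulse reproduces the alpha function, which for time constant $\tau$ is $(t/\tau^2)\exp(-t/\tau)$:

import numpy as np

dt, tau = 0.0005, 0.05
t = np.arange(0, 1, dt)

u = np.zeros_like(t)
u[0] = 1.0 / dt                             # discrete approximation of an impulse

# Two chained first-order stages, integrated with forward Euler
x1 = np.zeros_like(t)
x2 = np.zeros_like(t)
for k in range(1, len(t)):
    x1[k] = x1[k-1] + (dt / tau) * (-x1[k-1] + u[k-1])
    x2[k] = x2[k-1] + (dt / tau) * (-x2[k-1] + x1[k-1])

alpha_fn = (t / tau**2) * np.exp(-t / tau)  # analytic impulse response
print(np.max(np.abs(x2 - alpha_fn)))        # small; discretization error only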

Higher orders

You can repeat this operation many times, obtaining responses with increasing smoothness. The family $t^n\cdot \exp(-t)$ reflects $n+1$ feed-forward variables coupled by exponential decay $\dot x_n=x_{n-1}-x_n$. The integral of $t^n \exp(-t)$ grows with $n$. To normalize, divide by $n! = \Gamma(n+1)$: 

$$h(t) = \operatorname H(t)  \frac {t^n}{\Gamma(n+1)}e^{-t}.$$



The response of $t^n\exp(-t)$ peaks at time $t=n$. Rescale time with $t\gets nt$ to get a peak response at $t=1$. To keep the integral of the response normalized when rescaling, multiply by $n$.

$$h(t) = \operatorname H(t) n \frac {(nt)^n}{\Gamma(n+1)}e^{-nt}$$


This is equivalent to choosing a time constant $\tau=1/n$ for each of the $n+1$ filtering stages. To place the peak response at time $t_0$, set $\tau = t_0/n$. 

This corresponds to a gamma distribution with $k=n+1$ and $\theta=1/n$. For large $n$ this approximates a Gaussian with $\mu=\frac {n+1}{n}$ and $\sigma^2=\frac{n+1}{n^2}$. As $n\to\infty$ this converges to a Dirac delta (impulse) centered at $t=1$. 

To stabilize the variance instead of time-to-peak, rescale time by $1/\sqrt{n+1}$. This corresponds to a gamma distribution with $k=n+1$ and $\theta=1/\sqrt{k}$. The time-to-peak in this case diverges as $n\to\infty$. 
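
A minimal sketch of the time-rescaled kernel (the values of $n$ and the time grid are arbitrary choices): each kernel integrates to one, peaks at $t = 1$, and narrows around $t = 1$ as $n$ grows, consistent with convergence toward an impulse.

import numpy as np
from math import lgamma

dt = 0.0005
t = np.arange(dt, 10, dt)

for n in [1, 4, 16, 64]:
    # h(t) = n (n t)^n exp(-n t) / Gamma(n + 1), evaluated in log space
    h = np.exp(np.log(n) + n * np.log(n * t) - n * t - lgamma(n + 1))
    area = np.sum(h) * dt                            # ~ 1
    peak = t[np.argmax(h)]                           # ~ 1
    spread = np.sqrt(np.sum(h * (t - 1) ** 2) * dt)  # shrinks as n grows
    print(n, area, peak, spread)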

20110223

ZSH Colored prompt

Just a quick post, because this was driving me crazy: how do you get a colored prompt in zsh on Ubuntu?

Various suggestions online seemed to fail. The escape sequences for getting colored text seemed to vary wildly from guide to guide, and none of them worked for me.

I followed the instructions here for loading one of the default prompts, e.g.:
autoload -U promptinit
promptinit
prompt -l
prompt bigfade
Then, I typed
echo $PROMPT
to see what the proper escape sequences were (they didn't look like anything provided online), e.g.:
%B%F{blue}█▓▒░%B%F{white}%K{blue}%n@%m%b%k%f%F{blue}%K{black}░▒▓█%b%f%k%F{blue}%K{black}█▓▒░%B%F{white}%K{black} %D{%a %b %d} %D{%I:%M:%S%P}
%}%B%F{yellow}%K{black}/home/mrule>%b%f%k 
So, apparently %B gives you bold, %F{colorname} sets the foreground color, and %b and %f return these to defaults? Anyway, this will give a blue prompt:
%B%F{blue}%~$ %b%f

20110205

3D Printed Polyhedral Lamp

These are instructions for building a very bright lamp with 20 bulbs and a truncated icosahedral core. [Thingiverse entry].


Parts:
Materials:
  • electrical tape
  • super-glue (I used Gorilla brand)
Tools:
  • Pliers
  • 3D printer
  • razor knife
  • wire cutters
  • wire strippers
  • Phillips-head screwdriver
Assembly:

First print out the indicated quantity of all printed parts.

More detailed assembly instructions for the lamp socket brackets can be found on the Thingiverse page. Trim the bracket until the black socket rests flush inside. This is important, since we need the hexagonal cover plate to bond to both the bracket and the socket for a good fit.
The orientation of the socket within the bracket will matter later. The socket has a wide ridge. Align this ridge with a side of the bracket for 10 pieces. Align the ridge with a corner of the bracket for the other 10. Aligning randomly also works, as long as you don't align all sockets so that the wide parts face a side.

Print out 12 pentagonal pieces. All pieces have extra plastic to stabilize the hinge while printing. This can be removed easily with a razor knife.

Perform a test assembly with just the hexagonal pieces. Leave out the pentagons for now since they are hard to remove once assembled. Ensure that all light sockets fit properly and don't collide. You may have to experiment, rotating and swapping between pieces, to get everything to fit well. If all else fails you can tap apart one of the brackets and re-orient it.

Carefully unfold your test assembly into an as-linear-as-possible planar arrangement like below. The exact arrangement doesn't really matter, just so long as there isn't too much branching.

The lamp sockets clip onto 12 to 14 gauge electrical wire. The only 12 gauge wire I could find had insulation too thick to work with these sockets. I used 16 gauge wire instead, which just barely works. Using scissors or a knife, separate one end of the lamp cord. Protect the ends with electrical tape. Starting at the far end, clamp the sockets to the cable in turn. The sockets are difficult to close, so I had to use pliers to get enough force.

Before you get excited and attach the plug to test everything, slide the pentagonal hook piece onto the cable. The top of the printed piece should face away from the assembly, toward the plug. I neglected to do this, and had to disassemble my plug to add this piece.

To assemble the plug, use needle-nose pliers to remove the orange stopper from the front of the plug. Remove the prongs. Thread the lamp cord through. Split and strip about 13mm from the end of each wire. Wrap the exposed wire around the bolts attached to the prongs, and tighten the bolts well. Replace the prongs and stopper.

Test each of your sockets. Turn everything over and plug in some lightbulbs. I did it the dangerous way, adding and removing bulbs (I only had 2 at the time) while the thing was plugged in. People who don't want to die should unplug the setup while moving the bulbs. Better yet, order the bulbs with the rest of your parts and put them all in at once to test.

The next step is tricky. Unplug the setup and remove the bulbs. Turn over the setup. You are going to need to fold the pieces back into the polyhedral shape. The lamp cord is inflexible and resists folding, but bending each joint beforehand helps. Adding in the pentagons while folding provides more stability. As the polyhedron becomes more complete, it becomes more difficult to add pieces. If you're having trouble getting a hinge to mate, slightly pry up the side that is already in the polyhedron. The hinges come together more easily if pushed together from the side rather than pushed down from above.

When it was all done, the compressed cable had overpowered the super-glue on a couple of brackets; thankfully this mistake is easily fixed with more super-glue and some patience. You should end up with an object that looks more than a little bit like the detonation mechanism for an atomic bomb. The final assembly is very strong and the hinges will hold together without additional glue.

The last piece you'll insert is the one that contains the power cord and the rope or chain for hanging the lamp. Attach the rope or chain before you add this piece. Don't use polypropylene rope like I did; it doesn't hold knots. A chain would look nicer anyway.
That's it. You're done. Hang the lamp somewhere, insert bulbs, and power up your own miniature sun.

20110104

Subject 3, Trial 4, "Walk on uneven terrain"

CMU motion capture dataset
Subject 3, Trial 4, "Walk on uneven terrain"
100 overlaid walkers
scale: Uniform(0.4, 1.0), offset: Gaussian(μ = 0, σ = [1 2 3 4 6 8])