20100422

Frequency Space Evolution of Pattern Forming Systems

Magnitudes of Fourier coefficients for the evolution of Wilson-Cowan pattern formation undergoing periodic stimulation. First: the stripe-forming domain converges to a single orientation in frequency space; second: the hexagon-forming domain converges to.. well, hexagons.







Videos from today








20100421


... spontaneously active ? No structure, no noise, just heterogeneity. Basically just two populations (e&i) of Izhikevich spike-frequency-adaptation neurons with heterogeneous parameters. Without heterogeneity the system is... less interesting ( regular spikes or no spikes at all ). Heterogeneous parameters may be better than injected noise in some situations ?


class CustomNeuron(Model):
    def __init__(self,model_id=0,a=None,b=None,c=None,d=None,name="Izhikevich",noise=False):
        Model.__init__(self,name)
        newVar(self,'a','',lambda n:numpy.array(uniform(0.009,0.011)(n)).astype(numpy.float32))
        newVar(self,'b','',lambda n:numpy.array(uniform(0.19,0.21)(n)).astype(numpy.float32))
        newVar(self,'c','',lambda n:numpy.array(uniform(-70.0,-60.0)(n)).astype(numpy.float32))
        newVar(self,'d','',lambda n:numpy.array(uniform(7.0,9.0)(n)).astype(numpy.float32))
        newVar(self,'v',deqn('v','DT*(0.04*v*v+5*v+140-u+input_current)'),
               lambda n:numpy.array(uniform(-50,-40)(n)).astype(numpy.float32))
        newVar(self,'u',deqn('u','a*(b*v-u)'),lambda n:numpy.zeros(n).astype(numpy.float32))
        self.spike_condition = 'v>30'
        self.reset = ('d_v[idx_state] = c; d_u[idx_state] = u+d;')
        self.dont_reset = ''

I'm contemplating generating a blog page for every experiment I do as a sort of online lab notebook.

20100420

I miss the old Apple black and white minimalist GUI.

I think this is what a segfault looks like in PyOpenCL


RuntimeError: clEnqueueReadBuffer failed: out of resources
WARNING: Failure executing file:
Python 2.5.2 (r252:60911, Jan 20 2010, 23:33:04)
Type "copyright", "credits" or "license" for more information.

IPython 0.8.4 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.

In [1]: quit()
PyOpenCL WARNING: a clean-up operation failed (dead context maybe?)
clReleaseMemObject failed: out of resources

notes from attempting to generate upstates :

Upstates/downstates are synchronous in that the population transitions in unison. Upstates are asynchronous in that we don't want simple global oscillations. Inhibition must be able to prevent runaway excitation ( to some extent ) to make upstates semi-stable. In the firing rate model, insufficient inhibition causes the mean population current to resemble spikes. Slow, modulatory envelopes of gamma oscillations might be caused by a mismatch between the refractory / recovery timescale and the excitation timescale. Can a network have bistable character even if the individual neurons do not ? Results from firing rate models seem to indicate yes, but I have not been able to reproduce this in spiking models.

for some parameters :
even though upstates have variable length, the time between them is constant and characterized by the time-to-recovery from inhibition. The transition to upstate sometimes involves the emergence of small oscillations which gradually amplify. In well connected populations of e-i cells, the population dynamics often resemble single neuron dynamics, with spiking, refractoriness, and bursting characteristics. Random connections with random weights seem sufficient to generate chaotic up-down state switching in firing rates with either a single population or separate e and i cells. With separate e-i populations upstates seem to have more variability in the field potential amplitude ( but this may just be for the select amplitude ranges investigated ). See if you can generalize this to spiking models.
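
a minimal sketch of the kind of rate model described above : one population, random weights, a saturating rate function, and a slow adaptation variable. The parameters are illustrative guesses, not ones known to produce up/down switching.

import numpy as np

def simulate(N=200, T=20000, dt=1.0, seed=0):
    rng = np.random.RandomState(seed)
    W = rng.randn(N, N) / np.sqrt(N) * 3.0      # random recurrent weights
    r = rng.rand(N) * 0.1                       # firing rates
    a = np.zeros(N)                             # slow adaptation variable
    tau_r, tau_a, g_a = 10.0, 500.0, 1.0        # fast rates, slow adaptation
    rates = np.empty((T, N), dtype=np.float32)
    for t in range(T):
        drive = W.dot(r) - g_a * a + 0.5        # recurrent input minus adaptation
        r += dt / tau_r * (-r + 1.0 / (1.0 + np.exp(-drive)))
        a += dt / tau_a * (-a + r)              # adaptation slowly tracks the rate
        rates[t] = r
    return rates

rates = simulate()
print(rates.mean(axis=1)[::1000])               # population rate : look for up/down switching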

states observed in a locally connected firing rate model :

all-on
all-off
global oscillations
spiral waves
noisy
spatially smoothed noise
travelling plane waves
static reaction diffusion patterns
orbits between N=2,3,4 reaction diffusion like patterns

e i noise adaptation driving
on + - -
off -
noise +
smooth + + +
travel + + +
spiral + +
global + + +
pattern + + -
shift + + +


since these seem to cover the parameter space, speculate as to which modes are most similar to asynchronous, chaotic, up-down state dynamics ? :

            propagating  spontaneous  aperiodic  dynamic
on               -            +           -         -
off              -            -           -         -
pattern          -            +           -         -
noise            -            -           +         +
spiral           +            +           +         +
travel           +            +           -         +
bistable         -            +           -         +
smooth           +            -           +         +
oscillation      -            +           -         +


three likely candidates :
The spiral waves seem promising. Travelling waves seem promising if they can be disrupted to become aperiodic and chaotic. Smoothed noise is promising if it can be created without explicit noise driving. Adding some long range connections to the local model does trippy things. Can we represent the e-i network as graph flow ? The notion of e-local connectivity versus i-local connectivity ? Observations : purely random connectivity without adaptation seems to have two modes :
a static mode with weak inhibition
an oscillator mode with strong inhibition and excitation
Add adaptation into this mix and you get chaotic up/down-like behavior.
The dynamics of a well connected e-i system very often resemble the dynamics of a simple 2D dynamical system.

disorganized notes

poggio and seung : connections between hypercolumns
lateral connectivity and receptive fields in V1, automatic organization
mesoscale cortical organization

systems level organization of cortex and how this constrains what information "space" is likely to be represented in a given region
understanding the coding "space" of specific brain areas
Brodmann areas : different, and different for a reason ?
understanding the communication between different patches of cortex
understand the information transformations that a given patch of cortex can effect
understanding what the variations in cortical architecture indicate about computational capabilities
define the cortical surface automorphism ( an N vector on a 2D manifold embedded in 3-space )
invert the neural code to generate stimuli that can reproduce a given neural state
make a better model of hallucinations and the flicker geometric hallucinations
dynamical systems model of basal ganglia / motor control system

idea :
patch
computational entity
computational function
not linear per-se, some sort of information mapping
maybe a "rotation" like function that respects the local 2D organization
in addition to some computing nonlinearities
each patch of cortex represents some psychological space
which is strongly related to classical notions of space
that is, co-ordinate information relative to various degrees of freedom of the body and the environment
so, for all behaviorally relevant spaces, you have an embedding of that space somewhere in cortex
connected to other regions that provide information about that space
and require information about that space
and perhaps you align these representations based on prior experience to fill in missing data
it's like, I want to perform a particular operation like convolution
but instead of brute forcing
I take the FFT
use a simpler transformation
then invert the FFT again to get the convolved result
there is some computation that cortex is good at doing
and mappings between regions transform spaces into representations that are natural for this computation

Marr's theory of cortex :
(un?)supervised learning of the statistical structure of its inputs.

fingerprint notes

Use a stripe forming system with a hard boundary at the nail and at the joint, and a soft boundary on the wrap around to the back of the finger. The boundary conditions stabilize three orientations of stripes, which must be reconciled in the middle. Whorls happen when there are too many stripes on one side to align properly.

Hardware adaptation to turn any tablet PC into an augmentative communication device :

- Two channel audio out can be split into a whisper speaker & output speaker pair.
Use software to control the left/right channels of a stereo audio output independently. Enforce different volume levels either in software or with hardware. Ideally you would want to use the built-in computer speaker, I think, for voice output. That's probably a little more tricky.

- Audio in ( microphone ) can serve as switch input. If microphone input is stereo, then two switch inputs may be possible.
This modification will require some sort of power source. I suggest using the USB ports for a small amount of power. The microphone inputs can be pulled low when the switch is open, and driven high when the switch is closed. This will create reliable switch input.

- A full-screen application running on the tablet PC touchscreen can fully duplicate dynavox system functionality. Full access to PC capabilities increases the flexibility of the device. Objects such as a USB infrared control module and additional microphone inputs could be added. I assume by default that the thing will run linux, although getting all the drivers working properly for linux on a tablet PC might be "fun"

some math related to odor binding

the Hill equation for simple binding
b=x/(x+K)

can be inverted
x/K=b/(1-b)

if x/K is computed for a bound odor, you can classify it by taking the correlation with a stored odor.

assuming perfect noiseless conditions and perfect inversion of the binding equations, different presentations of the same odor at different concentrations will be perfectly (1.0) correlated

in practice, the range of representable values is limited by the range of firing rates.

in transmitting a vector x/K one can normalize it. any multiplication or mean shifting will not change the correlation.

I would like a way of computing a normalized x/K vector which preserves the correlation as correctly as possible without actually computing an un-normalized value.

I will consider the similar problem of computing a correlation value using only unsigned 8 bit integer values. These also have limited range and precision.

attempt to define an optimal range scaling for a given odor at a given concentration. start with known values, and assume a magic infinite-information template to match against. can we define a scaling to 256 values ?

the general scaling of the x/K=M vector then is M*s+d or (M+d)*s (these are equivalent), followed by a rounding and clipping : byteclip((int)(M*s+d))

so I suppose we want to maximize the correlation between
M and
min(255,max(0,(int)(M*s+d)))

elementwise square error :
(Mi - min(255,max(0,(int)(Mi*s+d))))**2
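
a quick numerical check of a candidate ( s, d ) : how much correlation the 8-bit code preserves and how much error the rounding and clipping introduce ( a sketch ; the odor vector and the scale factor here are made up ) :

import numpy as np

def quantize(M, s, d):
    # scale, shift, round, and clip to unsigned 8-bit values
    return np.clip(np.round(M * s + d), 0, 255).astype(np.uint8)

def correlation(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.dot(a, b) / len(a)

rng = np.random.RandomState(0)
b = np.clip(rng.beta(2, 5, size=500), 1e-6, 1 - 1e-6)  # made-up fractions bound
M = b / (1.0 - b)                                      # inverted binding, x/K
s, d = 40.0, 0.0                                       # candidate scaling ( placeholder )
Q = quantize(M, s, d)
print(correlation(M, Q.astype(float)))                 # correlation preserved by the byte code
print(np.mean((M * s + d - Q) ** 2))                   # quantization + clipping error, in scaled units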

how do we characterize it ? there is
- quantization error for the scaled value
- error caused by clipping outlier values

expected error in quantizing uniform(0,1) values to rounded {0,1} values :

integral 0 to 1 of (x-round(x))**2 dx
by symmetry = 2 * integral 0 to 1/2 of x**2 dx
= 2 * (1/3)x**3 [0,1/2]
= 2 * (1/3)*(1/8)
= 1/12
for absolute deviation instead of squared error :
2 * integral 0 to 1/2 of x dx = 2 * (1/2)*(1/4) = 1/4
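
a quick Monte Carlo check of the 1/12 and 1/4 figures ( a sketch ) :

import numpy as np
x = np.random.rand(10**6)
print(np.mean((x - np.round(x))**2))     # ~ 1/12 = 0.0833...
print(np.mean(np.abs(x - np.round(x))))  # ~ 1/4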

now estimate the clipped values :
we need to define the distribution :

normally distributed pKd
passed through the Hill equation at concentration x

low clipped values : value squared
high clipped values : (value-255)**2

basically let your error be the squared deviation of the correlation from 1

a=(a-m(a))/s(a)
b=(b-m(b))/s(b)
correlation=dot(a,b)

or something
is there some way of skipping this analysis ?

is there some way of describing the family of distributions formed by the bound odors ?
parameterize on the concentration X, as well as a normal distribution of pKd

B=X/(X+10**-pKd) is the new random variable, where I guess pKd is a normally distributed random variable. WLOG assume standard normal.

can we ignore the quantization error ? this depends,

so, what do we actually need to know from this distribution ?

meh, hard, let's just guess something ? assume there exists a choice of parameters that minimizes quantization errors.
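
even without the full analysis, the distribution of B is easy to sample, which gives empirical quantiles to base a scaling on ( a sketch ; the pKd mean/spread and the concentration are made up, rather than the standard normal assumed above ) :

import numpy as np
rng = np.random.RandomState(1)
pKd = rng.normal(loc=6.0, scale=1.0, size=100000)   # assumed receptor affinity distribution
X = 1e-6                                            # assumed odor concentration
B = X / (X + 10.0 ** -pKd)                          # fraction bound per receptor type
M = B / (1.0 - B)                                   # inverted binding, x/K
print(np.percentile(M, [1, 50, 99]))                # pick s, d so this range maps into 0..255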

so, we have the
s,d scaling data,
and we want to transform the computation of the inverse binding into that space

tools at our disposal :
ORN nonlinearity
ORN heterogeneity ( increase resolution )
these transmit a basic transformed version of binding to the glomeruli

then, at the glomeruli, we can access the whole vector. each transmission channel can only transmit an 8 bit scalar.

the output of the glomeruli should be filtered vectors

matter programming


the ORN nonlinearity is slightly frustrating since it limits the bandwidth of information transfer to the glomerular layer where some sort of intelligent normalization might take place.

operations on b which result in linear or near linear operation on x/K ( or demonstrably nearly linear for a given vector, in a way that doesn't cause too many errors ? )
x/K = b/(1-b)
umm
ax/K=b/(1-b)
ax-axb=bK
ax=(K+ax)b
ax/(K+ax)=b
fine.. f.. solves for x/K, scale-shifts it, then solves for b
(ab/(1-b)+d)=b'/(1-b')
ab/(1-b)(1-b')+d-db'=b'
(ab-abb')+d-db+db'b-db'=b'-bb'
solve for b' in terms of b
ab-abb'+d-db+db'b-db'=b'-bb'
ab+d-db=b'-bb'+abb'+db'-db'b
(ab+d-db)=b'(1-b+ab+d-db)
b'=(d+ab-db)/(1+d-b+ab-db)
b'=(d+(a-d)b)/((1-b)+d+(a-d)b)
for a pure scaling ( d=0 ) : b'=ab/(ab+(1-b)) = 1/((1-b)/(ab)+1)
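
a quick numeric check that the general expression really does scale-shift x/K ( a sketch ; a, d, and the b grid are arbitrary ) :

import numpy as np
a, d = 2.0, 0.3
b = np.linspace(0.01, 0.99, 99)
lhs = a * b / (1 - b) + d                              # scale-shift of x/K
bp = (d + (a - d) * b) / ((1 - b) + d + (a - d) * b)   # transformed binding fraction b'
rhs = bp / (1 - bp)                                    # x/K implied by b'
print(np.allclose(lhs, rhs))                           # True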

notes on separating upstates from downstates based on field amplitude histogram

A system transitioning between upstates and downstates should (hopefully)
exhibit a bimodal distribution in the membrane potentials of individual cells
in my networks I also see a bimodal distribution in firing rates
I want to distinguish between upstates and downstates
I think that the trough between the two rate/potential peaks in the
distribution is the most logical choice
"things look better when I take the log of the rates"
so I will.
The mean is not an awful estimator but I think we can do better
sometimes the peaks have different sizes and whatnot, and I'd like the fit to
not be biased by this.
Parzen window approximation... might not be awful, hard to say
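
a sketch of the Parzen-window version using a Gaussian KDE on the log rates : smooth the histogram and take the deepest minimum between the two biggest peaks as the threshold. The log-rate data here is a synthetic bimodal stand-in.

import numpy as np
from scipy.stats import gaussian_kde

# synthetic stand-in for log firing rates with a down mode and an up mode
rng = np.random.RandomState(0)
log_rates = np.concatenate([rng.normal(-2.0, 0.4, 3000),
                            rng.normal( 0.5, 0.5, 1500)])

kde = gaussian_kde(log_rates)                     # Parzen-window density estimate
grid = np.linspace(log_rates.min(), log_rates.max(), 512)
density = kde(grid)

# local maxima of the smoothed density
is_peak = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
peaks = np.where(is_peak)[0] + 1

# threshold = deepest point between the two tallest peaks
top2 = sorted(peaks[np.argsort(density[peaks])[-2:]])
between = slice(top2[0], top2[1] + 1)
threshold = grid[top2[0] + np.argmin(density[between])]
print(threshold)    # log rates below this are "down", above are "up"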

Notes on detecting repeated sequences in data

how to statistically extract sequences from a firing rate model ?

note that certain traveling plane waves ... are somewhat organized, but not necessarily repeating. it may be possible to get structured propagation that is not repeating for other network topologies ?

sequence autocorrelation can detect repeated firing patterns in a single unit.

translate "sequence" to "sequence of population rate vectors" and I think you're good. Besides, to define autocorrelation in time you need to take a window of T time-steps.

this could probably be done more quickly with bit-ops on a boolean spike vector


match vector size :
T timesteps
N units

number of matches :
K timesteps, squared

K*K*T*N is a bad number to be looking at.

pairwise metrics are many :
angle between vectors
Pearson's coefficient
RMS distance

we don't really want Pearson's coefficient, maybe some sort of distance metric would be good.

we don't want Pearson's coefficient ( proportional to the dot product of Z-scored vectors ) if we want the mean/variance of the population to be important.

what about repeated subsequences ? like, if some fraction M of N units are participating in a repeating chain ? If the rest of the population is noisy it will be hard to detect this. Searching for subsets... might be bad

remember to check if your results could be generated by random chance

lets try RMS distance

you'll want to estimate probability of collisions ( again, easier on a bit vector )

grumble.. shall we try some point process modeling ?

motifs are low energy points in sequence space, orbits pass through them statistically more often ( Gibbs sampling ? the energy of a sequence can be inferred from its probability and vice versa )

can you look at the connectivity graph and assign energies to various sequences ? this would be good. It can probably be done for random walks. Can it be done for other systems ? Feed-forward networks with global uniform inhibition ( normalized probability ) ... these look a lot like random walks.

if the inhibition is structured, or you need to use attenuation or other slow inhibitory process, I'm not really sure what to do.

ok, I like the idea of RMS distance ( best I can think of ). How do you simplify this computationally ? I can skip the "root" part and the normalization ( un-normalized sum of squared deviations ), but this is the least computationally intensive part. Luckily, computing the sum of squared deviations can be parallelized fairly easily

trivially I can do it in either depth T*N
or depth K*K

if I estimate the time constants of the computation I can write an algorithm that switches strategy based on the shape of the problem. The T*N depth version might use less memory; the K*K version will rely on PyCUDA's built-in reduction framework and that will result in annoying data copying ( I would have to write an in-place reduction.. blech, no time for that )

Yeah, parallelize on K*K ( T*N depth ) .. this is the best.

Hmm... I think ... no, can I simplify this by doing pairwise time step differences first ?
hmm.. this generates a different K*K matrix, probably just need to do something simple...
like box convolve (2D) the output.

yes, that's a simple plan.

seems like something is wrong though...t+1 doesn't line up like that, um...

yeah
something like this

def GPUPointAutoDistance(t,k,n,data):
    '''
    t=length of data in time
    k=number of time bins to use
    n=size of vector datapoints
    data=t*n data matrix, n is inner dimension
    '''
    # kern1 : mean squared distance between every pair of single timesteps ( t*t output );
    # tid is the global thread index supplied by the kernel wrapper
    kern1=kernel('float *data, int n, int t, float *out',
    '''
    int outx = tid%t;
    int outy = tid/t;
    float *inx = &data[outx*n];
    float *iny = &data[outy*n];
    float sumsq = 0.0f;
    for (int j=0; j<n; j++) {
        float a=inx[j]-iny[j];
        sumsq+=a*a;
    }
    out[tid]=sumsq/n;
    ''')
    # kern2 : RMS over a length-k diagonal run of the t*t matrix, one thread per
    # pair of windows ( w = t-k+1 windows per axis )
    kern2=kernel('float *in, int k, int w, int t, float *out',
    '''
    int index = (tid%w)+(tid/w)*t;
    float sumsq = 0.0f;
    for (int i=0; i<k; i++) {
        sumsq+=in[index];
        index+=1+t;
    }
    out[tid]=sqrtf(sumsq/k);
    ''')

yes.. this has complexity
K*K*(N+T), slightly better ( much much better for large N )
if I have 2**16 units how much time can I store on the card ?
2**32 bytes max
memory needed : t*(4*2**16) for the data plus 4*t*t for the distance matrix
solve t*2**18 + 4*t*t == 2**32
positive root : t = 13572.950011841582
that's not bad.
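
for checking the kernels on small problems, a plain NumPy version of the same idea ( single-timestep squared distances first, then RMS over length-k diagonal runs ) ; a sketch assuming data is a T*N float array :

import numpy as np

def point_auto_distance(data, k):
    """RMS distance between every pair of length-k windows of population vectors.

    data : (T, N) array of firing rates ; returns a (T-k+1, T-k+1) matrix."""
    T, N = data.shape
    # mean squared distance between single timesteps ( T*T matrix ;
    # the broadcasting is fine for small T but memory-hungry otherwise )
    sq = ((data[:, None, :] - data[None, :, :]) ** 2).mean(axis=2)
    K = T - k + 1
    out = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            # sum down the diagonal starting at (i, j), i.e. matched-up timesteps
            out[i, j] = np.sqrt(np.mean(sq[i + np.arange(k), j + np.arange(k)]))
    return out

# tiny usage example
rng = np.random.RandomState(0)
D = point_auto_distance(rng.rand(50, 8), k=5)
print(D.shape, D[0, 0])    # diagonal entries are zero : a window matches itself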


I think it should be possible to do a nice lazy GPU language ( that is interpreted, if that matters ). What you want to do is have functional operators actually just operate on a ... I think it is called the syntax tree ? building up programs, then these programs are compiled and executed when their results are required but not before. Also, apply some sort of transformation to the source code so that functionally identical operations actually look like the same program; then you can cache compiled binaries efficiently and reuse them. One problem I foresee is that manually micromanaging the memory behavior is a big part of getting performance on a GPU
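
a toy sketch of the lazy idea in plain Python ( no GPU ) : operators build an expression tree, a canonical string form keys a compile cache, and nothing is evaluated until a result is demanded. The "compilation" here is just closure building, standing in for a real kernel compile.

class Expr(object):
    """Node in a lazy expression tree; operators build structure, eval() forces it."""
    _cache = {}                       # canonical source -> "compiled" function

    def __init__(self, op, args):
        self.op, self.args = op, args

    def __add__(self, other): return Expr('add', (self, other))
    def __mul__(self, other): return Expr('mul', (self, other))

    def canonical(self):
        """String form used as the cache key; identical programs share a key."""
        if self.op == 'leaf':
            return 'x%d' % self.args
        return '(%s %s %s)' % (self.args[0].canonical(), self.op, self.args[1].canonical())

    def eval(self, values):
        key = self.canonical()
        if key not in Expr._cache:    # compile once, reuse afterwards
            Expr._cache[key] = self._build()
        return Expr._cache[key](values)

    def _build(self):
        if self.op == 'leaf':
            i = self.args
            return lambda values: values[i]
        f, g = self.args[0]._build(), self.args[1]._build()
        if self.op == 'add':
            return lambda values: f(values) + g(values)
        return lambda values: f(values) * g(values)

def leaf(i):
    return Expr('leaf', i)

a, b = leaf(0), leaf(1)
prog = a * b + a                      # nothing evaluated yet
print(prog.canonical())               # ((x0 mul x1) add x0)
print(prog.eval([3.0, 4.0]))          # 15.0, built on first use, cached afterwards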

20100419

Are we approaching a system where all GUIs are written as local webservers, letting browsers handle rendering ? With Chrome experimentally supporting 3D javascript, Internet Explorer attempting to support hardware accelerated rendering, and Sage sporting a webserver rather than a traditional GUI... looks like this could be the case.

20100413

I bet this exists

encrypted, private RSS/E-Mail protocol implemented on top of existing systems :

everyone has public/private key

RSS feed ( abstractly : text content )
per post encryption key
-- Can be implemented on any platform that generates RSS feeds

send out post-key encrypted with recipient public key to all recipient groups
can be implemented over :
-- E-mail
-- Facebook
-- AIM
-- Other messaging services
-- Recipient list could even be embedded in the post, though this seems wasteful

need to add reader functionality :
-- automatically unwrap key and decrypt post content

functionality :
-- secure RSS with graded access permissions
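
a minimal sketch of the per-post key scheme using the Python "cryptography" package ( Fernet for the post key, RSA-OAEP to wrap it once per recipient ) ; key distribution and identity are hand-waved here :

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# recipients each hold an RSA keypair ( generated here just for the demo )
recipients = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
              for name in ('alice', 'bob')}

# publisher : encrypt the post once, wrap the post key once per recipient
post_key = Fernet.generate_key()
ciphertext = Fernet(post_key).encrypt(b"secret post content")
wrapped = {name: key.public_key().encrypt(post_key, oaep)
           for name, key in recipients.items()}

# reader : unwrap the post key with the private key, then decrypt the post
alice_key = recipients['alice'].decrypt(wrapped['alice'], oaep)
print(Fernet(alice_key).decrypt(ciphertext))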

speculation

Viral gene therapy : three problems

-- immune response to viral vehicle
-- -- temporarily suppress immune system during treatment ?

-- carcinogenic effect
-- -- include tumor suppressor genes in construct
-- -- include system that can detect "bad" insertion points and
-- -- -- kill cell
-- -- -- fix cell
-- -- -- undo insertion

-- transmission to germ line
-- -- "oops" haha !

-- weird disruptions not otherwise specified (induced metabolic disorder )


-- -- I don't think this has been observed


-- assume insertion into only "open" genes
-- ignore genes that function adequately with a single copy ( read : all the ones that don't have to do with cancer )
-- include extra copies of these genes in the construct ?
-- -- might kill cells by increasing apoptosis / slowing division ?
-- -- might force the construct to be too large
-- -- might not need to supplement _all_ potentially disrupted genes, there are multiple fail-safes

20100410

Domain Name Squatters

So, you know those websites which register common typos of popular domains and fill them with spam ? Anyway, socialfreebies.com keeps showing up when I try to go to facebook. So, I redirected their domain in /etc/hosts to the intended website

echo "66.220.153.15 socialfreebies.com" | sudo tee -a /etc/hosts

which... at least lets me continue to make typos.

20100409

Wasting time with bash scripting

Netbeans was building Javadoc for me just fine 6 hours ago, and I figured out how to use javadoc from the command line about 2 hours ago, and I could have just pasted a list of packages in my project into a shell script and been done with everything about an hour ago.

But.. but... but...

I want to figure out how to make javadoc automatically document all packages in my source directory. Again, Netbeans does this but I'm trying to learn how to do coding projects efficiently in a simple linux environment.

So, I guess I want some way to automatically generate something like javadoc -d $DOC_DIR [list of automatically generated package references for a given source directory]

Solution : I can't actually express the full solution here because it involves strings that Blogger can't handle... that is, if I enter them into the post editor they will be interpreted as control characters. Anyway, here are the important bits :

mkdir $DOC
cd src
javadoc -d $DOC $(find . -type d | tr '\n' ' ' | sed -e 's/[./]//g' | sed -e 's/^ //g' | sed -e 's/ $//g' | tr ' ' '\n')
cp *.css $DOC


I bet there was some combination of flags for javadoc that would have made this significantly simpler, but I need to get more familiar with shell scripting.

Javadoc

I just put Perceptron up on bitbucket, and wanted to start documenting the code. Javadoc was able to generate reasonable looking documentation from what few comments were in the project, but I wasn't quite satisfied with the appearance of the rendered web-pages. So, I made a custom style sheet and ran a few commands to do some minor re-formatting.

I rather like the resulting format.

  • Style sheet hides redundant information in the member details section ( uses member type+name as title )
  • Thin table edges
  • Script creates centered frame
  • I like these colors
  • Script removes empty vertical whitespace for members without documentation



/* Javadoc style sheet
 * Revised by Michael Rule
 * April 2010
 * This is the sexiest Javadoc style sheet ever
 */

body{
    height:100%;
    font-size: 95%;
    background-color: #fff;
    color:#403818;
    background-image: url("http://img.photobucket.com/albums/v234/MRule7404/frac5.jpg");
    background-repeat: repeat-y;
    background-attachment: fixed;
}

/* You'll need to use some sort of text replacement to automatically insert <div> tags for
   these wrappers within the body of your website */

#outer_wrapper {
    height: 100%;
    max-width: 870px;
    min-width: 500px;
    margin: 0 auto;
    border-style: solid;
    border-color: #ccc;
    border-width: 1px;
    border-radius: 8px;
    -webkit-border-radius: 8px;
    -moz-border-radius: 8px;
}

#inner_wrapper {
    height:100%;
    width:auto;
    padding: 10px;
    border-style: solid;
    border-color: #888;
    border-width: 1px;
    background: #ffffff;
    line-height:1.4em;
    border-radius: 7px;
    -webkit-border-radius: 7px;
    -moz-border-radius: 7px;
}

/* Headings */
h1 {
    font-size: 70%
}

/* Table colors */
.TableHeadingColor { background: #ddd; color:#403818 }
.TableSubHeadingColor { background: #ccc; color:#605020 }
.TableRowColor { background: #eee; color:#000000 }

/* Font used in left-hand frame lists */
.FrameTitleFont { font-size: 80%; font-family: Helvetica, Arial, sans-serif; color:#403818 }
.FrameHeadingFont { font-size: 75%; font-family: Helvetica, Arial, sans-serif; color:#403818 }
.FrameItemFont { font-size: 75%; font-family: Helvetica, Arial, sans-serif; color:#403818 }

/* Navigation bar fonts and colors */
.NavBarCell1 { padding:0px 4px; background-color:#efd; color:#000000}
.NavBarCell1Rev { padding:0px 4px; background-color:#777; color:#000}
.NavBarFont1 { font-family: Arial, Helvetica, sans-serif; color:#000;}
.NavBarFont1Rev { font-family: Arial, Helvetica, sans-serif; color:#fff;}

.NavBarCell2 { font-family: Arial, Helvetica, sans-serif; background-color:#FFFFFF; color:#000000}
.NavBarCell3 { font-family: Arial, Helvetica, sans-serif; background-color:#FFFFFF; color:#000000}

a {
    color: #134;
    text-decoration: none;
}
a:hover {
    color: #008;
    text-decoration: underline;
}
a:active {
    color: #008;
    text-decoration: none;
}
a:visited {
    color: #552;
    text-decoration: none;
}

table {
    border-collapse:collapse;
    border-color:#ccc;
    margin: 0px 0px 0px 0px;
    padding: 150px;
}

TR{
    margin: auto;
}

TD{
    line-height:1.2em;
}

HR{
    border: none;
    height: 1px;
    background: #aa8;
}

H3{
    display:none;
}

Competition for our GPU Spiking simulation Framework

http://amygdala.sourceforge.net/
http://www.hicomb.org/papers/HICOMB2010-02.pdf
http://www.opticsinfobase.org/abstract.cfm?URI=ao-49-10-B83
http://www.doc.ic.ac.uk/~akf/nemo/index.html

20100408

Python, Sphinx, and lambda functions

Some 80% of my code is generated by higher order functions or is declared as a lambda expression. Typically, this results in fewer total lines of code and good code reuse. The problem I am having is that these types of functions are not processed properly by the automatic documentation generator Sphinx.

This blog has a partial solution : directly assign values to __doc__ and __name__. This is, however, neither necessary nor sufficient to get the functionality we desire with Sphinx. You can document functions generated by higher order functions or lambda expressions like module variables by following them with a triple-quoted string.

foo = lambda x:x
"""Identity function"""

Produces something like the following in the automatically generated documentation :

foo
Identity function

This is better, but not quite enough. We don't get the argument list, and some of the formatting is a bit odd. I'll let you know if I figure something out. At the moment I can't even figure out how to infer the argument list of a lambda function.
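
for the argument-list question specifically, the standard library's inspect module can recover a lambda's arguments at runtime ( a sketch ; whether this can be pushed into Sphinx's output is a separate problem ) :

import inspect

foo = lambda x, scale=2.0: x * scale
"""Scale the input by a constant."""

# on Python 2 this returns (['x', 'scale'], None, None, (2.0,));
# newer Pythons would use inspect.signature instead
print(inspect.getargspec(foo))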



Update : the answer may have something to do with this :


  • It’s possible to override the signature for explicitly documented callable
    objects (functions, methods, classes) with the regular syntax that will
    override the signature gained from introspection:


    .. autoclass:: Noodle(type)

    .. automethod:: eat(persona)


    This is useful if the signature from the method is hidden by a decorator.



    New in version 0.4.


  • Compiling Python to GPU ?

    http://illinois.edu/calendar/Calendar?calId=2705&eventId=166322&ACTION=VIEW_EVENT

20100406

Problem of the Day : Generating Sphinx documentation from Python doc-strings

Sphinx : a documentation generator for Python

Problem : Sphinx documentation is not clear (to me) on how to generate documentation from your Python doc-strings. At least, no obvious tutorial exists.

Possibly helpful links ( none of which actually solve this problem )

Steps so far :

First, I followed this tutorial to get Sphinx installed and the basic example running. After this tutorial, you should have a working documentation directory that builds a minimal example of "index.rst" and "chapter1.rst"

Second, I tried adding ".. automodule:: numpy" to the top of the demo "chapter1.rst" file. I was pleasantly surprised to see this work, more or less. This inserted some basic numpy documentation into the minimal "chapter1.rst" file. I guess you'd probably have to have numpy installed and working for this to work.

Now what ? I think this means that I need to create a separate ".rst" file in the documentation source directory for every class, module, or source file that I want to document.

I also need to figure out how to package my python code into an importable library so Sphinx can see it like numpy. From the Python documentation : A module is a file containing Python definitions and statements. So, I guess each of my .py script files will need to be turned into a module, and then a separate .rst file for each .py source file will need to be added to my documentation source directory.

All Python source files are automatically modules, with the name of the module as the name of the script file. Sphinx is somehow coupled to the iPython interpreter, I am told. If you can get your modules to be importable from iPython system-wide you'll be all set.

There should be some way to just tell the Sphinx autodoc "here is a python file, please automatically pull documentation out of it". As far as I can tell, there is not. Specifying absolute paths does not work.

This seems to work : add the directories containing your source files to the environment variable "PYTHONPATH". Then, at least the automodule feature will work if you pass it the name of your file ( without the extension ).

export PYTHONPATH="~/ahh":$PYTHONPATH

Turns out the header stuff in the .rst file is necessary, otherwise Sphinx won't generate an entry for your module on the main index page. Also, "toctree" means "table of contents tree" ( I don't think this is explicitly stated in the Sphinx documentation ).

Ok, this is great. automodule seems to work. It looks like you need to manually create a new ".rst" file for every module you want to document, register them in the table of contents tree in "index.rst", then add an automodule command in these files and make sure your file/module is system-wide importable ( in PYTHONPATH ). This isn't exactly what I had in mind when I heard the word "autodoc", but maybe I can write a script called "autoautodoc" that automates the .. automation ? of ? documentation ??

If you do something like this :

.. automodule:: foo
   :members:

all the class files inside the module get displayed as well.

ok... that's nice. I guess I'll go write an auto-autodoc script now.


edit : here is my auto-autodoc script, which produces output that can be piped back to bash to perform the requisite operations.


import re

files = '''orix
orix.matrix
orix.statistics
orix.device
orix.function
orix.plot
orix.sequence
orix.graph
orix.cpuutil
orix.logic
orix.gpufun
orix.convolution
'''

print "cd ~/orix/doc"
print "rm *.rst orix.* mods.txt"

# skip the blank entry left by the trailing newline
files = [f for f in files.split('\n') if f.strip()]

index = '''
Welcome to orix's documentation!
================================

Contents:

.. toctree::
   :numbered:
   :maxdepth: 2

'''

for f in files:
    fstring = f+'.rst'
    foo = r'\n'.join([f, ''.join(['=']*len(f)), ".. automodule:: %s"%f, "   :members:"])
    print 'touch %s'%fstring
    print 'echo -e "%s" >> %s'%(foo,fstring)
    print r'echo -e "%s\n" >> mods.txt'%fstring
    index = index + '   ' + fstring + '\n'

index = index + '''

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
'''

index = r'\n'.join(index.split('\n'))
index = r'\`'.join(index.split('`'))
print 'echo -e "%s" >> index.rst'%index
print 'make clean'
print 'make latex'
print 'make html'
print 'cd _build/latex'
print 'make all-pdf'
print 'xpdf orix.pdf &'
print "cd ~/orix/doc"
print 'firefox ~/orix/doc/_build/html/index.html &'

Hello World

I am creating this blog to aggregate solutions to computer problems that I have found frustrating, or that have wasted large amounts of my time searching through Google for nonexistent help.