I think it should be possible to build a nice lazy GPU language (interpreted, if that matters). The idea: the functional operators don't actually compute anything; they just build up an expression tree (an AST, essentially), and these deferred programs are compiled and executed only when their results are required, not before. On top of that, apply some normalizing transformation to the tree so that functionally identical operations end up looking like literally the same program; then you can cache compiled binaries efficiently and reuse them. One problem I foresee is that manually micromanaging memory behavior is a big part of getting performance on a GPU, and a lazy system tends to take that control away from the programmer.
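To make the idea concrete, here is a toy sketch in plain Python (a stand-in for a real GPU backend; all names are invented). Overloaded operators build a tree instead of computing, a canonical key makes structurally equivalent programs hash the same so compiled results can be cached, and compilation plus execution only happen when a value is forced:

```python
compiled_cache = {}  # canonical program key -> "compiled" evaluator

class Expr:
    """A deferred expression node: operators build a tree, not a value."""
    def __init__(self, op, args):
        self.op, self.args = op, args

    def __add__(self, other): return Expr("add", (self, other))
    def __mul__(self, other): return Expr("mul", (self, other))

    def canonical(self):
        """Structural key; commutative ops sort their children so that
        x + y and y + x map to the same cached program."""
        if self.op == "var":
            return ("var", self.args)
        kids = tuple(sorted(a.canonical() for a in self.args))
        return (self.op,) + kids

    def force(self, env):
        """Compile (once per program shape, via the cache) and run,
        only at the moment the result is actually needed."""
        key = self.canonical()
        if key not in compiled_cache:
            compiled_cache[key] = compile_expr(self)
        return compiled_cache[key](env)

def var(name):
    return Expr("var", name)

def compile_expr(e):
    """Stand-in for kernel compilation: build an evaluator closure."""
    if e.op == "var":
        name = e.args
        return lambda env: env[name]
    fns = [compile_expr(a) for a in e.args]
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    op = ops[e.op]
    return lambda env: op(*(f(env) for f in fns))

x, y = var("x"), var("y")
a = x + y   # nothing computed yet, just a tree
b = y + x   # same program after canonicalization

print(a.force({"x": 2, "y": 3}))  # 5 -- compiled here, on demand
print(b.force({"x": 2, "y": 3}))  # 5 -- reuses the cached program
print(len(compiled_cache))        # 1 -- one binary for both spellings
```

In a real system the "evaluator" would be a compiled GPU kernel and the cache key would also need to cover shapes and dtypes, but the structure is the same: laziness gives you the whole program before you commit to compiling anything.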
