αχλε

Axle models a set of formal subjects that the author has encountered throughout his lifetime. They take the form of functioning code that allows the reader to experiment with alternative examples.

The primary aim of this code is education and clarity; scalability and performance are secondary goals.

The name “axle” was originally chosen because it sounds like “Haskell”. Given the use of Unicode symbols, I tried spelling it with Greek letters to get “αχλε”. It turns out that this is the Etruscan spelling of Achilles.

(Image: Achilles)

Follow @axledsl on Twitter.

References

Quanta

The first time I had the idea to group units into quanta was at NOCpulse (2000-2002). NOCpulse was bought by Red Hat, which open-sourced the code. There is still evidence of that early code online.

In a 2006 class given by Alan Kay at UCLA, I proposed a system for exploring and learning about scale. The idea occurred to me after reading a news article about a new rocket engine that used the Hoover Dam as a point of reference. I wound up implementing another idea, but always meant to come back to it.
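A minimal sketch of what grouping units into quanta might look like, written in Scala. This is not Axle's actual API; UnitOfMeasure, Quantum, and the Distance object below are hypothetical names used only to illustrate the idea that a quantum groups the units that measure the same kind of thing, together with their conversion factors.

    // Hypothetical sketch, not Axle's API: a quantum groups the units that
    // measure the same kind of quantity, along with conversion factors to a
    // common base unit.
    case class UnitOfMeasure(name: String, symbol: String, factorToBase: Double)

    case class Quantum(name: String, units: List[UnitOfMeasure]) {
      // Convert a magnitude between two units of this quantum via the base unit.
      def convert(magnitude: Double, from: UnitOfMeasure, to: UnitOfMeasure): Double =
        magnitude * from.factorToBase / to.factorToBase
    }

    object Distance {
      val meter     = UnitOfMeasure("meter", "m", 1.0)
      val kilometer = UnitOfMeasure("kilometer", "km", 1000.0)
      val mile      = UnitOfMeasure("mile", "mi", 1609.344)
      val quantum   = Quantum("Distance", List(meter, kilometer, mile))
    }

    object QuantaSketch extends App {
      import Distance._
      // 26.2 miles expressed in kilometers (≈ 42.2)
      println(quantum.convert(26.2, mile, kilometer))
    }

Cross-unit comparisons like this (a marathon in kilometers, or a rocket engine's output relative to the Hoover Dam) are the kind of question this structure is meant to support.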

Machine Learning

Based on many classes at Stanford (in the ’90s) and UCLA (in the ’00s), and more recently the Coursera machine learning course in the fall of 2011. The inimitable Artificial Intelligence: A Modern Approach has been a mainstay throughout.

Statistics, Information Theory, Bayesian Networks, & Causality

The Information Theory code is based on Thomas Cover’s Elements of Information Theory and his EE 376A course.

I implemented some Bayesian Networks code in Java around 2006 while taking Adnan Darwiche’s class on the subject at UCLA. The Axle version is based on his book, Modeling and Reasoning with Bayesian Networks.

Similarly, I implemented ideas from Judea Pearl’s UCLA course on Causality in Java. The Axle version is based on his classic text, Causality.

Game Theory

As a senior CS major at Stanford in 1996, I did some independent research with Professor Daphne Koller and PhD student Avi Pfeffer.

This work spanned two quarters. The first quarter involved using Koller and Pfeffer’s Gala language (a Prolog-based DSL for describing games) to study a small version of Poker and solve for the Nash equilibria. The second (still unfinished) piece was to extend the solver to handle non-zero-sum games.

The text I was using at the time was Eric Rasmusen’s Games and Information.

Linguistics

Based on notes from Ed Stabler’s graduate courses on language evolution and computational linguistics (Lx 212 08) at UCLA.

Author

See the author page for more information.