Shannon entropy in Haskell

After toying with a few vaguely functional languages over the past few years (Scala being a personal favourite), I decided to get my hands dirty with my old favourite from my undergrad days: Haskell. I fondly remember evenings of page-long lambda-calculus beta reductions in preparation for the final FP module exam. I seemed to be the only one in the year who enjoyed lambda calculus. The idea of describing what something is rather than the steps to achieve it (as in imperative programming) is very persuasive and elegant, and the ‘brutal purity’ of Haskell (a term I borrowed from John Carmack) forces you to think in this way.

But I digress. Part of my work over the past six months or so has related to information-theoretic measures. Although I’ve implemented efficient entropy computations on a couple of different architectures (raising a few interesting asides which I’ll probably write up in future posts), I had fun trying out a possibly naive Shannon entropy definition in Haskell, which I hope will one day remind me of just how bad I was at the start of this learning experience!
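To give a flavour of what I mean, here is a sketch along those lines (the exact definitions from my experiments may have differed, and the `entropy` name is assumed; `bins` builds a normalised distribution from a list of discrete symbols using only `Data.List`):

```haskell
import Data.List (group, sort)

-- Naive binned distribution: sort the input, group equal values,
-- turn the group sizes into counts, then normalise to probabilities.
-- Written as a plain function composition.
bins :: Ord a => [a] -> [Double]
bins = normalise . map (fromIntegral . length) . group . sort
  where normalise cs = map (/ sum cs) cs

-- Shannon entropy in bits: H = -sum over i of p_i * log2 p_i.
-- Groups are never empty, so no p_i is zero and log2 is safe.
entropy :: Ord a => [a] -> Double
entropy = negate . sum . map (\p -> p * logBase 2 p) . bins
```

For example, `entropy "mississippi"` comes out at roughly 1.82 bits.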

I’m hoping that, in time, as I work my way through this excellent book, I can reduce these definitions somewhat. I’m guessing there is a cheaper way to produce a binned distribution, for example using whatever set types are available. I’m at least happy that `bins` is a function composition for now.
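One candidate I have in mind is `Data.Map` (an assumption on my part, and unbenchmarked): `fromListWith` builds a frequency map in a single traversal rather than sorting the whole list first.

```haskell
import qualified Data.Map.Strict as Map

-- Alternative binned distribution via Data.Map: accumulate counts in
-- one traversal, then normalise the map's values. Whether this is
-- actually cheaper than sort-and-group I haven't measured.
bins' :: Ord a => [a] -> [Double]
bins' xs = map ((/ n) . fromIntegral) (Map.elems counts)
  where
    counts = Map.fromListWith (+) [(x, 1 :: Int) | x <- xs]
    n      = fromIntegral (length xs)
```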
