No, I don't mean like this, but rather: if you spent any time trying to figure out xkcd's Umwelt April Fools comic this year, you may be interested in the Haskell source code. They used all sorts of information about you, from the browser you were using and the resolution of your screen to the geocoding of the network address you came from, to serve up a custom web comic.

Today, davean posted to github the code for waldo, the engine he wrote to drive that comic.

Alas, he was not kind enough to actually supply the code for the umwelt comic strip itself, so you'll still be left wondering if the internet managed to find all of the Easter eggs. (Are they still Easter eggs when you release something a week before Easter?) You may find the list of links below useful if you want to get a feel for the different responses it gave people.

[ Article | xkcd's Forum | Hacker News | /r/haskell ]

[Update: Jun 10, 9:09pm] davean just posted a rather insightful post mortem of the development of waldo that talks a bit about why xkcd uses Haskell internally.

I was contacted by someone who wanted to read my old catamorphism knol, despite the fact that Google Knol is no more.

Fortunately, while it was rather inconvenient that they shut down Google Knol completely, and I'll forever remember a knol as a "unit of abandonment", Google did provide a nice way to download at least your own user content, and for that I am grateful.

I have fixed up the internal linkage as much as possible and have placed a copy of the original article below.

Catamorphisms: A Knol

Sadly, as I am not "Dark Magus", I am unable to download the Russian translation. If anyone knows how to contact him, I would love to obtain and preserve a copy of the translation as well.

In light of the burgeoning length of the ongoing record discussion sparked off by Simon Peyton Jones in October, I would like to propose that we recognize an extension to Wadler's law (the final item in the list below), which I'll refer to below as the "Weak Record Conjecture".

In any language design, the total time spent discussing a feature in this list is proportional to two raised to the power of its position.

  • 0. Semantics
  • 1. Syntax
  • 2. Lexical syntax
  • 3. Lexical syntax of comments
  • 4. Semantics of records

(more...)

Andrej Bauer recently gave a really nice talk on how you can exploit side-effects to make a faster version of Martin Escardo's pseudo-paradoxical combinators.

A video of his talk is available over on his blog, and his presentation is remarkably clear, and would serve as a good preamble to the code I'm going to present below.

Andrej gave a related invited talk back at MSFP 2008 in Iceland, and afterwards over lunch I cornered him (with Dan Piponi) and explained how you could use parametricity to close over the side-effects of monads (or arrows, etc.). I think that trick was lost in the chaos of the weekend, so I've chosen to resurrect it here, improve it to handle some of his more recent performance enhancements, and show that you don't need side-effects to speed up the search after all!
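To give a flavor of the trick (this is my own reconstruction for this post, not code lifted from Andrej's talk or from the elided article): if the predicate being searched is forced to be polymorphic in a monad, parametricity means it cannot observe how we instrument that monad, so we can purely "spy" on which bits it inspects without any genuine side-effects.

{-# LANGUAGE RankNTypes #-}
import Control.Monad.Trans.State (State, runState, modify)
import Data.Functor.Identity (Identity (..))

-- A predicate over an infinite sequence of booleans that must query its
-- bits through an arbitrary monad.
type Predicate = forall m. Monad m => (Integer -> m Bool) -> m Bool

-- Run the predicate purely against a concrete sequence.
runPure :: Predicate -> (Integer -> Bool) -> Bool
runPure p xs = runIdentity (p (Identity . xs))

-- Run the same predicate while recording which indices it actually
-- inspected; parametricity guarantees the Bool result is unchanged.
runTraced :: Predicate -> (Integer -> Bool) -> (Bool, [Integer])
runTraced p xs = runState (p (\i -> modify (++ [i]) >> return (xs i))) []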

(more...)

Last time we derived an entailment relation for constraints, now let's get some use out of it.

Reflecting Classes and Instances

Most of the implications we use on a day to day basis come from our class and instance declarations, but last time we only really dealt with constraint products.
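As a rough sketch of the vocabulary involved (using the names that later settled into the constraints package; treat this as illustrative rather than the post's exact code), class and instance declarations can be reflected as first-class entailment values:

{-# LANGUAGE ConstraintKinds, GADTs, RankNTypes, TypeOperators #-}

-- A reified dictionary for a constraint p.
data Dict p where
  Dict :: p => Dict p

-- Entailment: from p we can produce a dictionary for q.
newtype p :- q = Sub (p => Dict q)

-- Reflecting a superclass: every Ord instance carries an Eq instance.
ordEq :: Ord a :- Eq a
ordEq = Sub Dict

-- Reflecting an instance declaration: Eq a entails Eq [a].
eqList :: Eq a :- Eq [a]
eqList = Sub Dict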

(more...)

Max Bolingbroke has done a wonderful job on adding Constraint kinds to GHC.

The ConstraintKinds extension adds a new kind, Constraint, such that Eq :: * -> Constraint and Monad :: (* -> *) -> Constraint. Since Constraint is a kind, we can write type families that compute constraints, and even parameterize constraints on constraints.
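For instance (a minimal sketch assuming GHC 7.4 or later; the names NumShow, Suitable and Both are just illustrative, not taken from Max's patch):

{-# LANGUAGE ConstraintKinds, TypeFamilies, KindSignatures #-}
import GHC.Exts (Constraint)

-- An ordinary type synonym whose kind is Constraint.
type NumShow a = (Num a, Show a)

-- A type family computing the constraint a container imposes on its elements.
type family Suitable (f :: * -> *) :: * -> Constraint
type instance Suitable [] = Eq

-- A constraint parameterized by other constraints.
type Both (c :: * -> Constraint) (d :: * -> Constraint) a = (c a, d a)

describe :: Both Show Num a => a -> String
describe x = show (x + 1)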

So, let's play with them and see what we can come up with!

(more...)

As requested, here are the slides from Dan Doel's excellent presentation on Homotopy and Directed Type Theory from this past Monday's Boston Haskell.

I am very pleased to officially announce Hac Boston, a Haskell hackathon to be held January 20-22, 2012 at MIT in Cambridge, MA. The hackathon will officially kick off at 2:30 Friday afternoon, and go until 5pm on Sunday with the occasional break for sleep.

Everyone is welcome -- you do not have to be a Haskell guru to attend! Helping hack on someone else's project could be a great way to increase your Haskell skills.

If you plan on coming, please officially register, even if you already put your name on the wiki. Registration, travel, some information about lodging and many other details can now be found on the Hac Boston wiki.

We have confirmed space for about 30 people, so please register early! Beyond that we'll have to either seek additional space or close registration.

We're also looking for a few people interested in giving short (15-20 min.) talks, probably on Saturday afternoon. Anything of interest to the Haskell community is fair game---a project you've been working on, a paper, a quick tutorial. If you'd like to give a talk, add it on the wiki.

We look forward to seeing you at MIT!

Last time, I showed that we can build a small parsec clone with packrat support.

This time I intend to implement packrat directly on top of Parsec 3.

One of the main topics of discussion around packrat parsing, ever since Bryan Ford's initial release of Pappy, has been the fact that in general you shouldn't use packrat to memoize every rule; instead you should apply Amdahl's law and look for the cases where the cost of the lookup is paid back by the rule's hit rate and the computation time saved by not re-evaluating it. This is great news for us, since we only want to memoize a handful of expensive combinators.
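As a naive illustration of what selectively memoizing a single rule on top of Parsec 3 might look like (this is not the machinery developed below, and it only caches successes; the name memoized is hypothetical), one can stash results in the user state keyed by source position:

import qualified Data.Map as Map
import Text.Parsec

-- A String parser whose user state memoizes one expensive rule of result
-- type r, keyed by the position at which it was invoked.
type P r a = Parsec String (Map.Map SourcePos (r, SourcePos, String)) a

memoized :: P r r -> P r r
memoized p = do
  pos  <- getPosition
  memo <- getState
  case Map.lookup pos memo of
    Just (r, pos', rest) -> setPosition pos' >> setInput rest >> return r
    Nothing -> do
      r    <- p
      pos' <- getPosition
      rest <- getInput
      modifyState (Map.insert pos (r, pos', rest))
      return r

Running such a parser just means supplying an empty Map as the initial user state, e.g. runParser grammar Map.empty "input" source.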

(more...)

You never heard of the Millennium Falcon? It's the ship that made the Kessel Run in 12 parsecs.

I've been working on a parser combinator library called trifecta, and so I decided I'd share some thoughts on parsing.

Packrat parsing (as provided by frisby, pappy, rats! and the Scala parsing combinators) and more traditional recursive descent parsers (like Parsec) are often held up as somehow different.

Today I'll show that you can add monadic parsing to a packrat parser, sacrificing asymptotic guarantees in exchange for the convenient context sensitivity, and conversely how you can easily add packrat parsing to a traditional monadic parser combinator library.

(more...)

Today I hope to start a new series of posts exploring constructive abstract algebra in Haskell.

In particular, I want to talk about a novel encoding of linear functionals, polynomials and linear maps in Haskell, but first we're going to have to build up some common terminology.

Having obtained the blessing of Wolfgang Jeltsch, I replaced the algebra package on hackage with something... bigger, although still very much a work in progress.
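To give a taste of the sort of encoding I have in mind (a sketch only; the name Covector is illustrative, and whether the series lands on exactly this representation is in the posts themselves), a linear functional on the free module over r with basis a can be written continuation-style:

-- A linear functional assigns a scalar to any assignment of coefficients
-- to basis elements.
newtype Covector r a = Covector { runCovector :: (a -> r) -> r }

-- The delta functional concentrated at a single basis element.
delta :: a -> Covector r a
delta a = Covector (\k -> k a)

-- Covectors add pointwise and scale by scalars.
addCovector :: Num r => Covector r a -> Covector r a -> Covector r a
addCovector (Covector f) (Covector g) = Covector (\k -> f k + g k)

scaleCovector :: Num r => r -> Covector r a -> Covector r a
scaleCovector s (Covector f) = Covector (\k -> s * f k)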

(more...)

In the last few posts, I've been talking about how we can derive monads and monad transformers from comonads. Along the way we learned that there are more monads than comonads in Haskell.

The question I hope to answer this time is whether or not we can turn any Haskell Comonad into a comonad transformer.

(more...)

Last time in Monad Transformers from Comonads I showed that given any comonad we can derive the monad-transformer

 
newtype CoT w m a = CoT { runCoT :: forall r. w (a -> m r) -> m r }
 

and so demonstrated that there are fewer comonads than monads in Haskell, because while every Comonad gives rise to a Monad transformer, there are Monads that do not arise from comonads this way, like IO, ST s, and STM.
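For reference, here is a sketch of the instances the definition above supports, written against the comonad package; this is essentially the standard comonad-to-monad-transformer construction (as it now appears in kan-extensions), modulo the Functor/Applicative boilerplate modern GHC requires:

{-# LANGUAGE RankNTypes #-}
import Control.Comonad  -- from the comonad package

newtype CoT w m a = CoT { runCoT :: forall r. w (a -> m r) -> m r }

instance Comonad w => Functor (CoT w m) where
  fmap f (CoT k) = CoT (\w -> k (fmap (. f) w))

instance Comonad w => Applicative (CoT w m) where
  pure a = CoT (\w -> extract w a)
  mf <*> ma = mf >>= \f -> fmap f ma

instance Comonad w => Monad (CoT w m) where
  return = pure
  CoT k >>= f = CoT (k . extend (\wa a -> runCoT (f a) wa))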

I want to elaborate a bit more on this topic.

(more...)

Last time, I showed that we can transform any Comonad in Haskell into a Monad in Haskell.

Today, I'll show that we can go one step further and derive a monad transformer from any comonad!

(more...)

Today I'll show that you can derive a Monad from any old Comonad you have lying around.

(more...)

Last time, I said that I was going to put our cheap new free monad to work, so let's give it a shot.

(more...)

Last time, I started exploring whether or not Codensity was necessary to improve the asymptotic performance of free monads.

This time I'll show that the answer is no; we can get by with something smaller.

(more...)

A couple of years back Janis Voigtländer wrote a nice paper on how one can use the codensity monad to improve the asymptotic complexity of algorithms that use free monads. He didn't use the name Codensity in the paper, but this is essentially the meaning of his type C.
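For readers who haven't seen it, here is roughly the shape of the two types involved, using the names that later became standard in the free and kan-extensions packages (Voigtländer's C is what is now called Codensity). A sketch, not the code from the paper or the workshop:

{-# LANGUAGE RankNTypes #-}

-- A free monad over a functor f: a tree of f-layers with leaves of type a.
data Free f a = Pure a | Free (f (Free f a))

-- The codensity monad of m: a continuation-passing wrapper whose bind merely
-- composes continuations, so left-nested binds stay cheap.
newtype Codensity m a = Codensity { runCodensity :: forall r. (a -> m r) -> m r }

instance Functor (Codensity m) where
  fmap f (Codensity k) = Codensity (\c -> k (c . f))

instance Applicative (Codensity m) where
  pure a = Codensity (\c -> c a)
  mf <*> ma = mf >>= \f -> fmap f ma

instance Monad (Codensity m) where
  Codensity k >>= f = Codensity (\c -> k (\a -> runCodensity (f a) c))

-- Moving in and out of Codensity.
toCodensity :: Monad m => m a -> Codensity m a
toCodensity m = Codensity (m >>=)

fromCodensity :: Monad m => Codensity m a -> m a
fromCodensity (Codensity k) = k return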

I just returned from running a workshop on domain-specific languages at McMaster University with the more than able assistance of Wren Ng Thornton. Among the many topics covered, I spent a lot of time talking about how to use free monads to build up term languages for various DSLs with simple evaluators, and then made them efficient by using Codensity.

This has been shown to be a sufficient tool for this task, but is it necessary?

(more...)

A couple of days ago, I gave a talk at Boston Haskell about a shiny new speculative evaluation library, speculation on hackage, that I have implemented in Haskell. The implementation is based on the material presented in "Safe Programmable Speculative Parallelism" by Prakash Prabhu, G. Ramalingam, and Kapil Vaswani at last month's PLDI.
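The core idea fits in one combinator. The sketch below shows the module name and signature as I recall them from the speculation package on hackage, so treat it as illustrative rather than authoritative:

import Control.Concurrent.Speculation (spec)

-- spec guess f a computes f guess in parallel with forcing a; if the guess
-- matches a, the speculative result is used, otherwise f a is recomputed.
example :: (Int -> Int) -> Int -> Int
example expensiveContinuation slowInput = spec 0 expensiveContinuation slowInput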

I've uploaded a copy of my slides here:

* Introducing Speculation [PowerPoint | PDF]

(more...)

I've put together a poll about when folks would like to have the next meeting. If you're going to be in the area, please help us pick a date and time that works for you!

http://www.doodle.com/ux8mqaa9h2tngf6k

(more...)
